CN113095222A - Pedestrian comparison identification method in pedestrian re-identification field - Google Patents

Pedestrian comparison identification method in pedestrian re-identification field Download PDF

Info

Publication number
CN113095222A
Authority
CN
China
Prior art keywords
pedestrian
container
feature
matching
orientation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110391669.8A
Other languages
Chinese (zh)
Inventor
王春雨
殷光强
李耶
候少麒
常益凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110391669.8A priority Critical patent/CN113095222A/en
Publication of CN113095222A publication Critical patent/CN113095222A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian comparison identification method in the field of pedestrian re-identification, which comprises the following specific steps: 1) inputting a picture to a pedestrian detection network; 2) the pedestrian detection network extracts the fragments P_i^a of all pedestrians in the input picture, where a represents the a-th input frame and i represents the i-th pedestrian in the a-th frame; 3) building containers C_x, x = 1,2,3…, for storing the fragments P_i^a of all pedestrians, where x represents the x-th container; 4) sending the detected pedestrian fragments P_i^a to a pedestrian feature extraction network for feature extraction and orientation extraction; 5) placing the extracted features and orientations into the corresponding containers C_x, x = 1,2,3…, then matching the extracted features against the features already in the containers C_x; if a match is found, the features and the count in the corresponding container are updated. Pedestrians can thus be compared and identified in real time, first judging that consecutive frames show the same person and then identifying who that person is in the video clip.

Description

Pedestrian comparison identification method in pedestrian re-identification field
Technical Field
The invention relates to the field of computer vision in artificial intelligence, in particular to a pedestrian comparison identification method in the field of pedestrian re-identification.
Background
Pedestrian Re-identification (Person Re-ID) mainly distinguishes the identity of a pedestrian across different cameras: given a pedestrian image, it retrieves images of the same pedestrian captured by other devices. It aims to make up for the visual limitations of fixed cameras, can be combined with pedestrian detection/pedestrian tracking technology, and can be widely applied in intelligent video surveillance, intelligent security and related fields.
This invention studies pedestrian re-identification on real-time video: a pedestrian forms a continuous track in the video, and the target is then judged from that continuous track.
Faced with a continuous sequence of pedestrian frames in a video, existing pedestrian re-identification compares the features of the pedestrian in every frame against the features of the pedestrian to be matched. At 25 frames per second this means 25 comparisons per second: the same person appearing in 1 second of video is extracted, compared and judged 25 times. This causes two problems. First, the extracted pedestrian features are inaccurate, because each frame contains only part of the pedestrian's appearance, so compared with the continuous video some features are lost. Second, it is time-consuming: judging one person 25 times per second of video is highly redundant.
Disclosure of Invention
The invention aims to provide a pedestrian comparison and identification method in the field of pedestrian re-identification that meets the requirements of real-time video and can compare and identify pedestrians in real time, first judging that consecutive frames show the same person and then identifying who that person is in the video clip.
The invention is realized by the following technical scheme: a pedestrian comparison identification method in the field of pedestrian re-identification comprises the following specific steps:
1) inputting a picture to a pedestrian detection network;
2) the pedestrian detection network extracts the fragments P_i^a of all pedestrians in the input picture, where a represents the a-th input frame and i represents the i-th pedestrian in the a-th frame;
3) building containers C_x, x = 1,2,3…, for storing the fragments P_i^a of all pedestrians, where x represents the x-th container;
4) sending the detected pedestrian fragments P_i^a to a pedestrian feature extraction network for feature extraction and orientation extraction;
5) placing the extracted features and orientations into the corresponding containers C_x, x = 1,2,3…, then matching the extracted features against the features already in the containers C_x, x = 1,2,3…; if a match is found, the features and the count in the corresponding container are updated.
In order to further realize the invention, the following arrangement is adopted: when matching is carried out in step 5), for any container C_x, x = 1,2,3…, that is not matched, only the pedestrian attribute miss of that container is updated.
In order to further realize the invention, the following arrangement is adopted: if the pedestrian attribute miss of a container is updated 2 times within 5 input frames, this container C_x, x = 1,2,3…, is deleted.
In order to further realize the invention, the following arrangement is adopted: if a certain container C_x, x = 1,2,3…, is matched in 4 out of 5 consecutive input frames, this container C_x is taken out, and the features in container C_x are matched against the features f_n^t of the corresponding orientation in the three-dimensional feature table; if the matching is successful, the feature under the corresponding ID and orientation in the three-dimensional feature table is updated, where n represents the n-th ID and t represents the orientation.
In order to further realize the invention, the following arrangement is adopted: the three-dimensional feature table is used for storing the features of each ID in different orientations; each feature is defined as f ∈ R^D, where D represents the dimension of the feature, and the orientation as t ∈ R^1, where t = 0 represents the back orientation, t = 1 the front orientation, and t = 2 the side orientation.
In order to further realize the invention, the following arrangement is adopted: each ID in the three-dimensional feature table is represented by V ∈ R^{D×T}, where T = 3 represents the number of orientations.
In order to further realize the invention, the following arrangement is adopted: the container C_x, x = 1,2,3…, stores the pedestrian attribute features, including the match count (count), the non-match count (miss), the depth feature, the orientation of the pedestrian, and the pedestrian ID tag (label).
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) For the comparison and identification of pedestrian identity, the invention designs a three-dimensional feature storage table (three-dimensional feature table) for storing the normalized features of each ID in different orientations, and introduces a container concept to realize a confirmation strategy of matching 4 frames out of 5 consecutive frames: confirmed features are matched against the features in the three-dimensional feature storage table; if the matching succeeds, the feature under the corresponding ID and orientation is updated, and if not, a new ID and orientation are initialized.
(2) Cosine similarity is adopted to calculate the similarity between features. At a resolution of 1920 x 1080, when the number of IDs in the three-dimensional feature storage table reaches 500, the ID identification rate (IIR) reaches 93.6% and the FPS reaches 26, meeting the requirement of real-time pedestrian retrieval (a minimal sketch of this similarity computation is given after this list).
(3) The invention uses a continuous frame sequence instead of single frames, so the extracted pedestrian features are more accurate.
(4) The invention adopts a container tracking mode, thereby improving the comparison accuracy.
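As a hedged illustration only, not the invention's actual implementation, the cosine-similarity computation referenced in advantage (2) can be sketched in Python as follows; the feature dimension D = 512 and all variable names are assumptions of this sketch, since the patent leaves them unspecified:

```python
import numpy as np

def cosine_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    """Cosine similarity between two feature vectors f in R^D."""
    # Divide the dot product by both norms; the small epsilon guards against zero vectors.
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

# Hypothetical usage: compare a container feature against a stored ID feature.
D = 512  # assumed feature dimension
rng = np.random.default_rng(0)
f_container, f_table = rng.normal(size=D), rng.normal(size=D)
print(cosine_similarity(f_container, f_table))  # value in [-1, 1]; higher means more similar
```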
Drawings
FIG. 1 is a diagram of a feature storage and comparison process.
Detailed Description
The present invention will be described in further detail with reference to examples, but the embodiments of the present invention are not limited thereto.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments will be described clearly and completely below with reference to the accompanying drawings; obviously, the described embodiments are some, but not all, embodiments of the present invention. Thus, the following detailed description of the embodiments, presented in the figures, is not intended to limit the scope of the claimed invention, but is merely representative of selected embodiments. All other embodiments obtained by a person skilled in the art, without inventive step, on the basis of these embodiments fall within the scope of the present invention.
Example 1:
The invention designs a pedestrian comparison and identification method in the field of pedestrian re-identification, which meets real-time video requirements and can compare and identify pedestrians in real time, first judging that consecutive frames show the same person and then identifying who that person is in the video clip. As shown in FIG. 1, the method comprises the following specific steps:
1) inputting a picture to a pedestrian detection network;
2) the pedestrian detection network extracts the fragments P_i^a of all pedestrians in the input picture, where a represents the a-th input frame and i represents the i-th pedestrian in the a-th frame;
3) building containers C_x, x = 1,2,3…, for storing the fragments P_i^a of all pedestrians, where x represents the x-th container; a container C_x is created for each detected pedestrian. Each container C_x, x = 1,2,3…, stores the pedestrian attribute features of the corresponding pedestrian, including the match count (cou in the Container C table of FIG. 1), the non-match count miss (mis), the depth feature (fea), the orientation of the pedestrian (ori), and the pedestrian ID tag label (lab); see the illustrative container sketch after these steps;
4) sending the detected pedestrian fragments P_i^a to a pedestrian feature extraction network for feature extraction and orientation extraction;
5) placing the extracted features and orientations into the corresponding containers C_x, x = 1,2,3…, then matching the extracted features against the features already in the containers C_x; if a match is found, the features and the count in the corresponding container are updated.
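For illustration only, the container of step 3) might be represented by the following minimal Python sketch; the class and field names simply mirror the attributes listed above (cou, mis, fea, ori, lab) and are assumptions of this illustration, not part of the original disclosure:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Container:
    """One container C_x holding the attribute features of one tracked pedestrian."""
    feature: np.ndarray   # depth feature f in R^D ("fea" in FIG. 1)
    orientation: int      # 0 = back, 1 = front, 2 = side ("ori")
    label: int = -1       # pedestrian ID tag; -1 until assigned ("lab")
    count: int = 0        # number of frames this container was matched ("cou")
    miss: int = 0         # number of frames this container was not matched ("mis")

    def update_match(self, feature: np.ndarray, orientation: int) -> None:
        # On a match, refresh the stored feature/orientation and increase the count.
        self.feature, self.orientation = feature, orientation
        self.count += 1

    def update_miss(self) -> None:
        # On a miss, only the miss attribute is updated, as described in step 5).
        self.miss += 1
```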
Example 2:
The present embodiment is further optimized on the basis of the above embodiment; parts identical to the foregoing technical solution are not repeated here. To better implement the present invention, the following arrangement is particularly adopted: when matching is carried out in step 5), for any container C_x, x = 1,2,3…, that is not matched, only the pedestrian attribute miss of that container is updated.
Example 3:
The present embodiment is further optimized on the basis of any of the above embodiments; parts identical to the foregoing technical solutions are not repeated here. To better implement the present invention, the following arrangement is particularly adopted: if the pedestrian attribute miss of a container is updated 2 times within 5 input frames, this container C_x, x = 1,2,3…, is deleted.
Example 4:
The present embodiment is further optimized on the basis of any of the above embodiments; parts identical to the foregoing technical solutions are not repeated here. To better implement the present invention, the following arrangement is particularly adopted: if a certain container C_x, x = 1,2,3…, is matched in 4 out of 5 consecutive input frames, this container C_x is taken out, and the features in container C_x are matched against the features f_n^t of the corresponding orientation in the three-dimensional feature table; if the matching is successful, the feature under the corresponding ID and orientation in the three-dimensional feature table is updated, where n represents the n-th ID and t represents the orientation.
Example 5:
The present embodiment is further optimized on the basis of any of the above embodiments; parts identical to the foregoing technical solutions are not repeated here. To better implement the present invention, the following arrangement is particularly adopted: the three-dimensional feature table is used for storing the features of each ID in different orientations; each feature is defined as f ∈ R^D, where D represents the dimension of the feature, and the orientation as t ∈ R^1, where t = 0 represents the back orientation, t = 1 the front orientation, and t = 2 the side orientation.
Example 6:
The present embodiment is further optimized on the basis of any of the above embodiments; parts identical to the foregoing technical solutions are not repeated here. To better implement the present invention, the following arrangement is particularly adopted: each ID in the three-dimensional feature table is represented by V ∈ R^{D×T}, where T = 3 represents the number of orientations.
Example 7:
In this embodiment, a three-dimensional feature storage table (3D Pooled Table) is designed, shown as Table in FIG. 1, for storing the features of each ID in different orientations. Each feature is defined as f ∈ R^D, where D represents the dimension of the feature, and the orientation as t ∈ R^1, where t = 0 represents the back orientation, t = 1 the front orientation, and t = 2 the side orientation. Each ID in the Table is represented by V ∈ R^{D×T}, where T = 3 represents the number of orientations. Meanwhile, a comparison strategy realizing multi-frame confirmation through container storage is also designed: the features and orientations of different IDs are stored in different containers, and each container also records its feature-matching status. If a container matches in 4 out of 5 consecutive frames, its status is changed to confirmed, and the features in the container are compared with the features of pedestrians of the same orientation in the feature table. An illustrative sketch of such a table follows.
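A minimal Python sketch of such a 3D Pooled Table might look as follows; the class name, the assumed feature dimension D = 512, and the replace-on-update rule are illustrative assumptions of this sketch, not details taken from the disclosure:

```python
import numpy as np

D, T = 512, 3  # D is an assumed feature dimension; T = 3 orientations (0 back, 1 front, 2 side)

class PooledTable3D:
    """Keeps one feature column per (ID, orientation): V[n] is a D x T matrix."""

    def __init__(self) -> None:
        self.table: dict[int, np.ndarray] = {}  # ID n -> V in R^{D x T}

    def new_id(self, feature: np.ndarray, orientation: int) -> int:
        # Initialize a new ID, storing the feature under its orientation column.
        n = len(self.table)
        self.table[n] = np.zeros((D, T))
        self.table[n][:, orientation] = feature
        return n

    def update(self, n: int, orientation: int, feature: np.ndarray) -> None:
        # Replace the feature f_n^t stored under ID n and orientation t.
        self.table[n][:, orientation] = feature
```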
As shown in the feature storage and comparison process of FIG. 1, (a) pedestrian features with the same ID but different orientations are stored in (b) the 3D Pooled Table; (c) the Container stores the latest state of the container after each frame is processed, and when the confirmation condition is satisfied it replaces the feature under the corresponding ID and orientation in the 3D Pooled Table. Here ID refers to the identity of the pedestrian, and Orientation refers to the orientation or direction of the pedestrian; the 3D Pooled Table is the three-dimensional feature storage table; Container means a container.
With reference to FIG. 1, the method performs pedestrian comparison identification as follows:
When the 1st frame comes in, the fragments P_i^1 of all pedestrians in the frame are first cropped out by the pedestrian detection network, where i represents the i-th pedestrian in the 1st frame. A container C_x, x = 1,2,3…, is created for each detected pedestrian, where x represents the x-th container. Each container C_x, x = 1,2,3…, stores the pedestrian attribute features of the corresponding pedestrian, including the match count (cou in the Container C table of FIG. 1), the non-match count miss (mis), the depth feature (fea), the orientation of the pedestrian (ori), and the pedestrian ID tag label (lab). The detected pedestrian fragments are then sent to the pedestrian feature extraction network for feature extraction and orientation extraction, and the extracted features and orientations are placed into the corresponding containers C_x, x = 1,2,3….
When the a-th frame comes in, the fragments P_i^a of all pedestrians in the frame are first cropped out by the pedestrian detection network, where a represents the a-th input frame and i represents the i-th pedestrian in the a-th frame. The detected pedestrian fragments are then sent to the pedestrian feature extraction network to extract the features f_i^a and orientations t_i^a. The extracted features f_i^a are matched against the features already in the containers C_x, x = 1,2,3…; if a match is found, the corresponding container C_x is updated. For a container C_x that is not matched, only its pedestrian attribute miss is updated, and if the miss of a container is updated 2 times within 5 frames, that container C_x is deleted. If a certain container C_x matches in 4 out of 5 consecutive frames, the container C_x is taken out, and the features in container C_x are matched against the features f_n^t of the corresponding orientation in the table, where n represents the n-th ID and t represents the orientation; if the matching is successful, the feature under the corresponding ID and orientation in the table is updated. A sketch of this per-frame logic, under stated assumptions, is given below.
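The following Python sketch summarizes the per-frame logic just described; it reuses the cosine_similarity, Container and PooledTable3D sketches above, the detection and feature-extraction networks are represented by stub functions, and the matching threshold of 0.6 is an assumed value not taken from the disclosure:

```python
def detect_pedestrians(frame):
    """Stand-in for the pedestrian detection network (returns cropped fragments P_i^a)."""
    raise NotImplementedError("plug in a real detector here")

def extract_feature_and_orientation(patch):
    """Stand-in for the feature extraction network (returns a (feature, orientation) pair)."""
    raise NotImplementedError("plug in a real feature extractor here")

def match_table(table: PooledTable3D, feature, orientation, thresh):
    # Return the ID whose same-orientation feature f_n^t is most similar, if above threshold.
    best_n, best_sim = None, thresh
    for n, V in table.table.items():
        sim = cosine_similarity(feature, V[:, orientation])
        if sim > best_sim:
            best_n, best_sim = n, sim
    return best_n

def process_frame(frame, containers: list, table: PooledTable3D, thresh: float = 0.6):
    """One iteration of the comparison pipeline (illustrative sketch only)."""
    matched_ids = set()
    for patch in detect_pedestrians(frame):               # step 2): fragments P_i^a
        f, t = extract_feature_and_orientation(patch)     # step 4): features and orientation
        best = max(containers, key=lambda c: cosine_similarity(f, c.feature), default=None)
        if best is not None and cosine_similarity(f, best.feature) > thresh:
            best.update_match(f, t)                       # step 5): update feature and count
            matched_ids.add(id(best))
        else:
            containers.append(Container(feature=f, orientation=t))  # step 3): new container

    for c in containers[:]:
        if id(c) not in matched_ids:
            c.update_miss()                               # unmatched: only miss is updated
        if c.miss >= 2:                                   # miss updated twice: delete container
            containers.remove(c)
        elif c.count >= 4:                                # matched 4 of 5 frames: confirmed
            containers.remove(c)
            n = match_table(table, c.feature, c.orientation, thresh)
            if n is not None:
                table.update(n, c.orientation, c.feature)         # update f_n^t
            else:
                c.label = table.new_id(c.feature, c.orientation)  # initialize a new ID
```

Note that the 5-frame sliding-window bookkeeping for count and miss is simplified to plain counters here; a faithful implementation would reset both counters over each 5-frame window.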
Experimental verification:
Experimental verification was performed by studying the effect of resolution and of the number of IDs in the 3D storage table (three-dimensional feature table) on the ID identification rate and FPS.
The resolution affects the accuracy and speed of detection, while the number of IDs in the 3D storage table affects the accuracy and speed of comparison; therefore IIR and FPS were tested for ID counts of 100, 200, 300, 400 and 500 at resolutions of 1920 x 1080, 1280 x 720 and 720 x 480, respectively. The results are shown in Table 1.
Table 1: identity comparison accuracy and speed as a function of video resolution and number of stored IDs
[Table 1 is provided as an image in the original publication and is not reproduced here; as stated above, at 1920 x 1080 with 500 stored IDs the IIR reaches 93.6% and the FPS reaches 26.]
The evaluation indexes are as follows: FPS is adopted to evaluate the detection speed of the whole process, and the ID Identification Rate (IIR) is adopted to evaluate the accuracy: IIR = number of correct identifications / total number of comparisons, where the number of correct identifications is the number of correct matches in the identity comparison process and the total number is the total number of comparisons performed.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications and equivalent variations of the above embodiments according to the technical spirit of the present invention are included in the scope of the present invention.

Claims (8)

1. A pedestrian comparison identification method in the field of pedestrian re-identification, characterized by comprising the following steps:
1) inputting a picture to a pedestrian detection network;
2) the pedestrian detection network extracts the fragments P_i^a of all pedestrians in the input picture, where a represents the a-th input frame and i represents the i-th pedestrian in the a-th frame;
3) building containers C_x, x = 1,2,3…, for storing the fragments P_i^a of all pedestrians, where x represents the x-th container;
4) sending the detected pedestrian fragments P_i^a to a pedestrian feature extraction network for feature extraction and orientation extraction;
5) placing the extracted features and orientations into the corresponding containers C_x, x = 1,2,3…, then matching the extracted features against the features already in the containers C_x, x = 1,2,3…; if a match is found, the features and the count in the corresponding container are updated.
2. The method according to claim 1, characterized in that: when matching is carried out in step 5), for any container C_x, x = 1,2,3…, that is not matched, only the pedestrian attribute miss of that container is updated.
3. The method according to claim 2, characterized in that: if the pedestrian attribute miss of a container is updated 2 times within 5 input frames, this container C_x, x = 1,2,3…, is deleted.
4. The method for comparing and identifying the pedestrian in the pedestrian re-identification field according to any one of claims 1 to 3, characterized in that: if a certain container C_x, x = 1,2,3…, is matched in 4 out of 5 consecutive input frames, this container C_x is taken out, and the features in container C_x are matched against the features f_n^t of the corresponding orientation in the three-dimensional feature table; if the matching is successful, the feature under the corresponding ID and orientation in the three-dimensional feature table is updated, where n represents the n-th ID and t represents the orientation.
5. The method according to claim 4, characterized in that: the three-dimensional feature table is used for storing the features of each ID in different orientations; each feature is defined as f ∈ R^D, where D represents the dimension of the feature, and the orientation as t ∈ R^1, where t = 0 represents the back orientation, t = 1 the front orientation, and t = 2 the side orientation.
6. The method according to claim 4, characterized in that: each ID in the three-dimensional feature table is represented by V ∈ R^{D×T}, where T = 3 represents the number of orientations.
7. The method according to claim 4, characterized in that: the container C_x, x = 1,2,3…, stores the pedestrian attribute features, including the match count (count), the non-match count (miss), the depth feature, the orientation of the pedestrian, and the pedestrian ID tag (label).
8. The method for comparing and identifying the pedestrian in the pedestrian re-identification field according to any one of claims 1 to 3, 5 and 6, characterized in that: the container C_x, x = 1,2,3…, stores the pedestrian attribute features, including the match count (count), the non-match count (miss), the depth feature, the orientation of the pedestrian, and the pedestrian ID tag (label).
CN202110391669.8A 2021-04-13 2021-04-13 Pedestrian comparison identification method in pedestrian re-identification field Withdrawn CN113095222A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110391669.8A CN113095222A (en) 2021-04-13 2021-04-13 Pedestrian comparison identification method in pedestrian re-identification field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110391669.8A CN113095222A (en) 2021-04-13 2021-04-13 Pedestrian comparison identification method in pedestrian re-identification field

Publications (1)

Publication Number Publication Date
CN113095222A true CN113095222A (en) 2021-07-09

Family

ID=76676371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110391669.8A Withdrawn CN113095222A (en) 2021-04-13 2021-04-13 Pedestrian comparison identification method in pedestrian re-identification field

Country Status (1)

Country Link
CN (1) CN113095222A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344787A * 2018-10-15 2019-02-15 浙江工业大学 A specific-target tracking method based on face recognition and pedestrian re-identification
CN111291633A (en) * 2020-01-17 2020-06-16 复旦大学 Real-time pedestrian re-identification method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344787A * 2018-10-15 2019-02-15 浙江工业大学 A specific-target tracking method based on face recognition and pedestrian re-identification
CN111291633A (en) * 2020-01-17 2020-06-16 复旦大学 Real-time pedestrian re-identification method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YE LI et al.: "A Multi-task Joint Framework for Real-time Person Search", online publication: https://arxiv.org/abs/2012.06418v1 *

Similar Documents

Publication Publication Date Title
US11393103B2 (en) Target tracking method, device, system and non-transitory computer readable medium
CN110288627B (en) Online multi-target tracking method based on deep learning and data association
WO2017000466A1 (en) Method and system for tracking moving target based on optical flow method
Zhang et al. Keyframe detection for appearance-based visual SLAM
CN111582126B (en) Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion
CN114155284A (en) Pedestrian tracking method, device, equipment and medium based on multi-target pedestrian scene
Lee et al. Place recognition using straight lines for vision-based SLAM
CN112232241A (en) Pedestrian re-identification method and device, electronic equipment and readable storage medium
CN108364302B (en) Unmarked augmented reality multi-target registration tracking method
CN110796074A (en) Pedestrian re-identification method based on space-time data fusion
Tian et al. Scene Text Detection in Video by Learning Locally and Globally.
CN112419497A (en) Monocular vision-based SLAM method combining feature method and direct method
CN111696133B (en) Real-time target tracking method and system
CN112381132A (en) Target object tracking method and system based on fusion of multiple cameras
CN111460884A (en) Multi-face recognition method based on human body tracking
CN114255434A (en) Multi-target tracking method and device
CN112926379A (en) Method and device for constructing face recognition model
CN111932582A (en) Target tracking method and device in video image
CN111167731B (en) Product sorting method, product sorting system and intelligent sorting robot
CN116946610B (en) Method and device for picking up goods in intelligent warehousing system
CN113888603A (en) Loop detection and visual SLAM method based on optical flow tracking and feature matching
CN116364064B (en) Audio splicing method, electronic equipment and storage medium
CN116580333A (en) Grain depot vehicle tracking method based on YOLOv5 and improved StrongSORT
CN113095222A (en) Pedestrian comparison identification method in pedestrian re-identification field
CN109241342B (en) Video scene retrieval method and system based on depth clues

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20210709