CN113689475A - Cross-border head trajectory tracking method, equipment and storage medium - Google Patents

Cross-border head trajectory tracking method, equipment and storage medium Download PDF

Info

Publication number
CN113689475A
CN113689475A CN202110995893.8A CN202110995893A CN113689475A CN 113689475 A CN113689475 A CN 113689475A CN 202110995893 A CN202110995893 A CN 202110995893A CN 113689475 A CN113689475 A CN 113689475A
Authority
CN
China
Prior art keywords
track
pedestrian
trajectory
target
cross
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110995893.8A
Other languages
Chinese (zh)
Inventor
吴勇
贺强
赵国治
钱学成
李金龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Merchants Bank Co Ltd
Original Assignee
China Merchants Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Merchants Bank Co Ltd filed Critical China Merchants Bank Co Ltd
Priority to CN202110995893.8A priority Critical patent/CN113689475A/en
Publication of CN113689475A publication Critical patent/CN113689475A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a cross-border head trajectory tracking method, equipment and a storage medium, wherein the cross-border head trajectory tracking method comprises the following steps: the method comprises the steps of carrying out target detection on each picture frame shot by a plurality of cameras to obtain target detection results corresponding to each target pedestrian, tracking and identifying each target detection result in each camera respectively to generate track segments of preset picture frame numbers of each target pedestrian in different cameras, re-identifying each track segment based on a preset target feature extraction model to obtain feature information corresponding to each track segment, merging each track segment in each camera and each track segment among different cameras based on the feature information and world coordinates corresponding to each track segment to obtain a merged track corresponding to each target pedestrian, and matching each merged track with each pedestrian track in a pedestrian track library to obtain a global track corresponding to each target pedestrian. The method and the device solve the technical problems that track disorder and track loss easily occur in cross-border tracking.

Description

Cross-border head trajectory tracking method, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a cross-border head trajectory tracking method, device, and storage medium.
Background
With the development of the internet and the popularization of video monitoring technology, target tracking applied in the field of video monitoring gradually becomes one of the hot spots in the field of computer vision research. Tracking the moving track of the target object generally requires acquiring an image of a monitoring area of a camera, performing target detection on the image to identify a target, and tracking the identified target object so as to obtain a complete track of the target object. In some scenes with larger orientations, due to the limited field of view of a single camera, the cooperation of multiple cameras is required to perform global coverage of the monitored area. However, as the number of cameras increases, when multiple cameras analyze images for cross-camera target tracking to determine global pedestrian trajectories, phenomena such as trajectory confusion and trajectory loss are likely to occur.
Disclosure of Invention
The present application mainly aims to provide a cross-border head trajectory tracking method, device and storage medium, and aims to solve the technical problems in the prior art that a trajectory is easily disordered and lost during cross-border head trajectory tracking.
In order to achieve the above object, the present application provides a cross-border head trajectory tracking method, including:
acquiring picture frames of a plurality of camera shooting areas in a store;
performing target detection on each picture frame to obtain target detection results corresponding to each target pedestrian, wherein the target detection results comprise pedestrian detection results;
tracking and identifying the detection result of each target in each camera respectively to generate track segments of the number of preset frame numbers of each target pedestrian in different cameras;
re-identifying each track segment based on a preset target feature extraction model to obtain feature information corresponding to each track segment;
merging each track segment in each camera and each track segment among different cameras based on the characteristic information corresponding to each track segment and the world coordinates corresponding to each track segment to obtain a merged track corresponding to each target pedestrian;
and respectively matching each combined track with each pedestrian track in a pedestrian track library based on the combined track corresponding to each target pedestrian to obtain the global track corresponding to each target pedestrian.
The present application further provides a cross-border head trajectory tracking device, cross-border head trajectory tracking device is a virtual device, cross-border head trajectory tracking device includes:
the acquisition module is used for acquiring the picture frames of a plurality of camera shooting areas in a store;
the target detection module is used for carrying out target detection on each picture frame to obtain a target detection result corresponding to each target pedestrian, wherein the target detection result comprises a pedestrian detection result;
the tracking module is used for respectively tracking and identifying each target detection result in each camera to generate track segments of preset frame numbers of each target pedestrian in different cameras;
the re-recognition module is used for re-recognizing each track segment based on a preset target feature extraction model to obtain feature information corresponding to each track segment;
the track merging module is used for merging each track segment in each camera and each track segment among different cameras based on the characteristic information corresponding to each track segment and the world coordinate corresponding to each track segment to obtain a merged track corresponding to each target pedestrian;
and the matching module is used for respectively matching each merging track with each pedestrian track in the pedestrian track library based on the merging track corresponding to each target pedestrian to obtain the global track corresponding to each target pedestrian.
The present application further provides a cross-border head trajectory tracking device, the cross-border head trajectory tracking device is an entity device, the cross-border head trajectory tracking device includes: the system comprises a memory, a processor and a cross-border head track tracking program stored on the memory, wherein the cross-border head track tracking program is executed by the processor to realize the steps of the cross-border head track tracking method.
The present application further provides a storage medium, which is a readable storage medium, on which a cross-border head trajectory tracking program is stored, where the cross-border head trajectory tracking program is executed by a processor to implement the steps of the cross-border head trajectory tracking method as described above.
The present application also provides a computer program product comprising a computer program which, when executed by a processor, performs the steps of the cross-border trajectory tracking method as described above.
The application provides a cross-border head track tracking method, equipment and a storage medium, the method comprises the steps of firstly obtaining picture frames of a plurality of camera shooting areas in a store, further carrying out target detection on the picture frames to obtain target detection results corresponding to target pedestrians, further carrying out tracking identification on the target detection results in the cameras respectively to generate track segments of preset picture frame numbers corresponding to the target pedestrians in different cameras, further carrying out re-identification on the track segments based on a preset target feature extraction model to obtain feature information corresponding to the track segments, further combining the track segments in the cameras and the track segments among the different cameras based on the feature information corresponding to the track segments and world coordinates corresponding to the track segments, the method obtains the combined track corresponding to each target pedestrian, realizes track splicing based on characteristic information, can take multi-dimensional characteristics of pedestrians into account, thereby completely and accurately determining the global track, further, respectively matches each combined track with each pedestrian track in a pedestrian track library based on the combined track corresponding to each target pedestrian, obtains the global track corresponding to each target pedestrian, realizes the purpose of considering the multi-dimensional characteristics of pedestrians and the global property of different pedestrian tracks on a time-space domain based on the characteristic information and world coordinates, further ensures the accuracy and reliability of track splicing, overcomes the technical defects that the phenomena of track disorder, track loss and the like easily occur when a plurality of cameras analyze images to perform cross-border head target tracking to determine the global pedestrian track in the prior art along with the increase of the number of the cameras, thereby improving the accuracy of cross-border target tracking.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious to those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic flowchart illustrating a first embodiment of a cross-border trajectory tracking method according to the present application;
FIG. 2 is a schematic flowchart illustrating a cross-border trajectory tracking method according to a second embodiment of the present application;
FIG. 3 is a schematic flowchart illustrating a cross-border trajectory tracking method according to a third embodiment of the present application;
FIG. 4 is a schematic flowchart illustrating a fourth embodiment of the cross-border trajectory tracking method according to the present application;
fig. 5 is a schematic structural diagram of a cross-border trajectory tracking device of a hardware operating environment related to the cross-border trajectory tracking method in the embodiment of the present application.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In a first embodiment of the cross-border trajectory tracking method according to the present application, referring to fig. 1, the cross-border trajectory tracking method includes:
step S10, acquiring picture frames of a plurality of camera shooting areas in a store;
in this embodiment, it should be noted that the cross-border trajectory tracking method may be applied to a market, a school, and other scenes, and preferably, it is assumed that the cross-border trajectory tracking method is applied to a market, and the plurality of cameras are corresponding cameras deployed inside a store.
Step S20, carrying out target detection on each picture frame to obtain target detection results corresponding to each target pedestrian, wherein the target detection results comprise pedestrian detection results;
in this embodiment, the pedestrian detection result is position information of the target pedestrian in the image frame, and the position information includes an upper left corner coordinate and a lower right corner coordinate of the detection frame corresponding to the target pedestrian in the image frame, for example, the upper left corner coordinate and the lower right corner coordinate may be expressed as: (xmin, ymin, xmax, ymax).
Performing target detection on each picture frame to obtain a target detection result corresponding to each target pedestrian, wherein the target detection result includes a pedestrian detection result, and specifically, preprocessing each picture frame corresponding to each camera to enable the size corresponding to each picture frame to reach the input size required by the model, and then inputting each picture frame into a preset pedestrian detection model, wherein the preset pedestrian detection model includes a YoloV4 model and a deep neural network model corresponding to other pedestrian detections, so as to obtain pedestrian detection results corresponding to each picture frame, and in addition, when the preset face detection model detects that a face exists in the picture frame, a face detection result corresponding to the preset face detection model is obtained.
Step S30, tracking and identifying each target detection result in each camera respectively, and generating track segments of each target pedestrian in preset frame numbers corresponding to different cameras;
in this embodiment, it should be noted that the track segment includes the time when the target pedestrian appears, the position information corresponding to each frame of the target pedestrian, and the number corresponding to the camera.
And respectively tracking and identifying the target detection results in the cameras to generate track segments of preset frame numbers of the target pedestrians corresponding to different cameras, specifically, inputting the pedestrian detection results into a preset pedestrian tracking model to obtain pedestrian tracking information corresponding to the target pedestrians.
The step of respectively tracking and identifying each target detection result in each camera and generating track segments of each target pedestrian in preset frame numbers corresponding to different cameras comprises the following steps:
step S31, tracking the pedestrian detection result in each camera based on a preset pedestrian tracking model, and obtaining pedestrian tracking information corresponding to each target pedestrian;
in this embodiment, it should be noted that the pedestrian tracking information includes position information of the target pedestrian in the frame and a unique identification corresponding to the target pedestrian in the camera.
The method includes tracking pedestrian detection results in each camera based on a preset pedestrian tracking model to obtain pedestrian tracking information corresponding to each target pedestrian, and specifically inputting the pedestrian detection results in each camera into the preset pedestrian tracking model to track and identify each target pedestrian to obtain the pedestrian tracking information corresponding to each target pedestrian.
Step S32, based on the pedestrian tracking information, generating track segments of preset frame numbers corresponding to different cameras for each target pedestrian.
In this embodiment, based on the pedestrian tracking information, a track segment of a preset frame number corresponding to each target pedestrian at different cameras is generated, and specifically, based on the pedestrian tracking information corresponding to the picture frame in each camera, if the number of the picture frames reaches the preset frame number, a track segment of the preset frame number corresponding to each target pedestrian at different cameras is generated.
Step S40, re-identifying each track segment based on a preset target feature extraction model, and obtaining feature information corresponding to each track segment;
in this embodiment, it should be noted that the feature information includes pedestrian feature information corresponding to a pedestrian detection result, and when a face detection result exists in the track segment, the feature information also includes face feature information corresponding to the face detection result.
Specifically, firstly, pedestrian feature data corresponding to each camera is collected and marked in advance, iterative training is carried out on the feature extraction model to be trained based on the pedestrian feature data, the preset target feature extraction model is obtained, and then re-recognition is carried out on each track segment based on the preset target feature extraction model, so that feature information corresponding to each track segment is obtained.
Step S50, merging the track segments in the cameras and the track segments among different cameras based on the characteristic information corresponding to the track segments and the world coordinates corresponding to the track segments to obtain merged tracks corresponding to the target pedestrians;
in this embodiment, it should be noted that the merging includes merging track segments within the same camera and merging track segments between different cameras, and the world coordinates are mapped to world coordinates by pixel two-dimensional coordinates.
Merging each track segment in each camera and each track segment between different cameras based on the characteristic information corresponding to each track segment and the world coordinate corresponding to each track segment to obtain a merged track corresponding to each target pedestrian, specifically, mapping the pixel coordinates of the cameras to real world coordinates through perspective transformation, respectively calculating the similarity of the same lens track between each track segment in each camera according to the characteristic information corresponding to each track segment in each camera in each picture frame, namely, calculating the similarity between different track segments in the same camera, and further, calculating the similarity of different lens tracks corresponding to each track segment between different cameras based on the characteristic information and the world coordinate corresponding to each track segment in each picture frame in different cameras, that is, calculating the similarity between the trajectory segments of different cameras, further, after obtaining the similarity between the same shot trajectory and the similarity between different shot trajectories, performing an optimal solution on the similarity between the same shot trajectory and the similarity between different shot trajectories, so as to combine the trajectory segments of the cameras and the trajectory segments of the different cameras, thereby obtaining a combined trajectory corresponding to each target pedestrian, and thus implementing the identification tracking between multi-target cross-border cameras.
And step S60, respectively matching each merged track with each pedestrian track in the pedestrian track library based on the merged track corresponding to each target pedestrian to obtain a global track corresponding to each target pedestrian.
In this embodiment, based on the merged trajectory corresponding to each target pedestrian, the merged trajectory is respectively matched with the trajectories of the pedestrians in the pedestrian trajectory library to obtain the global trajectory corresponding to each target pedestrian, and specifically, the following steps are performed for each merged trajectory:
each track segment has a corresponding camera number, the track segments corresponding to the camera numbers are respectively extracted from the combined track, to calculate the Hungarian cost value between each of the trajectory fragments and each of the pedestrian trajectories in the pedestrian trajectory library, and then matching each track segment with each pedestrian track according to a Hungarian matching algorithm one by one, selecting the pedestrian track with the largest matching number as the pedestrian track to be associated, and if the pedestrian tracks with the same matching number exist, selecting the pedestrian track with the minimum Hungarian cost value as the pedestrian track to be associated according to the Hungarian cost values respectively corresponding to the pedestrian tracks with the same matching number, and then, carrying out association and combination on the pedestrian trajectory to be associated and the combined trajectory to obtain a global trajectory corresponding to the target pedestrian, wherein the Hungarian cost value solving mode is as follows:
Figure BDA0003233809230000071
wherein, Cost (Γ)ij) Representing a track segment ΓiAnd a pedestrian trajectory gammaj' cost value, Γj' represents the pedestrian track corresponding to the entrance-exit store pedestrian track library,
Figure BDA0003233809230000072
representing a track segment ΓiIn the feature information of the a-th frame,
Figure BDA0003233809230000073
representing the pedestrian trajectory gammaj'In the feature information of the b-th frame,
Figure BDA0003233809230000074
representing a track segment ΓiAt the location of the world coordinates of the a-th frame,
Figure BDA0003233809230000075
representing the pedestrian trajectory gammaj'At the location of the world coordinates of the b-th frame,
Figure BDA0003233809230000076
representing a track segment ΓiAt the time of the occurrence of the a-th frame,
Figure BDA0003233809230000077
representing the pedestrian trajectory gammaj'At the b-th frame occurrence time, | | | | | non-calculation2Express a binary equationNumber, M1Representing a track segment ΓiNumber of frames of (M)2Representing the pedestrian trajectory gammaj'The number of frames of, inf represents a very large value, τ3The preset speed threshold of the target pedestrian is expressed, it should be further noted that the movement speed of the pedestrian is limited, that is, the pedestrians exceeding a certain distance do not belong to the same person under a certain time condition, and therefore, when the ratio of the position of the corresponding world coordinate in each picture frame based on each trajectory segment and the pedestrian trajectory to the occurrence time in each picture frame exceeds the preset speed threshold, the hungarian cost value is an extremely large value, that is, it can be determined that the target pedestrian corresponding to the pedestrian trajectory and the target pedestrian corresponding to the trajectory segment do not belong to the same target pedestrian.
Additionally, when matching between a certain track segment in the combined track and each pedestrian track in the pedestrian track library fails, whether the feature information in the track segment with the matching failure has face feature information is judged, and if yes, matching is carried out according to the face feature information and the face feature information corresponding to each pedestrian track.
The embodiment of the application provides a cross-border head track tracking method, which includes the steps of firstly obtaining picture frames of a plurality of camera shooting areas in a store, further carrying out target detection on the picture frames, obtaining target detection results corresponding to target pedestrians, further respectively carrying out tracking identification on the target detection results in the cameras, generating track segments of preset picture frame numbers corresponding to the target pedestrians in different cameras, further carrying out re-identification on the track segments based on a preset target feature extraction model, obtaining feature information corresponding to the track segments, further combining the track segments in the cameras and the track segments among different cameras based on the feature information corresponding to the track segments and world coordinates corresponding to the track segments, the method obtains the combined track corresponding to each target pedestrian, realizes track splicing based on characteristic information, can take multi-dimensional characteristics of pedestrians into account, thereby completely and accurately determining the global track, further, respectively matches each combined track with each pedestrian track in a pedestrian track library based on the combined track corresponding to each target pedestrian, obtains the global track corresponding to each target pedestrian, realizes the purpose of considering the multi-dimensional characteristics of pedestrians and the global property of different pedestrian tracks on a time-space domain based on the characteristic information and world coordinates, further ensures the accuracy and reliability of track splicing, overcomes the technical defects that the phenomena of track disorder, track loss and the like easily occur when a plurality of cameras analyze images to perform cross-border head target tracking to determine the global pedestrian track in the prior art along with the increase of the number of the cameras, thereby improving the accuracy of cross-border target tracking.
Further, referring to fig. 2, based on the first embodiment of the present application, in another embodiment of the present application, before the step of acquiring the picture frames of the shooting areas of the multiple cameras in the store, the cross-border trajectory tracking method includes:
step A10, acquiring an entrance video frame corresponding to an entrance camera shooting area;
in this embodiment, it should be noted that the entrance camera is a camera deployed in the entrance area of the store.
Step A20, based on a preset face detection model, carrying out face detection on the entry video frame to obtain an entry face detection result;
in this embodiment, based on a preset face detection model, face detection is performed on each entry video frame in the entry video to obtain an entry face detection result corresponding to each entry video frame, and specifically, each entry video frame is input into the preset face detection model to perform face detection on each target pedestrian in the entry video frame to obtain an entry face detection result corresponding to each entry video frame, where the face detection model includes a RetinaFace model, other face detection models, and the like.
Step A30, based on a preset pedestrian detection model, carrying out pedestrian detection on the entrance video frame to obtain an entrance pedestrian detection result;
in this embodiment, it should be noted that the entrance pedestrian detection result is coordinate position information of the target pedestrian in the entrance video frame.
And carrying out pedestrian detection on the inlet video frames based on a preset pedestrian detection model to obtain inlet pedestrian detection results, and specifically, carrying out pedestrian detection on each video frame through the preset pedestrian detection model to obtain each inlet pedestrian detection result.
Step A40, tracking an entrance pedestrian detection result corresponding to the entrance video frame to obtain entrance pedestrian tracking information;
in this embodiment, the entrance pedestrian detection result corresponding to the entrance video frame is tracked to obtain entrance pedestrian tracking information, and specifically, the entrance pedestrian detection result in the entrance camera is input into a preset pedestrian tracking model to track and identify each target pedestrian, so as to obtain entrance pedestrian tracking information corresponding to each target pedestrian.
Step A50, correlating the entrance pedestrian tracking information with the entrance face detection result to generate pedestrian tracks corresponding to the target pedestrians;
in this embodiment, the entrance pedestrian tracking information and the entrance face detection result are associated to generate a pedestrian track corresponding to each target pedestrian, specifically, because a face frame of a face detection result pair is located in a pedestrian frame corresponding to a pedestrian detection result, an intersection ratio corresponding to the face frame and the pedestrian frame is calculated, when the intersection ratio is greater than a preset intersection ratio threshold value, it is determined that the face frame and the pedestrian frame are the same target pedestrian, and the entrance pedestrian tracking information corresponding to the tracking of the pedestrian frame is associated with the face frame, so as to generate a pedestrian track corresponding to each target pedestrian.
In step a60, if the pedestrian trajectory corresponding to each target pedestrian is a store-entering trajectory, each pedestrian trajectory is stored in a pedestrian trajectory library.
In this embodiment, it should be noted that the pedestrian trajectory obtained by the portal camera shooting has two cases, namely, a target pedestrian entering the store and a target pedestrian leaving the store, and the pedestrian trajectory database is a pedestrian trajectory database storing the pedestrian trajectories corresponding to the target pedestrians in the store.
If the pedestrian trajectory corresponding to each target pedestrian is a store-entering trajectory, storing each pedestrian trajectory into a pedestrian trajectory library, specifically, executing the following steps for the pedestrian trajectory corresponding to each target pedestrian:
judging whether the pedestrian track corresponding to the target pedestrian is a store-entering track or a store-leaving track according to the starting time and the ending time of the pedestrian track, and if the pedestrian track is the store-entering track, storing the pedestrian track into a pedestrian track library.
The embodiment of the application provides a cross-border head track tracking method, namely, an entrance video frame corresponding to an entrance camera shooting area is obtained, then face detection is carried out on the entrance video frame based on a preset face detection model to obtain an entrance face detection result, meanwhile, pedestrian detection is carried out on the entrance video frame based on the preset pedestrian detection model to obtain an entrance pedestrian detection result, further, the entrance pedestrian detection result corresponding to the entrance video frame is tracked to obtain entrance pedestrian tracking information, the entrance pedestrian tracking information and the entrance face detection result are correlated to generate pedestrian tracks corresponding to target pedestrians, if the pedestrian tracks corresponding to the target pedestrians are store-in tracks, the pedestrian tracks are stored in a pedestrian track library, and pedestrian data shot according to the entrance camera is realized, and acquiring the pedestrian track corresponding to each target pedestrian, and judging whether the target pedestrian is a pedestrian entering the store according to the pedestrian track so as to update the pedestrian track corresponding to the pedestrian entering the store in real time to a current pedestrian track library.
Further, referring to fig. 3, in another embodiment of the present application, based on the first embodiment of the present application, the step of combining the track segments in the cameras and the track segments between different cameras based on the feature information corresponding to each track segment and the world coordinates corresponding to each track segment to obtain a combined track corresponding to each target pedestrian includes:
step B10, converting the pixel coordinates corresponding to each track segment into world coordinates;
in this embodiment, the pixel coordinates corresponding to each track segment are converted into world coordinates, specifically, the internal reference and the external reference of each camera are calibrated by a camera calibration tool, and the pixel coordinates of each camera are mapped to real world coordinates by perspective transformation.
Step B20, respectively calculating the similarity of the same shot track between each track segment in each camera based on the characteristic information corresponding to each track segment in each camera in each picture frame;
in this embodiment, it should be noted that the same-shot trajectory similarity is a corresponding similarity between trajectory segments under one camera.
Respectively calculating the similarity of the same shot track between the track segments in each camera based on the characteristic information corresponding to each track segment in each camera in each picture frame, specifically, executing the following steps for each track segment in each camera:
and respectively calculating the similarity of the feature information corresponding to each track segment in the camera pairwise to obtain the corresponding same-shot track similarity between each track segment in the camera.
The step of calculating the similarity of the same-shot trajectory between the trajectory segments in each camera based on the feature information corresponding to each trajectory segment in each camera in each picture frame includes:
step B21, respectively calculating euclidean distances between the trajectory segments in each camera based on the feature information corresponding to each trajectory segment in each camera in each picture frame, and obtaining the similarity of the same-lens trajectories.
In this embodiment, it should be noted that each of the cameras at least includes one trackA segment having a track composition corresponding to a predetermined frame number, for example, where each track segment is Γ ═ Γ12,...,ΓNWherein each said track segment comprises an M frame Γi={L1,L2,...,LMAnd (c) the step of (c) in which,
Figure BDA0003233809230000114
Figure BDA0003233809230000115
is expressed as gammaiThe track corresponding to the information of the j-th frame, ΓiThe method comprises the steps of representing the ith track, c representing a camera number, t representing the time of appearance of a target pedestrian, f representing feature information extracted by re-recognition of a pedestrian detection result, f' representing face feature information extracted when the target pedestrian has a face, and l representing the world coordinate of a world coordinate system mapped by the pixel coordinate corresponding to the pedestrian detection result through perspective transformation, and further, it should be noted that the same person cannot appear in picture frames corresponding to different track segments within the same time, so that when picture frames with equal time exist between different track segments in the camera, the similarity between the different track segments is 0.
Respectively calculating Euclidean distances between the track segments in each camera based on the characteristic information corresponding to each track segment in each camera in each picture frame to obtain the similarity of the same-lens track, and specifically, executing the following steps for each camera:
if no picture frame with equal time exists between different track segments, respectively calculating Euclidean distances between characteristic information in each picture frame corresponding to different track segments, so as to obtain the similarity of the same-lens track between different track segments, wherein a formula corresponding to the calculated Euclidean distances between the track segments is as follows:
Figure BDA0003233809230000111
wherein, S (gamma)ij) Representing the similarity of the same lens track under the same camera,
Figure BDA0003233809230000116
representing the trajectory ΓiAnd a track gammajThere are frames that are equal in time,
Figure BDA0003233809230000112
representing the trajectory ΓiIn the feature of the a-th frame,
Figure BDA0003233809230000113
representing the trajectory ΓjIn the b-th frame, | | | | | non-calculation2Denotes a two-norm, M1Representing the trajectory ΓiNumber of frames of (M)2Representing the trajectory ΓjThe number of frames.
Step B30, calculating different lens track similarities corresponding to the track segments among different cameras based on the characteristic information and the world coordinates respectively corresponding to the track segments in different cameras in each picture frame;
in this embodiment, it should be noted that the different lens trajectory similarities are trajectory similarities corresponding to trajectory segments in different cameras.
Calculating different lens track similarities corresponding to the track segments between different cameras based on the feature information and the world coordinate respectively corresponding to the track segments in different cameras in each picture frame, specifically, calculating the feature similarity corresponding to the track segments between different cameras based on the feature information respectively corresponding to the track segments in different cameras in each picture frame, calculating the position similarity corresponding to the track segments between different cameras based on the world coordinate respectively corresponding to the track segments in different cameras and a preset world coordinate difference value, and determining the different lens track similarities corresponding to the track segments between different cameras based on the feature similarities, the position similarities and a preset similarity weight.
The step of calculating the similarity of different lens tracks corresponding to each track segment between different cameras based on the feature information and the world coordinates respectively corresponding to each track segment in different cameras in each picture frame comprises:
step B31, calculating the feature similarity corresponding to each track segment among different cameras based on the feature information corresponding to each track segment in different cameras in each picture frame;
in this embodiment, based on feature information corresponding to each track segment in different cameras in each of the picture frames, a feature similarity corresponding to each track segment between different cameras is calculated, and specifically, the following steps are performed for track segments between any two cameras:
based on the feature information corresponding to each track segment in different cameras in each picture frame and the frame number corresponding to each track segment, respectively calculating the Euclidean distance between the feature information of each track segment in each picture frame, and obtaining the feature similarity corresponding to each track segment between different cameras.
Figure BDA0003233809230000121
Wherein the content of the first and second substances,
Figure BDA0003233809230000122
representing the trajectory ΓiIn the feature of the a-th frame,
Figure BDA0003233809230000123
representing the trajectory ΓjIn the b-th frame, | | | | | non-calculation2Denotes a two-norm, M1Representing the trajectory ΓiNumber of frames of (M)2Representing the trajectory ΓjThe number of frames.
Step B32, calculating the corresponding position similarity of each track segment among different cameras based on the world coordinate corresponding to each track segment in different cameras and a preset world coordinate difference value;
in this embodiment, it should be noted that the preset world coordinate difference is a difference allowed by the target pedestrian between the world coordinates corresponding to different track segments at the same time, and includes a minimum value and a maximum value allowed by the target pedestrian in the preset world coordinate difference.
Calculating the position similarity corresponding to each track segment between different cameras based on the world coordinate and the preset world coordinate difference value respectively corresponding to each track segment in different cameras, specifically, executing the following steps between any two cameras:
calculating the position distance of each track segment in different cameras between the world coordinates corresponding to each picture frame in the same time, obtaining the comparison relationship between the position distance and the minimum value and the maximum value allowed by the difference value of the preset world coordinates based on the position distance, and further obtaining the position similarity corresponding to each track segment between the different cameras, wherein the position similarity is calculated as follows:
Figure BDA0003233809230000131
Figure BDA0003233809230000132
wherein the content of the first and second substances,
Figure BDA0003233809230000133
representing a track segment ΓiAt the a-th frame and track segment ΓjThe penalty reward function at the b-th frame,
Figure BDA0003233809230000134
representing a track segment ΓiAt the location of the world coordinates of the a-th frame,
Figure BDA0003233809230000135
representing a track segment ΓjIn the position of the world coordinate of the frame b, | | | | | non-woven phosphor2Represents twoNorm, M1Representing the trajectory ΓiNumber of frames of (M)2Representing the trajectory ΓjNumber of frames,. tau1、τ2And the minimum value and the maximum value allowed by the target pedestrian in the preset world coordinate difference are shown.
And step B33, determining different lens track similarities corresponding to track segments among different cameras based on the feature similarities, the position similarities and the preset similarity weight.
In this embodiment, it should be noted that the preset similarity weight is a preset weight threshold.
Determining different lens trajectory similarities corresponding to the trajectory segments between the different cameras based on the feature similarities, the position similarities and preset similarity weights, and specifically calculating different lens trajectory similarities corresponding to the trajectory segments between the different cameras based on the feature similarities and the preset similarity weights corresponding to the position similarities, wherein a different lens trajectory similarity calculation formula is as follows:
S(Γij)=η·Sf+(1-η)·Sl
wherein S isfRepresenting feature similarity, SlRepresenting the position similarity, eta represents the preset similarity weight, and eta belongs to [0,1 ]]。
And step B40, merging the track segments in the cameras based on the similarity of the same lens and the similarity of the different lens tracks to obtain the merged track corresponding to the target pedestrians.
In this embodiment, based on each of the same-shot similarities and each of the different-shot-track similarities, each track segment in each of the cameras is merged to obtain a merged track corresponding to each of the target pedestrians, and specifically, each of the same-shot similarities and each of the different-shot-track similarities are solved to obtain a track similarity matrix, and then based on the track similarity matrix, each of the track segments is merged to obtain a merged track corresponding to each of the target pedestrians.
The step of combining the track segments in the cameras based on the similarity of the same lens and the similarity of the different lens tracks to obtain a combined track corresponding to each target pedestrian includes:
step B41, solving each same shot similarity and each different shot track similarity respectively to obtain a track similarity matrix;
in this embodiment, each of the same-shot similarities and each of the different-shot trajectory similarities are solved to obtain a trajectory similarity matrix, and specifically, each of the same-shot similarities and each of the different-shot similarities are combined and solved to obtain the trajectory similarity matrix.
Step B42, decomposing and optimizing the track similarity matrix to obtain a target optimization matrix;
in this embodiment, it should be noted that the processing modes corresponding to the decomposition optimization include singular value decomposition, other matrix decomposition, and the like.
Decomposing and optimizing the track similarity matrix to obtain a target optimization matrix, specifically decomposing the track similarity matrix to obtain the target optimization matrix based on the number of tracks and the number of target pedestrians corresponding to the track similarity matrix, for example, if the number of tracks is N and the number of target pedestrians is K, a singular value corresponding to the track similarity matrix may be decomposed into an aN×K{0, 1} matrix of (a)N×KThe row indicates that there are N tracks, and for the ith row, the K column indicates the probability that the ith track belongs to K targets, and further, it should be noted that the specific steps of decomposition and optimization are as follows:
Figure BDA0003233809230000151
s.t A∈{0,1}N×K
AIK=IN
wherein the content of the first and second substances,
Figure BDA0003233809230000152
represents the optimized A matrix, A*Representing the optimal solution for the solution of the optimization matrix equation, A represents an N K {0, 1} matrix, N represents the total number of traces, K represents the number of targets, s.t represents constraints, AIK=INIndicating that the track is correct and belongs to only one object, IKAnd INRepresenting K x 1 and N x 1 matrices of all 1,Tand expressing the transposition of the matrix, and obtaining an optimization equation after simplification:
Figure BDA0003233809230000153
wherein A' represents an iterative matrix to be optimized, S is a track similarity matrix, T represents matrix transposition, and alpha belongs to [0,1 ]]A penalty factor is represented which is a function of,
Figure BDA0003233809230000154
sqrt (), +, respectively represent multiplication, evolution, division of the corresponding element of the matrix.
And step B43, merging the track segments based on the target optimization matrix to obtain a merged track corresponding to each target pedestrian.
In this embodiment, based on the target optimization matrix, the trajectory segments are merged to obtain a merged trajectory corresponding to each target pedestrian, and specifically, according to the target optimization matrix, the trajectory segments corresponding to each camera are respectively merged to obtain a merged trajectory corresponding to each target pedestrian.
The embodiment of the application provides a cross-border head trajectory tracking method, that is, pixel coordinates corresponding to each track segment are converted into world coordinates, and then similarity of the same lens trajectory among the track segments in each camera is calculated respectively based on characteristic information corresponding to each track segment in each image frame in each camera, so that the purpose of calculating the similarity of different track segments under the same camera according to the characteristic information to perform track splicing is achieved, and further, similarity of different lens trajectories corresponding to the track segments among different cameras is calculated based on the characteristic information and the world coordinates corresponding to each track segment in each image frame in different cameras, so that the cross-border head trajectory tracking method is further provided. Based on the similarity of the same lens and the similarity of the tracks of different lenses, the track segments in the cameras are combined to obtain the combined track corresponding to the target pedestrians, the positions corresponding to the characteristic information and the world coordinates are realized, the characteristics of the pedestrians and the global performance of the tracks of different pedestrians on a time-space domain are considered at the same time, and the integrity of the similarity is ensured, so that the track segments are spliced according to the similarity, the combined track corresponding to the target pedestrians can be accurately obtained, and the technical defects that the phenomena of track disorder, track loss and the like are easy to occur when a plurality of camera analysis images are subjected to cross-border target tracking to determine the global track of the pedestrians along with the increase of the number of the cameras in the prior art are overcome.
Further, referring to fig. 4, based on the first embodiment in the present application, in another embodiment of the present application, the step of respectively matching each merged trajectory with each pedestrian trajectory in the pedestrian trajectory library based on the merged trajectory corresponding to each target pedestrian to obtain a global trajectory corresponding to each target pedestrian includes:
step C10, respectively matching each track segment in each combined track with each pedestrian track in the pedestrian track library to obtain a track matching list corresponding to each combined track;
in this embodiment, it should be noted that each track segment in each merged track has a corresponding camera number.
Respectively matching each track segment in each merged track with each pedestrian track in the pedestrian track library to obtain a track matching list corresponding to each merged track, specifically, executing the following steps for each merged track:
and extracting the track segments corresponding to each camera from the combined tracks, further calculating the Hungarian cost value between the track segments corresponding to each camera and the tracks of the pedestrians in the pedestrian track library, further matching the track segments with the tracks of the pedestrians one by one, obtaining track matching lists corresponding to the combined tracks, and further obtaining track matching lists corresponding to the combined tracks.
Step C20, selecting pedestrian tracks to be associated respectively corresponding to the combined tracks from the pedestrian tracks in the track matching lists according to a preset selection mode;
in this embodiment, the pedestrian tracks to be associated respectively corresponding to each merging track are respectively selected from the pedestrian tracks in each track matching list according to a preset selection manner, and specifically, the following steps are performed for the track matching list corresponding to each merging track:
and selecting the pedestrian track with the largest matching success quantity as the pedestrian track to be associated from the track matching list corresponding to the combined track, and if the pedestrian tracks with the same matching success quantity exist, comparing the Hungarian cost values corresponding to the pedestrian tracks with the same quantity, and further selecting the pedestrian track with the smallest cost value as the pedestrian track to be associated.
And step C30, performing association matching on the pedestrian trajectory to be associated and each combined trajectory corresponding to each combined trajectory to obtain a global trajectory corresponding to each target pedestrian.
In this embodiment, the pedestrian trajectory to be associated and each merged trajectory corresponding to each merged trajectory are associated and matched to obtain a global trajectory corresponding to each target pedestrian, and specifically, the following steps are performed for each merged trajectory:
and performing association matching on the combined track and the to-be-associated pedestrian track corresponding to the combined track to obtain a global track corresponding to the target pedestrian, so that cross-lens multi-target identification tracking is realized.
The embodiment of the application provides a cross-border head trajectory tracking method, that is, matching each trajectory segment in each merged trajectory with each pedestrian trajectory in the pedestrian trajectory library respectively to obtain a trajectory matching list corresponding to each merged trajectory respectively, further selecting a pedestrian trajectory to be associated corresponding to each merged trajectory from each pedestrian trajectory in each trajectory matching list according to a preset selection mode, further performing association matching on the pedestrian trajectory to be associated corresponding to each merged trajectory and each merged trajectory to obtain a global trajectory corresponding to each target pedestrian, thereby realizing matching and association according to the merged trajectories corresponding to a plurality of cameras in a store and the pedestrian trajectory in the pedestrian trajectory library corresponding to an entrance camera, thereby obtaining the global trajectory corresponding to each target pedestrian, namely, the global tracks of all the target pedestrians entering and existing in the store are obtained, and cross-shot multi-target recognition and tracking are achieved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a cross-border trajectory tracking device of a hardware operating environment according to an embodiment of the present application.
As shown in fig. 5, the cross-border trajectory tracking device may include: a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002. The communication bus 1002 is used for realizing connection communication between the processor 1001 and the memory 1005. The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a memory device separate from the processor 1001 described above.
Optionally, the cross-head trajectory tracking device may further include a rectangular user interface, a network interface, a camera, RF (Radio Frequency) circuitry, a sensor, audio circuitry, a WiFi module, and so forth. The rectangular user interface may comprise a Display screen (Display), an input sub-module such as a Keyboard (Keyboard), and the optional rectangular user interface may also comprise a standard wired interface, a wireless interface. The network interface may optionally include a standard wired interface, a wireless interface (e.g., WIFI interface).
Those skilled in the art will appreciate that the cross-head trajectory tracking device configuration shown in FIG. 5 does not constitute a limitation of a cross-head trajectory tracking device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 5, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, and a cross-border trajectory tracking program. The operating system is a program that manages and controls the hardware and software resources of the cross-border tracking device, supporting the operation of the cross-border tracking program as well as other software and/or programs. The network communication module is used to implement communication between the components within the memory 1005 and with other hardware and software in the cross-border trajectory tracking system.
In the cross-border trajectory tracking device shown in fig. 5, the processor 1001 is configured to execute a cross-border trajectory tracking program stored in the memory 1005 to implement the steps of the cross-border trajectory tracking method described in any one of the above.
The specific implementation of the cross-border trajectory tracking device of the present application is substantially the same as the embodiments of the cross-border trajectory tracking method described above, and is not described herein again.
The present application further provides a cross-border head trajectory tracking device, the cross-border head trajectory tracking device includes:
the acquisition module is used for acquiring the picture frames of a plurality of camera shooting areas in a store;
the target detection module is used for carrying out target detection on each picture frame to obtain a target detection result corresponding to each target pedestrian, wherein the target detection result comprises a pedestrian detection result;
the tracking module is used for respectively tracking and identifying each target detection result in each camera to generate track segments of preset frame numbers of each target pedestrian in different cameras;
the re-recognition module is used for re-recognizing each track segment based on a preset target feature extraction model to obtain feature information corresponding to each track segment;
the track merging module is used for merging each track segment in each camera and each track segment among different cameras based on the characteristic information corresponding to each track segment and the world coordinate corresponding to each track segment to obtain a merged track corresponding to each target pedestrian;
and the matching module is used for respectively matching each merging track with each pedestrian track in the pedestrian track library based on the merging track corresponding to each target pedestrian to obtain the global track corresponding to each target pedestrian.
Optionally, the cross-border trajectory tracking device is further configured to:
acquiring an entrance video frame corresponding to a shooting area of an entrance camera;
based on a preset face detection model, carrying out face detection on each entry video frame in the entry video to obtain an entry face detection result corresponding to each entry video frame;
based on a preset pedestrian detection model, carrying out pedestrian detection on the entrance video frame to obtain an entrance pedestrian detection result;
tracking an entrance pedestrian detection result corresponding to the entrance video frame to obtain entrance pedestrian tracking information;
correlating the entrance pedestrian tracking information with the entrance face detection result to generate pedestrian tracks corresponding to all target pedestrians;
and if the pedestrian track corresponding to each target pedestrian is the store-entering track, storing each pedestrian track into a pedestrian track library.
Optionally, the tracking module is further configured to:
tracking the pedestrian detection result in each camera based on a preset pedestrian tracking model to obtain pedestrian tracking information corresponding to each target pedestrian;
and generating track segments of preset frame numbers of the target pedestrians corresponding to different cameras based on the pedestrian tracking information.
Optionally, the trajectory merging module is further configured to:
converting the pixel coordinates corresponding to each track segment into world coordinates;
respectively calculating the similarity of the same lens track among all track segments in each camera based on the characteristic information corresponding to all track segments in each camera in each picture frame;
calculating different lens track similarities corresponding to the track sections among different cameras based on the characteristic information and the world coordinates respectively corresponding to the track sections in the different cameras in each picture frame;
and combining the track segments in the cameras based on the similarity of the same lens and the similarity of the different lens tracks to obtain a combined track corresponding to each target pedestrian.
Optionally, the trajectory merging module is further configured to:
and respectively calculating Euclidean distances between the track segments in each camera based on the characteristic information corresponding to each track segment in each camera in each picture frame, so as to obtain the similarity of the same-lens track.
Optionally, the trajectory merging module is further configured to:
calculating the feature similarity corresponding to each track segment among different cameras based on the feature information corresponding to each track segment in each picture frame in different cameras;
calculating the position similarity corresponding to each track segment among different cameras based on the difference value of the world coordinate corresponding to each track segment in different cameras and a preset world coordinate;
and determining different lens track similarities corresponding to the track segments among different cameras based on the feature similarities, the position similarities and the preset similarity weight.
Optionally, the trajectory merging module is further configured to:
solving the similarity of the same shot and the similarity of the different shot tracks respectively to obtain a track similarity matrix;
decomposing and optimizing the track similarity matrix to obtain a target optimization matrix;
and merging the track segments based on the target optimization matrix to obtain a merged track corresponding to each target pedestrian.
Optionally, the matching module is further configured to:
respectively matching each track segment in each combined track with each pedestrian track in the pedestrian track library to obtain a track matching list corresponding to each combined track;
respectively matching each track segment in each combined track with each pedestrian track in the pedestrian track library to obtain a track matching list corresponding to each combined track;
selecting pedestrian tracks to be associated respectively corresponding to the combined tracks from the pedestrian tracks in the track matching lists according to a preset selection mode;
and performing association matching on the pedestrian track to be associated and each combined track corresponding to each combined track to obtain the global track corresponding to each target pedestrian.
The specific implementation of the cross-border trajectory tracking device of the present application is substantially the same as the embodiments of the cross-border trajectory tracking method described above, and will not be described herein again.
The present application provides a storage medium, which is a readable storage medium, and the readable storage medium stores one or more programs, which can be further executed by one or more processors to implement the steps of the cross-border trajectory tracking method described in any one of the above.
The specific implementation of the readable storage medium of the present application is substantially the same as the embodiments of the cross-border trajectory tracking method, and is not described herein again.
The present application provides a computer program product, and the computer program product includes one or more computer programs, which can also be executed by one or more processors for implementing the steps of the cross-border trajectory tracking method described in any one of the above.
The specific implementation of the computer program product of the present application is substantially the same as that of the above-mentioned cross-border trajectory tracking method, and is not described herein again.
The above description is only a preferred embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are included in the scope of the present application.

Claims (10)

1. A cross-border trajectory tracking method, comprising:
acquiring picture frames of a plurality of camera shooting areas in a store;
performing target detection on each picture frame to obtain target detection results corresponding to each target pedestrian, wherein the target detection results comprise pedestrian detection results;
tracking and identifying the detection result of each target in each camera respectively to generate track segments of the number of preset frame numbers of each target pedestrian in different cameras;
re-identifying each track segment based on a preset target feature extraction model to obtain feature information corresponding to each track segment;
merging each track segment in each camera and each track segment among different cameras based on the characteristic information corresponding to each track segment and the world coordinates corresponding to each track segment to obtain a merged track corresponding to each target pedestrian;
and respectively matching each combined track with each pedestrian track in a pedestrian track library based on the combined track corresponding to each target pedestrian to obtain the global track corresponding to each target pedestrian.
2. The cross-border trajectory tracking method according to claim 1, wherein before the step of acquiring the frame of the picture of the shooting area of the plurality of cameras in the store, the cross-border trajectory tracking method comprises:
acquiring an entrance video frame corresponding to a shooting area of an entrance camera;
based on a preset face detection model, carrying out face detection on the entrance video frame to obtain an entrance face detection result;
based on a preset pedestrian detection model, carrying out pedestrian detection on the entrance video frame to obtain an entrance pedestrian detection result;
tracking an entrance pedestrian detection result corresponding to the entrance video frame to obtain entrance pedestrian tracking information;
correlating the entrance pedestrian tracking information with the entrance face detection result to generate pedestrian tracks corresponding to all target pedestrians;
and if the pedestrian track corresponding to each target pedestrian is the store-entering track, storing each pedestrian track into a pedestrian track library.
3. The cross-border trajectory tracking method according to claim 1, wherein the step of tracking and identifying each target detection result in each camera and generating a trajectory segment of a preset frame number corresponding to each target pedestrian in different cameras comprises:
tracking the pedestrian detection result in each camera based on a preset pedestrian tracking model to obtain pedestrian tracking information corresponding to each target pedestrian;
and generating track segments of preset frame numbers of the target pedestrians corresponding to different cameras based on the pedestrian tracking information.
4. The cross-border head trajectory tracking method according to claim 1, wherein the step of combining the trajectory segments in the cameras and the trajectory segments between different cameras based on the feature information corresponding to each trajectory segment and the world coordinates corresponding to each trajectory segment to obtain a combined trajectory corresponding to each target pedestrian comprises:
converting the pixel coordinates corresponding to each track segment into world coordinates;
respectively calculating the similarity of the same lens track among all track segments in each camera based on the characteristic information corresponding to all track segments in each camera in each picture frame;
calculating different lens track similarities corresponding to the track sections among different cameras based on the characteristic information and the world coordinates respectively corresponding to the track sections in the different cameras in each picture frame;
and combining the track segments in the cameras based on the similarity of the same lens and the similarity of the different lens tracks to obtain a combined track corresponding to each target pedestrian.
5. The cross-border trajectory tracking method according to claim 4, wherein the step of calculating the similarity of the shot-to-shot trajectories between the trajectory slices in each camera based on the feature information corresponding to the trajectory slices in each camera in each frame of the picture comprises:
and respectively calculating Euclidean distances between the track segments in each camera based on the characteristic information corresponding to each track segment in each camera in each picture frame, so as to obtain the similarity of the same-lens track.
6. The cross-border trajectory tracking method according to claim 4, wherein the step of calculating the similarity of different lens trajectories corresponding to the trajectory slices between different cameras based on the feature information and the world coordinates respectively corresponding to the trajectory slices in different cameras in each frame of the picture comprises:
calculating the feature similarity corresponding to each track segment among different cameras based on the feature information corresponding to each track segment in each picture frame in different cameras;
calculating the position similarity corresponding to each track segment among different cameras based on the difference value of the world coordinate corresponding to each track segment in different cameras and a preset world coordinate;
and determining different lens track similarities corresponding to the track segments among different cameras based on the feature similarities, the position similarities and the preset similarity weight.
7. The cross-border head trajectory tracking method according to claim 4, wherein the step of combining the trajectory segments in each camera based on each of the same-shot similarities and each of the different-shot trajectory similarities to obtain a combined trajectory corresponding to each of the target pedestrians comprises:
solving the similarity of the same shot and the similarity of the different shot tracks respectively to obtain a track similarity matrix;
decomposing and optimizing the track similarity matrix to obtain a target optimization matrix;
and merging the track segments based on the target optimization matrix to obtain a merged track corresponding to each target pedestrian.
8. The cross-border head trajectory tracking method according to claim 1, wherein the step of matching each merged trajectory with each pedestrian trajectory in the pedestrian trajectory library based on the merged trajectory corresponding to each target pedestrian to obtain a global trajectory corresponding to each target pedestrian comprises:
respectively matching each track segment in each combined track with each pedestrian track in the pedestrian track library to obtain a track matching list corresponding to each combined track;
selecting pedestrian tracks to be associated respectively corresponding to the combined tracks from the pedestrian tracks in the track matching lists according to a preset selection mode;
and performing association matching on the pedestrian track to be associated and each combined track corresponding to each combined track to obtain the global track corresponding to each target pedestrian.
9. A cross-border head trajectory tracking device, characterized in that the cross-border head trajectory tracking device comprises: a memory, a processor, and a cross-border trajectory tracking program stored on the memory,
the cross-border trajectory tracking program is executed by the processor to implement the steps of the cross-border trajectory tracking method according to any one of claims 1 to 8.
10. A storage medium being a readable storage medium having a cross-border head trajectory tracking program stored thereon, the cross-border head trajectory tracking program being executed by a processor to implement the steps of the cross-border head trajectory tracking method according to any one of claims 1 to 8.
CN202110995893.8A 2021-08-27 2021-08-27 Cross-border head trajectory tracking method, equipment and storage medium Pending CN113689475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110995893.8A CN113689475A (en) 2021-08-27 2021-08-27 Cross-border head trajectory tracking method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110995893.8A CN113689475A (en) 2021-08-27 2021-08-27 Cross-border head trajectory tracking method, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113689475A true CN113689475A (en) 2021-11-23

Family

ID=78583456

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110995893.8A Pending CN113689475A (en) 2021-08-27 2021-08-27 Cross-border head trajectory tracking method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113689475A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619832A (en) * 2022-12-20 2023-01-17 浙江莲荷科技有限公司 Multi-camera collaborative multi-target track confirmation method, system and related device
CN117576167A (en) * 2024-01-16 2024-02-20 杭州华橙软件技术有限公司 Multi-target tracking method, multi-target tracking device, and computer storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079600A (en) * 2019-12-06 2020-04-28 长沙海格北斗信息技术有限公司 Pedestrian identification method and system with multiple cameras
CN112529939A (en) * 2020-12-15 2021-03-19 广州云从鼎望科技有限公司 Target track matching method and device, machine readable medium and equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079600A (en) * 2019-12-06 2020-04-28 长沙海格北斗信息技术有限公司 Pedestrian identification method and system with multiple cameras
CN112529939A (en) * 2020-12-15 2021-03-19 广州云从鼎望科技有限公司 Target track matching method and device, machine readable medium and equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115619832A (en) * 2022-12-20 2023-01-17 浙江莲荷科技有限公司 Multi-camera collaborative multi-target track confirmation method, system and related device
CN115619832B (en) * 2022-12-20 2023-04-07 浙江莲荷科技有限公司 Multi-camera collaborative multi-target track confirmation method, system and related device
CN117576167A (en) * 2024-01-16 2024-02-20 杭州华橙软件技术有限公司 Multi-target tracking method, multi-target tracking device, and computer storage medium
CN117576167B (en) * 2024-01-16 2024-04-12 杭州华橙软件技术有限公司 Multi-target tracking method, multi-target tracking device, and computer storage medium

Similar Documents

Publication Publication Date Title
Liu et al. Overview and methods of correlation filter algorithms in object tracking
US20200074178A1 (en) Method and system for facilitating recognition of vehicle parts based on a neural network
CN110033018B (en) Graph similarity judging method and device and computer readable storage medium
JP6650657B2 (en) Method and system for tracking moving objects in video using fingerprints
JP2014071902A5 (en)
JP2021533506A (en) Systems and methods for video anomaly detection and storage media
CN113689475A (en) Cross-border head trajectory tracking method, equipment and storage medium
US20230060211A1 (en) System and Method for Tracking Moving Objects by Video Data
Gu et al. Embedded and real-time vehicle detection system for challenging on-road scenes
CN113673505A (en) Example segmentation model training method, device and system and storage medium
CN108875500B (en) Pedestrian re-identification method, device and system and storage medium
CN115115825A (en) Method and device for detecting object in image, computer equipment and storage medium
US20220300774A1 (en) Methods, apparatuses, devices and storage media for detecting correlated objects involved in image
CN114168768A (en) Image retrieval method and related equipment
Wang et al. Non-local attention association scheme for online multi-object tracking
Yokota et al. A revisited visual-based geolocalization framework for forensic investigation support tools
CN113887384B (en) Pedestrian track analysis method, device, equipment and medium based on multi-track fusion
Li et al. Research on hybrid information recognition algorithm and quality of golf swing
CN117011566A (en) Target detection method, detection model training method, device and electronic equipment
CN114445787A (en) Non-motor vehicle weight recognition method and related equipment
AlShehri et al. Detecting Threats from Live Videos using Deep Learning Algorithms.
CN108694347B (en) Image processing method and device
Nazarkevych et al. Method of Dataset Filling and Recognition of Moving Objects in Video Sequences based on YOLO.
CN117523282B (en) Anchor-free example-level human body part detection method and device and storage medium
CN117523428B (en) Ground target detection method and device based on aircraft platform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination