CN112560617B - Large-scene pedestrian track tracking method based on array camera - Google Patents

Large-scene pedestrian track tracking method based on array camera

Info

Publication number
CN112560617B
CN112560617B
Authority
CN
China
Prior art keywords
pedestrian
detection
tracking
sequence
box
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011417713.XA
Other languages
Chinese (zh)
Other versions
CN112560617A (en)
Inventor
张华
赵利林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Iexcel Cloud Data Technology Co ltd
Original Assignee
Jiangsu Iexcel Cloud Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Iexcel Cloud Data Technology Co ltd filed Critical Jiangsu Iexcel Cloud Data Technology Co ltd
Priority to CN202011417713.XA priority Critical patent/CN112560617B/en
Publication of CN112560617A publication Critical patent/CN112560617A/en
Application granted granted Critical
Publication of CN112560617B publication Critical patent/CN112560617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a large-scene pedestrian track tracking method based on an array camera, comprising the following steps: according to the distribution of the small lenses in the array camera, determine the small-lens numbers and the overlapping areas of the image data acquired by each pair of adjacent small lenses; perform pedestrian detection on the acquired image data through a pedestrian detection model to obtain a set Q_cur of N groups of pedestrian detection boxes Box, storing each pedestrian detection box Box; acquire the feature vector corresponding to each pedestrian and the pedestrian's motion information from the detection box Box; track the motion information and feature vectors acquired from the detection boxes Box to obtain a pedestrian tracking sequence HashMap; then use the n values of the detection boxes in the sequence Q_near transmitted by the adjacent small lenses in the previous frame to update the pedestrian-number pn values of the detection boxes lying in the overlapping area of the tracking sequence HashMap, and connect the tracking sequences HashMap of the plurality of small lenses in series according to the pn and n values to obtain the pedestrian track between two adjacent small lenses. Tracking of the target is thereby more accurate.

Description

Large-scene pedestrian track tracking method based on array camera
Technical Field
The invention relates to the technical field of pedestrian track tracking methods, in particular to a large-scene pedestrian track tracking method based on an array camera.
Background
Target tracking is a hotspot problem in the field of computer vision, and pedestrian track tracking is its most common application, widely used in important fields such as video monitoring and security. A pedestrian track tracking algorithm can track a designated target in surveillance footage in real time and acquire the pedestrian's movement track through target detection, position prediction and data association. After a case occurs, police can analyze the surveillance video and use a tracking algorithm to recover the track of a suspect.
One difficulty in target tracking is cross-lens (cross-camera) tracking. When a pedestrian target moves from one lens to another, it is necessary to determine whether the pedestrians in the two lenses are the same person based on factors such as the pedestrian's appearance characteristics, gait characteristics and behavior characteristics. Techniques for identifying the same person from such characteristics are not yet mature, and in a large scene in particular, accurately tracking the trajectory of each person as pedestrian targets move across multiple lenses becomes very difficult.
Chinese patent publication No. CN111476827A discloses a target tracking method, system, electronic device and storage medium that tracks using world coordinates. However, even in world coordinates, the position and size of the detection bounding box of the same object differ between cameras, so determining a unique world coordinate for each object is error-prone, especially when two objects are close together.
Chinese patent publication No. CN108051777A discloses a target tracking method, device and electronic apparatus that tracks using sensors. This places high demands on the hardware, yet the accuracy of sensor-based target tracking is poor.
Therefore, it is necessary to provide a large-scene pedestrian trajectory tracking method based on an array camera to overcome the above problems.
Disclosure of Invention
The invention aims to provide a large-scene pedestrian track tracking method based on an array camera, which uses the structural characteristics of the camera and combines the position information and feature vectors of pedestrians to determine whether two targets in different lenses are the same person, thereby improving the accuracy of target tracking.
In order to achieve the above purpose, the invention adopts the following technical scheme. A large-scene pedestrian track tracking method based on an array camera comprises the following steps: a first step of determining, according to the small-lens distribution of the array camera, the small-lens numbers and the overlapping areas of the image data acquired by each pair of adjacent small lenses; a second step of performing pedestrian detection on the image data acquired by each small lens through a pedestrian detection model, obtaining a set Q_cur of N groups of pedestrian detection boxes Box(x, y, w, h, c, n, t, cn, pn, P), and simultaneously storing the pedestrian detection boxes Box; a third step of acquiring and storing, through the detection boxes Box, the motion information of the pedestrians and the feature vectors corresponding to the pedestrians; a fourth step of tracking the motion information and feature vectors acquired from the detection boxes to obtain a pedestrian tracking sequence HashMap, where the tracking sequence HashMap refers to sequences of detection boxes sharing the same pedestrian number n; and a fifth step of updating, with the n values of the detection boxes Box in the sequence Q_near transmitted by the adjacent small lenses in the previous frame, the pedestrian-number pn values of the detection boxes Box lying in the overlapping area of the tracking sequence HashMap, and connecting the tracking sequences HashMap of the plurality of small lenses in series according to the pn and n values to obtain the pedestrian track between two adjacent small lenses. In the detection box Box(x, y, w, h, c, n, t, cn, pn, P): x, y, w and h respectively represent the abscissa and ordinate of the upper-left corner, and the width and height, of the detection bounding box BBox; c represents the small-lens number; n represents the pedestrian number, initialized to 0; t represents the timestamp of the video frame; cn represents the adjacent small-lens number, determined by the position of BBox: 0 when BBox is outside the overlapping area, and the adjacent small-lens number when it is inside; pn represents the number of the pedestrian after entering the adjacent small lens, initially 0; P represents the feature vector corresponding to the pedestrian detection bounding box BBox.
Compared with the prior art, the large-scene pedestrian track tracking method based on the array camera has the beneficial effects that: the method has the advantages that the structural characteristics of the cameras are utilized, and two targets in different cameras are determined to be identical by combining the position information and the feature vector of the pedestrian, so that the accuracy of target tracking is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments will be briefly described below, in which:
FIG. 1 is a flow chart of a large scene pedestrian trajectory tracking method based on an array camera of the present invention;
fig. 2 is a schematic diagram of a structure of forming an overlapping area when each small lens in the array camera of the present invention captures an image.
Detailed Description
The present invention will now be described in detail with reference to the accompanying drawings, but it should be emphasized that the following embodiments are merely exemplary and are not intended to limit the scope and application of the present invention.
Referring to fig. 1, the invention discloses a large-scene pedestrian track tracking method based on an array camera, which comprises the following steps:
the first step, according to the small lens distribution of the array camera, determining the small lens number and the overlapping area of the collected image data between each two adjacent small lenses.
Specifically, a plurality of small lenses arranged in an array form the array camera. This embodiment takes 12 small lenses as an example; other embodiments may use anywhere from 3 to 100. The small lenses are numbered in order, and the images acquired by two adjacent small lenses overlap to form an overlapping area. The width of the overlapping area is determined by the actual deployment and is generally within 10% of the small-lens image width.
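As an illustration of the numbering and adjacency just described, the neighbor relations of a row-major 3x4 grid (the 12-lens embodiment, lenses A1..A12) can be sketched as follows. The grid shape and the helper name `lenslet_neighbors` are illustrative assumptions, not part of the patent:

```python
def lenslet_neighbors(rows, cols):
    """Map each small-lens number (1-based, row-major) to the numbers of its
    horizontally and vertically adjacent small lenses, matching the A1..A12
    example in the description."""
    neighbors = {}
    for r in range(rows):
        for c in range(cols):
            n = r * cols + c + 1
            adj = []
            if c > 0:
                adj.append(n - 1)      # lens to the left
            if c < cols - 1:
                adj.append(n + 1)      # lens to the right
            if r > 0:
                adj.append(n - cols)   # lens above
            if r < rows - 1:
                adj.append(n + cols)   # lens below
            neighbors[n] = sorted(adj)
    return neighbors
```

For a 3x4 grid this reproduces the description: A1 is adjacent to A2 and A5, A2 to A1, A3 and A6, and A6 to A2, A5, A7 and A10.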
Fig. 2 is a schematic diagram of the overlapping areas formed when each small lens of the array camera captures an image, showing an array of 12 small lenses: the lenses adjacent to lens A1 are A2 and A5; those adjacent to lens A2 are A1, A3 and A6; those adjacent to lens A6 are A2, A5, A7 and A10. Each overlapping area is visible to both of the small lenses that share it, and the overlapping area between each pair of lenses is distinct.
In the second step, pedestrian detection is performed on the image data acquired by each of the N small lenses of the array camera through a pedestrian detection model, obtaining a set Q_cur of N groups of detection boxes Box(x, y, w, h, c, n, t, cn, pn, P), and the pedestrian detection boxes Box are stored at the same time.
Specifically, pedestrian detection is performed on the acquired image data at the front end of the small lens. The pedestrian detection model is built on the SSD algorithm, and a convolutional neural network (CNN) model is used to refine the pedestrian targets. Here x, y, w and h respectively represent the abscissa and ordinate of the upper-left corner, and the width and height, of the detection bounding box BBox; c represents the small-lens number; n represents the pedestrian number, initialized to 0; t represents the timestamp of the video frame; cn represents the adjacent small-lens number, determined by the position of BBox: 0 when BBox is outside the overlapping area, and the adjacent small-lens number when it is inside; pn represents the number of the pedestrian after entering the adjacent small lens, initially 0; P represents the feature vector corresponding to the pedestrian detection bounding box BBox.
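The detection box structure Box(x, y, w, h, c, n, t, cn, pn, P) described above might be represented as follows; the `Box` dataclass name, field types and defaults are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """Pedestrian detection box (x, y, w, h, c, n, t, cn, pn, P)."""
    x: float              # abscissa of the upper-left corner of BBox
    y: float              # ordinate of the upper-left corner of BBox
    w: float              # width of BBox
    h: float              # height of BBox
    c: int                # small-lens number
    n: int = 0            # pedestrian number, initialized to 0
    t: float = 0.0        # timestamp of the video frame
    cn: int = 0           # adjacent small-lens number (0 outside the overlap)
    pn: int = 0           # pedestrian number after entering the adjacent lens
    P: list = field(default_factory=list)  # appearance feature vector
```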
In the third step, the motion information of the pedestrians is acquired and stored through the plurality of detection bounding boxes BBox, where the motion information comprises the motion speed and the motion direction.
a fourth step of tracking the detection frame Box by utilizing the motion information and the appearance information of the pedestrians to obtain a pedestrian tracking sequence HashMap;
specifically, a tracking algorithm S is utilized to combine a set Q of detection boxes of the current small lens cur Tracking the sequence HashMap to obtain a set Q of detection boxes of the current shot cur Detection frame sequence Q matched with tracking sequence HashMap match Newly appeared pedestrian detection frame sequence Q new . And updating the matched detection frame sequence Q in the current shot match N values of the detection boxes in the sequence are pedestrian numbers corresponding to the tracking sequence HashMap; q (Q) new The n value of the Box of the middle detection frame is sequentially added with 1.
The tracking sequence HashMap refers to sequences of detection boxes sharing the same n value, and each small lens has its own tracking sequence HashMap. Each image frame captured by a small lens yields a group of pedestrian detection boxes Box, forming the pedestrian detection-box set Q_cur for the small lens's current frame; all pedestrian detection boxes Box stored by the small lens form the tracking sequence HashMap. Here Q_cur denotes the detection-box sequence of the current frame, and Q_near denotes the detection-box sequence transmitted by the adjacent small lenses in the previous frame. The newly appearing pedestrian detection-box sequence Q_new is the set Q_cur of the current small lens with the matched detection-box sequence Q_match removed.
The tracking sequence HashMap maps each pedestrian number key to a value that is the set of detection boxes whose n value equals that key. Whenever a detection box Box of the current small lens's frame matches the tracking sequence HashMap, its n value is changed to the matched key value, and the detection box Box is added to the value corresponding to that key.
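The key/value bookkeeping described above can be sketched with the tracking sequence HashMap as a plain dict from pedestrian number key to a list of boxes; `apply_match` and `add_new` are hypothetical helper names, and boxes are simplified to dicts for illustration:

```python
def apply_match(hashmap, box, key):
    """A matched box takes the pedestrian number `key` and is appended
    to that key's detection-box sequence in the tracking HashMap."""
    box["n"] = key
    hashmap.setdefault(key, []).append(box)

def add_new(hashmap, box):
    """A newly appearing pedestrian gets a fresh key: the current
    maximum key in the tracking HashMap plus 1."""
    key = max(hashmap, default=0) + 1
    box["n"] = key
    hashmap[key] = [box]
    return key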
Tracking is performed on top of pedestrian detection using the DeepSORT tracking algorithm: the motion information of each pedestrian target is obtained from the pedestrian detection bounding boxes BBox(x, y, w, h), and then for each detection bounding box BBox its feature vector P is extracted as the appearance information.
In detail, motion information is acquired as follows. The position of each pedestrian target in the current image is predicted from the target's historical motion information (speed and motion direction); the set of detection boxes at the predicted positions in the current image is recorded as the prediction-box sequence, and the set Q_cur of the current small lens's detection boxes is matched against the prediction-box sequence. The matching computes the distance between BBoxes using the intersection-over-union (IoU) and selects the optimal matching with the Hungarian algorithm. For matched targets, the historical motion information of the pedestrian target is updated; an unmatched pedestrian target is considered a new pedestrian, and its detection box Box is added to the tracking sequence HashMap.
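The IoU distance used in this matching step can be computed as below for boxes in the (x, y, w, h) upper-left format used by BBox. This is an illustrative helper; the Hungarian assignment the patent specifies (available as, e.g., `scipy.optimize.linear_sum_assignment`) is omitted for brevity:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h),
    where (x, y) is the upper-left corner, as in the patent's BBox."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))  # overlap width
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))  # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```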
Appearance information is acquired as follows: for the feature vector P in each detection box Box of the current small lens, the cosine similarity is computed against the feature vectors P of the most recently appearing pedestrian targets under each pedestrian number key in the tracking sequence HashMap, and the best-matching pedestrian number key is confirmed.
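The cosine-similarity lookup just described might look like the following sketch. The dict-of-lists HashMap layout, the `recent` window size, and averaging over the last few feature vectors are assumptions; the patent only says the similarity is computed against the most recently appearing targets under each key:

```python
import math

def cosine_similarity(p, q):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def best_key(hashmap, p, recent=5):
    """Return the pedestrian number key whose `recent` most recent boxes'
    feature vectors are on average most similar to p."""
    best, best_sim = None, -1.0
    for key, boxes in hashmap.items():
        feats = [b["P"] for b in boxes[-recent:]]
        sim = sum(cosine_similarity(p, f) for f in feats) / len(feats)
        if sim > best_sim:
            best, best_sim = key, sim
    return best
```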
Combining the motion information and the appearance information yields, from the set Q_cur of the current small lens's detection boxes Box, the detection-box sequence Q_match matched against the tracking sequence HashMap and the newly appearing pedestrian detection-box sequence Q_new.
The pedestrian number key of the matched tracking sequence HashMap is used to update the n value of each matched detection box Box in Q_match (that is, the box's n value is set equal to the key), and each matched box is added to the value corresponding to that key in the tracking sequence HashMap. The boxes in the newly appearing pedestrian detection-box sequence Q_new are also added to the tracking sequence HashMap under new pedestrian number keys, each new key being the current maximum key in the tracking sequence HashMap plus 1; the n values of the detection boxes in Q_new are updated to the new key values accordingly.
For the value of each pedestrian number key in the tracking sequence HashMap, only the detection boxes Box of the latest 100 frames of images are stored; if the value of a certain pedestrian number key is not updated over several consecutive frames, that key is deleted from the tracking sequence HashMap.
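The retention policy above can be sketched as a pruning pass over the tracking HashMap. The idle threshold `max_idle_frames` and the timestamp convention are assumptions (the patent only says "continuous multi-frame images"); the 100-box cap is from the text:

```python
def prune(hashmap, current_t, max_boxes=100, max_idle_frames=30, frame_dt=1.0):
    """Keep only the latest `max_boxes` detection boxes per pedestrian key,
    and delete keys whose value has not been updated for more than
    `max_idle_frames` frames."""
    stale = []
    for key, boxes in hashmap.items():
        del boxes[:-max_boxes]                      # keep the latest 100 boxes
        if current_t - boxes[-1]["t"] > max_idle_frames * frame_dt:
            stale.append(key)                       # key no longer updated
    for key in stale:
        del hashmap[key]
```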
In the fifth step, the n values of the detection boxes Box in the sequence Q_near transmitted by the adjacent small lenses in the previous frame are used to update the pedestrian-number pn values of the detection boxes Box lying in the overlapping area of the tracking sequence HashMap. The tracking sequences HashMap of the plurality of small lenses are then connected in series according to the pn and n values to obtain the track of each pedestrian between two adjacent small lenses.
For example: suppose the detection-box sequence Q_near transmitted by an adjacent small lens in the previous frame contains a detection box Box0(x0, y0, w0, h0, c0, n0, cn0, pn0, P0). For each key in the tracking sequence HashMap, the detection box most recently added to its value is found, and among these the boxes whose cn value equals c0 are selected, say K boxes Box1, Box2, …, BoxK. The intersection-over-union IoU between the detection bounding boxes BBox(x, y, w, h) of Box0 and each of the K boxes, and the cosine similarity D between their feature vectors P, are computed; combining IoU and D identifies the box BoxJ among the K boxes closest to Box0, and the pn value of BoxJ is updated to n0.
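The cross-lens handoff in this example can be sketched as follows. The patent says IoU and cosine similarity D are "combined" without giving a formula, so the weighted score `alpha*IoU + (1-alpha)*D` and the weight `alpha` are assumptions, as are the dict-based box layout and helper names:

```python
import math

def _iou(a, b):
    """IoU of boxes (x, y, w, h) with upper-left origin."""
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def _cos(p, q):
    dot = sum(x * y for x, y in zip(p, q))
    n = math.sqrt(sum(x * x for x in p)) * math.sqrt(sum(y * y for y in q))
    return dot / n if n else 0.0

def handoff(box0, hashmap, alpha=0.5):
    """Find the tracked box closest to Box0 from an adjacent lens and link
    the two lenses' tracks by writing Box0's n value into its pn field.
    Candidates are each key's most recently added box whose cn equals
    Box0's small-lens number c0."""
    c0, n0 = box0["c"], box0["n"]
    best, best_score = None, -1.0
    for key, boxes in hashmap.items():
        cand = boxes[-1]                  # box added to the value most recently
        if cand["cn"] != c0:
            continue                      # not in the overlap facing lens c0
        score = alpha * _iou(box0["bbox"], cand["bbox"]) \
            + (1 - alpha) * _cos(box0["P"], cand["P"])
        if score > best_score:
            best, best_score = cand, score
    if best is not None:
        best["pn"] = n0                   # series-connect the two tracks
    return best
```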
In the sixth step, the detection boxes Box lying in the overlapping area of the current small lens are transmitted to the adjacent small lenses.
Specifically, the transfer is performed by means of Remote Procedure Call (RPC).
In the seventh step, after the current small lens is updated, the matched detection-box sequence Q_match and the newly appearing pedestrian detection-box sequence Q_new are transmitted to the imaging server at the back end for storage.
Specifically, the matched detection-box sequence Q_match and the newly appearing pedestrian detection-box sequence Q_new are transmitted to the back-end imaging server by means of a Remote Procedure Call (RPC).
In the eighth step, the imaging server receives the matched detection-box sequence Q_match and the newly appearing pedestrian detection-box sequence Q_new and stores them in the history table of the database; the detection boxes Box within a certain period of time are then merged and the database is updated.
The history table records the time and place of every pedestrian's appearance. Its fields are: id, x, y, w, h, c, n, t, cn, pn, P, where id is an auto-increment field, the P field is an index into the feature table of feature vectors, and the other fields correspond to the fields of the detection box Box. The feature table stores the 128-dimensional feature vectors of the pedestrian detection boxes; its fields are: id, f1, f2, …, f128.
While detection boxes are being recorded in the database, whenever a certain pedestrian number i has not appeared for m consecutive frames, the feature vectors P of the detection boxes Box within a certain period of time are clustered once to merge pedestrian numbers that were split by tracking errors. For example, the detection boxes numbered i may be merged into the detection-box sequence numbered j, after which the number of the boxes formerly numbered i is updated to j. After merging, the feature vector P of each class centroid and the corresponding pedestrian number n from the clustering result are stored in the classFeature table.
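A minimal sketch of this merging pass, under stated assumptions: the patent says the feature vectors are "clustered once" without naming an algorithm, so merging numbers whose mean feature vectors (class centroids) exceed a cosine-similarity `threshold` is an illustrative choice, and the helper name and threshold value are hypothetical:

```python
import math

def merge_numbers(history, threshold=0.8):
    """Return a remapping {old_number: surviving_number} that merges
    pedestrian numbers whose class centroids are nearly identical.
    `history` maps pedestrian number -> list of feature vectors."""
    def centroid(vs):
        return [sum(col) / len(vs) for col in zip(*vs)]

    def cos(p, q):
        dot = sum(a * b for a, b in zip(p, q))
        n = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
        return dot / n if n else 0.0

    nums = sorted(history)
    cents = {i: centroid(history[i]) for i in nums}
    remap = {}
    for idx, i in enumerate(nums):
        for j in nums[:idx]:
            # merge i into the earliest surviving number close enough to it
            if j not in remap and cos(cents[i], cents[j]) >= threshold:
                remap[i] = j
                break
    return remap
```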
The classFeature table stores, for each pedestrian number, the feature vector of the centroid of that pedestrian's class. Its fields are: id, nid, P, where nid denotes the pedestrian number and P is an index into the feature table of feature vectors.
The merging operation can also re-associate pedestrians who reappear after disappearing for a period of time. The back-end imaging server holds the complete video of the whole monitored picture, and the database records each pedestrian's timestamps and detection boxes.
When video is reviewed and a suspicious person is found, the person's outline is framed and saved as a picture. The feature vector of the person in the picture is then computed, the tracking sequence whose feature vector in the database classFeature table is closest to it is found, and the pedestrian number of the closest match is obtained. All records of that person can then be retrieved from the history table, and the person's movement track depicted on the monitoring picture.
The array camera composed of a plurality of small lenses has a very wide monitoring range and a very long monitoring distance. The invention is based on an array camera, and can track the running track of each pedestrian across the lens in a large scene.
Of course, those skilled in the art will recognize that the above-described embodiments are for illustrative purposes only and are not meant to be limiting, and that changes and modifications of the above-described embodiments are intended to be within the scope of the appended claims, as long as they are within the true spirit of the invention.

Claims (8)

1. The large-scene pedestrian track tracking method based on the array camera is characterized by comprising the following steps of:
the method comprises a first step of determining, according to the small-lens distribution of the array camera, the small-lens numbers and the overlapping areas of the image data acquired by each pair of adjacent small lenses;
a second step of performing pedestrian detection on the acquired image data through a pedestrian detection model, obtaining a set Q_cur of N groups of pedestrian detection boxes Box(x, y, w, h, c, n, t, cn, pn, P), and simultaneously storing the pedestrian detection boxes Box;
thirdly, acquiring and storing the motion information of the pedestrians and the feature vectors corresponding to the pedestrians through the detection frame Box;
a fourth step of tracking the motion information and the feature vector acquired by the detection boxes to acquire a pedestrian tracking sequence HashMap, wherein the tracking sequence HashMap refers to sequences of a plurality of detection boxes with the same pedestrian number as n;
a fifth step of updating, with the n values of the detection boxes Box in the sequence Q_near transmitted by the adjacent small lenses in the previous frame, the pedestrian-number pn values of the detection boxes Box lying in the overlapping area of the tracking sequence HashMap, and connecting the tracking sequences HashMap of the plurality of small lenses in series according to the pn and n values to obtain the pedestrian track between two adjacent small lenses;
wherein, in the detection box Box(x, y, w, h, c, n, t, cn, pn, P), x, y, w and h respectively represent the abscissa and ordinate of the upper-left corner, and the width and height, of the detection bounding box BBox; c represents the small-lens number; n represents the pedestrian number, initialized to 0; t represents the timestamp of the video frame; cn represents the adjacent small-lens number, determined by the position of BBox: 0 when BBox is outside the overlapping area, and the adjacent small-lens number when it is inside; pn represents the number of the pedestrian after entering the adjacent small lens, initially 0; P represents the feature vector corresponding to the pedestrian detection bounding box BBox.
2. The large-scene pedestrian track tracking method based on the array camera as claimed in claim 1, wherein in the fourth step, the set Q_cur of detection boxes of the current small lens is matched against the tracking sequence HashMap to obtain the matched detection-box sequence Q_match and the newly appearing pedestrian detection-box sequence Q_new; the n values of the detection boxes in the matched sequence Q_match of the current small lens are updated to the pedestrian numbers corresponding to the tracking sequence HashMap, and the n values of the detection boxes Box in Q_new are incremented by 1 in sequence.
3. The large-scene pedestrian track tracking method based on the array camera as claimed in claim 2, wherein the tracking sequence HashMap maps each pedestrian number key to a value that is the set of detection boxes whose n value equals that key, and whenever a detection box Box of a frame of the current small lens matches the tracking sequence HashMap, the n value of the detection box Box is changed to the matched key value in the tracking sequence HashMap, and the detection box Box is added to the value corresponding to that key.
4. The large-scene pedestrian track tracking method based on the array camera as claimed in claim 3, wherein in the fourth step, the set of detection boxes at the predicted positions in the current image is recorded as the prediction-box sequence, and the set Q_cur of detection boxes of the current small lens is matched against the prediction-box sequence as follows: the distance between BBoxes is computed using the intersection-over-union (IoU) and the optimal matching is selected with the Hungarian algorithm; for matched targets, the historical motion information of the pedestrian target is updated; an unmatched pedestrian target is considered a new pedestrian, and its detection box Box is added to the tracking sequence HashMap.
5. The large-scene pedestrian track tracking method based on the array camera as claimed in claim 3, wherein in the fourth step, according to the feature vector P in each detection box Box of the current small lens, the cosine similarity with the feature vectors P of the pedestrian targets appearing under the pedestrian number keys in the tracking sequence HashMap is computed, and the matched pedestrian number key is confirmed.
6. The array camera-based large scene pedestrian trajectory tracking method of claim 1, further comprising:
a sixth step of transmitting the detection frame Box positioned in the overlapping area of the current small lens to a plurality of small lenses adjacent to the detection frame Box;
a seventh step of transmitting, after the current small lens is updated, the matched detection-box sequence Q_match and the newly appearing pedestrian detection-box sequence Q_new to the imaging server at the back end for storage;
an eighth step in which the imaging server receives and stores the matched detection-box sequence Q_match and the newly appearing pedestrian detection-box sequence Q_new, merges the detection boxes Box within a preset time period, and updates the database.
7. The large-scene pedestrian track tracking method based on the array camera as claimed in claim 6, wherein when a certain pedestrian number i has not appeared for m consecutive frames, the feature vectors P of the detection boxes Box within a certain period of time are clustered once, pedestrian numbers split by tracking errors are merged, and after merging, the feature vector of each class centroid and the pedestrian number n in the clustering result are stored in the classFeature table.
8. The array camera-based large-scene pedestrian trajectory tracking method of claim 7, wherein when the video is reviewed and a suspicious person is found, the outline of the person is framed and saved as a picture; the feature vector P of the person in the picture is then calculated, the tracking sequence in the database classFeature table whose feature vector is closest to P is found, the most similar pedestrian number is obtained, and all records of the person's appearances are retrieved from the history table.
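This retrieval step reduces to a nearest-centroid lookup over the classFeature table followed by a history filter. A sketch under assumed table layouts (the patent does not specify the storage schema):

```python
import numpy as np

def find_person_records(query_feature, class_feature_table, history_table):
    """Locate the pedestrian number closest to a query feature vector.

    class_feature_table: {pedestrian_number: centroid feature vector}
    history_table:       list of (pedestrian_number, record) tuples
    Returns (best pedestrian number, all records for that number).
    """
    q = np.asarray(query_feature, dtype=float)
    best_n, best_sim = None, -1.0
    for n, centroid in class_feature_table.items():
        c = np.asarray(centroid, dtype=float)
        sim = float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-9))
        if sim > best_sim:
            best_n, best_sim = n, sim
    records = [rec for n, rec in history_table if n == best_n]
    return best_n, records
```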
CN202011417713.XA 2020-12-05 2020-12-05 Large-scene pedestrian track tracking method based on array camera Active CN112560617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011417713.XA CN112560617B (en) 2020-12-05 2020-12-05 Large-scene pedestrian track tracking method based on array camera

Publications (2)

Publication Number Publication Date
CN112560617A CN112560617A (en) 2021-03-26
CN112560617B true CN112560617B (en) 2023-08-01

Family

ID=75059300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011417713.XA Active CN112560617B (en) 2020-12-05 2020-12-05 Large-scene pedestrian track tracking method based on array camera

Country Status (1)

Country Link
CN (1) CN112560617B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113282782B (en) * 2021-05-21 2022-09-09 三亚海兰寰宇海洋信息科技有限公司 Track acquisition method and device based on multi-point phase camera array
CN113536915A (en) * 2021-06-09 2021-10-22 苏州数智源信息技术有限公司 Multi-node target tracking method based on visible light camera
CN114511592A (en) * 2022-01-21 2022-05-17 海纳云物联科技有限公司 Personnel trajectory tracking method and system based on RGBD camera and BIM system
CN118072000A (en) * 2024-04-17 2024-05-24 中国科学院合肥物质科学研究院 Fish detection method based on novel target recognition algorithm

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704824A (en) * 2017-09-30 2018-02-16 北京正安维视科技股份有限公司 Pedestrian based on space constraint recognition methods and equipment again
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN110378259A (en) * 2019-07-05 2019-10-25 桂林电子科技大学 A kind of multiple target Activity recognition method and system towards monitor video

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8913791B2 (en) * 2013-03-28 2014-12-16 International Business Machines Corporation Automatically determining field of view overlap among multiple cameras
US9245196B2 (en) * 2014-05-09 2016-01-26 Mitsubishi Electric Research Laboratories, Inc. Method and system for tracking people in indoor environments using a visible light camera and a low-frame-rate infrared sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Detection-optimized multi-Bernoulli video multi-target tracking algorithm; Yang Jinlong; Cheng Xiaoxue; Miao Jiani; Zhang Guangnan; Journal of Frontiers of Computer Science and Technology (No. 10); full text *

Also Published As

Publication number Publication date
CN112560617A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN112560617B (en) Large-scene pedestrian track tracking method based on array camera
CN113269098B (en) Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
Bai et al. Traffic anomaly detection via perspective map based on spatial-temporal information matrix.
Milan et al. MOT16: A benchmark for multi-object tracking
CN112836640B (en) Single-camera multi-target pedestrian tracking method
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
CN111832457B (en) Stranger intrusion detection method based on cloud edge cooperation
RU2531876C2 (en) Indexing method of video data by means of card
Kohl et al. The mta dataset for multi-target multi-camera pedestrian tracking by weighted distance aggregation
Zhu et al. Multi-drone-based single object tracking with agent sharing network
KR101788225B1 (en) Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing
CN111429484A (en) Multi-target vehicle track real-time construction method based on traffic monitoring video
CN111860352B (en) Multi-lens vehicle track full tracking system and method
CN110796074B (en) Pedestrian re-identification method based on space-time data fusion
CN112069969A (en) Method and system for tracking highway monitoring video mirror-crossing vehicle
CN111310728B (en) Pedestrian re-identification system based on monitoring camera and wireless positioning
CN106600628A (en) Target object identification method and device based on infrared thermal imaging system
Zhang et al. Exploiting Offset-guided Network for Pose Estimation and Tracking.
CN116403139A (en) Visual tracking and positioning method based on target detection
Bashar et al. Multiple object tracking in recent times: A literature review
Choe et al. Traffic analysis with low frame rate camera networks
CN106023252A (en) Multi-camera human body tracking method based on OAB algorithm
Peng et al. Tracklet siamese network with constrained clustering for multiple object tracking
CN113627497B (en) Space-time constraint-based cross-camera pedestrian track matching method
CN115908508A (en) Coastline ship real-time tracking method based on array camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant