CN111914690A - Method for medium- and long-term tracking of a target object in video recognition - Google Patents

Method for medium- and long-term tracking of a target object in video recognition

Info

Publication number
CN111914690A
Authority
CN
China
Prior art keywords
tracking
queue
detection period
target
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010680657.2A
Other languages
Chinese (zh)
Other versions
CN111914690B (en)
Inventor
邓少冬 (Deng Shaodong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Mix Intelligent Technology Co ltd
Original Assignee
Xi'an Mix Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Mix Intelligent Technology Co ltd filed Critical Xi'an Mix Intelligent Technology Co ltd
Priority to CN202010680657.2A priority Critical patent/CN111914690B/en
Publication of CN111914690A publication Critical patent/CN111914690A/en
Application granted granted Critical
Publication of CN111914690B publication Critical patent/CN111914690B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a method for medium- and long-term tracking of a target object in video recognition. Detection is performed with at least two detection periods of increasing length; each detection period corresponds to its own tracked-object list, in which the unique ID of each tracked object is recorded. Each tracked object in each detection period corresponds to its own tracking queue, and each element of the tracking queue corresponds to one tracking point of the target object. Through the multi-level tracking queues, the invention adds tracking points and corrects tracked-object identification, and provides robust medium- and long-term target tracking even under conditions of heavy missed detection.

Description

Method for medium- and long-term tracking of a target object in video recognition
Technical Field
The invention belongs to the technical field of computer vision and relates to a method for tracking a target object over longer time spans.
Background
In computer vision, target detection processes each frame of a video and identifies the category and position of the target objects it contains. Target tracking follows the position of an identified target object over time, i.e., it determines whether target objects identified in different frames belong to the same tracked object.
For clarity of terminology, an object obtained by target detection in a single frame is hereinafter called a target object, and an object obtained by associating target objects across frames through target tracking is called a tracked object. Both target objects and tracked objects correspond to real-world objects: one real object may correspond to one or more tracked objects, and one tracked object corresponds to multiple target objects.
Target tracking is divided into single-target tracking and multi-target tracking. In single-target tracking, target detection yields only one target object per frame, and the task is to associate that object across frames. In multi-target tracking, a frame contains multiple target objects, possibly of different categories, and each of them is associated across frames.
Target tracking is widely used in scenarios such as security monitoring and autonomous driving, and it matters in two respects:
A. Associating the earlier and later states of a target object: for example, in an earlier frame the target object is a person whose face is visible, so the person's identity can be obtained by face recognition; in a later frame the person is found committing a violation, and although the face cannot be captured at the moment the violation occurs, the violator's identity can still be determined from the face picture in the earlier frame, provided target tracking can establish that the two target objects are the same person;
B. Recognizing the motion trajectory of a target: connecting the centers of the target object's positions across consecutive frames yields a trajectory of the target's motion. By checking whether this trajectory intersects a defined area or another target, or how fast it does so, abnormal behavior patterns can be detected, for example whether the target enters a forbidden zone or climbs over a fence.
In practical video recognition, stable target tracking is not easy to obtain, mainly for two reasons:
A. Target objects in the video occlude one another or cross paths, so targets disappear or tracks become entangled and tracking fails;
B. Missed detections: target tracking depends on the per-frame target detection result; the more stable the detection and the fewer the missed detections, the better the tracking. In real projects, whether targets are detected by traditional algorithms or by deep learning methods, detection is often unstable and missed detections are frequent.
In target tracking, each target that appears for the first time is assigned a unique tracked-object ID, which is used throughout the subsequent tracking process. If tracking is lost because of occlusion or missed detection and the loss lasts longer than a set threshold, the ID is discarded; when the target object is detected again, the system assigns a new ID. Thus, although it is the same real object, its ID changes because tracking was lost. To measure tracking quality, the industry uses the ID switch count (IDSwitch, also written ID Sw.), which reflects the number of tracking failures; in an ideal tracking algorithm it should be 0.
Influential multi-target tracking techniques that have emerged in recent years include SORT and its improved version, DeepSORT.
SORT stands for Simple Online and Realtime Tracking; it combines Kalman-filter tracking with the Hungarian assignment algorithm. The Kalman filter predicts the motion trajectory of a target object, assuming that the target's motion between frames is linear and independent of other objects and of the camera, so the prediction is likewise linear. The Hungarian assignment then matches the positions predicted for the next frame to the actual target detection results, thereby obtaining each target object's position in the next frame. SORT does not use appearance features to associate detections across frames; it relies only on the position and size of the detection boxes for motion estimation and data association, does not handle occlusion, and does not re-identify targets by appearance. The core advantage of SORT is its high processing speed, which allows real-time application.
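A minimal sketch of the assignment step that SORT relies on (illustrative only, not the patent's code and not the exact SORT implementation; the (xmin, ymin, xmax, ymax) box format and the 0.3 IoU threshold are assumptions) could use SciPy's linear_sum_assignment with an IoU cost:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # a, b: boxes as (xmin, ymin, xmax, ymax)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match(predicted, detected, iou_min=0.3):
    """Assign each predicted track box to at most one detection (Hungarian assignment)."""
    if not predicted or not detected:
        return [], list(range(len(predicted))), list(range(len(detected)))
    cost = np.array([[1.0 - iou(p, d) for d in detected] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    pairs = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_min]
    unmatched_tracks = [i for i in range(len(predicted)) if i not in {r for r, _ in pairs}]
    unmatched_dets = [j for j in range(len(detected)) if j not in {c for _, c in pairs}]
    return pairs, unmatched_tracks, unmatched_dets
```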
DeepSORT adds a deep appearance descriptor to the matching of target objects, which improves the association quality.
Existing target tracking techniques associate detected objects across consecutive frames of a video using motion prediction and appearance features. However, if occlusion or missed detection prevents a target from being detected for a longer time, the prior art considers the track lost and treats the target as a new object when it is detected again. Typically, if the target cannot be detected for 5 consecutive frames, the original tracked object is considered lost; when it is detected again it is treated as a new tracked object and assigned a new object ID, so the same real object ends up with different tracked-object IDs, i.e., an ID switch occurs. In practical target detection, missed detection is a common problem, and a target object may remain undetected for 5 consecutive frames or much longer, so tracking fails; in medium- and long-term tracking in particular, the existing techniques cannot avoid frequent tracking failures. The present invention achieves more stable medium- and long-term object tracking through a method of multi-level tracked-object lists.
Disclosure of Invention
To solve the problems of existing tracking techniques, the invention provides a method for medium- and long-term tracking of a target object in video recognition, which improves the stability of target tracking through target matching over different time spans.
The invention specifically adopts the following technical scheme:
a method for tracking a target object in a video recognition for a medium and long term is characterized by comprising the following steps:
detecting by adopting at least two detection cycles with increasing lengths, wherein each detection cycle corresponds to a tracking object list respectively, and the unique ID of each tracking object is recorded;
when a longer detection period comes, matching the tracking object in the tracking object list with the shorter preamble detection period with the tracking object in the tracking object list with the longer detection period, and if the matching is successful, generating a tracking point by taking the information in the tracking queue of the tracking object with the shorter detection period, and adding the tracking point into the tracking queue of the matched tracking object with the longer detection period.
Preferably, a successful match of tracked objects means that the IDs of the tracked objects match.
Preferably, a successful match of tracked objects means that the tracked objects are matched based on position or on the similarity of their appearance features.
Further preferably, if the tracked objects are matched successfully by the tracking algorithm, the ID of the tracked object of the shorter detection period is updated with the ID of the tracked object of the longer detection period.
Preferably, if a tracked object of the shorter detection period cannot be matched with any tracked object of the longer detection period, a new tracked object is created in the tracked-object list of the longer detection period, a tracking point is generated from the information in the tracking queue of the tracked object of the shorter detection period, and the tracking point is added to the tracking queue of the newly created tracked object of the longer detection period.
Preferably, if a tracked object of the longer detection period cannot be matched with any tracked object of the shorter detection period, one lost tracking point is recorded in its tracking queue of the longer detection period; when the number of lost tracking points exceeds a set value, the ID of the tracked object is deleted from the tracked-object list of the longer detection period.
Further preferably, if a tracked object of the longer detection period cannot be matched with any tracked object of the shorter detection period, a tracking point is inferred from the previous tracking-point information and added to the tracking queue of the tracked object of the longer detection period.
Further preferably, when a tracking point is generated from the information in the tracking queue of the tracked object of the shorter detection period, the information is taken from a fixed position in that tracking queue or computed as the average of the elements of that tracking queue.
The invention has the following beneficial effects:
1. Medium- and long-term tracking of targets is achieved under heavy missed detection
For the tracking queue corresponding to each detection period, assume that a target object is considered to have disappeared, and tracking to have failed, if it cannot be found for five consecutive tracking points. For the tracking queue of the 1st detection period, the tolerated missed-detection duration is 0.04 × 4 = 0.16 seconds. For the tracking queue of the 2nd detection period, the tolerated missed-detection duration is 0.64 × 4 = 2.56 seconds. For the tracking queue of the 3rd detection period, the tolerated missed-detection duration is 10.24 × 4 = 40.96 seconds, and so on. That is, even if the target object cannot be detected for 40 seconds in a row, it can still be matched in the tracking queue corresponding to the 3rd detection period. The method is thus equivalent to a mechanism of multi-level tracking queues and provides robust medium- and long-term target tracking under conditions of heavy missed detection.
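With the numbers used here (frame interval 0.04 s at 25 fps, queue length 16, a track declared lost after 5 consecutive missed tracking points), the tolerated missed-detection duration at queue level k (k = 0, 1, 2, ...) can be summarized as

$$ T_{\mathrm{miss}}(k) = 4 \cdot 0.04\,\mathrm{s} \cdot 16^{k}, \qquad T_{\mathrm{miss}}(0) = 0.16\,\mathrm{s},\quad T_{\mathrm{miss}}(1) = 2.56\,\mathrm{s},\quad T_{\mathrm{miss}}(2) = 40.96\,\mathrm{s}. $$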
2. Long-term tracking corrects the target-object identification of short-term tracking
When a target object is missed for a period of time, the low-level tracking queue (the tracking queue with the shorter detection period) loses the track, so when the target object is detected again it is treated as a new object and assigned a new object ID. For the high-level queue (the tracking queue with the longer detection period), the tolerated missed-detection duration is longer and tracking is therefore more stable, so even if tracking in the low-level queue fails, the high-level queue can still match the target object. If the target object is successfully matched in the high-level queue, the identification of the target object by the low-level queue can be corrected in reverse: the wrongly assigned low-level ID is replaced with the object ID of the high-level queue, which reduces the IDSwitch tracking metric.
Drawings
FIG. 1 is a time-span diagram of the multi-level queues;
FIG. 2 is a schematic diagram of the processing of the 1st frame picture;
FIG. 3 is a schematic diagram of the processing of the 2nd frame picture;
FIG. 4 is a schematic diagram of the processing of the 16th frame picture;
FIG. 5 is a schematic diagram of the processing of the 256th frame picture;
FIG. 6 is a schematic diagram of missed detection in the 8th frame picture;
FIG. 7 is a schematic diagram of missed detection in the 19th to 24th frames;
FIG. 8 is a schematic diagram of a level-0 tracked object being matched to a level-1 tracked object;
FIG. 9 shows the long-term tracking effect.
Detailed Description
The method of the invention for medium- and long-term tracking of a target object in video recognition comprises the following steps:
1. Multi-level queue of target object tracking points
Tracking detection is performed with multiple detection periods of gradually increasing length. Each detection period corresponds to its own tracked-object list, in which the unique IDs of the tracked objects are recorded. Each tracked object in each detection period corresponds to its own tracking queue, which records the tracking points of that object over the corresponding time span. This detection mechanism amounts to building a multi-level tracking queue for each tracked object. If the detection periods of increasing length are, from shortest to longest, the 0th period, the 1st period, ..., the nth period, then the tracking queues of the same tracked object in the different detection periods are the level-0 tracking queue, the level-1 tracking queue, ..., the level-n tracking queue; n can be set according to the required tracking duration. The lengths of the tracking queues at the different levels may be the same or different. Depending on the detection situation, the same target object may have tracking queues at several levels or at only one level.
Tracking points of the target object are stored in the tracking queue. A tracking point records the positioning box of the target object, including its x and y coordinates in the image and its width and height.
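As a minimal sketch of the structures just described (Python is used for illustration; the names TrackPoint and TrackedObject, the five-level layout and the queue length of 16 are assumptions drawn from the embodiment below, not limitations of the method):

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Dict, List, Tuple

# A tracking point: the positioning box of the target object in the image
TrackPoint = Tuple[float, float, float, float]   # (xmin, ymin, xmax, ymax)

@dataclass
class TrackedObject:
    obj_id: str                                   # unique tracked-object ID, e.g. "ANQUANMAO001"
    category: str = ""                            # detected class, e.g. "person" or "helmet"
    queue: Deque[TrackPoint] = field(default_factory=lambda: deque(maxlen=16))  # FIFO tracking queue
    lost_count: int = 0                           # consecutive lost tracking points

# One tracked-object list per detection period; index 0 is the shortest (level-0) period
levels: List[Dict[str, TrackedObject]] = [dict() for _ in range(5)]
```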
2. Adding tracking points to the multi-level tracking queues
A. First, the first frame picture of the video stream is read in and, after target detection, a list of target objects is obtained containing the category, position and size of each target object. Initially the level-0 queue is empty; the system assigns a tracked-object ID to each detected target object, generates the tracked-object list, allocates a level-0 tracking queue (corresponding to the shortest, level-0 detection period) to each target object, and adds the position and size of each target object to the head of its tracking queue;
B. Subsequent pictures of the video stream are then read in and, after target detection, the target-object list of each subsequent frame is obtained. A multi-target tracking algorithm such as SORT or DeepSORT matches the target objects of the subsequent frame with those of the preceding frames. If a match is found, the two correspond to the same tracked object, i.e., they share the same tracked-object ID, and the new position and size are added to the head of that object's tracking queue. If a tracked object of the preceding frames has no corresponding target object in the subsequent frame, one lost tracking point is recorded for it; when the number of lost tracking points exceeds a set value, the tracked object is deleted. If a new target object in the subsequent frame cannot be matched with any existing tracked object of the preceding frames, it is treated as a new tracked object, a tracked-object ID and a tracking queue are allocated to it, and its current position and size are added to the tracking queue.
C. When the longer, i-th (i = 1 to n) detection period arrives, the tracked objects in the tracked-object lists of the i-th detection period and of the shorter, preceding (i-1)-th detection period are matched (a code sketch of this promotion step is given after step D below):
1) If a tracked object U_(i-1) in the tracked-object list of the (i-1)-th detection period already exists in the tracked-object list of the i-th detection period, one tracking point generated from the tracking queue of U_(i-1) in the (i-1)-th detection period is added to its tracking queue in the i-th detection period;
2) If the tracked object U_(i-1) does not exist in the tracked-object list of the i-th detection period:
a) If U_(i-1) can be matched, by an existing tracking technique (e.g., based on position or on the similarity of appearance features), with a tracked object U_i in the tracked-object list of the i-th detection period, a tracking point generated from the tracking queue of U_(i-1) in the (i-1)-th detection period is added to the tracking queue of U_i in the i-th detection period, and the ID of U_(i-1) is updated with the ID of U_i;
b) If U_(i-1) cannot be matched by the tracking algorithm with any tracked object in the tracked-object list of the i-th detection period, a new tracked object U_i1 is created in the tracked-object list of the i-th detection period, a tracking point is generated from the tracking queue of U_(i-1) in the (i-1)-th detection period, and it is added to the tracking queue of U_i1 in the i-th detection period.
There are several ways to generate one tracking point from the tracking queue of U_(i-1) in the (i-1)-th detection period: the tracking point at a fixed position among the elements of that queue may be taken directly, or the point may be computed as the average of the elements. The tracking queues are first-in-first-out queues, and newly added tracking-point information is placed at the head of the queue.
D. If a tracked object U_i in the tracked-object list of the i-th detection period cannot be matched with any tracked object in the tracked-object list of the (i-1)-th detection period, a tracking point is inferred from the previous tracking-point information and added to the tracking queue of U_i in the i-th detection period, and one lost tracking point is recorded in that queue; when the number of lost tracking points exceeds a set value, the ID of the tracked object is deleted from the tracked-object list of the i-th detection period.
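A minimal Python sketch of steps C and D, reusing the hypothetical TrackedObject structure from the earlier sketch; match_by_position is an assumed stand-in for the position/appearance matching of step a), and the inferred tracking point of step D is simply repeated here:

```python
def boxes_overlap(a, b):
    # crude position match: positive overlap area between two (xmin, ymin, xmax, ymax) boxes
    return max(0.0, min(a[2], b[2]) - max(a[0], b[0])) * max(0.0, min(a[3], b[3]) - max(a[1], b[1])) > 0

def match_by_position(point, upper):
    # assumed helper: return the ID of a level-i object whose latest box overlaps `point`, else None
    for obj_id, obj in upper.items():
        if obj.queue and boxes_overlap(point, obj.queue[0]):
            return obj_id
    return None

def promote(levels, i, lost_limit=5):
    """At the start of the i-th detection period, push one tracking point from each
    level-(i-1) tracked object into the matching level-i tracked object (steps C and D)."""
    lower, upper = levels[i - 1], levels[i]

    for obj_id, low_obj in list(lower.items()):
        point = low_obj.queue[0]                          # head of the lower-level FIFO queue
        if obj_id in upper:                               # case 1): same ID already tracked at level i
            upper[obj_id].queue.appendleft(point)
            upper[obj_id].lost_count = 0
        else:
            match_id = match_by_position(point, upper)    # case 2)a): continuation of an existing object
            if match_id is not None:
                upper[match_id].queue.appendleft(point)
                upper[match_id].lost_count = 0
                low_obj.obj_id = match_id                 # correct the lower-level ID with the level-i ID
                lower[match_id] = lower.pop(obj_id)
            else:                                         # case 2)b): create a new level-i tracked object
                upper[obj_id] = TrackedObject(obj_id, low_obj.category)
                upper[obj_id].queue.appendleft(point)

    for obj_id, up_obj in list(upper.items()):            # step D: no lower-level counterpart
        if obj_id not in lower:
            up_obj.queue.appendleft(up_obj.queue[0])      # inferred tracking point (here: repeat the last)
            up_obj.lost_count += 1
            if up_obj.lost_count >= lost_limit:
                del upper[obj_id]
```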
3. Correction of target object identification
In the system, the tracked-object lists of the different detection periods are managed independently. A tracked object of the (i-1)-th detection period may or may not appear in the tracked-object set of the i-th detection period and, conversely, a tracked object of the i-th detection period may or may not appear in that of the (i-1)-th detection period. The reason is that, over the different time spans, the target objects produced by target detection may suffer missed detections of different durations. If a missed detection is long enough to break tracking at the (i-1)-th level but not at higher levels, the target object of the (i-1)-th detection period is assigned a new ID because its tracking failed, while tracking continues at the subsequent levels; at that moment the new tracked object of the (i-1)-th detection period does not yet appear among the tracked objects of the i-th and later detection periods. Ideally, when no missed detection occurs, every target object is tracked well, and the target objects of the (i-1)-th detection period are successfully added to the i-th level, the (i+1)-th level, and so on, all carrying the same tracked-object ID that identifies the same tracked object.
If tracking is lost in the level-(i-1) tracking queue, a new tracked-object ID and tracking queue are allocated. When the i-th detection period arrives, if the position information taken from the level-(i-1) tracking queue can be matched with an existing level-i tracked object, the new level-(i-1) tracked object is in fact a continuation of the existing object; the level-i tracked-object ID is then used to update the level-(i-1) tracked-object ID, thereby correcting the identification of the tracked object.
The medium- and long-term target-object tracking method of the present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
1. Implementation of medium- and long-term tracking
Taking a safety helmet as an example, assume a five-level queue structure in which each queue is a first-in-first-out queue of length 16. The level-0 queue stores a tracking point for every frame of the target object; every 16 elements, the newest element at the head of the level-0 queue is stored at the head of the level-1 queue; every 16 elements of the level-1 queue, the element then at its head is stored at the head of the level-2 queue; and so on. For simplicity, the i-th detection period is taken to be the moment at which the level-(i-1) tracking queue becomes full. In practical applications the length of the detection period is not limited to this.
Assuming a frame rate of 25 frames per second, as shown in FIG. 1, the time span covered by each level of queue and the time interval between its tracking points are as follows:
Stage 0: the frame rate was 25fps, the tracking dot interval was 1/25-0.040 s, and the detection period was 16 × 0.04-0.64 sec;
stage 1: the time span of the tracking point interval of the 0 th-level queue is 0.64 seconds, and the detection period is 0.64 × 16-10.24 seconds;
stage 2: the tracking point interval is 10.24 seconds for the time span of the first-stage queue, and the detection period is 10.24 × 16 ═ 163.84 seconds;
and so on
The specific queue storage process is as follows.
In the following it is assumed that there is only one target object in the scene, and that as long as the target object can be detected in consecutive frames it is considered the same target instance.
1) Normal tracking without missed detection
Assume there is only one target object in the scene: a safety helmet that appears in every frame of the video stream. The video is 1280 pixels wide and 720 pixels high; the helmet's positioning box is 35 pixels wide and 25 pixels high. The helmet's initial position is (100, 80), and it moves 2 pixels in the x direction and 1 pixel in the y direction per frame. Target-object positioning boxes are written as (xmin, ymin, xmax, ymax).
For the first frame picture, the helmet positioning box is (100, 80, 135, 105):
Because this is the first helmet object, as shown in FIG. 2, a tracked-object ID, e.g. ANQUANMAO001, is first allocated to the helmet at level 0; a first-in-first-out queue of length 16, corresponding to a detection period of 0.64 seconds, is allocated to the tracked object; and the positioning box (100, 80, 135, 105) is added to the head of the queue;
For the second frame picture, as shown in FIG. 3, the helmet positioning box is (102, 81, 137, 106):
Because the tracked object has already been allocated, the new positioning box is simply added at the head position of the tracked object's level-0 tracking queue.
And so on: the target object's box is added frame by frame up to the 16th frame picture. At this point a tracked object is allocated at level 1, with the instance ID unchanged, still ANQUANMAO001, likewise corresponding to a tracking queue of length 16. The tracking box at the head of the level-0 queue is taken and added to the tracking queue of the level-1 tracked object, as shown in FIG. 4.
For the 17th frame, the helmet positioning box is (132, 96, 167, 121):
The positioning box continues to be added at the head of the level-0 queue; because the queue has length 16 and is first-in-first-out, the positioning box (100, 80, 135, 105) of the target object of the first frame picture is pushed out of the queue.
The elements in the level-1 queue of this instance remain unchanged.
For the 18th frame, the helmet positioning box is (134, 97, 169, 122):
The positioning box continues to be added at the head of the level-0 queue; at this point the positioning box (102, 81, 137, 106) of the target object of the 2nd frame picture is pushed out of the queue.
The elements in the level-1 queue of this instance remain unchanged.
And so on.
For the 32nd frame picture:
The target object's positioning box is added to the head of the level-0 queue, and the box of the 16th frame picture is pushed out of that queue. The head element of the level-0 queue is added to the head of the level-1 queue of this object instance, so the level-1 queue now has 2 elements.
And so on.
For the 256th frame picture, as shown in FIG. 5: the processing at level 0 and level 1 is as above; at this point the level-1 queue is full, so a tracked object is allocated at level 2, with the instance ID unchanged, still ANQUANMAO001, again corresponding to a tracking queue of length 16. The tracking box at the head of the level-1 queue is taken and added to the tracking queue of the level-2 tracked object.
The processing at the other levels is similar to the above.
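The queue behaviour walked through above is that of a fixed-length first-in-first-out buffer; a minimal Python sketch of the level-0 queue for this helmet example (the use of deque with maxlen=16 is an illustrative choice, not the patent's implementation):

```python
from collections import deque

level0 = deque(maxlen=16)            # level-0 tracking queue, detection period 16 x 0.04 = 0.64 s

# frames 1..17 of the helmet example: box starts at (100, 80, 135, 105), moves (+2, +1) per frame
for n in range(17):
    box = (100 + 2 * n, 80 + n, 135 + 2 * n, 105 + n)
    level0.appendleft(box)           # new tracking point goes to the head of the queue

print(level0[0])    # (132, 96, 167, 121): the 17th frame's box, as in the walkthrough
print(level0[-1])   # (102, 81, 137, 106): frame 1's box (100, 80, 135, 105) has been pushed out
```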
2. Handling of tracking failures caused by missed detection
A. Occasional missed detection at level 0 without loss of the tracked object
As shown in FIG. 6, suppose the safety helmet is correctly detected in the 1st to 8th pictures, is not detected in the 9th picture, and is detected again in the 10th picture.
At this point there are 8 elements in the level-0 queue; since no object is detected in the 9th picture, an inferred tracking point is added to the tracking queue.
When the object is detected again in the 10th picture, it is associated with the existing tracked object once more, and the new positioning box is added to the queue.
B. Loss of the tracked object due to continuous missed detection at level 0
As shown in FIG. 7, suppose the safety helmet is correctly detected in the 1st to 18th pictures, cannot be detected in the 6 consecutive pictures from the 19th to the 24th, and can be detected again from the 25th picture onward.
Since more than 16 pictures have been processed, there is now one element in the level-1 queue, and the object ID at both level 0 and level 1 is ANQUANMAO001.
No object is detected in frames 19 to 24, i.e. in more than 5 consecutive frames, so the object instance ANQUANMAO001 is considered to have disappeared during level-0 tracking, and the level-0 ANQUANMAO001 object is deleted.
C. Re-detection at level 0 and reassignment of a tracked object
Assuming the object can be detected again from the 25th frame onward, it is treated as a new target object: a tracked-object ID, ANQUANMAO002, and a new tracking queue are allocated to it at level 0, and the subsequent positioning boxes are added to the new tracking queue.
D. The new level-0 object is re-matched to the original object when it enters level 1
As shown in FIG. 8: when the 32nd frame is reached, the tracking box at the head of the queue is taken from the level-0 object and matched against the tracking box of the level-1 object. After the match succeeds, level-0 ANQUANMAO002 and level-1 ANQUANMAO001 are determined to be the same instance, so the box at the head of ANQUANMAO002's level-0 queue is added to the head of ANQUANMAO001's level-1 queue.
E. Obtaining the tracked-object ID from level 1 to correct the level-0 tracked object
As shown in FIG. 8, since level 1 has matched ANQUANMAO001 with ANQUANMAO002, it can be determined that level-0 ANQUANMAO002 is in fact the original ANQUANMAO001, and the level-0 object ID ANQUANMAO002 is changed to ANQUANMAO001.
After this correction, the object ID at both level 0 and level 1 is ANQUANMAO001.
3. Stable medium- and long-term tracking ensures accurate scene recognition
As shown in FIG. 9, suppose that in a scene a person is required to wear a safety helmet. The person and the safety helmet are two independently detected target objects, and an alarm is to be raised if the person is found not to be wearing a helmet. For detection stability, an alarm should only be raised after no helmet has been detected for 3 seconds. The level-0 queue spans only 0.64 seconds, so the check is performed on the level-1 queue, where one tracking point spans 0.64 seconds: if for 5 consecutive tracking points only the person is present in the scene and no helmet, an alarm is raised.
Without stable tracking, the helmet and the person may appear to be present at the same time or only separately. Suppose the helmet and the person are each detected once every 8 frames but never in the same frame; with existing tracking methods, once the allowed number of undetected frames is exceeded, the object is considered lost. Viewed from the level-0 queue, when the helmet is present the person is absent, and when the person is present the helmet is absent, so without the longer level-1 tracking the system might falsely raise an alarm that the person is not wearing a helmet. Because the longer-span level-1 and level-2 tracking exists, the helmet and the person are both present throughout; the actual check is performed at level 1, so the person is consistently recognized as wearing the helmet and no false alarm is raised.
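As a simplified illustration of the level-1 check described above (a sketch reusing the hypothetical structures from the earlier sketches; the category names and the helper itself are assumptions, not part of the patent):

```python
def should_alarm_no_helmet(level1, window=5):
    """Raise the no-helmet alarm only when a person has accumulated at least `window`
    level-1 tracking points (5 x 0.64 s = 3.2 s here) while no helmet object is
    currently tracked at level 1."""
    persons = [o for o in level1.values() if o.category == "person"]
    helmets = [o for o in level1.values() if o.category == "helmet"]
    person_stable = any(len(o.queue) >= window for o in persons)
    return person_stable and not helmets

# usage with the hypothetical multi-level structure: alarm = should_alarm_no_helmet(levels[1])
```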

Claims (9)

1. A method for medium- and long-term tracking of a target object in video recognition, characterized by comprising the following steps:
performing detection with at least two detection periods of increasing length, wherein each detection period corresponds to its own tracked-object list in which the unique ID of each tracked object is recorded;
when a longer detection period arrives, matching the tracked objects in the tracked-object list of the shorter preceding detection period with the tracked objects in the tracked-object list of the longer detection period, and, if the matching succeeds, generating a tracking point from the information in the tracking queue of the tracked object of the shorter detection period and adding it to the tracking queue of the matched tracked object of the longer detection period.
2. The method for medium- and long-term tracking of a target object in video recognition according to claim 1, wherein a successful match of tracked objects means that the IDs of the tracked objects match.
3. The method according to claim 1, wherein a successful match of tracked objects means that the tracked objects are matched based on position or on the similarity of their appearance features.
4. The method according to claim 3, wherein, after the tracked objects are matched successfully, the ID of the tracked object of the shorter detection period is updated with the ID of the tracked object of the longer detection period.
5. The method according to claim 1, wherein, if a tracked object of the shorter detection period cannot be matched with any tracked object of the longer detection period, a new tracked object is created in the tracked-object list of the longer detection period, a tracking point is generated from the information in the tracking queue of the tracked object of the shorter detection period, and the tracking point is added to the tracking queue of the newly created tracked object of the longer detection period.
6. The method according to claim 1, wherein, if a tracked object of the longer detection period cannot be matched with any tracked object of the shorter detection period, one lost tracking point is recorded in its tracking queue of the longer detection period, and when the number of lost tracking points exceeds a set value, the ID of the tracked object is deleted from the tracked-object list of the longer detection period.
7. The method according to claim 6, wherein, if a tracked object of the longer detection period cannot be matched with any tracked object of the shorter detection period, a tracking point is inferred from the previous tracking-point information and added to the tracking queue of the tracked object of the longer detection period.
8. The method according to claim 1 or 5, wherein, when a tracking point is generated from the information in the tracking queue of the tracked object of the shorter detection period, the information is taken from a fixed position in the tracking queue or computed as the average of the elements of the tracking queue.
9. The tracking method according to claim 1, wherein the tracking queue is a first-in-first-out queue, and newly added tracking-point information is placed at the head of the tracking queue.
CN202010680657.2A 2020-07-15 2020-07-15 Target object medium-long-term tracking method in video identification Active CN111914690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010680657.2A CN111914690B (en) 2020-07-15 2020-07-15 Target object medium-long-term tracking method in video identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010680657.2A CN111914690B (en) 2020-07-15 2020-07-15 Target object medium-long-term tracking method in video identification

Publications (2)

Publication Number Publication Date
CN111914690A true CN111914690A (en) 2020-11-10
CN111914690B CN111914690B (en) 2023-11-10

Family

ID=73281187

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010680657.2A Active CN111914690B (en) 2020-07-15 2020-07-15 Target object medium-long-term tracking method in video identification

Country Status (1)

Country Link
CN (1) CN111914690B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3952304A (en) * 1973-11-23 1976-04-20 Hughes Aircraft Company Tracking system utilizing Kalman filter concepts
WO2006013689A1 (en) * 2004-08-06 2006-02-09 Murata Manufacturing Co., Ltd. Radar
WO2012138828A2 (en) * 2011-04-08 2012-10-11 The Trustees Of Columbia University In The City Of New York Kalman filter approach to augment object tracking
CN104281837A (en) * 2014-09-26 2015-01-14 哈尔滨工业大学深圳研究生院 Pedestrian tracking method combining Kalman filtering with ROI expansion between adjacent frames
CN104574439A (en) * 2014-12-25 2015-04-29 南京邮电大学 Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method
RU2551356C1 (en) * 2013-12-04 2015-05-20 Федеральное государственное казенное военное образовательное учреждение высшего профессионального образования "Военный учебно-научный центр Военно-Морского Флота "Военно-морская академия имени Адмирала Флота Советского Союза Н.Г. Кузнецова" Method of non-strobe automatic tracking of mobile target
US9552648B1 (en) * 2012-01-23 2017-01-24 Hrl Laboratories, Llc Object tracking with integrated motion-based object detection (MogS) and enhanced kalman-type filtering
CN109271888A (en) * 2018-08-29 2019-01-25 汉王科技股份有限公司 Personal identification method, device, electronic equipment based on gait
CN109919043A (en) * 2019-02-18 2019-06-21 北京奇艺世纪科技有限公司 A kind of pedestrian tracting method, device and equipment
CN110378264A (en) * 2019-07-08 2019-10-25 Oppo广东移动通信有限公司 Method for tracking target and device
CN110443833A (en) * 2018-05-04 2019-11-12 佳能株式会社 Method for tracing object and equipment
US20200134837A1 (en) * 2019-12-19 2020-04-30 Intel Corporation Methods and apparatus to improve efficiency of object tracking in video frames

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3952304A (en) * 1973-11-23 1976-04-20 Hughes Aircraft Company Tracking system utilizing Kalman filter concepts
WO2006013689A1 (en) * 2004-08-06 2006-02-09 Murata Manufacturing Co., Ltd. Radar
WO2012138828A2 (en) * 2011-04-08 2012-10-11 The Trustees Of Columbia University In The City Of New York Kalman filter approach to augment object tracking
US9552648B1 (en) * 2012-01-23 2017-01-24 Hrl Laboratories, Llc Object tracking with integrated motion-based object detection (MogS) and enhanced kalman-type filtering
RU2551356C1 (en) * 2013-12-04 2015-05-20 Федеральное государственное казенное военное образовательное учреждение высшего профессионального образования "Военный учебно-научный центр Военно-Морского Флота "Военно-морская академия имени Адмирала Флота Советского Союза Н.Г. Кузнецова" Method of non-strobe automatic tracking of mobile target
CN104281837A (en) * 2014-09-26 2015-01-14 哈尔滨工业大学深圳研究生院 Pedestrian tracking method combining Kalman filtering with ROI expansion between adjacent frames
CN104574439A (en) * 2014-12-25 2015-04-29 南京邮电大学 Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method
CN110443833A (en) * 2018-05-04 2019-11-12 佳能株式会社 Method for tracing object and equipment
CN109271888A (en) * 2018-08-29 2019-01-25 汉王科技股份有限公司 Personal identification method, device, electronic equipment based on gait
CN109919043A (en) * 2019-02-18 2019-06-21 北京奇艺世纪科技有限公司 A kind of pedestrian tracting method, device and equipment
CN110378264A (en) * 2019-07-08 2019-10-25 Oppo广东移动通信有限公司 Method for tracking target and device
US20200134837A1 (en) * 2019-12-19 2020-04-30 Intel Corporation Methods and apparatus to improve efficiency of object tracking in video frames

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王江涛: "Research on Video-Based Object Detection, Tracking and Behavior Recognition", China Doctoral Dissertations Full-text Database (electronic journal) *
覃兴平: "Research on Moving Object Detection and Tracking Technology Based on Video Images", China Master's Theses Full-text Database (electronic journal), pages 138-1520 *

Also Published As

Publication number Publication date
CN111914690B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
JP5032846B2 (en) MONITORING DEVICE, MONITORING RECORDING DEVICE, AND METHOD THEREOF
US7336297B2 (en) Camera-linked surveillance system
CN109872341B (en) High-altitude parabolic detection method and system based on computer vision
CN108734107B (en) Multi-target tracking method and system based on human face
CN110443833B (en) Object tracking method and device
US20200193619A1 (en) Method and device for tracking an object
EP0659016B1 (en) Method and apparatus for video cut detection
US7394916B2 (en) Linking tracked objects that undergo temporary occlusion
EP0635983A2 (en) Method and means for detecting people in image sequences
US11871125B2 (en) Method of processing a series of events received asynchronously from an array of pixels of an event-based light sensor
Liu et al. Moving object detection and tracking based on background subtraction
EP3690736A1 (en) Method of processing information from an event-based sensor
KR101913648B1 (en) Method for tracking multiple objects
CN111914690B (en) Target object medium-long-term tracking method in video identification
KR20040068987A (en) Method for efficiently storing the trajectory of tracked objects in video
US7738009B2 (en) Method for following at least one object in a scene
CN111815682A (en) Multi-target tracking method based on multi-track fusion
CN110728846B (en) Vehicle snapshot accurate control method
Xiang Real-time follow-up tracking fast moving object with an active camera
Zhang et al. What makes for good multiple object trackers?
Desurmont et al. Performance evaluation of frequent events detection systems
US20070286458A1 (en) Method and System for Tracking a Target
Patel et al. Scene-Change Detection using Locality Preserving Projections
CN116958142B (en) Target detection and tracking method based on compound eye event imaging and high-speed turntable
JP2023177717A (en) Data collection system and data collection method for additional learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant