CN109191497A - Real-time online multi-object tracking method based on multi-information fusion - Google Patents

Real-time online multi-object tracking method based on multi-information fusion

Info

Publication number
CN109191497A
CN109191497A
Authority
CN
China
Prior art keywords
target
frame
kalman
detection
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810927485.7A
Other languages
Chinese (zh)
Inventor
冯长驹
练智超
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201810927485.7A priority Critical patent/CN109191497A/en
Publication of CN109191497A publication Critical patent/CN109191497A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time online multi-object tracking method based on multi-information fusion. The method is as follows: first, a target detection network is trained on a data set; deep convolutional features are extracted from the network and normalized into feature vectors, and feature-vector similarity is measured with the Euclidean distance. Detections are then matched to tracking trajectories with the Hungarian algorithm; each detection that matches no trajectory is assigned a single-target tracker combining a Kalman predictor with a correlation filter, while trajectories assigned no detection are checked against a threshold and their trackers destroyed when it is exceeded. Finally, for the successfully matched set, the Kalman filter's observation is taken from the detection set and the Kalman predictor's parameters are updated. The invention improves the robustness of target tracking in difficult scenes and achieves accurate tracking under occlusion, fast motion, and other challenging conditions.

Description

Real-time online multi-object tracking method based on multi-information fusion
Technical field
The present invention relates to the technical field of computer vision, and in particular to a real-time online multi-object tracking method based on multi-information fusion.
Background art
Traditional object detection algorithms use a sliding window to select candidate regions in input images at multiple scales, extract features such as Haar-like and HOG from them, train a classifier with traditional machine learning methods such as AdaBoost or SVM, and finally classify the candidate regions to separate target from background. Because sliding-window sampling produces a huge number of samples, usually only simple image features are computed to keep the detection rate acceptable; the expressive power of such features is very limited, and template-matching features such as HOG fail to detect a target once it deforms.
In recent years, with the development of deep learning, deep convolutional networks have been able to learn features with strong descriptive power for classification. R-CNN uses the selective search algorithm to extract candidate regions that may contain targets from the input image, extracts deep convolutional features for each candidate region, and feeds these features to a classifier to separate target from background. The ROI pooling layer was later proposed to avoid repeated convolution computation, and Faster R-CNN introduced a region proposal network (RPN) in place of selective search, realizing end-to-end deep learning training. All of the above solve detection by classifying candidate regions, which is relatively slow.
Multi-object tracking algorithms build tracking on top of detection, linking the detected target positions in each frame into a trajectory for each target. In recent years, such algorithms have focused on designing a strong similarity measure, mostly based on features with strong expressive power such as sparse appearance representations, integral channel features, and recurrent neural network encodings; these methods, however, cannot track targets well in difficult scenes such as occlusion and fast motion.
Summary of the invention
The purpose of the present invention is to provide a real-time online multi-object tracking method based on multi-information fusion that achieves accurate and efficient tracking of targets in difficult scenes such as occlusion and fast motion.
The technical solution achieving this aim is a real-time online multi-object tracking method based on multi-information fusion whose steps are as follows:
Step 1: input the t-th frame of the image sequence; if t equals 1, go directly to step 4, otherwise go to step 2;
Step 2: measure the similarity S(i, j) between target detection i and target tracking trajectory j with the Euclidean distance;
Step 3: match the target detection set D_t against the target trajectory set T_{t-1} with the Hungarian algorithm; when the similarity S(i, j) exceeds the threshold θ, detection i and trajectory j are left unmatched;
Step 4: for each target detection in the set of detections matched to no trajectory, allocate a single-target tracker combining a Kalman predictor with a correlation filter;
Step 5: record, for each trajectory in the set of trajectories assigned no detection, the number of consecutive frames it has gone unassigned; when this count exceeds the threshold τ, the target is deemed to have left the scene and its tracker is destroyed, then go to step 6; otherwise go directly to step 6;
Step 6: for the set of target detections successfully matched to trajectories, set the Kalman predictor's observation from this detection set and update the Kalman predictor's parameters; then increment t by 1 and return to step 1.
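The per-frame association in the steps above centres on the Hungarian matching of step 3 with the threshold-θ gate. A minimal sketch of that matching step, using SciPy's `linear_sum_assignment` and an assumed helper name `match_detections`, is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(similarity, theta):
    """Hungarian matching of detections (rows) to trajectories (columns).

    similarity[i, j] holds the distance-style score S(i, j); per step 3,
    a pair whose score exceeds theta is treated as unmatched.
    """
    rows, cols = linear_sum_assignment(similarity)  # minimises total cost
    matches = []
    unmatched_dets = set(range(similarity.shape[0]))
    unmatched_trks = set(range(similarity.shape[1]))
    for i, j in zip(rows, cols):
        if similarity[i, j] <= theta:  # keep only confident pairs
            matches.append((int(i), int(j)))
            unmatched_dets.discard(i)
            unmatched_trks.discard(j)
    return matches, sorted(unmatched_dets), sorted(unmatched_trks)
```

Unmatched detections would then receive new Kalman-plus-correlation-filter trackers (step 4), and unmatched trajectories accumulate toward the destruction threshold τ (step 5).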
Further, the similarity S(i, j) between target detection i and target tracking trajectory j in step 2 is measured with the Euclidean distance as follows:
GoogLeNet is pre-trained on the ImageNet image classification task, the pool5-layer deep convolutional feature is extracted and normalized into a feature vector, and the Euclidean distance is then used as the metric:
S_appearance(d_i, d_j) = ||d_i - d_j||_2
where S_appearance denotes the appearance similarity computed between the deep convolutional feature d_i of the i-th target and the feature d_j of the j-th target, both extracted from GoogLeNet, and ||·||_2 denotes the vector 2-norm;
the similarity S(i, j) between target detection i and target tracking trajectory j is finally computed as:
S(i, j) = S_appearance(d_i, d_j) * S_motion(p_i, b_j) * S_shape(p_i, b_j)
where S_motion(p_i, b_j) denotes the motion similarity between target detection i and the tracking trajectory, and S_shape(p_i, b_j) the shape similarity.
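The appearance term reduces to a Euclidean distance between L2-normalised feature vectors. The numeric core can be sketched as follows; the GoogLeNet pool5 extraction itself is omitted, and plain vectors stand in for the deep convolutional features:

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    """Normalise a feature vector to unit length (eps avoids divide-by-zero)."""
    return v / (np.linalg.norm(v) + eps)

def appearance_distance(fi, fj):
    """S_appearance(d_i, d_j) = ||d_i - d_j||_2 on normalised features."""
    return float(np.linalg.norm(l2_normalize(fi) - l2_normalize(fj)))
```

For unit vectors the squared distance equals 2 - 2 cos(d_i, d_j), so ranking pairs by this metric is equivalent to ranking them by cosine similarity.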
Further, the single-target tracker in step 4 combining a Kalman predictor with a correlation filter is as follows:
The k-th frame is input, the target's bounding box is tracked with the correlation filter, and the corresponding output peak and APCE (average peak-to-correlation energy) are computed. When tracking succeeds, the output bounding box of the target is used as the Kalman filter's observation; otherwise the Kalman prediction is used as the observation. The Kalman filter's parameters are then updated, the position of the target in frame k+1 is predicted, the next frame is read in, and the search window position is set.
Under the multi-object tracking framework, the target bounding boxes detected in the current frame are b_j = (x_bj, y_bj, w_bj, h_bj), j = 1, ..., M, and the target states estimated by the current frame's Kalman filter are p_i = (x_pi, y_pi, w_pi, h_pi), i = 1, ..., N. The motion similarity and shape similarity of a target are then computed from these quantities.
Here b_j denotes the j-th bounding box output by the detector, with x_bj and y_bj the x and y coordinates of its top-left corner and w_bj and h_bj its width and height, and M the number of boxes detected in the current frame; p_i denotes the i-th target box estimated by the Kalman filter, with x_pi and y_pi the coordinates of its top-left corner and w_pi and h_pi its width and height, and N the number of target boxes estimated by the Kalman filter. S_motion(p_i, b_j) denotes the motion similarity between the j-th detected box and the target estimated by the i-th Kalman predictor, and S_shape(p_i, b_j) the shape similarity between them.
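The Kalman predictor half of this tracker can be sketched as a constant-velocity filter over the bounding box. The patent does not spell out its state model or noise settings, so the state layout and matrices below are illustrative assumptions:

```python
import numpy as np

class BoxKalman:
    """Constant-velocity Kalman filter over a box (x, y, w, h).

    The 8-dimensional state is the box plus per-component velocities;
    the noise levels Q and R are assumed, not taken from the patent.
    """
    def __init__(self, box):
        self.x = np.hstack([np.asarray(box, float), np.zeros(4)])
        self.P = np.eye(8) * 10.0              # initial state uncertainty
        self.F = np.eye(8)
        self.F[:4, 4:] = np.eye(4)             # position += velocity each frame
        self.H = np.hstack([np.eye(4), np.zeros((4, 4))])
        self.Q = np.eye(8) * 1e-2              # process noise (assumed)
        self.R = np.eye(4) * 1e-1              # observation noise (assumed)

    def predict(self):
        """Propagate the state one frame ahead; returns the predicted box."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]

    def update(self, z):
        """Correct with observation z: the tracker's output box on success,
        or the prediction itself when the correlation filter fails."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(8) - K @ self.H) @ self.P
        return self.x[:4]
```

Each frame the tracker would call `predict()` to place the search window and `update()` with the chosen observation, mirroring the predict/observe/update cycle described above.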
Compared with the prior art, the remarkable advantages of the present invention are: (1) target features fusing multiple kinds of information are used to compute the similarity between targets accurately, so the algorithm tracks targets well; (2) the single-target tracker combining a Kalman predictor with a correlation filter can use the Kalman predictor to estimate a target's subsequent position when the target is completely occluded, improving the model's robustness to occlusion.
Description of the drawings
Fig. 1 is the flow chart of the real-time online multi-object tracking method based on multi-information fusion of the present invention.
Fig. 2 is the flow chart of the single-target tracking algorithm combining the Kalman predictor and the correlation filter.
Fig. 3 shows results of the present invention in real video tracking experiments, where (a) and (b) are results on car videos and (c) is a result on a face video.
Detailed description of the embodiments
The real-time online multi-object tracking method based on multi-information fusion of the present invention consists of three main steps: first, extract deep convolutional features and compute target similarity; second, match target detections with tracking trajectories; third, update the Kalman predictor's parameters from the detection set. With reference to Fig. 1, the specific steps are as follows:
Step 1: input the t-th frame of the image sequence; if t equals 1, go directly to step 4, otherwise go to step 2;
Step 2: measure the similarity S(i, j) between target detection i and target tracking trajectory j with the Euclidean distance;
Step 3: match the target detection set D_t against the target trajectory set T_{t-1} with the Hungarian algorithm; when the similarity S(i, j) exceeds the threshold θ, detection i and trajectory j are left unmatched;
Step 4: for each target detection in the set of detections matched to no trajectory, allocate a single-target tracker combining a Kalman predictor with a correlation filter;
Step 5: record, for each trajectory in the set of trajectories assigned no detection, the number of consecutive frames it has gone unassigned; when this count exceeds the threshold τ, the target is deemed to have left the scene and its tracker is destroyed, then go to step 6; otherwise go directly to step 6;
Step 6: for the set of target detections successfully matched to trajectories, set the Kalman predictor's observation from this detection set and update the Kalman predictor's parameters; then increment t by 1 and return to step 1.
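The trajectory lifecycle rule of step 5 (destroy a trajectory once it has gone unmatched for more than τ consecutive frames) can be sketched as follows; the `Track` record and its method names are assumptions, not the patent's code:

```python
class Track:
    """Minimal trajectory record for step 5's lifecycle rule (illustrative)."""
    def __init__(self, track_id):
        self.track_id = track_id
        self.misses = 0  # consecutive frames without a matched detection

    def mark_matched(self):
        self.misses = 0  # a matched detection resets the counter

    def mark_missed(self):
        self.misses += 1

def prune_tracks(tracks, tau):
    """Keep only tracks whose miss count has not exceeded the threshold tau."""
    return [t for t in tracks if t.misses <= tau]
```

With τ = 8 as in the experiments reported below, a trajectory survives up to eight consecutive unmatched frames before its tracker is destroyed.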
Further, the similarity S(i, j) between target detection i and target tracking trajectory j in step 2 is measured with the Euclidean distance as follows:
GoogLeNet is pre-trained on the ImageNet image classification task, the pool5-layer deep convolutional feature is extracted and normalized into a feature vector, and the Euclidean distance is then used as the metric:
S_appearance(d_i, d_j) = ||d_i - d_j||_2
where S_appearance denotes the appearance similarity computed between the deep convolutional feature d_i of the i-th target and the feature d_j of the j-th target, both extracted from GoogLeNet, and ||·||_2 denotes the vector 2-norm;
the similarity S(i, j) between target detection i and target tracking trajectory j is finally computed as:
S(i, j) = S_appearance(d_i, d_j) * S_motion(p_i, b_j) * S_shape(p_i, b_j)
where S_motion(p_i, b_j) denotes the motion similarity between target detection i and the tracking trajectory, and S_shape(p_i, b_j) the shape similarity.
Further, the single-target tracker in step 4 combining a Kalman predictor with a correlation filter is as follows:
With reference to Fig. 2, which shows the single-target tracking flow combining the Kalman predictor and the correlation filter, the present invention adopts this combination to strengthen the algorithm's multi-target tracking ability in scenes with occlusion, fast motion, and crossing targets. The specific method is as follows:
The k-th frame is input, the target's bounding box is tracked with the correlation filter, and the corresponding output peak and APCE are computed. When tracking succeeds, the output bounding box of the target is used as the Kalman filter's observation; otherwise the Kalman prediction is used as the observation. The Kalman filter's parameters are then updated, the position of the target in frame k+1 is predicted, the next frame is read in, and the search window position is set.
Under the multi-object tracking framework, the target bounding boxes detected in the current frame are b_j = (x_bj, y_bj, w_bj, h_bj), j = 1, ..., M, and the target states estimated by the current frame's Kalman filter are p_i = (x_pi, y_pi, w_pi, h_pi), i = 1, ..., N. The motion similarity and shape similarity of a target are then computed from these quantities.
Here b_j denotes the j-th bounding box output by the detector, with x_bj and y_bj the x and y coordinates of its top-left corner and w_bj and h_bj its width and height, and M the number of boxes detected in the current frame; p_i denotes the i-th target box estimated by the Kalman filter, with x_pi and y_pi the coordinates of its top-left corner and w_pi and h_pi its width and height, and N the number of target boxes estimated by the Kalman filter. S_motion(p_i, b_j) denotes the motion similarity between the j-th detected box and the target estimated by the i-th Kalman predictor, and S_shape(p_i, b_j) the shape similarity between them.
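The motion- and shape-similarity equations themselves do not survive in this text. As a stand-in, one common choice in the multi-object tracking literature is an exponential of normalised position and size differences; the functional form and the weights `w1` and `w2` below are assumptions, not the patent's equations:

```python
import numpy as np

def motion_similarity(p, b, w1=0.5):
    """Assumed form: penalise centre offset relative to the detected box size.
    p, b are (x, y, w, h) boxes; not the patent's exact equation."""
    xp, yp, wp, hp = p
    xb, yb, wb, hb = b
    return float(np.exp(-w1 * (((xp - xb) / wb) ** 2 + ((yp - yb) / hb) ** 2)))

def shape_similarity(p, b, w2=1.5):
    """Assumed form: penalise relative width/height mismatch."""
    xp, yp, wp, hp = p
    xb, yb, wb, hb = b
    return float(np.exp(-w2 * (abs(hp - hb) / (hp + hb) + abs(wp - wb) / (wp + wb))))
```

Both terms equal 1 for identical boxes and decay toward 0 as position or size diverge, matching the multiplicative role they play in S(i, j).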
As shown in Fig. 3, which illustrates tracking results of the present invention on real videos, the experiments were run on an NVIDIA TITAN X. In this test environment, YOLO is used as the detector, the similarity threshold θ is set to 0.5, and the unmatched-count threshold τ is set to 8; car videos and a face video were then tested, where Fig. 3(a) and (b) are car-video tracking results and Fig. 3(c) is a face-video tracking result. The results show that the tracking performance of the present invention is good: each target is tracked robustly under occlusion, fast motion, and target crossing.

Claims (3)

1. A real-time online multi-object tracking method based on multi-information fusion, characterized in that the steps are as follows:
Step 1: input the t-th frame of the image sequence; if t equals 1, go directly to step 4, otherwise go to step 2;
Step 2: measure the similarity S(i, j) between target detection i and target tracking trajectory j with the Euclidean distance;
Step 3: match the target detection set D_t against the target trajectory set T_{t-1} with the Hungarian algorithm; when the similarity S(i, j) exceeds the threshold θ, detection i and trajectory j are left unmatched;
Step 4: for each target detection in the set of detections matched to no trajectory, allocate a single-target tracker combining a Kalman predictor with a correlation filter;
Step 5: record, for each trajectory in the set of trajectories assigned no detection, the number of consecutive frames it has gone unassigned; when this count exceeds the threshold τ, the target is deemed to have left the scene and its tracker is destroyed, then go to step 6; otherwise go directly to step 6;
Step 6: for the set of target detections successfully matched to trajectories, set the Kalman predictor's observation from this detection set and update the Kalman predictor's parameters; then increment t by 1 and return to step 1.
2. The real-time online multi-object tracking method based on multi-information fusion according to claim 1, characterized in that the similarity S(i, j) between target detection i and target tracking trajectory j in step 2 is measured with the Euclidean distance as follows:
GoogLeNet is pre-trained on the ImageNet image classification task, the pool5-layer deep convolutional feature is extracted and normalized into a feature vector, and the Euclidean distance is then used as the metric:
S_appearance(d_i, d_j) = ||d_i - d_j||_2
where S_appearance denotes the appearance similarity computed between the deep convolutional feature d_i of the i-th target and the feature d_j of the j-th target, both extracted from GoogLeNet, and ||·||_2 denotes the vector 2-norm;
the similarity S(i, j) between target detection i and target tracking trajectory j is finally computed as:
S(i, j) = S_appearance(d_i, d_j) * S_motion(p_i, b_j) * S_shape(p_i, b_j)
where S_motion(p_i, b_j) denotes the motion similarity between target detection i and the tracking trajectory, and S_shape(p_i, b_j) the shape similarity.
3. The real-time online multi-object tracking method based on multi-information fusion according to claim 1, characterized in that the single-target tracker in step 4 combining a Kalman predictor with a correlation filter is as follows:
the k-th frame is input, the target's bounding box is tracked with the correlation filter, and the corresponding output peak and APCE are computed; when tracking succeeds, the output bounding box of the target is used as the Kalman filter's observation, otherwise the Kalman prediction is used as the observation; the Kalman filter's parameters are then updated, the position of the target in frame k+1 is predicted, the next frame is read in, and the search window position is set;
under the multi-object tracking framework, the target bounding boxes detected in the current frame are b_j = (x_bj, y_bj, w_bj, h_bj), j = 1, ..., M, and the target states estimated by the current frame's Kalman filter are p_i = (x_pi, y_pi, w_pi, h_pi), i = 1, ..., N; the motion similarity and shape similarity of a target are then computed from these quantities;
here b_j denotes the j-th bounding box output by the detector, with x_bj and y_bj the x and y coordinates of its top-left corner and w_bj and h_bj its width and height, and M the number of boxes detected in the current frame; p_i denotes the i-th target box estimated by the Kalman filter, with x_pi and y_pi the coordinates of its top-left corner and w_pi and h_pi its width and height, and N the number of target boxes estimated by the Kalman filter; S_motion(p_i, b_j) denotes the motion similarity between the j-th detected box and the target estimated by the i-th Kalman predictor, and S_shape(p_i, b_j) the shape similarity between them.
CN201810927485.7A 2018-08-15 2018-08-15 Real-time online multi-object tracking method based on multi-information fusion Pending CN109191497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810927485.7A CN109191497A (en) 2018-08-15 2018-08-15 Real-time online multi-object tracking method based on multi-information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810927485.7A CN109191497A (en) 2018-08-15 2018-08-15 Real-time online multi-object tracking method based on multi-information fusion

Publications (1)

Publication Number Publication Date
CN109191497A true CN109191497A (en) 2019-01-11

Family

ID=64935953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810927485.7A Pending CN109191497A (en) 2018-08-15 2018-08-15 Real-time online multi-object tracking method based on multi-information fusion

Country Status (1)

Country Link
CN (1) CN109191497A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872342A (en) * 2019-02-01 2019-06-11 北京清帆科技有限公司 A kind of method for tracking target under special scenes
CN109919974A (en) * 2019-02-21 2019-06-21 上海理工大学 Online multi-object tracking method based on the more candidate associations of R-FCN frame
CN109934849A (en) * 2019-03-08 2019-06-25 西北工业大学 Online multi-object tracking method based on track metric learning
CN110334734A (en) * 2019-05-31 2019-10-15 宁波中车时代传感技术有限公司 A kind of intelligent sensing fusion method based on meta-learn technology
CN110415277A (en) * 2019-07-24 2019-11-05 中国科学院自动化研究所 Based on light stream and the multi-target tracking method of Kalman filtering, system, device
CN110619657A (en) * 2019-08-15 2019-12-27 青岛文达通科技股份有限公司 Multi-camera linkage multi-target tracking method and system for smart community
CN110675432A (en) * 2019-10-11 2020-01-10 智慧视通(杭州)科技发展有限公司 Multi-dimensional feature fusion-based video multi-target tracking method
CN111009000A (en) * 2019-11-28 2020-04-14 华南师范大学 Insect feeding behavior analysis method and device and storage medium
CN111311647A (en) * 2020-01-17 2020-06-19 长沙理工大学 Target tracking method and device based on global-local and Kalman filtering
CN111833375A (en) * 2019-04-23 2020-10-27 舟山诚创电子科技有限责任公司 Method and system for tracking animal group track
CN112116634A (en) * 2020-07-30 2020-12-22 西安交通大学 Multi-target tracking method of semi-online machine
CN112418213A (en) * 2020-11-06 2021-02-26 北京航天自动控制研究所 Vehicle driving track identification method and device and storage medium
CN112561963A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Target tracking method and device, road side equipment and storage medium
CN112634333A (en) * 2020-12-30 2021-04-09 武汉卓目科技有限公司 Tracking device method and device based on ECO algorithm and Kalman filtering
CN112785630A (en) * 2021-02-02 2021-05-11 宁波智能装备研究院有限公司 Multi-target track exception handling method and system in microscopic operation
CN113012194A (en) * 2020-12-25 2021-06-22 深圳市铂岩科技有限公司 Target tracking method, device, medium and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘忠耿 (Liu Zhonggeng) et al.: "Real-time online multi-object tracking with fusion of multiple kinds of information", Journal of Nanjing University of Information Science & Technology (Natural Science Edition) *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872342A (en) * 2019-02-01 2019-06-11 北京清帆科技有限公司 A kind of method for tracking target under special scenes
CN109919974A (en) * 2019-02-21 2019-06-21 上海理工大学 Online multi-object tracking method based on the more candidate associations of R-FCN frame
CN109919974B (en) * 2019-02-21 2023-07-14 上海理工大学 Online multi-target tracking method based on R-FCN frame multi-candidate association
CN109934849A (en) * 2019-03-08 2019-06-25 西北工业大学 Online multi-object tracking method based on track metric learning
CN111833375B (en) * 2019-04-23 2024-04-05 舟山诚创电子科技有限责任公司 Method and system for tracking animal group track
CN111833375A (en) * 2019-04-23 2020-10-27 舟山诚创电子科技有限责任公司 Method and system for tracking animal group track
CN110334734A (en) * 2019-05-31 2019-10-15 宁波中车时代传感技术有限公司 A kind of intelligent sensing fusion method based on meta-learn technology
CN110415277B (en) * 2019-07-24 2022-03-08 中国科学院自动化研究所 Multi-target tracking method, system and device based on optical flow and Kalman filtering
CN110415277A (en) * 2019-07-24 2019-11-05 中国科学院自动化研究所 Based on light stream and the multi-target tracking method of Kalman filtering, system, device
CN110619657A (en) * 2019-08-15 2019-12-27 青岛文达通科技股份有限公司 Multi-camera linkage multi-target tracking method and system for smart community
CN110619657B (en) * 2019-08-15 2023-10-24 青岛文达通科技股份有限公司 Multi-camera linkage multi-target tracking method and system for intelligent communities
CN115311329A (en) * 2019-10-11 2022-11-08 杭州云栖智慧视通科技有限公司 Video multi-target tracking method based on dual-link constraint
CN115311329B (en) * 2019-10-11 2023-05-23 杭州云栖智慧视通科技有限公司 Video multi-target tracking method based on double-link constraint
CN110675432A (en) * 2019-10-11 2020-01-10 智慧视通(杭州)科技发展有限公司 Multi-dimensional feature fusion-based video multi-target tracking method
CN111009000A (en) * 2019-11-28 2020-04-14 华南师范大学 Insect feeding behavior analysis method and device and storage medium
CN111311647A (en) * 2020-01-17 2020-06-19 长沙理工大学 Target tracking method and device based on global-local and Kalman filtering
CN112116634A (en) * 2020-07-30 2020-12-22 西安交通大学 Multi-target tracking method of semi-online machine
CN112116634B (en) * 2020-07-30 2024-05-07 西安交通大学 Multi-target tracking method of semi-online machine
CN112418213A (en) * 2020-11-06 2021-02-26 北京航天自动控制研究所 Vehicle driving track identification method and device and storage medium
CN112561963A (en) * 2020-12-18 2021-03-26 北京百度网讯科技有限公司 Target tracking method and device, road side equipment and storage medium
CN113012194A (en) * 2020-12-25 2021-06-22 深圳市铂岩科技有限公司 Target tracking method, device, medium and equipment
CN113012194B (en) * 2020-12-25 2024-04-09 深圳市铂岩科技有限公司 Target tracking method, device, medium and equipment
CN112634333B (en) * 2020-12-30 2022-07-05 武汉卓目科技有限公司 Tracking device method and device based on ECO algorithm and Kalman filtering
CN112634333A (en) * 2020-12-30 2021-04-09 武汉卓目科技有限公司 Tracking device method and device based on ECO algorithm and Kalman filtering
CN112785630A (en) * 2021-02-02 2021-05-11 宁波智能装备研究院有限公司 Multi-target track exception handling method and system in microscopic operation

Similar Documents

Publication Publication Date Title
CN109191497A (en) Real-time online multi-object tracking method based on multi-information fusion
CN108875588B (en) Cross-camera pedestrian detection tracking method based on deep learning
Miao et al. Pose-guided feature alignment for occluded person re-identification
Iqbal et al. Pose for action-action for pose
CN103020986B (en) A kind of motion target tracking method
CN108256421A (en) A kind of dynamic gesture sequence real-time identification method, system and device
Ogale A survey of techniques for human detection from video
CN104616316B (en) Personage's Activity recognition method based on threshold matrix and Fusion Features vision word
Kaâniche et al. Recognizing gestures by learning local motion signatures of HOG descriptors
Elmezain et al. Hand trajectory-based gesture spotting and recognition using HMM
KR102132722B1 (en) Tracking method and system multi-object in video
CN107194950B (en) Multi-person tracking method based on slow feature analysis
Azmat et al. An elliptical modeling supported system for human action deep recognition over aerial surveillance
CN113850221A (en) Attitude tracking method based on key point screening
Zhu et al. Action recognition in broadcast tennis video using optical flow and support vector machine
Lit et al. Multiple object tracking with gru association and kalman prediction
CN114283355A (en) Multi-target endangered animal tracking method based on small sample learning
Lian et al. A real time face tracking system based on multiple information fusion
Pang et al. Analysis of computer vision applied in martial arts
Xiang et al. Multitarget tracking using hough forest random field
CN112613472B (en) Pedestrian detection method and system based on deep search matching
Ramadass et al. Feature extraction method for video based human action recognitions: extended optical flow algorithm
Zheng et al. Identifying same persons from temporally synchronized videos taken by multiple wearable cameras
Jiang et al. Spatial and temporal pyramid-based real-time gesture recognition
Wang et al. Beyond pedestrians: A hybrid approach of tracking multiple articulating humans

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111

RJ01 Rejection of invention patent application after publication