CN109360226A - Multi-object tracking method based on time-series multi-feature fusion - Google Patents
Multi-object tracking method based on time-series multi-feature fusion
- Publication number
- CN109360226A (application CN201811210852.8A)
- Authority
- CN
- China
- Prior art keywords
- tracking
- target
- frame
- tracking target
- candidate frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention proposes a multi-object tracking method based on time-series multi-feature fusion. The method obtains the class and candidate boxes of each tracked target from a multi-target detection algorithm; computes each target's predicted motion center with a convolutional network and a correlation filter and uses it to screen candidate boxes; computes appearance similarity scores; computes motion similarity scores; computes interaction-feature similarity scores; converts a matched screened candidate box into the target's tracking box in the current frame image and updates the target's feature information; computes the predicted motion center and screens candidate boxes for tracked targets that matched no candidate box; builds new tracked targets from candidate boxes not associated with any existing tracked target; computes the overlap between tracked targets with intersection over union; and declares a tracked target that remains in the lost state over several consecutive frames to have disappeared. Compared with the prior art, the invention improves tracking accuracy.
Description
Technical field
The present invention relates to computer vision and the technical field of target tracking, and more particularly to a multi-object tracking method based on time-series multi-feature fusion.
Background art
Target tracking means first detecting the targets of interest in an image sequence, locating them accurately, and then continuously updating their motion information as they move, so as to track them persistently. Target tracking divides into multi-object tracking and single-object tracking. Single-object tracking follows one target of interest; its task is to design a motion model or appearance model that withstands scale change, occlusion, illumination change, and similar factors, and to mark the target's image position frame by frame. Compared with single-object tracking, multi-object tracking must solve two additional tasks: discovering and handling targets that newly appear in or disappear from the video sequence, and maintaining the identity of each target.
Target initialization, frequent occlusion, targets leaving the detection region, multiple targets with similar appearance, and interaction among multiple targets all make multi-object tracking harder. To judge newly appearing and disappearing targets in time, multi-object tracking algorithms generally build on multi-target detection.
In recent years, with the development of deep learning, the field of computer vision has advanced rapidly. Object detection algorithms have become accurate and fast. In multi-object tracking, however, the core difficulties have not been fully solved, and detection-based data association still leaves much room for improvement. The innovation of the invention is to predict each target's position with a correlation filtering algorithm, reducing the dependence on the detection algorithm, and to propose an LSTM (Long Short-Term Memory) network framework over position, appearance, motion, and interaction features; by extracting a highly discriminative feature model, it overcomes occlusion among multiple targets and improves the precision of multi-object tracking.
At present, the prevailing approach in multi-object tracking is detector-driven data association. Such methods handle target initialization, disappearance, and scale change well, but they still over-depend on detector performance and cannot well distinguish mutually occluding targets or targets with similar appearance.
Summary of the invention
To solve the above technical problems, the invention proposes a multi-object tracking method based on time-series multi-feature data association.
The technical scheme of the invention is a multi-object tracking method based on time-series multi-feature data association, comprising the following steps:
Step 1: detect the tracked targets in a frame image with the SSD multi-target detection algorithm; compare the confidence of each SSD detection with a confidence threshold, and record the class of each tracked target and its candidate boxes;
Step 2: extract, with a convolutional network, the convolution features of each tracked target at its box position in the current frame; compute a response confidence score at each position of the current frame image with the target's correlation filter; define the highest-scoring point as the target's predicted motion center in the current frame image, and screen the candidate boxes by this predicted center;
Step 3: for targets in the tracking or lost state, compute the appearance similarity score between each target and each screened candidate box;
Step 4: for targets in the tracking or lost state, compute the motion similarity score between each target and each screened candidate box;
Step 5: for targets in the tracking or lost state, compute the interaction-feature similarity score between each target and each screened candidate box;
Step 6: if a tracked target in the tracking or lost state matches a candidate box, compare the total similarity score with the matching score threshold; when the total similarity score exceeds the threshold, the candidate box is converted into the target's tracking box in the current frame image, and the target's appearance feature, velocity feature, and interaction feature information is updated; if a target in the tracking or lost state matches no candidate box, its state information is updated by step 2;
Step 7: candidate boxes not associated with any existing tracked target are identified as new tracked targets; initialize each new target, establish it, build its position feature model, appearance feature model, velocity feature model, and interaction feature model, set its state to tracking, and perform data-association matching in subsequent frame images;
Step 8: re-examine every tracked target of the current frame that is in the tracking state, and compute the overlap between tracked targets with intersection over union;
Step 9: a tracked target that remains in the lost state over consecutive multi-frame images is declared a disappeared target; save the data information of its tracking state and no longer perform data-matching operations on it.
Preferably, the frame image in step 1 is the m-th frame image, the number of tracked-target classes in step 1 is Nm, and the candidate boxes of the tracked targets in step 1 are:
Di,m={xi,m∈[li,m, li,m+lenthi,m], yi,m∈[wi,m, wi,m+widthi,m] | (xi,m, yi,m)}, i∈[1,Km]
where Km is the number of candidate boxes of tracked targets in the m-th frame image, li,m is the X-axis start coordinate of the candidate box of the i-th tracked target in the m-th frame image, wi,m is the Y-axis start coordinate of that candidate box, lenthi,m is the length of that candidate box, and widthi,m is its width;
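The candidate-box set Di,m above is an axis-aligned rectangle given by its start corner (li,m, wi,m) and its length and width. A minimal sketch of that representation (the class and method names are illustrative, not from the patent; the `lenth` spelling is kept to match the patent's symbols):

```python
from dataclasses import dataclass

@dataclass
class CandidateBox:
    """Candidate box D_{i,m}: x in [l, l+lenth], y in [w, w+width]."""
    l: float      # X-axis start coordinate l_{i,m}
    w: float      # Y-axis start coordinate w_{i,m}
    lenth: float  # length lenth_{i,m} (spelling kept from the patent)
    width: float  # width width_{i,m}

    def contains(self, x, y):
        # Membership test for the set D_{i,m}.
        return (self.l <= x <= self.l + self.lenth
                and self.w <= y <= self.w + self.width)

    def center(self):
        # c_{i,m} = (l + lenth/2, w + width/2), used by steps 2 and 4.
        return (self.l + self.lenth / 2, self.w + self.width / 2)
```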
Preferably, the convolutional network in step 2 is a VGG16 network pre-trained on the ImageNet classification task, and the first-layer feature vector of the tracked target's position box is extracted by the VGG16 network;
the two-dimensional feature vector Xc of channel c is converted by an interpolation model into a feature vector of a one-dimensional continuous space:
Jc{Xc}(t) = Σn=0..Nc−1 Xc[n]·bc(t − nL/Nc), t∈[0,L)
where Xc is the two-dimensional feature vector of channel c, bc is defined as a cubic interpolation function, Nc is the number of samples of Xc, L is the length of the feature vector of the one-dimensional continuous space, and Channel is the number of channels;
The convolution operator is:
yi,m = Σc=1..Channel fc,i,m ∗ Jc{Xc}
where yi,m is the response of tracked target i in the m-th image, Xc is the two-dimensional feature vector of channel c, Channel is the number of channels, Jc{Xc} is the feature vector of the one-dimensional continuous space of channel c, and fc,i,m is the correlation filter of tracked target i for channel c in the m-th frame image;
The correlation filter is trained on training samples: it is obtained from n given training pairs {(yi,q, y'i,q)} (q∈[m−n, m−1]), that is, by minimizing the objective function:
E(fi,m) = Σj=1..n αj·‖yi,m−j − y'i,m−j‖² + Σc=1..Channel ‖w·fc,i,m‖²
where yi,m−j is the response of tracked target i in the (m−j)-th image, y'i,m−j is the ideal Gaussian distribution for yi,m−j, fc,i,m is the correlation filter of tracked target i for channel c in the m-th frame image, the weight αj is the impact factor of training sample j, and w is a penalty function; the correlation filter of each channel is obtained by training;
From the response yi,m(l), l∈[0,L), of tracked target i in the m-th image, take the lp,i,m that maximizes yi,m(l):
lp,i,m = argmax(yi,m(l)), l∈[0,L)
where L is the length of the feature vector of the one-dimensional continuous space;
convert lp,i,m into the corresponding point of the channel's two-dimensional feature vector, reduce it to a two-dimensional coordinate, and map it to the coordinate point pi,m=(xp,i,m, yp,i,m) in the current frame, which is the predicted motion center of the i-th tracked target Ti in the m-th frame image;
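Picking the predicted motion center reduces to an argmax over the one-dimensional response yi,m(l) followed by a mapping back to two-dimensional image coordinates. A sketch under assumptions the patent does not spell out (row-major flattening, a uniform grid stride, and names invented here):

```python
def predicted_center(response, grid_w, scale=1.0, origin=(0.0, 0.0)):
    """response: flattened confidence scores y_{i,m}(l), l in [0, L).
    grid_w: number of columns of the 2-D response grid (assumed row-major).
    Returns the image coordinates p_{i,m} of the highest response."""
    lp = max(range(len(response)), key=lambda l: response[l])  # argmax over l
    row, col = divmod(lp, grid_w)           # back to 2-D grid indices
    ox, oy = origin                         # grid origin in image coordinates
    return (ox + col * scale, oy + row * scale)
```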
If tracked target Ti is in the tracking state, only candidate boxes around the predicted position region are selected for subsequent target-data matching:
let the length of tracked target Ti in the previous frame be lenthi,m-1 and its width widthi,m-1, let the predicted motion center of the i-th tracked target Ti in the m-th frame image be pi,m=(xp,i,m, yp,i,m), and let the candidate-box center of the i-th tracked target in the m-th frame image be ci,m=(li,m+lenthi,m/2, wi,m+widthi,m/2), i∈[1,Km]; when the distance between the candidate-box center and the predicted motion center satisfies the condition:
d(pi,m,ci,m)=(xp,i,m−li,m−lenthi,m/2)²+(yp,i,m−wi,m−widthi,m/2)² < min(lenthi,m-1/2, widthi,m-1/2)
the candidate boxes satisfying the condition enter subsequent target-data matching;
if tracked target Ti is in the lost state, candidate boxes are screened near its position in the frame before its disappearance:
take its predicted motion center in the frame before its disappearance as ti,m=(xt,i,m, yt,i,m), its length as lenthi,m-1, and its width as widthi,m-1; when the distance between a candidate-box center and the disappearance center d(ti,m, ci,m) satisfies the condition:
d(ti,m,ci,m)=(xt,i,m−li,m−lenthi,m/2)²+(yt,i,m−wi,m−widthi,m/2)² < min(lenthi,m-1/2, widthi,m-1/2)
the candidate boxes satisfying the condition enter subsequent target-data matching;
if tracked target Ti failed to match, the predicted motion center can be used to update its candidate-box center:
update the candidate-box center of tracked target Ti to the predicted motion center pi,m=(xp,i,m, yp,i,m), while the length and width of the candidate box remain those of the (m−1)-th frame image;
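The screening rules above all compare the same quantity: the squared offset between a candidate-box center and a reference center (predicted motion center or disappearance center) against min(lenthi,m-1/2, widthi,m-1/2). A literal sketch of the condition exactly as written (function name invented here):

```python
def passes_screen(ref_center, box_center, prev_lenth, prev_width):
    """Keep a candidate box whose center lies close to the reference center.
    Implements the patent's comparison verbatim:
    d = dx^2 + dy^2 < min(lenth_{i,m-1}/2, width_{i,m-1}/2)."""
    dx = ref_center[0] - box_center[0]
    dy = ref_center[1] - box_center[1]
    d = dx * dx + dy * dy
    return d < min(prev_lenth / 2, prev_width / 2)
```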
Preferably, the screened candidate boxes in step 3 are the candidate boxes screened by the predicted motion center in step 2;
the appearance similarity score in step 3 is specifically computed as follows:
pass the screened candidate box Di,m of the i-th tracked target in the m-th frame image from step 2 through a VGG16 network with its last fully connected layer removed, obtaining the N-dimensional appearance feature vector of tracked target Ti in the m-th frame image;
train, end to end on the training set of a public multi-object tracking dataset, an appearance LSTM network together with a first fully connected layer FC1;
pass the image data of the preceding M frames of tracked target Ti through the same VGG16 network with its last fully connected layer removed to extract M N-dimensional appearance feature vectors, then pass them through the appearance LSTM network to extract an N-dimensional joint historical appearance feature vector;
concatenate the two vectors and pass them through the first fully connected layer FC1 to obtain the appearance similarity score SA(Ti,Di,m) between tracked target Ti and candidate box Di,m; if some preceding frame of target Ti has not yet produced image data, a zero value is substituted;
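Step 3 feeds the LSTM the appearance vectors of the preceding M frames and substitutes zeros for frames that do not exist yet. A sketch of just that padding step (names and dimensions are placeholders; the VGG16 and LSTM stages themselves are omitted):

```python
def pad_history(history, M, dim):
    """history: per-frame appearance feature vectors, oldest first.
    Returns exactly M vectors: frames not yet generated are replaced
    by zero vectors, as step 3 prescribes."""
    missing = max(0, M - len(history))
    pad = [[0.0] * dim for _ in range(missing)]   # zero-substituted frames
    return pad + history[-M:]                     # keep the most recent M
```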
Preferably, the motion similarity score in step 4 is computed as follows:
the center of the screened candidate box Di,m of the i-th tracked target in the m-th frame image from step 2 is:
(li,m+lenthi,m/2, wi,m+widthi,m/2)
the candidate-box center of tracked target Ti in the previous frame image is:
(li,m-1+lenthi,m-1/2, wi,m-1+widthi,m-1/2)
the velocity feature vector of the i-th tracked target in the m-th frame image is computed from these two centers;
train, end to end on the training set of a public multi-object tracking dataset, a velocity LSTM network together with a second fully connected layer FC2;
pass the velocity feature vectors of the i-th tracked target over M frames through the velocity LSTM network to extract a joint historical motion feature vector;
concatenate the two vectors and pass them through the second fully connected layer FC2, so that the motion similarity score between a tracked target Ti in the tracking or lost state and candidate box Di,m is SV(Ti,Di,m); if some preceding frame of target Ti has not yet produced motion data, a zero value is substituted;
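The velocity feature compares the candidate-box center in frame m with the target's box center in frame m−1. A sketch returning the center displacement (dx, dy); treating the feature as the plain center difference is an assumption, since the patent only names the two centers:

```python
def velocity_feature(prev_box, cur_box):
    """prev_box, cur_box: (l, w, lenth, width) tuples for frames m-1 and m.
    Velocity = difference of box centers c_{i,m} - c_{i,m-1} (assumed)."""
    (l0, w0, a0, b0), (l1, w1, a1, b1) = prev_box, cur_box
    c0 = (l0 + a0 / 2, w0 + b0 / 2)   # center in frame m-1
    c1 = (l1 + a1 / 2, w1 + b1 / 2)   # center in frame m
    return (c1[0] - c0[0], c1[1] - c0[1])
```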
Preferably, the interaction-feature similarity score in step 5 is computed as follows:
centered on the center coordinate ci,m=(li,m+lenthi,m/2, wi,m+widthi,m/2) of the screened candidate box Di,m, establish a fixed-size window of length and width H; points in the window coinciding with the center coordinates ci',m of other candidate boxes are set to 1, the center of the fixed-size window is also set to 1, and the remaining positions are 0, over the region:
x∈[li,m+lenthi,m/2−H/2, li,m+lenthi,m/2+H/2]
y∈[wi,m+widthi,m/2−H/2, wi,m+widthi,m/2+H/2]
then convert the window into a one-dimensional vector of length H², obtaining the interaction feature vector of the candidate box;
train, end to end on the training set of a public multi-object tracking dataset, an interaction LSTM network together with a third fully connected layer FC3;
centered on the center coordinate of target Ti in a given frame image, establish a fixed-size window of length and width H; points in the window coinciding with the center coordinates of other tracked targets are set to 1, the window center is also set to 1, and the remaining positions are 0, giving the interaction feature vector of target Ti in that frame; pass the interaction feature vectors of the preceding M frames of target Ti through the interaction LSTM network to extract a joint historical interaction feature vector;
concatenate the two vectors and pass them through the third fully connected layer FC3 to obtain the interaction-feature similarity score SI(Ti,Di,m) between Ti and Di,m; if some preceding frame of target Ti has not yet produced an interaction feature vector, a zero value is substituted;
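The interaction feature is an H×H occupancy map centered on a box: the map's own center and every other center falling inside the window are 1, everything else 0, flattened to length H². A sketch with a small H for readability (the embodiment uses H=300; the integer rounding of window-relative coordinates is an assumption):

```python
def interaction_vector(center, other_centers, H):
    """H x H occupancy map around `center` (step 5): mark the map's own
    center and every other target/candidate center inside the window,
    then flatten to a length-H^2 vector."""
    cx, cy = center
    grid = [[0] * H for _ in range(H)]
    grid[H // 2][H // 2] = 1                 # the box's own center
    for ox, oy in other_centers:
        gx = int(ox - cx + H / 2)            # window-relative column (assumed rounding)
        gy = int(oy - cy + H / 2)            # window-relative row
        if 0 <= gx < H and 0 <= gy < H:
            grid[gy][gx] = 1                 # another center falls in the window
    return [v for row in grid for v in row]  # flattened, length H^2
```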
Preferably, the total similarity score in step 6 is:
Stotal,i = α1SA(Ti,Di,m) + α2SV(Ti,Di,m) + α3SI(Ti,Di,m)
where α1 is the appearance-feature similarity coefficient, α2 is the velocity-feature similarity coefficient, and α3 is the interaction-feature similarity coefficient;
when the total similarity score exceeds the matching score threshold, Stotal,i > β, candidate box Di,m is converted into the tracking box of the tracked target in the m-th frame image;
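Matching in step 6 then reduces to the weighted sum Stotal,i = α1SA + α2SV + α3SI compared against β. A sketch with placeholder weights and threshold (the patent does not fix the values of α1, α2, α3, or β):

```python
def total_score(s_a, s_v, s_i, alpha=(0.4, 0.3, 0.3)):
    """S_total,i = a1*S_A + a2*S_V + a3*S_I; the weights are assumed."""
    a1, a2, a3 = alpha
    return a1 * s_a + a2 * s_v + a3 * s_i

def is_match(s_a, s_v, s_i, beta=0.5):
    # The candidate box becomes the tracking box only if S_total,i > beta.
    return total_score(s_a, s_v, s_i) > beta
```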
in step 6, updating a target's state information by step 2 keeps the target in the tracking state; a target that fails to match over several consecutive frames while in the tracking state is converted to the lost state, and the method of step 2 is no longer applied to it;
Preferably, the overlap between tracked targets in step 8 is IOU = area(A∩B)/area(A∪B), where A is the tracking-box area of tracked target Ta and B is the tracking-box area of tracked target Tb; for tracked targets Ta and Tb with IOU > 0.8, compare the total similarity scores Stotal,a and Stotal,b obtained in step 6, convert the tracked target with the lower of Stotal,a and Stotal,b to the lost state, and keep the tracked target with the higher score in the tracking state;
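The overlap test of step 8 is the standard intersection over union of two tracking boxes, with the lower-scoring target of any pair above 0.8 demoted to the lost state. A sketch of the IOU computation on (l, w, lenth, width) boxes:

```python
def iou(a, b):
    """a, b: (l, w, lenth, width) tracking boxes; returns |A∩B| / |A∪B|."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

# Step 8: if iou(box_a, box_b) > 0.8, the target with the lower
# total similarity score S_total is converted to the lost state.
```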
Preferably, the several consecutive frames in step 9 number MD frames.
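Steps 6 and 9 together imply a small life cycle for each target: it stays in the tracking state while matches succeed, drops to the lost state after repeated match failures, and is declared disappeared after MD consecutive lost frames. A sketch of that state machine (the patience and MD values below are placeholders, not from the patent):

```python
TRACKING, LOST, DISAPPEARED = "tracking", "lost", "disappeared"

class Target:
    """Life cycle of one tracked target (steps 6 and 9); thresholds assumed."""
    def __init__(self, tid, patience=3, max_lost=30):
        self.tid, self.state = tid, TRACKING
        self.misses = 0            # consecutive frames without a matched box
        self.patience = patience   # misses tolerated before entering "lost"
        self.max_lost = max_lost   # M_D: lost frames before "disappeared"

    def on_match(self):
        # Step 6: a candidate box matched; (re)enter the tracking state.
        self.state, self.misses = TRACKING, 0

    def on_no_match(self):
        if self.state == DISAPPEARED:
            return
        self.misses += 1
        if self.state == TRACKING and self.misses >= self.patience:
            self.state = LOST          # repeated failures -> lost (step 6)
        elif self.state == LOST and self.misses >= self.patience + self.max_lost:
            self.state = DISAPPEARED   # lost for M_D frames -> disappeared (step 9)
```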
Compared with the prior art, the present invention has the following advantages and beneficial effects:
the method of the invention builds an LSTM network framework over the characteristics of each target along the time series, so that the system, drawing on historical data features, can handle targets that are occluded for long periods and better improves the accuracy of target-data matching;
the invention combines features of the tracked target in four respects: position, appearance, motion, and interaction. A convolutional network extracts deep and shallow appearance feature information of the object, improving the discrimination of the tracked target's features; the direction and speed of the object's motion in each frame carry the continuity inherent in object motion, improving matching accuracy; from the interaction feature information of objects over consecutive frames, an interaction model is proposed that analyses the force relations between a tracked target and the surrounding targets, again improving matching accuracy. Matching data by combining multiple cues improves the accuracy of target tracking;
a fast correlation-filter self-tracking method applied to each target computes the target's position in the current frame and filters out the candidate boxes in that position region, greatly reducing the computation of the data association algorithm. For tracking-state targets missed by the detector, the self-tracking algorithm can keep tracking on its own, solving the over-dependence on detector performance.
Description of the drawings
Fig. 1: the overall framework of the technical scheme of the invention;
Fig. 2: state diagram of the life cycle of a single target;
Fig. 3: appearance feature model matching diagram;
Fig. 4: velocity feature model matching diagram;
Fig. 5: interaction feature model matching diagram;
Fig. 6: interaction feature LSTM network model matching diagram;
Fig. 7: schematic diagram of multi-object tracking by the system.
Specific embodiment
To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below with reference to the accompanying drawings and an embodiment. It should be understood that the implementation examples described herein serve only to illustrate and explain the present invention and are not intended to limit it.
The embodiment of the present invention is introduced below with reference to Figs. 1 to 6. The technical scheme of the embodiment is a multi-object tracking method based on time-series multi-feature data association, comprising the following steps:
Step 1: detect the tracked targets in a frame image with the SSD multi-target detection algorithm; compare the confidence of each SSD detection with the confidence threshold, and record the class of each tracked target and its candidate boxes;
the frame image in step 1 is the m-th frame image, the number of tracked-target classes in step 1 is Nm, and the candidate boxes of the tracked targets in step 1 are:
Di,m={xi,m∈[li,m, li,m+lenthi,m], yi,m∈[wi,m, wi,m+widthi,m] | (xi,m, yi,m)}, i∈[1,Km]
where Km is the number of candidate boxes of tracked targets in the m-th frame image, li,m is the X-axis start coordinate of the candidate box of the i-th tracked target in the m-th frame image, wi,m is the Y-axis start coordinate of that candidate box, lenthi,m is the length of that candidate box, and widthi,m is its width;
Step 2: extract, with a convolutional network, the convolution features of each tracked target at its box position in the current frame; compute a response confidence score at each position of the current frame image with the target's correlation filter; define the highest-scoring point as the target's predicted motion center in the current frame image, and screen the candidate boxes by this predicted center;
the convolutional network in step 2 is a VGG16 network pre-trained on the ImageNet classification task, and the first-layer feature vector of the tracked target's position box is extracted by the VGG16 network;
the two-dimensional feature vector Xc of channel c is converted by an interpolation model into a feature vector of a one-dimensional continuous space:
Jc{Xc}(t) = Σn=0..Nc−1 Xc[n]·bc(t − nL/Nc), t∈[0,L)
where Xc is the two-dimensional feature vector of channel c, bc is defined as a cubic interpolation function, Nc is the number of samples of Xc, L is the length of the feature vector of the one-dimensional continuous space, and Channel=512 is the number of channels;
The convolution operator is:
yi,m = Σc=1..Channel fc,i,m ∗ Jc{Xc}
where yi,m is the response of tracked target i in the m-th image, Xc is the two-dimensional feature vector of channel c, Channel is the number of channels, Jc{Xc} is the feature vector of the one-dimensional continuous space of channel c, and fc,i,m is the correlation filter of tracked target i for channel c in the m-th frame image;
The correlation filter is trained on training samples: it is obtained from n given training pairs {(yi,q, y'i,q)} (q∈[m−n, m−1]), that is, by minimizing the objective function:
E(fi,m) = Σj=1..n αj·‖yi,m−j − y'i,m−j‖² + Σc=1..Channel ‖w·fc,i,m‖²
where yi,m−j is the response of tracked target i in the (m−j)-th image, y'i,m−j is the ideal Gaussian distribution for yi,m−j, fc,i,m is the correlation filter of tracked target i for channel c in the m-th frame image, the weight αj is the impact factor of training sample j, and w is a penalty function; the correlation filter of each channel is obtained by training; the number of training samples is n=30;
From the response yi,m(l), l∈[0,L), of tracked target i in the m-th image, take the lp,i,m that maximizes yi,m(l):
lp,i,m = argmax(yi,m(l)), l∈[0,L)
where L is the length of the feature vector of the one-dimensional continuous space;
convert lp,i,m into the corresponding point of the channel's two-dimensional feature vector, reduce it to a two-dimensional coordinate, and map it to the coordinate point pi,m=(xp,i,m, yp,i,m) in the current frame, which is the predicted motion center of the i-th tracked target Ti in the m-th frame image;
If tracked target Ti is in the tracking state, only candidate boxes around the predicted position region are selected for subsequent target-data matching:
let the length of tracked target Ti in the previous frame be lenthi,m-1 and its width widthi,m-1, let the predicted motion center of the i-th tracked target Ti in the m-th frame image be pi,m=(xp,i,m, yp,i,m), and let the candidate-box center of the i-th tracked target in the m-th frame image be ci,m=(li,m+lenthi,m/2, wi,m+widthi,m/2), i∈[1,Km]; when the distance between the candidate-box center and the predicted motion center satisfies the condition:
d(pi,m,ci,m)=(xp,i,m−li,m−lenthi,m/2)²+(yp,i,m−wi,m−widthi,m/2)² < min(lenthi,m-1/2, widthi,m-1/2)
the candidate boxes satisfying the condition enter subsequent target-data matching;
if tracked target Ti is in the lost state, candidate boxes are screened near its position in the frame before its disappearance:
take its predicted motion center in the frame before its disappearance as ti,m=(xt,i,m, yt,i,m), its length as lenthi,m-1, and its width as widthi,m-1; when the distance between a candidate-box center and the disappearance center d(ti,m, ci,m) satisfies the condition:
d(ti,m,ci,m)=(xt,i,m−li,m−lenthi,m/2)²+(yt,i,m−wi,m−widthi,m/2)² < min(lenthi,m-1/2, widthi,m-1/2)
the candidate boxes satisfying the condition enter subsequent target-data matching;
if tracked target Ti failed to match, the predicted motion center can be used to update its candidate-box center:
update the candidate-box center of tracked target Ti to the predicted motion center pi,m=(xp,i,m, yp,i,m), while the length and width of the candidate box remain those of the (m−1)-th frame image;
Step 3: for targets in the tracking or lost state, compute the appearance similarity score between each target and each screened candidate box;
the screened candidate boxes in step 3 are the candidate boxes screened by the predicted motion center in step 2;
the appearance similarity score in step 3 is specifically computed as follows:
pass the candidate box Di,m of the i-th tracked target in the m-th frame image through a VGG16 network with its last fully connected layer removed, obtaining the N=1000-dimensional appearance feature vector of tracked target Ti in the m-th frame image;
train, end to end on the training set given by the public multi-object tracking dataset MOT17-Challenge, an appearance LSTM network together with a first fully connected layer FC1;
pass the image data of the preceding M frames of tracked target Ti through the same VGG16 network with its last fully connected layer removed to extract M N-dimensional appearance feature vectors, then pass them through the appearance LSTM network to extract an N-dimensional joint historical appearance feature vector;
concatenate the two vectors and pass them through the first fully connected layer FC1 to obtain the appearance similarity score SA(Ti,Di,m) between tracked target Ti and candidate box Di,m; if some preceding frame of target Ti has not yet produced image data, a zero value is substituted;
Step 4: for targets in the tracking or lost state, compute the motion similarity score between each target and each candidate box;
the motion similarity score in step 4 is computed as follows:
the center of the screened candidate box Di,m of the i-th tracked target in the m-th frame image from step 2 is:
(li,m+lenthi,m/2, wi,m+widthi,m/2)
the candidate-box center of tracked target Ti in the previous frame image is:
(li,m-1+lenthi,m-1/2, wi,m-1+widthi,m-1/2)
the velocity feature vector of the i-th tracked target in the m-th frame image is computed from these two centers;
train, end to end on the training set given by the public multi-object tracking dataset MOT17-Challenge, a velocity LSTM network together with a second fully connected layer FC2;
pass the velocity feature vectors of the i-th tracked target over M frames through the velocity LSTM network to extract a joint historical motion feature vector;
concatenate the two vectors and pass them through the second fully connected layer FC2, so that the motion similarity score between a tracked target Ti in the tracking or lost state and candidate box Di,m is SV(Ti,Di,m); if some preceding frame of target Ti has not yet produced motion data, a zero value is substituted;
Step 5: compute the interaction feature similarity score between the tracked targets in the tracking or lost state and the candidate boxes;
The interaction feature similarity score in Step 5 is computed as follows:
Centered on the center coordinate c_{i,m} = (l_{i,m} + length_{i,m}/2, w_{i,m} + width_{i,m}/2) of the screened candidate box D_{i,m}, a fixed-size box of height and width H is established; the points inside the box that coincide with the center coordinates c_{i',m} of the other candidate boxes are set to 1, the center of the fixed-size box is also set to 1, and the remaining positions are set to 0, yielding a binary occupancy grid over:
x ∈ [l_{i,m} + length_{i,m}/2 − H/2, l_{i,m} + length_{i,m}/2 + H/2]
y ∈ [w_{i,m} + width_{i,m}/2 − H/2, w_{i,m} + width_{i,m}/2 + H/2]
The grid is then flattened into a one-dimensional vector of length H^2, giving the interaction feature vector of the candidate box;
The interaction-feature LSTM network and the third fully connected layer FC3 are obtained by end-to-end training on the given training set of the public multi-object tracking dataset MOT17-Challenge;
Likewise, centered on the center coordinate of target T_i in a given frame image, a fixed-size box of height and width H = 300 is established; the points inside the box that coincide with the center coordinates of the other tracked targets are set to 1, the center of the fixed-size box is also set to 1, and the remaining positions are set to 0, yielding the interaction feature vector of target T_i in that frame; the interaction feature vectors of target T_i over the preceding M frames are passed through the interaction-feature LSTM network to extract the joint historical interaction feature vector;
The two interaction feature vectors are concatenated and passed through the third fully connected layer FC3 to obtain the interaction feature similarity score S_I(T_i, D_{i,m}) between T_i and D_{i,m}; if the interaction feature vector of some preceding frame of target T_i has not yet been generated, it is replaced with a zero value;
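The binary occupancy grid above can be sketched as follows. The rounding of neighbour centers to integer grid cells is an assumption (the patent only says coinciding points are set to 1):

```python
import numpy as np

def interaction_feature(center, other_centers, H=300):
    """Build the H x H binary occupancy grid centered on `center`: cells that
    coincide with another target's center are set to 1, the grid center itself
    is set to 1, and everything else is 0. Returns the flattened length-H^2
    interaction feature vector."""
    cx, cy = center
    grid = np.zeros((H, H), dtype=np.uint8)
    grid[H // 2, H // 2] = 1                      # the target's own center
    for ox, oy in other_centers:
        gx = int(round(ox - cx + H / 2))          # shift into grid coordinates
        gy = int(round(oy - cy + H / 2))
        if 0 <= gx < H and 0 <= gy < H:           # keep only neighbours inside the box
            grid[gx, gy] = 1
    return grid.reshape(-1)                       # one-dimensional vector of length H*H

# One neighbour falls inside the 300x300 box, the other far outside it.
vec = interaction_feature((100, 100), [(120, 90), (900, 900)], H=300)
```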
Step 6: if a tracked target in the tracking or lost state is matched to a candidate box, the total similarity score is compared with the matching score threshold; when the total similarity score exceeds the threshold, the candidate box is converted into the tracking box of the tracked target in the current frame image, and the appearance feature, velocity feature, and interaction feature information of the tracked target is updated; if a tracked target in the tracking or lost state is not matched to any candidate box, its state information is updated by Step 2;
The total similarity score in Step 6 is:
S_{total,i} = α_1·S_A(T_i, D_{i,m}) + α_2·S_V(T_i, D_{i,m}) + α_3·S_I(T_i, D_{i,m})
where α_1 is the appearance feature similarity coefficient, α_2 is the velocity feature similarity coefficient, and α_3 is the interaction feature similarity coefficient;
When the total similarity score exceeds the matching score threshold, i.e. S_{total,i} > β, the candidate box D_{i,m} is converted into the tracking box of the tracked target in the m-th frame image;
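The weighted fusion and threshold test can be sketched as follows; the weight values (α_1..α_3) and the threshold β are illustrative, since the patent leaves them unspecified:

```python
def total_similarity(s_a, s_v, s_i, alphas=(0.5, 0.3, 0.2)):
    """Fuse the appearance, motion, and interaction scores of one
    (target, candidate) pair into S_total. The weights are illustrative."""
    a1, a2, a3 = alphas
    return a1 * s_a + a2 * s_v + a3 * s_i

def is_match(s_total, beta=0.5):
    """Accept the candidate box as the new tracking box when the fused score
    exceeds the matching threshold beta (value also illustrative)."""
    return s_total > beta

# 0.5*0.9 + 0.3*0.8 + 0.2*0.6 = 0.81, which clears beta = 0.5.
s = total_similarity(0.9, 0.8, 0.6)
```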
Updating the state information of a tracked target by Step 2 in Step 6 means keeping the tracked target in the tracking state; a tracked target in the tracking state that fails to be matched for several consecutive frames is converted to the lost state, and the method of Step 2 is no longer applied to it;
Step 7: the candidate boxes that were not matched are associated with respect to the already existing tracked targets; each unmatched candidate box is identified as a new tracked target, which is initialized and established; its position feature model, appearance feature model, velocity feature model, and interaction feature model are constructed, its state is set to the tracking state, and data-association matching and tracking are performed on it in subsequent frame images;
Step 8: each tracked target of the current frame that is in the tracking state is examined again, and the overlap between every pair of tracked targets is computed using the intersection-over-union;
The overlap between tracked targets in Step 8 is:
IOU = (A ∩ B) / (A ∪ B)
where A is the tracking box area of tracked target T_a and B is the tracking box area of tracked target T_b; for tracked targets T_a and T_b with IOU > 0.8, the total similarity scores S_{total,a} and S_{total,b} obtained in Step 6 are compared: the tracked target with the lower score is converted to the lost state, and the tracked target with the higher score is kept in the tracking state;
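The overlap test in Step 8 is a standard intersection-over-union; a minimal sketch over the patent's (l, w, length, width) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (l, w, length, width), with (l, w) the top-left corner."""
    ax0, ay0, aw, ah = box_a
    bx0, by0, bw, bh = box_b
    ix = max(0.0, min(ax0 + aw, bx0 + bw) - max(ax0, bx0))  # overlap along x
    iy = max(0.0, min(ay0 + ah, by0 + bh) - max(ay0, by0))  # overlap along y
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Identical boxes overlap completely; pairs with IoU > 0.8 would trigger
# the total-similarity-score comparison described above.
full = iou((0, 0, 10, 10), (0, 0, 10, 10))
```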
Step 9: a tracked target that has remained in the lost state throughout consecutive multi-frame images is regarded as a target that has disappeared; the data of its tracked trajectory is saved, and no further data matching is performed on it.
The consecutive multi-frame image sequence in Step 9 is M_D = 30 frames.
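Steps 6 through 9 together imply a small per-target state machine, which can be sketched as follows. The `lost_patience` value is illustrative (the patent only says "several consecutive frames"); `disappear_patience` corresponds to M_D = 30:

```python
class TrackState:
    """Per-target state machine implied by Steps 6-9: TRACKING -> LOST after
    `lost_patience` consecutive misses, then LOST -> DISAPPEARED after
    `disappear_patience` (= M_D) further consecutive lost frames."""
    TRACKING, LOST, DISAPPEARED = "tracking", "lost", "disappeared"

    def __init__(self, lost_patience=3, disappear_patience=30):
        self.state = self.TRACKING
        self.misses = 0
        self.lost_patience = lost_patience            # illustrative value
        self.disappear_patience = disappear_patience  # M_D = 30 in the patent

    def update(self, matched):
        if matched:
            self.state, self.misses = self.TRACKING, 0
        else:
            self.misses += 1
            if self.state == self.TRACKING and self.misses >= self.lost_patience:
                self.state, self.misses = self.LOST, 0
            elif self.state == self.LOST and self.misses >= self.disappear_patience:
                self.state = self.DISAPPEARED

# Two misses move the target to LOST; three more retire it entirely.
t = TrackState(lost_patience=2, disappear_patience=3)
for _ in range(5):
    t.update(matched=False)
```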
It should be understood that the parts not elaborated in this specification belong to the prior art.
It should also be understood that the above description of preferred embodiments, although relatively detailed, shall not be regarded as limiting the scope of the patent protection of the present invention; those skilled in the art, under the inspiration of the present invention and without departing from the scope protected by the claims of the present invention, may make replacements or variations, all of which fall within the protection scope of the present invention; the claimed scope of the present invention shall be determined by the appended claims.
Claims (9)
1. A multi-object tracking method based on time-series multi-feature fusion, characterized by comprising the following steps:
Step 1: detecting the tracked targets in a frame image with the SSD multi-object detection algorithm, comparing the confidence of each target detected by SSD with a confidence threshold, and collecting the categories of the tracked targets and the candidate boxes of the tracked targets;
Step 2: extracting the convolutional features of the position box of each tracked target in the current frame with a convolutional network, computing a response confidence score for every position in the current frame image with the correlation filter of the tracked target, defining the highest-scoring point as the motion-predicted center point of the tracked target in the current frame image, and screening the candidate boxes by the motion-predicted center point;
Step 3: computing the appearance similarity scores between the tracked targets in the tracking or lost state and the screened candidate boxes;
Step 4: computing the motion similarity scores between the tracked targets in the tracking or lost state and the screened candidate boxes;
Step 5: computing the interaction feature similarity scores between the tracked targets in the tracking or lost state and the screened candidate boxes;
Step 6: if a tracked target in the tracking or lost state is matched to a candidate box, comparing the total similarity score with the matching score threshold; when the total similarity score exceeds the threshold, converting the candidate box into the tracking box of the tracked target in the current frame image and updating the appearance feature, velocity feature, and interaction feature information of the tracked target; if a tracked target in the tracking or lost state is not matched to any candidate box, updating the state information of the tracked target by Step 2;
Step 7: associating the unmatched candidate boxes with respect to the already existing tracked targets, regarding each unmatched candidate box as a new tracked target, initializing and establishing the new tracked target, constructing its position feature model, appearance feature model, velocity feature model, and interaction feature model, setting its state to the tracking state, and performing data-association matching and tracking on it in subsequent frame images;
Step 8: examining again each tracked target of the current frame that is in the tracking state, and computing the overlap between every pair of tracked targets using the intersection-over-union;
Step 9: regarding a tracked target that has remained in the lost state throughout consecutive multi-frame images as a target that has disappeared, saving the data of its tracked trajectory, and performing no further data matching on it.
2. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the frame image in Step 1 is the m-th image, the number of tracked-target categories in Step 1 is N_m, and the candidate boxes of the tracked targets in Step 1 are:
D_{i,m} = { (x_{i,m}, y_{i,m}) | x_{i,m} ∈ [l_{i,m}, l_{i,m} + length_{i,m}], y_{i,m} ∈ [w_{i,m}, w_{i,m} + width_{i,m}] }, i ∈ [1, K_m]
where K_m is the number of candidate boxes of tracked targets in the m-th frame image, l_{i,m} is the starting X-axis coordinate of the candidate box of the i-th tracked target in the m-th frame image, w_{i,m} is the starting Y-axis coordinate of the candidate box of the i-th tracked target in the m-th frame image, length_{i,m} is the length of the candidate box of the i-th tracked target in the m-th frame image, and width_{i,m} is the width of the candidate box of the i-th tracked target in the m-th frame image.
3. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the convolutional network in Step 2 is a VGG16 network pre-trained on the ImageNet classification task, and the first-layer feature vector of the position box of the tracked target is extracted by the VGG16 network;
The two-dimensional feature vector x^c of channel c is converted into a feature vector of one-dimensional continuous space by an interpolation model:
J_c{x^c}(l) = Σ_{n=0}^{N_c−1} x^c[n] · b_c(l − n·L/N_c)
where x^c is the two-dimensional feature vector of channel c, b_c is defined as a cubic interpolation function, N_c is the number of samples of x^c, L is the length of the feature vector of one-dimensional continuous space, and Channel is the number of channels;
The convolution operator is:
y_{i,m} = Σ_{c=1}^{Channel} f_i^c ∗ J_c{x^c}
where y_{i,m} is the response of tracked target i in the m-th image, x^c is the two-dimensional feature vector of channel c, Channel is the number of channels, J_c{x^c} is the feature vector of the one-dimensional continuous space of channel c, and f_i^c is the correlation filter of tracked target i in channel c of the m-th frame image;
The correlation filter is trained from training samples as follows: given n training sample pairs {(y_{i,q}, y'_{i,q})}, q ∈ [m−n, m−1], the correlation filter is obtained by minimizing the objective function:
E(f_i) = Σ_{j=1}^{n} α_j·‖y_{i,m−j} − y'_{i,m−j}‖² + Σ_{c=1}^{Channel} ‖w·f_i^c‖²
where y_{i,m−j} is the response of tracked target i in the (m−j)-th image, y'_{i,m−j} is the ideal Gaussian distribution of y_{i,m−j}, f_i^c is the correlation filter of tracked target i in channel c of the m-th frame image, the weight α_j is the impact factor of training sample j, and w is the penalty term; the correlation filter of each channel is obtained by this training;
From the response y_{i,m}(l), l ∈ [0, L), of tracked target i in the m-th image, the maximizing position l_{p,i,m} is found:
l_{p,i,m} = argmax_{l ∈ [0, L)} y_{i,m}(l)
where L is the length of the feature vector of one-dimensional continuous space;
l_{p,i,m} is converted back into a point of the two-dimensional channel feature vector, reduced to two-dimensional coordinates, and mapped to the coordinate point p_{i,m} = (x_{p,i,m}, y_{p,i,m}) of the current frame, which is the motion-predicted center point of the i-th tracked target T_i in the m-th frame image;
If tracked target T_i is in the tracking state, only the candidate boxes around the predicted location area are selected for the subsequent target data matching:
Let the length of tracked target T_i in the previous frame be length_{i,m−1} and its width be width_{i,m−1}, let the motion-predicted center point of the i-th tracked target in the m-th frame image be p_{i,m} = (x_{p,i,m}, y_{p,i,m}), and let the candidate-box center point of the i-th tracked target in the m-th frame image be c_{i,m} = (l_{i,m} + length_{i,m}/2, w_{i,m} + width_{i,m}/2), i ∈ [1, K_m]; a candidate box is kept when the distance between its center point and the motion-predicted center point satisfies the condition:
d(p_{i,m}, c_{i,m}) = (x_{p,i,m} − l_{i,m} − length_{i,m}/2)² + (y_{p,i,m} − w_{i,m} − width_{i,m}/2)² < min(length_{i,m−1}/2, width_{i,m−1}/2)
and the candidate boxes satisfying this condition are used for the subsequent target data matching;
If tracked target T_i is in the lost state, the candidate boxes near its position in the frame before its disappearance are screened:
Let the motion-predicted center point in the frame before its disappearance be t_{i,m} = (x_{t,i,m}, y_{t,i,m}), with length length_{i,m−1} and width width_{i,m−1}; a candidate box is kept when the distance between its center and the disappearance center, d(t_{i,m}, c_{i,m}), satisfies the condition:
d(t_{i,m}, c_{i,m}) = (x_{t,i,m} − l_{i,m} − length_{i,m}/2)² + (y_{t,i,m} − w_{i,m} − width_{i,m}/2)² < min(length_{i,m−1}/2, width_{i,m−1}/2)
and the candidate boxes satisfying this condition are used for the subsequent target data matching;
If tracked target T_i failed to be matched, the motion-predicted center point is used to update its candidate-box center point: the candidate-box center point of tracked target T_i is updated to the motion-predicted center point p_{i,m} = (x_{p,i,m}, y_{p,i,m}), while the length and width of the candidate box remain those of the (m−1)-th frame image.
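The candidate screening condition of claim 3 can be sketched as follows; the box tuples and example coordinates are illustrative. Note the inequality is written in the claim with a squared distance on the left and an un-squared threshold on the right, and the sketch reproduces it as written:

```python
def passes_gate(pred_center, box, prev_length, prev_width):
    """Keep a candidate box only if the squared distance between its center
    and the motion-predicted center is below min(prev_length/2, prev_width/2),
    exactly as the inequality appears in claim 3."""
    xp, yp = pred_center
    l, w, length, width = box                  # (l, w) is the top-left corner
    d2 = (xp - l - length / 2) ** 2 + (yp - w - width / 2) ** 2
    return d2 < min(prev_length / 2, prev_width / 2)

# Candidate centered at (47, 49), prediction at (50, 50): squared distance 10,
# threshold min(40/2, 80/2) = 20, so the candidate passes the gate.
ok = passes_gate((50, 50), (40, 45, 14, 8), 40, 80)
```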
4. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the screened candidate boxes in Step 3 are the candidate boxes screened by the motion-predicted center point in Step 2;
The appearance similarity score in Step 3 is computed as follows:
The screened candidate box D_{i,m} of the i-th tracked target in the m-th frame image from Step 2 is passed through a VGG16 network with its last connected layer removed, yielding the N-dimensional appearance feature vector of tracked target T_i in the m-th frame image;
The appearance-feature LSTM network and the first fully connected layer FC1 are obtained by end-to-end training on the training set of a public multi-object tracking dataset;
The image data of the preceding M frames of tracked target T_i are passed through the same VGG16 network with its last connected layer removed, extracting M N-dimensional appearance feature vectors, which are then passed through the appearance-feature LSTM network to extract the N-dimensional joint historical appearance feature vector;
The two appearance feature vectors are concatenated and passed through the first fully connected layer FC1 to obtain the appearance similarity score S_A(T_i, D_{i,m}) between the tracked target T_i and the candidate box D_{i,m}; if the image data of some preceding frame of target T_i has not yet been generated, it is replaced with a zero value.
5. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the motion similarity score in Step 4 is computed as follows:
The center point of the screened candidate box D_{i,m} of the i-th tracked target in the m-th frame image from Step 2 is:
(l_{i,m} + length_{i,m}/2, w_{i,m} + width_{i,m}/2)
The candidate-box center of tracked target T_i in the previous frame image is:
(l_{i,m-1} + length_{i,m-1}/2, w_{i,m-1} + width_{i,m-1}/2)
The velocity feature vector of the i-th tracked target in the m-th frame image is the displacement between these two center points;
The velocity-feature LSTM network and the second fully connected layer FC2 are obtained by end-to-end training on the training set of a public multi-object tracking dataset;
The velocity feature vectors of the i-th tracked target over the preceding M frames are passed through the velocity-feature LSTM network to extract the joint historical motion feature vector; the two feature vectors are concatenated and passed through the second fully connected layer FC2, yielding the motion similarity score S_V(T_i, D_{i,m}) between the tracked target T_i in the tracking or lost state and the candidate box D_{i,m}; if the motion data of some preceding frame of target T_i has not yet been generated, it is replaced with a zero value.
6. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the interaction feature similarity score in Step 5 is computed as follows:
Centered on the center coordinate c_{i,m} = (l_{i,m} + length_{i,m}/2, w_{i,m} + width_{i,m}/2) of the screened candidate box D_{i,m}, a fixed-size box of height and width H is established; the points inside the box that coincide with the center coordinates c_{i',m} of the other candidate boxes are set to 1, the center of the fixed-size box is also set to 1, and the remaining positions are set to 0, yielding a binary occupancy grid over:
x ∈ [l_{i,m} + length_{i,m}/2 − H/2, l_{i,m} + length_{i,m}/2 + H/2]
y ∈ [w_{i,m} + width_{i,m}/2 − H/2, w_{i,m} + width_{i,m}/2 + H/2]
The grid is then flattened into a one-dimensional vector of length H^2, giving the interaction feature vector of the candidate box;
The interaction-feature LSTM network and the third fully connected layer FC3 are obtained by end-to-end training on the training set of a public multi-object tracking dataset;
Centered on the center coordinate of target T_i in a given frame image, a fixed-size box of height and width H is established; the points inside the box that coincide with the center coordinates of the other tracked targets are set to 1, the center of the fixed-size box is also set to 1, and the remaining positions are set to 0, yielding the interaction feature vector of target T_i in that frame; the interaction feature vectors of target T_i over the preceding M frames are passed through the interaction-feature LSTM network to extract the joint historical interaction feature vector;
The two interaction feature vectors are concatenated and passed through the third fully connected layer FC3 to obtain the interaction feature similarity score S_I(T_i, D_{i,m}) between T_i and D_{i,m}; if the interaction feature vector of some preceding frame of target T_i has not yet been generated, it is replaced with a zero value.
7. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the total similarity score in Step 6 is:
S_{total,i} = α_1·S_A(T_i, D_{i,m}) + α_2·S_V(T_i, D_{i,m}) + α_3·S_I(T_i, D_{i,m})
where α_1 is the appearance feature similarity coefficient, α_2 is the velocity feature similarity coefficient, and α_3 is the interaction feature similarity coefficient;
When the total similarity score exceeds the matching score threshold, i.e. S_{total,i} > β, the candidate box D_{i,m} is converted into the tracking box of the tracked target in the m-th frame image;
Updating the state information of a tracked target by Step 2 in Step 6 means keeping the tracked target in the tracking state; a tracked target in the tracking state that fails to be matched for several consecutive frames is converted to the lost state, and the method of Step 2 is no longer applied to it.
8. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the overlap between tracked targets in Step 8 is:
IOU = (A ∩ B) / (A ∪ B)
where A is the tracking box area of tracked target T_a and B is the tracking box area of tracked target T_b; for tracked targets T_a and T_b with IOU > 0.8, the total similarity scores S_{total,a} and S_{total,b} obtained in Step 6 are compared: the tracked target with the lower score is converted to the lost state, and the tracked target with the higher score is kept in the tracking state.
9. The multi-object tracking method based on time-series multi-feature fusion according to claim 1, characterized in that: the consecutive multi-frame image sequence in Step 9 is M_D frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811210852.8A CN109360226B (en) | 2018-10-17 | 2018-10-17 | Multi-target tracking method based on time series multi-feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109360226A true CN109360226A (en) | 2019-02-19 |
CN109360226B CN109360226B (en) | 2021-09-24 |
Family
ID=65349536
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811210852.8A Active CN109360226B (en) | 2018-10-17 | 2018-10-17 | Multi-target tracking method based on time series multi-feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109360226B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080123900A1 (en) * | 2006-06-14 | 2008-05-29 | Honeywell International Inc. | Seamless tracking framework using hierarchical tracklet association |
CN101783020A (en) * | 2010-03-04 | 2010-07-21 | 湖南大学 | Video multi-target fast tracking method based on joint probability data association |
CN104200488A (en) * | 2014-08-04 | 2014-12-10 | 合肥工业大学 | Multi-target tracking method based on graph representation and matching |
CN108573496A (en) * | 2018-03-29 | 2018-09-25 | 淮阴工学院 | Multi-object tracking method based on LSTM networks and depth enhancing study |
Non-Patent Citations (2)
Title |
---|
Tai Do Nhu, et al.: "Tracking by Detection of Multiple Faces using SSD and CNN Features", ResearchGate |
Zhou Jiqiang: "Research on Multi-class Object Detection and Multi-object Tracking Algorithms in Surveillance Video", China Master's Theses Full-text Database, Information Science & Technology |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109919974A (en) * | 2019-02-21 | 2019-06-21 | 上海理工大学 | Online multi-object tracking method based on the more candidate associations of R-FCN frame |
CN109919974B (en) * | 2019-02-21 | 2023-07-14 | 上海理工大学 | Online multi-target tracking method based on R-FCN frame multi-candidate association |
CN109886243A (en) * | 2019-03-01 | 2019-06-14 | 腾讯科技(深圳)有限公司 | Image processing method, device, storage medium, equipment and system |
CN110458127A (en) * | 2019-03-01 | 2019-11-15 | 腾讯医疗健康(深圳)有限公司 | Image processing method, device, equipment and system |
CN110458127B (en) * | 2019-03-01 | 2021-02-26 | 腾讯医疗健康(深圳)有限公司 | Image processing method, device, equipment and system |
CN109886243B (en) * | 2019-03-01 | 2021-03-26 | 腾讯医疗健康(深圳)有限公司 | Image processing method, device, storage medium, equipment and system |
CN110047095A (en) * | 2019-03-06 | 2019-07-23 | 平安科技(深圳)有限公司 | Tracking, device and terminal device based on target detection |
CN110047095B (en) * | 2019-03-06 | 2023-07-21 | 平安科技(深圳)有限公司 | Tracking method and device based on target detection and terminal equipment |
CN109798888B (en) * | 2019-03-15 | 2021-09-17 | 京东方科技集团股份有限公司 | Posture determination device and method for mobile equipment and visual odometer |
CN109798888A (en) * | 2019-03-15 | 2019-05-24 | 京东方科技集团股份有限公司 | Posture determining device, method and the visual odometry of mobile device |
CN109993772A (en) * | 2019-03-26 | 2019-07-09 | 东北大学 | Example rank characteristic aggregation method based on temporal and spatial sampling |
CN109993772B (en) * | 2019-03-26 | 2022-12-20 | 东北大学 | Example level feature aggregation method based on space-time sampling |
CN110148153A (en) * | 2019-04-03 | 2019-08-20 | 深圳云天励飞技术有限公司 | A kind of tracking and relevant apparatus of multiple target |
CN110032635A (en) * | 2019-04-22 | 2019-07-19 | 齐鲁工业大学 | One kind being based on the problem of depth characteristic fused neural network to matching process and device |
CN110163890A (en) * | 2019-04-24 | 2019-08-23 | 北京航空航天大学 | A kind of multi-object tracking method towards space base monitoring |
CN110288627A (en) * | 2019-05-22 | 2019-09-27 | 江苏大学 | One kind being based on deep learning and the associated online multi-object tracking method of data |
CN110223316A (en) * | 2019-06-13 | 2019-09-10 | 哈尔滨工业大学 | Fast-moving target tracking method based on circulation Recurrent networks |
CN110223316B (en) * | 2019-06-13 | 2021-01-29 | 哈尔滨工业大学 | Rapid target tracking method based on cyclic regression network |
CN110288051A (en) * | 2019-07-03 | 2019-09-27 | 电子科技大学 | A kind of polyphaser multiple target matching process based on distance |
CN110288051B (en) * | 2019-07-03 | 2022-04-22 | 电子科技大学 | Multi-camera multi-target matching method based on distance |
CN110414443A (en) * | 2019-07-31 | 2019-11-05 | 苏州市科远软件技术开发有限公司 | A kind of method for tracking target, device and rifle ball link tracking |
CN112348822A (en) * | 2019-08-08 | 2021-02-09 | 佳能株式会社 | Image processing apparatus and image processing method |
CN110675430A (en) * | 2019-09-24 | 2020-01-10 | 中国科学院大学 | Unmanned aerial vehicle multi-target tracking method based on motion and appearance adaptation fusion |
CN111027370A (en) * | 2019-10-16 | 2020-04-17 | 合肥湛达智能科技有限公司 | Multi-target tracking and behavior analysis detection method |
CN110991283A (en) * | 2019-11-21 | 2020-04-10 | 北京格灵深瞳信息技术有限公司 | Re-recognition and training data acquisition method and device, electronic equipment and storage medium |
CN111179310A (en) * | 2019-12-20 | 2020-05-19 | 腾讯科技(深圳)有限公司 | Video data processing method and device, electronic equipment and computer readable medium |
CN111179318B (en) * | 2019-12-31 | 2022-07-12 | 浙江大学 | Double-flow method-based complex background motion small target detection method |
CN111179318A (en) * | 2019-12-31 | 2020-05-19 | 浙江大学 | Double-flow method-based complex background motion small target detection method |
CN111354022B (en) * | 2020-02-20 | 2023-08-22 | 中科星图股份有限公司 | Target Tracking Method and System Based on Kernel Correlation Filtering |
CN111354022A (en) * | 2020-02-20 | 2020-06-30 | 中科星图股份有限公司 | Target tracking method and system based on kernel correlation filtering |
CN111429483A (en) * | 2020-03-31 | 2020-07-17 | 杭州博雅鸿图视频技术有限公司 | High-speed cross-camera multi-target tracking method, system, device and storage medium |
WO2021208251A1 (en) * | 2020-04-15 | 2021-10-21 | 上海摩象网络科技有限公司 | Face tracking method and face tracking device |
CN111612822B (en) * | 2020-05-21 | 2024-03-15 | 广州海格通信集团股份有限公司 | Object tracking method, device, computer equipment and storage medium |
CN111612822A (en) * | 2020-05-21 | 2020-09-01 | 广州海格通信集团股份有限公司 | Object tracking method and device, computer equipment and storage medium |
CN111709975B (en) * | 2020-06-22 | 2023-11-03 | 上海高德威智能交通***有限公司 | Multi-target tracking method, device, electronic equipment and storage medium |
CN111709975A (en) * | 2020-06-22 | 2020-09-25 | 上海高德威智能交通***有限公司 | Multi-target tracking method and device, electronic equipment and storage medium |
CN112001252B (en) * | 2020-07-22 | 2024-04-12 | 北京交通大学 | Multi-target tracking method based on different composition network |
CN112001252A (en) * | 2020-07-22 | 2020-11-27 | 北京交通大学 | Multi-target tracking method based on heteromorphic graph network |
CN111866192A (en) * | 2020-09-24 | 2020-10-30 | 汉桑(南京)科技有限公司 | Pet interaction method, system and device based on pet ball and storage medium |
CN114822084A (en) * | 2021-01-28 | 2022-07-29 | 阿里巴巴集团控股有限公司 | Traffic control method, target tracking method, system, device, and storage medium |
CN113192106B (en) * | 2021-04-25 | 2023-05-30 | 深圳职业技术学院 | Livestock tracking method and device |
CN113192106A (en) * | 2021-04-25 | 2021-07-30 | 深圳职业技术学院 | Livestock tracking method and device |
CN114219836B (en) * | 2021-12-15 | 2022-06-03 | 北京建筑大学 | Unmanned aerial vehicle video vehicle tracking method based on space-time information assistance |
CN114219836A (en) * | 2021-12-15 | 2022-03-22 | 北京建筑大学 | Unmanned aerial vehicle video vehicle tracking method based on space-time information assistance |
Also Published As
Publication number | Publication date |
---|---|
CN109360226B (en) | 2021-09-24 |
Similar Documents
Publication | Title |
---|---|
CN109360226A (en) | A kind of multi-object tracking method based on time series multiple features fusion |
CN108986064B (en) | People flow statistics method, device and system |
Miao et al. | Pose-guided feature alignment for occluded person re-identification |
WO2020042419A1 (en) | Gait-based identity recognition method and apparatus, and electronic device |
CN107145862B (en) | Multi-feature matching multi-target tracking method based on Hough forest |
CN109191497A (en) | Real-time online multi-object tracking method based on multi-information fusion |
CN108875588A (en) | Cross-camera pedestrian detection and tracking based on deep learning |
CN107292911A (en) | Multi-object tracking method based on multi-model fusion and data association |
CN109934127B (en) | Pedestrian identification and tracking method based on video images and wireless signals |
Kasiri et al. | Fine-grained action recognition of boxing punches from depth imagery |
CN109800624A (en) | Multi-object tracking method based on pedestrian re-identification |
CN105512618B (en) | Video tracking method |
CN107564035B (en) | Video tracking method based on important-region identification and matching |
CN102682302A (en) | Human posture recognition method based on multi-feature fusion of key frames |
CN111739053B (en) | Online multi-pedestrian detection and tracking method in complex scenes |
CN112541424A (en) | Real-time detection method for pedestrian falls in complex environments |
CN113628245B (en) | Multi-target tracking method, device, electronic equipment and storage medium |
CN110111362A (en) | Target tracking method based on local feature block similarity matching |
CN111626194A (en) | Pedestrian multi-target tracking method using a deep association metric |
Wu et al. | Multivehicle object tracking in satellite video enhanced by slow features and motion features |
CN105279769A (en) | Hierarchical particle filter tracking method combining multiple features |
Naik et al. | DeepPlayer-track: player and referee tracking with jersey color recognition in soccer |
CN106446911A (en) | Hand recognition method based on image edge-line curvature and distance features |
CN114926859A (en) | Pedestrian multi-target tracking method in dense scenes combined with head tracking |
CN105894540 A (en) | Method and system for counting vertical reciprocating movements based on a mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||