CN104200488A - Multi-target tracking method based on graph representation and matching

Multi-target tracking method based on graph representation and matching

Info

Publication number
CN104200488A
CN104200488A (application CN201410377583.XA)
Authority
CN
China
Prior art keywords
track
target
node
formula
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410377583.XA
Other languages
Chinese (zh)
Inventor
檀结庆
钟金琴
李莹莹
辜丽川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN201410377583.XA
Publication of CN104200488A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a multi-target tracking method based on graph representation and matching. Compared with the prior art, the method overcomes the failure of video tracking that occurs when targets frequently occlude one another and have similar appearance. The method comprises the following steps: inputting the tracking video and generating reliable short target tracks within adjacent time windows; building, for the short tracks thus formed, a spatial motion model that uses a graph as its framework and an appearance model that uses color and local difference binary features; computing the appearance and spatial-motion similarity between tracks; associating and tracking the targets with a weighted bipartite graph matching framework; and repeating these steps continually to obtain the motion trajectory of every target at all times. The method improves the accuracy and efficiency of target tracking in complex scenes and broadens the applicability of tracking technology to various scenes. By learning the appearance model and the spatial motion model online, the targets can be tracked accurately in complex environments.

Description

A multi-target tracking method based on graph representation and matching
Technical field
The present invention relates to the field of intelligent video technology, and specifically to a multi-target tracking method based on graph representation and matching.
Background art
Multi-target tracking in video is one of the most fundamental research topics in computer vision and has a wide range of practical applications, such as intelligent surveillance, human-computer interaction, autonomous robots and augmented reality, and a considerable body of research has developed around it. When the background is relatively simple and the targets do not occlude one another seriously, fairly satisfactory results can be obtained. In complex and crowded environments, however, the similar appearance of targets, mutual occlusion between targets, and the frequent disappearance and reappearance of targets all make multi-target tracking in video very difficult.
Existing solutions fall mainly into two classes: tracking based on feature modeling and tracking based on data association. Methods based on feature modeling describe the tracked targets mainly with simple local features such as color and texture, or combinations of them, in order to detect and track the targets; but these methods seldom consider the motion and spatial information of the targets, so in complex environments they often follow the wrong target or lose it altogether. Methods based on data association compare candidate detection responses with the known target tracks and finally determine which observation matches which track; they have greatly improved the precision of multi-target tracking. Their main representatives are the Joint Probabilistic Data Association algorithm (JPDA) and the Multiple Hypothesis Tracking method (MHT). However, as the numbers of targets and responses grow, JPDA suffers from a combinatorial explosion of the data-association computation, whose complexity grows exponentially, while MHT wastes a great deal of time on its many iterative hypothesis processes. How to construct a highly discriminative feature model and achieve fast and accurate association between multiple targets has therefore become an urgent technical problem.
Summary of the invention
The object of the present invention is to overcome the tracking failures caused, in complex environments, by similar target appearance and mutual occlusion between targets, by providing a multi-target tracking method based on graph representation and matching that solves the above problems.
To achieve this object, the technical scheme of the present invention is as follows:
A multi-target tracking method based on graph representation and matching comprises the following steps:
performing target detection in the video sequence with a pre-trained model, describing the detection responses of two adjacent frames with features, constructing a similarity matching matrix between the responses of the two frames, and generating reliable short target tracks with a dual-threshold strategy;
building, for the short tracks thus formed, a spatial motion model that uses a graph as its framework and an appearance model that uses color and Local Difference Binary (LDB) features;
computing the appearance and spatial-motion similarity between tracks;
tracking the targets by weighted bipartite graph matching: the tracks of adjacent time windows serve as the nodes of a bipartite graph, the linear combination of the appearance and spatial-motion similarity between tracks serves as the weight of the edge connecting two nodes, and the bipartite matching is finally optimized with the Hungarian algorithm, forming the long tracks of the tracked targets.
The generation of the short target tracks comprises the following steps:
given two adjacent frames, detecting all responses with a pre-trained model;
describing each detected response by its position, size and color histogram;
constructing a similarity matching matrix between the responses of the two frames and associating the responses across the frames with a dual-threshold strategy to generate short target tracks, where the similarity and the association conditions are computed as follows:
$$S(r_i, r_j) = \begin{cases} A_{pos}(r_i, r_j)\, A_{size}(r_i, r_j)\, A_{color}(r_i, r_j), & \text{if } t_j - t_i = 1 \\ 0, & \text{otherwise} \end{cases}$$

$$S(r_i, r_j) > \theta_1 \quad \text{and} \quad \forall\, r_k \in R \setminus \{r_i, r_j\}:\ \min\bigl[S(r_i, r_j) - S(r_k, r_j),\ S(r_i, r_j) - S(r_i, r_k)\bigr] > \theta_2$$
In these formulas, S(r_i, r_j) denotes the similarity between responses r_i and r_j, t_i and t_j are the frames to which the two responses belong, R is the set of all responses, and θ_1 and θ_2 are two thresholds. With this dual-threshold strategy, reliable short tracks are formed between the associated responses.
Building, for the short tracks, the spatial motion model that uses a graph as its framework and the appearance model that uses color and LDB features comprises the following steps:
(1) building the spatial motion model with an undirected graph
Over two consecutive time windows, an undirected graph G = (V, E) is built, where V is the set of graph nodes and E is the set of edges connecting the nodes. Each node consists of a pair of tracks, and the weight of the node represents the probability that this pair of tracks belongs to the same target. Each edge connects two nodes of the graph, and the weight of the edge represents the correlation between the two nodes.
The weight of any node in the graph is computed as follows: first the similarity of the two tracks in the node is computed from their average velocities; then the similarity based on their position information is computed; finally the node weight is the product of the velocity-based and position-based similarities. The larger the node weight, the higher the probability that the two tracks in the node belong to the same target.
The weight of the edge connecting two nodes is computed as follows: for any two nodes, the motion relation of their two tracks over the overlapping part of the same time window is computed under the assumption that a linear motion relation holds between the two tracks. This relation is then applied to the two tracks of the other time window, and the distance between the actual motion relation and the hypothesized one gives the weight of the edge between the two nodes.
(2) building the appearance model
The appearance information of a track consists of an HSV color histogram and an LDB feature. The LDB feature is a local feature descriptor proposed by Xin Yang and Kwang-Ting Cheng in 2014; it is both accurate in description and fast to compute.
Computing the similarity between tracks comprises the following steps:
The appearance similarity between two tracks T_i^k and T_j^{k+1} is computed with the Bhattacharyya distance: the similarity of the HSV color histograms is denoted ρ(H_coli, H_colj), the similarity of the LDB features is denoted ρ(τ_LDBi, τ_LDBj), and the appearance similarity is expressed as ρ_app(V_ij) = ρ(H_coli, H_colj) + ρ(τ_LDBi, τ_LDBj);
The spatial-motion similarity between nodes is computed as:

$$\rho_{motion}(V_{ij}) = \omega_v(V_{ij}) \cdot \prod_{V_{mn}} \omega_v(V_{mn})\, w_e(V_{ij}, V_{mn})$$

where V_mn denotes a node near V_ij, i.e. a node whose motion features are similar to those of V_ij. The fused appearance-and-motion similarity of the two tracks T_i^k and T_j^{k+1} is then:

$$\rho_{fuse}(V_{ij}) = \alpha \cdot \rho_{app}(V_{ij}) + \beta \cdot \rho_{motion}(V_{ij})$$

where α and β are the fusion weights of the appearance and motion features, respectively.
The step of tracking targets by weighted bipartite graph matching is as follows:
building the weighted bipartite graph: considering the tracks of two consecutive time windows, T^k = {T_1^k, T_2^k, ..., T_m^k} and T^{k+1} = {T_1^{k+1}, T_2^{k+1}, ..., T_n^{k+1}}, the weighted bipartite graph is expressed as G = (T^k, T^{k+1}, E), where T^k and T^{k+1} are the two node sets of the bipartite graph and any e_ij (e_ij ∈ E) is the edge connecting T_i^k and T_j^{k+1}, whose weight is ρ_fuse; the larger the weight, the more similar the two nodes;
multi-target tracking is then regarded as a maximum-weight matching problem on the bipartite graph; this problem can be expressed as an integer programming model whose solution can be obtained with the Hungarian algorithm;
finally, the matched nodes are linked, forming the long tracks of the tracked targets.
Beneficial effects
Compared with the prior art, the multi-target tracking method based on graph representation and matching of the present invention improves the accuracy and efficiency of target tracking in complex environments and raises the level of application of tracking technology in all kinds of scenes. The invention not only describes the tracked targets with HSV color and LDB features, which are highly discriminative, simple to compute and fast to match, but also represents the motion and spatial relations between targets with a graph framework, so that moving targets that are close in space and similar in appearance can be distinguished effectively. When associating tracks, a weighted bipartite graph matching method is used; compared with other association methods it is simple to implement and has low complexity, so it lends itself well to real-time tracking. In addition, throughout the tracking process the appearance model and the graph model of the targets are learned online, so that the targets can still be tracked accurately when their appearance changes, when the illumination of the scene changes, or when the targets frequently occlude one another.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the embodiment of the present invention;
Fig. 2 shows the generation of short target tracks in the embodiment ((a) target responses detected in 5 consecutive frames, (b) the generated short target tracks);
Fig. 3 shows the formalization of the spatial motion model of the embodiment ((a) short tracks in two time windows, (b) a node of the spatial motion model and its adjacent nodes);
Fig. 4 is a schematic diagram of computing the weight of the edge between two nodes in the embodiment;
Fig. 5 shows the bipartite graph matching of the embodiment.
Embodiment
So that the structural features and the effects achieved by the present invention may be better understood, preferred embodiments are described below in conjunction with the accompanying drawings, as follows:
As shown in Fig. 1, a multi-target tracking method based on graph representation and matching comprises the following steps.
First step, generation of short target tracks: target detection is performed in the video sequence with a pre-trained model, the detection responses of two adjacent frames are described with features, a similarity matching matrix between the responses of the two frames is constructed, and short target tracks are generated with a dual-threshold strategy. For example, short tracks are generated within a time window of 5 consecutive frames; depending on the application the type of target being tracked may differ, such as pedestrians or vehicles, and the size of the time window may also vary.
(1) The responses in every frame are detected with a GMM (Gaussian mixture model), and each detected response is described by its size, color histogram and position.
(2) Two consecutive frames are chosen, e.g. t = 1 and t = 2, and the inter-frame response similarity matching matrix S is built. As in Fig. 2(a), two responses r_1 and r_6 are detected at t = 1 and three responses r_2, r_5 and r_7 are detected at t = 2. The similarity between two responses is computed with the following formula:
$$S(r_i, r_j) = \begin{cases} A_{pos}(r_i, r_j)\, A_{size}(r_i, r_j)\, A_{color}(r_i, r_j), & \text{if } t_j - t_i = 1 \\ 0, & \text{otherwise} \end{cases}$$
(3) With the following dual-threshold strategy, the short tracks generated between the responses of the two frames are T_1 = {r_1, r_2}, T_2 = {r_5}, T_3 = {r_6, r_7}:
$$S(r_i, r_j) > \theta_1 \quad \text{and} \quad \forall\, r_k \in R \setminus \{r_i, r_j\}:\ \min\bigl[S(r_i, r_j) - S(r_k, r_j),\ S(r_i, r_j) - S(r_i, r_k)\bigr] > \theta_2$$
Proceeding in the same way, every pair of adjacent frames generates short tracks; tracks that share a common response are then linked, producing longer short tracks. Fig. 2(b) shows the tracks generated over 5 consecutive frames.
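For illustration only, a minimal Python sketch of this dual-threshold association between two frames follows; the response format, the concrete definitions of A_pos, A_size and A_color and the threshold values are assumptions made for the example rather than part of the original disclosure.

```python
# Illustrative sketch (assumed data format and constants): dual-threshold
# association of detection responses between two adjacent frames.
import numpy as np

def response_similarity(r_i, r_j):
    """S(r_i, r_j) = A_pos * A_size * A_color for consecutive frames, else 0."""
    if r_j["t"] - r_i["t"] != 1:
        return 0.0
    a_pos = np.exp(-np.linalg.norm(np.array(r_i["pos"]) - np.array(r_j["pos"])) / 20.0)
    a_size = min(r_i["size"], r_j["size"]) / max(r_i["size"], r_j["size"])
    a_color = 1.0 - 0.5 * np.abs(r_i["hist"] - r_j["hist"]).sum()  # histogram overlap
    return a_pos * a_size * a_color

def associate(frame_a, frame_b, theta1=0.5, theta2=0.1):
    """Link r_i to r_j when S exceeds theta1 and beats every competing pairing
    in the same row or column of S by at least theta2."""
    S = np.array([[response_similarity(ri, rj) for rj in frame_b] for ri in frame_a])
    links = []
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            s = S[i, j]
            competitors = np.concatenate([np.delete(S[i, :], j), np.delete(S[:, j], i)])
            if s > theta1 and (competitors.size == 0 or s - competitors.max() > theta2):
                links.append((i, j))
    return links

frame1 = [{"t": 1, "pos": (10, 20), "size": 400, "hist": np.ones(8) / 8}]
frame2 = [{"t": 2, "pos": (12, 21), "size": 410, "hist": np.ones(8) / 8}]
print(associate(frame1, frame2))   # -> [(0, 0)] when the pair passes both thresholds
```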
Second step: for the tracks generated in two adjacent windows, a spatial motion model that uses a graph as its framework and an appearance model that uses color and LDB features are built. Let the two adjacent time windows be K and K+1, where K consists of frames 1 to 5 and K+1 of frames 6 to 10; the short tracks in the two windows are shown in Fig. 3(a).
(21) Building the spatial motion model with the graph as its framework
(211) Between time windows K and K+1, an undirected graph G = (V, E) is built, where V is the set of graph nodes and E is the set of edges connecting the nodes. Each node consists of a pair of tracks, e.g. V_ij = (T_i^k, T_j^{k+1}), and the weight of the node, e.g. ω_v(T_i^k, T_j^{k+1}), represents the probability that this pair of tracks should be linked. Each edge connects two nodes of the graph, e.g. V_ij and V_mn, and the weight of that edge, ω_e(V_ij, V_mn), represents the correlation between the two pairs of tracks. Any track, together with the tracks near it, expresses its spatial structure: the tracks near T_i^k are denoted (T_i^k)^-, assumed here to consist of 2 tracks, and the tracks near T_j^{k+1} are denoted (T_j^{k+1})^-, assumed to consist of 3 tracks; the nodes near node V_ij are then denoted (V_ij)^- and defined as:

$$(V_{ij})^- = \{(T_l^k, T_h^{k+1})\} \quad \text{where } T_l^k \in (T_i^k)^- \text{ and } T_h^{k+1} \in (T_j^{k+1})^-$$

That is, (V_ij)^- consists of 6 nodes (6 pairs of tracks). The graph model is shown in Fig. 3(b).
(212) Computing the weight of any node V_ij = (T_i^k, T_j^{k+1}) in the graph: let $T_i^k = \{P_i^{t_{i_0}}, \ldots, P_i^{t_{i_m}}\}$ denote a track composed of the responses of m+1 frames, where $P_i^{t_{i_0}}$ and $P_i^{t_{i_m}}$ denote the responses in the first and last frames of the track and $P_i^{t_{i_u}}$ denotes the response in frame $t_{i_u}$.
First, the relation between the two tracks of the node based on their velocity difference is defined as:
$$E_v = \begin{cases} 1, & \text{if } \left\| \dfrac{P_j^{t_{j_n}} - P_j^{t_{j_0}}}{n+1} - \dfrac{P_i^{t_{i_m}} - P_i^{t_{i_0}}}{m+1} \right\| < \delta_v \\ \varepsilon_v, & \text{otherwise} \end{cases}$$
In the above formula, δ_v is a preset threshold and ε_v is a preset very small value. If E_v is very small, the probability that the two pedestrians are walking together is very small. Next, the position difference between the two tracks is computed, defined as follows:
$$\hat{P}_i^{t_{j_u}} = P_i^{t_{i_u}} + \dot{P}_i^{t_{i_u}} \cdot (t_{j_u} - t_{i_u})$$

$$E_P = e^{-\overline{\left(\hat{P}_i^{t_{j_u}} - P_j^{t_{j_u}}\right)}}$$
In the above formulas, $\dot{P}_i^{t_{i_u}}$ denotes the velocity of track T_i^k at time $t_{i_u}$, $\hat{P}_i^{t_{j_u}}$ denotes the estimated position of track T_i^k at time $t_{j_u}$, and the bar denotes the expectation of the differences between all estimated and actual positions. Finally the weight of the node is computed as:

$$\omega_v(V_{ij}) = E_v \cdot E_P$$
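A small Python sketch of this node weight follows, purely as an illustration of E_v and E_P; the tracklet data structure, the velocity estimate taken from the last two positions, and the numeric constants are assumptions for the example.

```python
# Illustrative sketch (assumed numeric details): weight of node
# V_ij = (T_i^k, T_j^{k+1}) from the velocity gate E_v and the
# position-prediction term E_P.
import numpy as np

def node_weight(track_i, track_j, delta_v=2.0, eps_v=1e-3):
    """Tracks are dicts with 'times' (1-D frame indices) and 'pos' (N x 2 centers)."""
    pi, ti = np.asarray(track_i["pos"], float), np.asarray(track_i["times"], float)
    pj, tj = np.asarray(track_j["pos"], float), np.asarray(track_j["times"], float)

    # E_v: the average velocities of the two tracklets must roughly agree.
    v_i = (pi[-1] - pi[0]) / len(pi)
    v_j = (pj[-1] - pj[0]) / len(pj)
    e_v = 1.0 if np.linalg.norm(v_j - v_i) < delta_v else eps_v

    # E_P: extrapolate T_i^k with its terminal velocity and compare against the
    # observed positions of T_j^{k+1}; E_P = exp(-mean prediction error).
    vel = (pi[-1] - pi[-2]) / (ti[-1] - ti[-2])
    pred = pi[-1] + vel * (tj[:, None] - ti[-1])
    e_p = np.exp(-np.mean(np.linalg.norm(pred - pj, axis=1)))
    return e_v * e_p

# Example: two tracklets moving right at ~1 px/frame get a weight close to 1.
t1 = {"times": [1, 2, 3, 4, 5], "pos": [[x, 10] for x in range(1, 6)]}
t2 = {"times": [6, 7, 8, 9, 10], "pos": [[x, 10] for x in range(6, 11)]}
print(node_weight(t1, t2))
```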
(213) Computing the weight of the edge between two nodes: for any two nodes V_ij and V_mn, let the temporally overlapping parts of the two tracks T_i^k and T_m^k be the two vectors S_i and S_m, and assume that a linear motion relation holds between the two vectors, i.e. S_i = S_m·A + B, where A and B are parameters, as shown in Fig. 4. Given S_i and S_m, the two parameters are estimated as follows:
$$\hat{A} = \left[(S_m - \overline{S_m})^T (S_m - \overline{S_m})\right]^{-1} (S_m - \overline{S_m})^T (S_i - \overline{S_i})$$

$$\hat{B} = \overline{S_i} - \hat{A} \cdot \overline{S_m}$$
On this basis, the weight of the edge between V_ij and V_mn is defined as:
$$w_e(V_{ij}, V_{mn}) = c \cdot \exp\left\{ -\frac{1}{2\sigma^2} \left(S_j - S_n\hat{A} - \hat{B}\right)^T \left(S_j - S_n\hat{A} - \hat{B}\right) \right\}$$
In the above formula, c is a normalization factor and σ is the motion variance between the two tracks. The larger w_e(V_ij, V_mn) is, the larger the probability that T_m^k links with T_n^{k+1} given that T_i^k links with T_j^{k+1}.
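The following Python sketch illustrates, under simplifying assumptions, how the linear relation S_i = S_m·A + B can be fitted by least squares on the overlapping window and reused to score the next window; the array shapes and the constants c and σ are chosen only for the example.

```python
# Illustrative sketch (assumed shapes and constants): edge weight between
# nodes V_ij and V_mn from a linear motion relation fitted on window K and
# evaluated on window K+1.
import numpy as np

def fit_linear_relation(S_m, S_i):
    """Least-squares estimates of A and B in S_i = S_m @ A + B (both L x 2)."""
    Sm_c = S_m - S_m.mean(axis=0)
    Si_c = S_i - S_i.mean(axis=0)
    A, _, _, _ = np.linalg.lstsq(Sm_c, Si_c, rcond=None)
    B = S_i.mean(axis=0) - S_m.mean(axis=0) @ A
    return A, B

def edge_weight(S_i, S_m, S_j, S_n, c=1.0, sigma=5.0):
    A, B = fit_linear_relation(S_m, S_i)
    resid = S_j - (S_n @ A + B)          # deviation of the next window from the relation
    return c * np.exp(-0.5 / sigma**2 * np.sum(resid * resid))

# Example: two tracks kept a constant offset; the relation carries over,
# so the edge weight is close to c.
S_m = np.array([[x, 0.0] for x in range(5)])
S_i = S_m + np.array([3.0, 1.0])
S_n = np.array([[x, 0.0] for x in range(5, 10)])
S_j = S_n + np.array([3.0, 1.0])
print(edge_weight(S_i, S_m, S_j, S_n))
```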
(22) Building the appearance model: during tracking, changes in target appearance, occlusion of targets and similar situations are frequently encountered. To track targets in complex scenes better, this embodiment learns the appearance model of each target online from a fusion of multiple features, with the following concrete steps:
(221) Training samples are chosen within one time window, e.g. K. It is assumed that the responses within the same short track represent the same target, while responses belonging to different short tracks in the same frame represent different targets. Accordingly, two different responses in the same track are selected as a positive sample, and two responses in different tracks are selected as a negative sample.
(222) All chosen samples are described by a fusion of color and LDB features. The color feature uses an HSV histogram; e.g. if H, S and V are quantized into 16, 4 and 4 bins respectively, a 256-dimensional HSV histogram is obtained. The LDB feature divides the target into multi-level grids, e.g. 2×2, 3×3, 4×4 and 5×5; the number of levels can be chosen, and more levels give a more accurate description. Each grid cell is described by its mean intensity and its mean gradient in the X and Y directions; comparing each grid cell with the other grid cells of the same level generates three binary numbers per pair of cells, and arranging all these binary numbers in a fixed order forms the LDB descriptor of the target. The binary values generated by a pair of grid cells i and j are as follows:
$$\tau\bigl(Func(i), Func(j)\bigr) = \begin{cases} 1, & \text{if } Func(i) - Func(j) > 0 \text{ and } i \neq j \\ 0, & \text{otherwise} \end{cases}$$
where Func(i) = {I_intensity(i), Gradient_x(i), Gradient_y(i)}. With the 2×2, 3×3, 4×4 and 5×5 grids, a 1386-bit LDB descriptor is generated. Both features are simple to compute and highly descriptive.
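A simplified Python sketch of such an LDB-style descriptor follows; it is not the reference implementation of Yang and Cheng, and the cell summaries and gradient computation are approximations chosen for the illustration. For the 2×2 to 5×5 grids it produces the 1386 bits mentioned above.

```python
# Illustrative sketch (simplified): Local Difference Binary descriptor from
# mean intensity and mean x/y gradients of grid cells, compared pairwise.
import numpy as np
from itertools import combinations

def ldb_descriptor(patch, levels=(2, 3, 4, 5)):
    """patch: 2-D grayscale array.  Returns a binary vector (1386 bits here)."""
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)
    bits = []
    for n in levels:
        # split the patch into an n x n grid and summarise each cell
        rows = np.array_split(np.arange(patch.shape[0]), n)
        cols = np.array_split(np.arange(patch.shape[1]), n)
        cells = []
        for r in rows:
            for c in cols:
                block = np.ix_(r, c)
                cells.append((patch[block].mean(), gx[block].mean(), gy[block].mean()))
        # one bit per (cell pair, function): 1 if Func(i) - Func(j) > 0
        for i, j in combinations(range(len(cells)), 2):
            for f in range(3):
                bits.append(1 if cells[i][f] - cells[j][f] > 0 else 0)
    return np.array(bits, dtype=np.uint8)

desc = ldb_descriptor(np.random.default_rng(0).random((40, 40)))
print(desc.size)   # 3 * (6 + 36 + 120 + 300) = 1386 bits
```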
(223) The similarities of the color feature and the LDB feature are computed, and the samples are learned with the AdaBoost algorithm, yielding a highly discriminative appearance model.
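As an illustration of this learning step, the sketch below trains scikit-learn's AdaBoostClassifier on synthetic pairs described by their color and LDB similarities; the feature distributions and sample counts are invented for the example, and the exact boosting setup of the embodiment is not specified in the original text.

```python
# Illustrative sketch (synthetic data, scikit-learn stand-in for the AdaBoost
# step): each training pair is described by [HSV similarity, LDB similarity];
# positive pairs come from the same short track, negative pairs from different
# tracks in the same frame.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
pos = np.clip(rng.normal([0.8, 0.8], 0.10, size=(200, 2)), 0, 1)   # same target
neg = np.clip(rng.normal([0.4, 0.4], 0.15, size=(200, 2)), 0, 1)   # different targets
X = np.vstack([pos, neg])
y = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])

appearance_model = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)
# probability that a new tracklet pair shows the same target
print(appearance_model.predict_proba([[0.75, 0.70]])[0, 1])
```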
Third step: the similarity of the two tracks of any node V_ij = (T_i^k, T_j^{k+1}) consists of two parts, the appearance similarity and the spatial-motion similarity.
(31) Computing the appearance similarity: the appearance features of the two tracks are obtained by the online learning of the previous step; the similarity of the HSV color histograms is computed with the Bhattacharyya distance, e.g. for 256-dimensional HSV histograms it is expressed as
$$\rho(H_{col_i}, H_{col_j}) = \left(1 - \sum_{b=1}^{256} \sqrt{h_{col_i,b}\, h_{col_j,b}}\right)^{1/2}$$
The LDB features are denoted τ_LDBi and τ_LDBj; counting the number of positions at which their bits are equal and normalizing gives the LDB similarity between the two tracks, denoted ρ(τ_LDBi, τ_LDBj). The appearance similarity of the two tracks is expressed as ρ_app(V_ij) = ρ(H_coli, H_colj) + ρ(τ_LDBi, τ_LDBj).
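A minimal sketch of the appearance similarity follows; the text above expresses the color term through the Bhattacharyya distance, while the sketch scores it with the complementary Bhattacharyya coefficient (larger means more similar), an assumption made so that both terms can simply be added.

```python
# Illustrative sketch: appearance similarity of two tracklets from a 256-bin
# HSV histogram and an LDB bit string, combined as rho_app = rho_color + rho_LDB.
import numpy as np

def color_similarity(h_i, h_j):
    """Bhattacharyya coefficient of two normalised histograms (1 = identical)."""
    return float(np.sum(np.sqrt(h_i * h_j)))

def ldb_similarity(d_i, d_j):
    """Fraction of LDB bits that agree between the two descriptors."""
    return float(np.mean(d_i == d_j))

def appearance_similarity(h_i, d_i, h_j, d_j):
    return color_similarity(h_i, h_j) + ldb_similarity(d_i, d_j)

h = np.ones(256) / 256
d = np.zeros(1386, dtype=np.uint8)
print(appearance_similarity(h, d, h, d))   # identical appearance -> 2.0
```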
(32) The spatial-motion similarity of the two tracks of node V_ij = (T_i^k, T_j^{k+1}) is obtained from the motion of the node itself and from the relations between this node's motion and the motion of its neighbouring nodes, and is computed as follows:
$$\rho_{motion}(V_{ij}) = \omega_v(V_{ij}) \cdot \prod_{V_{mn}} \omega_v(V_{mn})\, w_e(V_{ij}, V_{mn})$$
where V_mn denotes a node adjacent to V_ij. The fused appearance-and-motion similarity of node V_ij = (T_i^k, T_j^{k+1}) is then:

$$\rho_{fuse}(V_{ij}) = \alpha \cdot \rho_{app}(V_{ij}) + \beta \cdot \rho_{motion}(V_{ij})$$

where α and β are the fusion weights of the appearance and motion features, and can be adjusted dynamically according to the actual situation. For example, when there is no mutual occlusion the target is tracked mainly by appearance and the value of α can be increased; otherwise the value of β is increased.
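The sketch below illustrates the neighbourhood-based motion similarity and the α/β fusion; the neighbour terms are passed in as precomputed (node weight, edge weight) pairs, and the numeric values are placeholders.

```python
# Illustrative sketch (placeholder values): motion similarity of a node from
# its own weight and its neighbours' (node weight, edge weight) terms, fused
# with the appearance similarity as a weighted sum.
def motion_similarity(own_weight, neighbour_terms):
    """rho_motion = w_v(V_ij) * prod over neighbours V_mn of w_v(V_mn) * w_e(V_ij, V_mn)."""
    prod = 1.0
    for w_v, w_e in neighbour_terms:
        prod *= w_v * w_e
    return own_weight * prod

def fused_similarity(rho_app, rho_motion, alpha=0.5, beta=0.5):
    # alpha is raised when targets are unoccluded and appearance is reliable;
    # beta is raised when appearances are similar or occlusions are frequent.
    return alpha * rho_app + beta * rho_motion

print(fused_similarity(1.6, motion_similarity(0.9, [(0.8, 0.9), (0.7, 0.95)])))
```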
Fourth step: the targets are associated and tracked with the weighted bipartite graph matching framework. First a new weighted bipartite graph is built from the short tracks to be associated, then the maximum-weight matching of that graph is sought, and finally the matched tracks are linked to form the long tracks of the tracked targets. The concrete steps are as follows:
(41) Consider the short tracks in two consecutive time windows K and K+1, T^k = {T_1^k, T_2^k, ..., T_m^k} and T^{k+1} = {T_1^{k+1}, T_2^{k+1}, ..., T_n^{k+1}}, as shown in Fig. 3(a). A weighted bipartite graph G = (T^k, T^{k+1}, E) is built, where T^k and T^{k+1} are the two node sets of the bipartite graph and any e_ij (e_ij ∈ E) is the edge connecting T_i^k and T_j^{k+1}, with weight ω_ij.
(42) The weight ω_ij of every edge of the bipartite graph is computed by the method of the third step, giving ω_ij = ρ_fuse(V_ij), and the weight matrix W = [ω_ij]_{m×n} is built.
(43) The maximum-weight matching M of the bipartite graph is sought; this can be converted into solving the following integer programming model, assuming m < n:
$$\max\ g = \sum_{i=1}^{m} \sum_{j=1}^{n} \omega_{ij}\, x_{ij}$$

$$\text{subject to} \quad \sum_{j=1}^{n} x_{ij} = 1, \quad i = 1, 2, \ldots, m$$

$$\sum_{i=1}^{m} x_{ij} \le 1, \quad j = 1, 2, \ldots, n$$

$$x_{ij} \in \{0, 1\}, \quad i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n$$
The solution of this optimization model can be obtained with the Hungarian algorithm, which yields the maximum-weight matching M of the bipartite graph, as shown in Fig. 5. Associating the matched tracks forms the long tracks of the tracked targets.
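For illustration, the sketch below solves the maximum-weight matching with the Hungarian algorithm as implemented in scipy.optimize.linear_sum_assignment; the similarity matrix is invented, and a rectangular matrix (m < n) is handled by the same call, which respects the constraints of the integer program above.

```python
# Illustrative sketch: maximum-weight bipartite matching between the tracklet
# sets of two adjacent time windows using the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# hypothetical fused-similarity matrix W[i, j] between T_i^k and T_j^{k+1}
W = np.array([[0.9, 0.2, 0.1],
              [0.3, 0.8, 0.4],
              [0.1, 0.3, 0.7]])

rows, cols = linear_sum_assignment(W, maximize=True)   # maximise summed rho_fuse
for i, j in zip(rows, cols):
    print(f"link tracklet T_{i}^k -> T_{j}^(k+1)   (weight {W[i, j]:.2f})")
```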
The above steps are repeated continually, giving the motion trajectory of every target at all times.
The graph model of this embodiment, which describes the spatial motion features of the targets, captures the motion relations between a target and the targets in its neighbourhood well, and plays an important role in distinguishing different targets when their appearances are similar and their interactions are frequent. Compared with other data-association methods, the weighted bipartite graph model used to associate targets is simple to implement and fast to match, which reduces the time complexity of target tracking. In general, the multi-target tracking method based on graph representation and matching proposed by this embodiment can track targets continuously over long periods in crowded scenes with frequent interactions; compared with other existing tracking methods it reduces the loss rate and the rate of following the wrong target, thereby improving the accuracy of multi-target tracking.
To verify the validity of the tracking method of this embodiment, a large number of experiments were conducted, applying the embodiment to several video databases, such as 'PETS2006', 'AVSS2007' and 'PETS2009', and analysing the tracking results qualitatively and quantitatively. Concretely, the moving targets in a given video of a database are detected and tracked, the position of each target in every frame is recorded, the moving targets of every frame are associated by the tracking method, and the positions judged to belong to the same target are linked across frames to form the track of that target.
For the qualitative analysis, the targets of the same video sequence are tracked with different tracking methods and the results are compared. The tracking method of this embodiment, the GM-PHD method (Gaussian mixture probability hypothesis density, proposed by Ba-Ngu Vo and Wing-Kin Ma in 2006) and the CP1 method (a single-template matching tracker proposed by Zheng Wu et al. at CVPR 2012) were each used to track the video sequence PETS2009 S2.L2, in which several pedestrians cross an intersection with different speeds and directions. Most of these pedestrians are very close to one another and occlude each other frequently. The GM-PHD method often loses targets or follows the wrong ones, and CP1 also produces tracking errors when appearances are similar, whereas the method of this embodiment tracks the targets more accurately.
For the quantitative analysis, the targets of the same video sequences are likewise tracked with the different methods. The quantitative evaluation adopts the CLEAR MOT standard proposed internationally in 2008, namely MOTA (multi-object tracking accuracy) and MOTP (multi-object tracking precision), together with several other common metrics: MT (number of tracks whose successfully tracked frames account for more than 80% of the frames in which the target actually appears), ML (number of tracks whose successfully tracked frames account for less than 20% of those frames), FRMT (number of track fragmentations) and IDS (number of identity switches). The sequences 'PETS2006', 'AVSS2007' and 'PETS2009 S2.L2' were tracked with the method of this embodiment and with the GM-PHD and CP1 methods; the quantitative comparison of the results is shown in Table 1, where ↑ indicates that a larger value means a better tracking result, and vice versa.
Table 1: Quantitative comparison of the tracking results of this embodiment and the other two methods
Experimental conclusion: in complex environments where target appearances are similar and occlusions are frequent, the tracking ability of this embodiment is better than that of the other existing methods. As shown in Table 1, this embodiment is clearly better than the other methods in tracking accuracy and continuity, while its tracking precision is close to theirs.
The above shows and describes the basic principle, main features and advantages of the present invention. Those skilled in the art should understand that the present invention is not restricted to the above embodiments; the above embodiments and the description merely illustrate the principle of the invention, and various changes and improvements may be made without departing from its spirit and scope, all of which fall within the claimed scope of the invention. The scope of protection claimed by the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A multi-target tracking method based on graph representation and matching, characterized in that it comprises the following steps:
(11) generating reliable short target tracks: inputting the video sequence, performing target detection within consecutive time windows, describing the detected responses with features, constructing a similarity matching matrix between the responses of two adjacent frames, associating the response targets across consecutive frames with a dual-threshold strategy, and generating the reliable short target tracks;
(12) building, for the short target tracks thus formed, a spatial motion model that uses a graph as its framework and an appearance model that uses color and local difference binary features;
(13) computing the appearance and spatial-motion similarity between the short target tracks;
(14) tracking the targets by weighted bipartite graph matching: the tracks of adjacent time windows serve as the nodes of a bipartite graph, the linear fusion of the appearance and spatial-motion similarity between tracks serves as the weight of the edge connecting two nodes, the bipartite matching is optimized with the Hungarian algorithm, and the two nodes with the maximum edge weight are associated, forming the long tracks of the tracked targets.
2. The multi-target tracking method according to claim 1, characterized in that generating the reliable short target tracks in step (11) comprises the following steps:
(21) detecting the responses in every frame with a Gaussian mixture model, and describing each detected response by its size, color and position;
(22) choosing two consecutive frames and building the inter-frame response similarity matching matrix S, where the similarity between two responses is computed with the following formula:

$$S(r_i, r_j) = \begin{cases} A_{pos}(r_i, r_j)\, A_{size}(r_i, r_j)\, A_{color}(r_i, r_j), & \text{if } t_j - t_i = 1 \\ 0, & \text{otherwise} \end{cases} \qquad \text{(formula 1)}$$

(23) associating the responses across the two frames with the following dual-threshold strategy and generating the short tracks:

$$S(r_i, r_j) > \theta_1 \ \text{and}\ \forall\, r_k \in R \setminus \{r_i, r_j\}:\ \min\bigl[S(r_i, r_j) - S(r_k, r_j),\ S(r_i, r_j) - S(r_i, r_k)\bigr] > \theta_2 \qquad \text{(formula 2)}$$

where in formula 2, R denotes the set of all responses detected in the two frames;
proceeding in the same way, the reliable short tracks within the given time window are generated.
3. The multi-target tracking method according to claim 1, characterized in that building, for the short target tracks thus formed, the spatial motion model that uses a graph as its framework and the appearance model that uses color and local difference binary features in step (12) comprises the following steps:
(31) building the spatial motion model with an undirected graph: over two consecutive time windows, an undirected graph G = (V, E) is built, where V is the set of graph nodes and E is the set of edges connecting the nodes; the weight of each node is computed from the motion velocity and position information of the pair of tracks in the node; the weight of each edge is computed from the motion relation information of the two pairs of tracks of the two nodes;
(32) building the appearance model of the short tracks: the appearance information of a track consists of an HSV color histogram and a local difference binary feature; the appearance model of the track is generated by the linear fusion of the two features.
4. The multi-target tracking method according to claim 3, characterized in that computing the weight of each node of the spatial motion model graph in step (31) comprises the following steps:
(41) first, the relation between the two tracks of the node based on their velocity difference is defined as:

$$E_v = \begin{cases} 1, & \text{if } \left\| \dfrac{P_j^{t_{j_n}} - P_j^{t_{j_0}}}{n+1} - \dfrac{P_i^{t_{i_m}} - P_i^{t_{i_0}}}{m+1} \right\| < \delta_v \\ \varepsilon_v, & \text{otherwise} \end{cases} \qquad \text{(formula 3)}$$

where in formula 3, $P_j^{t_{j_u}}$ denotes the position of the response of track T_j^{k+1} in frame $t_{j_u}$, $P_i^{t_{i_u}}$ denotes the position of the response of track T_i^k in frame $t_{i_u}$, δ_v is a preset threshold and ε_v is a preset very small value; if E_v is very small, the probability that the two targets move together is very small;
(42) the position difference between the two tracks is computed, defined as follows:

$$\hat{P}_i^{t_{j_u}} = P_i^{t_{i_u}} + \dot{P}_i^{t_{i_u}} \cdot (t_{j_u} - t_{i_u}) \qquad \text{(formula 4)}$$

$$E_P = e^{-\overline{\left(\hat{P}_i^{t_{j_u}} - P_j^{t_{j_u}}\right)}} \qquad \text{(formula 5)}$$

where in formula 4, $\dot{P}_i^{t_{i_u}}$ denotes the velocity of the response of track T_i^k at time $t_{i_u}$, $\hat{P}_i^{t_{j_u}}$ denotes the estimated position of track T_i^k at time $t_{j_u}$, and the bar denotes the expectation of the differences between all estimated and actual positions;
(43) the weight of the node is computed as:

$$\omega_v(V_{ij}) = E_v \cdot E_P \qquad \text{(formula 6)}$$
5. The multi-target tracking method according to claim 3, characterized in that computing the weight of the edge between two nodes of the graph in step (31) comprises the following steps:
(51) for any two nodes V_ij and V_mn, the temporally overlapping parts of the two tracks T_i^k and T_m^k are taken as two vectors S_i and S_m, and a linear motion relation is assumed to hold between the two vectors, i.e. S_i = S_m·A + B, where A and B are parameters;
(52) given S_i and S_m, the parameters A and B are estimated:

$$\hat{A} = \left[(S_m - \overline{S_m})^T (S_m - \overline{S_m})\right]^{-1} (S_m - \overline{S_m})^T (S_i - \overline{S_i}) \qquad \text{(formula 7)}$$

$$\hat{B} = \overline{S_i} - \hat{A} \cdot \overline{S_m} \qquad \text{(formula 8)}$$

(53) the weight of the edge between V_ij and V_mn is defined as:

$$w_e(V_{ij}, V_{mn}) = c \cdot \exp\left\{ -\frac{1}{2\sigma^2} \left(S_j - S_n\hat{A} - \hat{B}\right)^T \left(S_j - S_n\hat{A} - \hat{B}\right) \right\} \qquad \text{(formula 9)}$$

where in formula 9, c is a normalization factor and σ is the motion variance between the two tracks; the larger w_e(V_ij, V_mn) is, the larger the probability that T_m^k links with T_n^{k+1} given that T_i^k links with T_j^{k+1}.
6. The multi-target tracking method according to claim 3, characterized in that building the appearance model of the short tracks in step (32) comprises the following steps:
(61) choosing training samples within one time window: it is assumed that the responses within the same short track represent the same target, while responses belonging to different short tracks in the same frame represent different targets; accordingly, two different responses in the same track are selected as a positive sample, and two responses in different tracks are selected as a negative sample;
(62) describing all chosen samples by a fusion of color and local difference binary features: the color feature uses an HSV histogram; the local difference binary feature divides the target into multi-level grids, describes each grid cell of each level by its mean intensity and its mean gradient in the X and Y directions, generates three binary numbers by comparing each grid cell with the other grid cells of the same level, and arranges all these binary numbers in a fixed order to form the local difference binary descriptor of the target; the binary values generated by any pair of grid cells i and j are as follows:

$$\tau\bigl(Func(i), Func(j)\bigr) = \begin{cases} 1, & \text{if } Func(i) - Func(j) > 0 \text{ and } i \neq j \\ 0, & \text{otherwise} \end{cases} \qquad \text{(formula 10)}$$

where in formula 10, Func(i) = {I_intensity(i), Gradient_x(i), Gradient_y(i)}; both appearance features are simple to compute and highly descriptive;
(63) computing the similarities of the color feature and the local difference binary feature, and learning the samples with the AdaBoost algorithm, yielding a highly discriminative appearance model.
7. The multi-target tracking method according to claim 1, characterized in that computing the feature similarity between short tracks in step (13) comprises the following steps:
(71) computing the appearance similarity of the two tracks of node V_ij = (T_i^k, T_j^{k+1}): the similarity of the HSV color histograms is computed with the Bhattacharyya distance, e.g. for 256-dimensional HSV histograms it is expressed as:

$$\rho(H_{col_i}, H_{col_j}) = \left(1 - \sum_{b=1}^{256} \sqrt{h_{col_i,b}\, h_{col_j,b}}\right)^{1/2} \qquad \text{(formula 11)}$$

the local difference binary features are denoted τ_LDBi and τ_LDBj; counting the number of positions at which their bits are equal and normalizing gives the local difference binary similarity between the two tracks, denoted ρ(τ_LDBi, τ_LDBj); the appearance similarity of the two tracks is expressed as ρ_app(V_ij) = ρ(H_coli, H_colj) + ρ(τ_LDBi, τ_LDBj);
(72) the spatial-motion similarity of the two tracks of node V_ij = (T_i^k, T_j^{k+1}) is obtained from the motion of the node itself and from the relations between this node's motion and the motion of its neighbouring nodes, and is computed as follows:

$$\rho_{motion}(V_{ij}) = \omega_v(V_{ij}) \cdot \prod_{V_{mn}} \omega_v(V_{mn})\, w_e(V_{ij}, V_{mn}) \qquad \text{(formula 12)}$$

where in formula 12, V_mn denotes a node adjacent to V_ij; the fused appearance-and-motion similarity of node V_ij = (T_i^k, T_j^{k+1}) is then:

$$\rho_{fuse}(V_{ij}) = \alpha \cdot \rho_{app}(V_{ij}) + \beta \cdot \rho_{motion}(V_{ij}) \qquad \text{(formula 13)}$$

where in formula 13, α and β are the fusion weights of the appearance and motion features, and can be adjusted dynamically according to the actual situation.
8. The multi-target tracking method according to claim 1, characterized in that associating and tracking the targets with the weighted bipartite graph matching framework in step (14) comprises the following steps:
(81) building a new weighted bipartite graph: considering the short tracks of two consecutive time windows K and K+1, T^k = {T_1^k, ..., T_m^k} and T^{k+1} = {T_1^{k+1}, ..., T_n^{k+1}}, a weighted bipartite graph G = (T^k, T^{k+1}, E) is built, where T^k and T^{k+1} are the two node sets of the bipartite graph and any e_ij (e_ij ∈ E) is the edge connecting T_i^k and T_j^{k+1}, with weight ω_ij;
(82) computing the weight ω_ij of every edge of the bipartite graph as in step (13), giving ω_ij = ρ_fuse(V_ij), and building the weight matrix W = [ω_ij]_{m×n};
(83) seeking the maximum-weight matching M of the bipartite graph, which can be converted into solving the following integer programming model, assuming m < n:

$$\max\ g = \sum_{i=1}^{m} \sum_{j=1}^{n} \omega_{ij}\, x_{ij} \quad \text{subject to} \quad \sum_{j=1}^{n} x_{ij} = 1,\ i = 1, \ldots, m; \qquad \sum_{i=1}^{m} x_{ij} \le 1,\ j = 1, \ldots, n$$

$$x_{ij} \in \{0, 1\}, \quad i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n$$

the solution of this optimization model can be obtained with the Hungarian algorithm, yielding the maximum-weight matching M of the bipartite graph; associating the matched tracks forms the long tracks of the tracked targets.
CN201410377583.XA 2014-08-04 2014-08-04 Multi-target tracking method based on graph representation and matching Pending CN104200488A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410377583.XA CN104200488A (en) 2014-08-04 2014-08-04 Multi-target tracking method based on graph representation and matching


Publications (1)

Publication Number Publication Date
CN104200488A 2014-12-10

Family

ID=52085774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410377583.XA Pending CN104200488A (en) 2014-08-04 2014-08-04 Multi-target tracking method based on graph representation and matching

Country Status (1)

Country Link
CN (1) CN104200488A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090237511A1 (en) * 2008-03-18 2009-09-24 Bae Systems Information And Electronic Systems Integration Inc. Multi-window/multi-target tracking (mw/mt tracking) for point source objects
CN103778647A (en) * 2014-02-14 2014-05-07 中国科学院自动化研究所 Multi-target tracking method based on layered hypergraph optimization
CN103955947A (en) * 2014-03-21 2014-07-30 南京邮电大学 Multi-target association tracking method based on continuous maximum energy and apparent model

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BO YANG et al.: "Multi-Target Tracking by Online Learning a CRF Model of Appearance and Motion Patterns", International Journal of Computer Vision *
CHANG HUANG et al.: "Robust Object Tracking by Hierarchical Association of Detection Responses", 10th European Conference on Computer Vision *
SHU ZHANG et al.: "Online Social Behavior Modeling for Multi-target Tracking", 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops *
XIN YANG et al.: "Local Difference Binary for Ultrafast and Distinctive Feature Description", IEEE Transactions on Pattern Analysis and Machine Intelligence *
LIU Chenguang et al.: "A multi-target tracking algorithm for video based on an interactive particle filter", Acta Electronica Sinica *
MA Weizhang: "Research on multi-target tracking algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156705A (en) * 2015-04-07 2016-11-23 中国科学院深圳先进技术研究院 A kind of pedestrian's anomaly detection method and system
CN104751489A (en) * 2015-04-09 2015-07-01 苏州阔地网络科技有限公司 Grid-based relay tracking method and device in online class
CN106127119B (en) * 2016-06-16 2019-03-08 山东大学 Joint probabilistic data association method based on color image and depth image multiple features
CN106127119A (en) * 2016-06-16 2016-11-16 山东大学 Joint probabilistic data association method based on coloured image and depth image multiple features
CN106600631A (en) * 2016-11-30 2017-04-26 郑州金惠计算机***工程有限公司 Multiple target tracking-based passenger flow statistics method
CN109064472B (en) * 2017-03-28 2020-09-04 合肥工业大学 Fitting method and device for fitting plane of three-dimensional space model of vertebra
CN109064472A (en) * 2017-03-28 2018-12-21 合肥工业大学 A kind of approximating method and device of the three-dimensional space model fit Plane of vertebrae
CN107423695A (en) * 2017-07-13 2017-12-01 苏州珂锐铁电气科技有限公司 Dynamic texture identification method based on bipartite graph
CN107505951B (en) * 2017-08-29 2020-08-21 深圳市道通智能航空技术有限公司 Target tracking method, unmanned aerial vehicle and computer readable storage medium
CN107505951A (en) * 2017-08-29 2017-12-22 深圳市道通智能航空技术有限公司 A kind of method for tracking target, unmanned plane and computer-readable recording medium
CN109697393A (en) * 2017-10-23 2019-04-30 北京京东尚科信息技术有限公司 Person tracking method, device, electronic device and computer-readable medium
WO2019080668A1 (en) * 2017-10-23 2019-05-02 北京京东尚科信息技术有限公司 Person tracking method, device, electronic device, and computer readable medium
US11270126B2 (en) 2017-10-23 2022-03-08 Beijing Jingdong Shangke Information Technology Co., Ltd. Person tracking method, device, electronic device, and computer readable medium
CN108875666B (en) * 2018-06-27 2023-04-18 腾讯科技(深圳)有限公司 Method and device for acquiring motion trail, computer equipment and storage medium
CN108875666A (en) * 2018-06-27 2018-11-23 腾讯科技(深圳)有限公司 Acquisition methods, device, computer equipment and the storage medium of motion profile
CN109360226A (en) * 2018-10-17 2019-02-19 武汉大学 A kind of multi-object tracking method based on time series multiple features fusion
CN109360226B (en) * 2018-10-17 2021-09-24 武汉大学 Multi-target tracking method based on time series multi-feature fusion
CN109934849A (en) * 2019-03-08 2019-06-25 西北工业大学 Online multi-object tracking method based on track metric learning
CN111739053B (en) * 2019-03-21 2022-10-21 四川大学 Online multi-pedestrian detection tracking method under complex scene
CN111739053A (en) * 2019-03-21 2020-10-02 四川大学 Online multi-pedestrian detection tracking method under complex scene
CN110349181B (en) * 2019-06-12 2021-04-06 华中科技大学 Single-camera multi-target tracking method based on improved graph partitioning model
CN110349181A (en) * 2019-06-12 2019-10-18 华中科技大学 One kind being based on improved figure partition model single camera multi-object tracking method
CN110675319B (en) * 2019-09-12 2020-11-03 创新奇智(成都)科技有限公司 Mobile phone photographing panoramic image splicing method based on minimum spanning tree
CN110675319A (en) * 2019-09-12 2020-01-10 创新奇智(成都)科技有限公司 Mobile phone photographing panoramic image splicing method based on minimum spanning tree
CN111862156B (en) * 2020-07-17 2021-02-26 中南民族大学 Multi-target tracking method and system based on graph matching
CN111862156A (en) * 2020-07-17 2020-10-30 中南民族大学 Multi-target tracking method and system based on graph matching
CN112884742A (en) * 2021-02-22 2021-06-01 山西讯龙科技有限公司 Multi-algorithm fusion-based multi-target real-time detection, identification and tracking method
CN112884742B (en) * 2021-02-22 2023-08-11 山西讯龙科技有限公司 Multi-target real-time detection, identification and tracking method based on multi-algorithm fusion
WO2023072269A1 (en) * 2021-10-29 2023-05-04 上海商汤智能科技有限公司 Object tracking
CN117454199A (en) * 2023-12-20 2024-01-26 北京数原数字化城市研究中心 Track association method, system, electronic device and readable storage medium
CN117435934A (en) * 2023-12-22 2024-01-23 中国科学院自动化研究所 Matching method, device and storage medium of moving target track based on bipartite graph

Similar Documents

Publication Publication Date Title
CN104200488A (en) Multi-target tracking method based on graph representation and matching
Chiu et al. Probabilistic 3D multi-modal, multi-object tracking for autonomous driving
CN106875424B (en) A kind of urban environment driving vehicle Activity recognition method based on machine vision
CN110660082B (en) Target tracking method based on graph convolution and trajectory convolution network learning
Fang et al. 3d-siamrpn: An end-to-end learning method for real-time 3d single object tracking using raw point cloud
Leal-Taixé et al. Learning an image-based motion context for multiple people tracking
CN110660083B (en) Multi-target tracking method combined with video scene feature perception
Breitenstein et al. Online multiperson tracking-by-detection from a single, uncalibrated camera
CN108573496B (en) Multi-target tracking method based on LSTM network and deep reinforcement learning
CN103699908B (en) Video multi-target tracking based on associating reasoning
CN103489199B (en) video image target tracking processing method and system
CN102722725B (en) Object tracing method based on active scene learning
CN105335986A (en) Characteristic matching and MeanShift algorithm-based target tracking method
CN102663429B (en) Method for motion pattern classification and action recognition of moving target
CN104680559B (en) The indoor pedestrian tracting method of various visual angles based on motor behavior pattern
CN106373143A (en) Adaptive method and system
CN104574439A (en) Kalman filtering and TLD (tracking-learning-detection) algorithm integrated target tracking method
Qin et al. Semantic loop closure detection based on graph matching in multi-objects scenes
CN111402632B (en) Risk prediction method for pedestrian movement track at intersection
CN102063625B (en) Improved particle filtering method for multi-target tracking under multiple viewing angles
CN108898612A (en) Multi-object tracking method based on the enhancing study of multiple agent depth
CN105809718A (en) Object tracking method with minimum trajectory entropy
Prokaj et al. Using 3d scene structure to improve tracking
CN104200226A (en) Particle filtering target tracking method based on machine learning
Yu et al. Accurate and robust visual localization system in large-scale appearance-changing environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20141210