CN101751549A - Method for tracking moving object - Google Patents

Method for tracking moving object

Info

Publication number
CN101751549A
CN101751549A (application CN200810179785A)
Authority
CN
China
Prior art keywords
moving object
appearance model
tracking
database
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200810179785A
Other languages
Chinese (zh)
Other versions
CN101751549B (en)
Inventor
黄钟贤
石明于
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to CN200810179785.8A priority Critical patent/CN101751549B/en
Publication of CN101751549A publication Critical patent/CN101751549A/en
Application granted granted Critical
Publication of CN101751549B publication Critical patent/CN101751549B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method for tracking a moving object, which comprises the following steps: detecting the moving object in a plurality of continuous images to obtain spatial information of the moving object in each image; extracting appearance features of the moving object in the images to establish an appearance model of the moving object; and finally combining the spatial information with the appearance model of the moving object to track a moving track of the moving object in the images. Thereby, even if the moving object leaves the monitored picture and later enters it again, the method can still track the moving object continuously, so as to assist monitoring personnel in noticing abnormal behaviors in time and reacting accordingly.

Description

Method for tracking a moving object
Technical field
The present invention relates to an image processing method, and more particularly to a method for tracking a moving object.
Background
Vision-based monitoring has become increasingly important in recent years, especially after the 9/11 attacks, and more and more surveillance cameras are installed in all kinds of places. Traditional monitoring, however, still relies on human supervision, or the footage is merely stored in a memory device as a record to be reviewed afterward. As the number of cameras grows, so does the manpower required; automatic monitoring systems assisted by computer vision therefore play an increasingly important role.
A visual monitoring system analyzes the behavior of moving objects in the monitored picture, such as their trajectories, postures or other features, to detect the occurrence of abnormal events and effectively notify security personnel. Basic visual-monitoring topics such as background subtraction, moving object detection and tracking, and shadow removal have already been studied in a considerable body of literature. In recent years the focus has shifted to higher-level event detection, such as behavior analysis, abandoned-object detection, loitering detection and crowding detection. Given the strong demand in the monitoring market, automated and intelligent behavior analysis is expected to see great demand and business opportunities.
So-called loitering detection refers to one or more moving objects that stay in, and repeatedly return to, a certain monitored region within a specific period of time. For instance, streetwalkers or beggars may linger at a street corner, graffiti writers may stay near a wall, a person with suicidal intent may pace on a railway platform, or a drug dealer may loiter in a subway station waiting to meet a client.
However, because the field of view of a camera in a visual monitoring system is limited and cannot fully cover the path along which a loiterer moves, once the loiterer leaves the monitored region the system loses its target and can no longer follow its movements. In particular, when the loiterer leaves and later returns, how to re-identify the person and associate him or her with the earlier behavior remains the bottleneck of current loitering-detection techniques.
Summary of the invention
In view of the above, the invention provides a method for tracking a moving object that combines the spatial information and the appearance model of the moving object in a plurality of images, so as to continuously track the moving path of the moving object in the images.
The invention proposes a method for tracking a moving object, which comprises detecting the moving object in a plurality of continuous images to obtain spatial information of the moving object in each image, extracting appearance features of the moving object in each image to establish an appearance model of the moving object, and finally combining the spatial information and the appearance model of the moving object to track the moving path of the moving object in the images.
In an embodiment of the invention, the step of detecting the moving object in the continuous images further comprises judging whether the moving object is a tracking target and filtering out moving objects that are not tracking targets. One way to make this judgment is to check whether the area of the rectangular area is greater than a first preset value; when the area is greater than the first preset value, the moving object surrounded by the rectangular area is judged to be a tracking target. Another way is to check whether the aspect ratio of the rectangular area is greater than a second preset value; when the aspect ratio is greater than the second preset value, the moving object surrounded by the rectangular area is judged to be a tracking target.
In an embodiment of the invention, the step of extracting the appearance features of the moving object in each image to establish the appearance model of the moving object comprises first dividing the rectangular area into a plurality of blocks and extracting the color distribution of each block, then recursively taking the median of the color distribution of each block to build a binary tree describing the color distribution, and finally choosing the color distributions on the branches of the binary tree as a feature vector serving as the appearance model of the moving object.
In an embodiment of the invention, the step of dividing the rectangular area into blocks comprises dividing the rectangular area in a certain ratio into a head block, a body block and a leg block, and the step of extracting the color distribution of each block comprises ignoring the color distribution of the head block. The color distribution comprises color features in a red-green-blue (RGB) color space or a hue-saturation-intensity (HSI) color space.
In an embodiment of the invention, after the step of detecting the moving object in the continuous images to obtain the spatial information of the moving object in each image, the method further comprises using the spatial information to track the moving path of the moving object and accumulating the residence time of the moving object in the images.
In an embodiment of the invention, after the step of accumulating the residence time of the moving object in the images, the method further comprises judging whether the residence time of the moving object in the images exceeds a first preset time; when the residence time exceeds the first preset time, the appearance features of the moving object begin to be extracted to establish the appearance model of the moving object, and the spatial information and the appearance model of the moving object are combined to track the moving path of the moving object in the images.
In an embodiment of the invention, the step of combining the spatial information and the appearance model of the moving object to track the moving path of the moving object in the images comprises first using the spatial information to calculate a prior probability of spatial correlation for the corresponding moving object in two adjacent images, and using the appearance information to calculate a similarity of the corresponding moving object in the two adjacent images, and then combining the prior probability and the similarity in a Bayesian tracker to judge the moving path of the moving object in the adjacent images.
In an embodiment of the invention, when the residence time is judged to exceed the first preset time, the residence time and the appearance model of the moving object are further recorded in a database. This includes associating the appearance model of the moving object with a plurality of appearance models in the database to judge whether the appearance model of the moving object has already been recorded in the database. If the appearance model of the moving object has been recorded in the database, only the residence time of the moving object is recorded in the database; otherwise, if the appearance model of the moving object has not been recorded in the database, both the residence time and the appearance model of the moving object are recorded in the database.
In an embodiment of the invention, the step of associating the appearance model of the moving object with the appearance models in the database comprises calculating first distances between appearance models of the same moving object established at two different time points to build a first distance distribution, calculating second distances between the appearance models of two different moving objects in the images to build a second distance distribution, and then using the first distance distribution and the second distance distribution to find the boundary between them as the criterion for distinguishing appearance models.
In an embodiment of the invention, after the step of recording the residence time and the appearance model of the moving object in the database, the method further comprises analyzing the time series of the moving object in the database to judge whether the moving object conforms to a loitering event. One way to make this judgment is to check whether the time during which the moving object continuously appears in the images exceeds a second preset time; when the moving object has continuously appeared in the images for more than the second preset time, it is judged to conform to the loitering event. Another way is to check whether the time interval during which the moving object is absent from the images is smaller than a third preset time; when that interval is smaller than the third preset time, the moving object is judged to conform to the loitering event.
Based on the above, the invention establishes an appearance model for each visitor and combines Bayesian tracking, database management and adaptive threshold learning to continuously monitor the moving objects entering the picture, thereby solving the problem that a moving object cannot be tracked further after it leaves the picture and later returns. In addition, according to the time conditions under which a visitor appears in the picture, the invention can automatically detect the visitor's loitering events.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Description of drawings
Fig. 1 is a schematic diagram of a system architecture for tracking a moving object according to an embodiment of the invention.
Fig. 2 is a flowchart of a method for tracking a moving object according to an embodiment of the invention.
Fig. 3 (a), (b) and (c) are schematic diagrams of an appearance model of a moving object according to an embodiment of the invention.
Fig. 4 is a schematic diagram of a binary tree of a color distribution according to an embodiment of the invention.
Fig. 5 is a flowchart of a Bayesian object tracking method according to an embodiment of the invention.
Fig. 6 is a flowchart of a method for managing a visitor database according to an embodiment of the invention.
Fig. 7 (a), (b) and (c) are schematic diagrams of an adaptive threshold updating method according to an embodiment of the invention.
Fig. 8 is an illustration of the adaptive threshold calculation according to an embodiment of the invention.
[Description of main element symbols]
100: tracking system
110: background subtraction
120: moving object extraction
130: appearance feature computation
140: Bayesian object tracking
150: visitor database management
160: visitor database
170: adaptive threshold update
180: loitering event detection
S210~S230: steps of the method for tracking a moving object according to an embodiment of the invention
S510~S570: steps of the Bayesian object tracking method according to an embodiment of the invention
S610~S670: steps of the method for managing the visitor database according to an embodiment of the invention
Embodiment
The invention establishes an unsupervised loitering-detection framework: the system automatically learns event-specific parameters from the monitored picture, builds an appearance model for each visitor entering the picture, and analyzes and associates it with a visitor database. By comparing against the historical records, tracking can be maintained without interruption even when a visitor leaves the picture and later re-enters the monitored scene. Finally, using predefined loitering rules, loitering events can be detected. To make the content of the invention clearer, embodiments according to which the invention can indeed be implemented are given below as examples.
Fig. 1 is a schematic diagram of a system architecture for tracking a moving object according to an embodiment of the invention. Referring to Fig. 1, the tracking system 100 of this embodiment first detects the moving objects in a plurality of continuous images via background subtraction 110. Since the objects tracked in this embodiment are moving objects with a complete shape (for example, pedestrians), the next step uses simple conditions to filter out the moving objects that are not of interest; this is the moving object extraction 120.
For each extracted moving object, this embodiment then computes its appearance features 130, keeps tracking the moving object with a tracker based on Bayesian decision 140, and builds its appearance model from the appearance features of the same moving object obtained in multiple images. Meanwhile, the tracking system 100 maintains a visitor database 160 in memory. The visitor database management 150 compares and associates the appearance features of the currently extracted moving object with the appearance models in the visitor database 160 according to the result of the adaptive threshold update 170. If the moving object can be associated with some person in the visitor database 160, the moving object has visited this scene before; otherwise, the moving object is added to the visitor database 160. Finally, loitering events can be detected 180 based on the time conditions under which a visitor appears in the picture. While tracking moving objects, the tracking system 100 takes the distribution of moving objects in the picture as samples and automatically learns how to distinguish different visitors, as the basis for associating appearance models. The detailed flow of the method for tracking a moving object of the invention is described below with a further embodiment.
Fig. 2 is a flowchart of the method for tracking a moving object according to an embodiment of the invention. Referring to Fig. 2, this embodiment tracks a moving object that enters the monitored picture, establishes its appearance model, and compares it with the data in a visitor database configured in the system memory, so as to judge whether the moving object has appeared before and to keep tracking it. The detailed steps are as follows:
First, the moving object in a plurality of continuous images is detected to obtain the spatial information of the moving object in each image (step S210). The moving object detection technique mainly establishes a background image first and subtracts this background image from the current image to extract the foreground. Each connected region of the foreground obtained after background subtraction can then be labeled with a connected-component labeling method, and the rectangular area surrounding the connected region is recorded as b = {r_left, r_top, r_right, r_bottom}, where r_left, r_top, r_right and r_bottom respectively denote the left, top, right and bottom borders of the rectangular area in the image.
It is worth mentioning that many factors can produce foreground, while the objects of interest here are foreground objects containing a single moving object. This embodiment therefore takes pedestrians as an example and further judges whether the moving object is a pedestrian, filtering out moving objects that are not pedestrians, for example by the following two conditions. The first condition is to judge whether the area of the rectangular area is greater than a first preset value; when the area is greater than the first preset value, the moving object surrounded by the rectangular area is judged to be a pedestrian, which filters out noise and fragmented objects. The second condition is to judge whether the aspect ratio of the rectangular area is greater than a second preset value; when the aspect ratio is greater than the second preset value, the moving object surrounded by the rectangular area is judged to be a pedestrian, which filters out blobs in which several people overlap, as well as large-scale noise.
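For illustration, the following Python sketch shows how the detection and filtering stage described above could look with OpenCV; it is not taken from the patent, and the constants MIN_AREA and MIN_ASPECT_RATIO stand in for the first and second preset values, whose actual magnitudes the text does not specify.

```python
# A minimal sketch of the detection stage: background subtraction,
# connected-component labeling, bounding boxes, and the two filtering
# conditions (area and aspect ratio). Threshold values are placeholders.
import cv2

MIN_AREA = 800          # "first preset value" (assumed)
MIN_ASPECT_RATIO = 1.2  # "second preset value", height/width (assumed)

def detect_pedestrian_candidates(frame, background):
    # Foreground = |current image - background image|, then threshold.
    diff = cv2.absdiff(frame, background)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)

    # Label connected regions and enclose each with a rectangle
    # b = {r_left, r_top, r_right, r_bottom}.
    n, _, stats, _ = cv2.connectedComponentsWithStats(fg)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area <= MIN_AREA:                  # condition 1: drop noise / fragments
            continue
        if h / float(w) <= MIN_ASPECT_RATIO:  # condition 2: drop wide merged blobs
            continue
        candidates.append((x, y, x + w, y + h))
    return candidates
```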
The next step is to extract the appearance features of the moving object in each image to establish the appearance model of the moving object (step S220). In detail, the invention proposes a new appearance description that considers the color structure and, through a loose body segmentation, derives more meaningful appearance features. The so-called loose body segmentation divides the rectangular area surrounding a pedestrian into a plurality of blocks and extracts the color distribution of each block; for example, the rectangular area can be divided in a ratio of 2:4:4 into a head block, a body block and a leg block, corresponding respectively to the pedestrian's head, body and legs. Because the color features of the head are affected by the facing direction and provide little discrimination, the information of the head block can be ignored.
For instance, Fig. 3 (a), 3 (b) and 3 (c) are schematic diagrams of an appearance model of a moving object according to an embodiment of the invention. Fig. 3 (a) shows a pedestrian image whose rectangular area has passed the two filtering conditions above, referred to as a pedestrian candidate P, and Fig. 3 (b) is the corresponding connected region. After the connected region is labeled, the median of the color distribution is taken from the body block and the leg block in Fig. 3 (c), and a binary tree describing the color distribution is then built in a recursive manner.
Fig. 4 is a schematic diagram of a binary tree of a color distribution according to an embodiment of the invention. Referring to Fig. 4, M denotes the median of a color distribution in the body block or the leg block; ML and MH are the medians of the two halves of the color distribution separated by M, and the branches MLL, MLH, MHL and MHH are obtained analogously. The color distribution can be any color feature in the red-green-blue (RGB) color space, the hue-saturation-intensity (HSI) color space, or even another color space; no limitation is imposed here. For convenience of description, this embodiment adopts the RGB color space and builds a binary tree containing three levels of the color distribution, which forms a 24-dimensional feature vector describing the pedestrian's appearance. After this feature vector is obtained, each pedestrian candidate can be represented by its spatial information and appearance model in the image.
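The 24-dimensional feature can be read as follows: for each of the body and leg blocks and each RGB channel, keep the four third-level medians of the recursive median split (2 blocks x 3 channels x 4 medians = 24). The sketch below follows that reading; it is an illustrative interpretation of the description above, not the patent's own code, and the 2:4:4 split is applied directly to the bounding-box height.

```python
# A sketch of the binary-tree color feature: split the bounding box 2:4:4 into
# head/body/legs, drop the head, and for each remaining block recursively split
# each RGB channel's pixel values at the median, keeping the third-level medians.
import numpy as np

def median_tree_leaves(values, depth=3):
    """Recursively split `values` at the median; return the medians at `depth`."""
    if depth == 1:
        return [np.median(values)]
    m = np.median(values)
    low, high = values[values <= m], values[values > m]
    # Guard against empty halves on degenerate (constant) data.
    low = low if low.size else values
    high = high if high.size else values
    return median_tree_leaves(low, depth - 1) + median_tree_leaves(high, depth - 1)

def appearance_feature(patch_rgb):
    """patch_rgb: HxWx3 array cropped to the pedestrian's bounding rectangle."""
    h = patch_rgb.shape[0]
    body = patch_rgb[int(0.2 * h):int(0.6 * h)]   # 2:4:4 split, head ignored
    legs = patch_rgb[int(0.6 * h):]
    feat = []
    for block in (body, legs):
        for c in range(3):                        # R, G, B channels
            feat.extend(median_tree_leaves(block[..., c].ravel().astype(float)))
    return np.array(feat)                         # 24-dimensional feature vector
```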
After the spatial information and the appearance model of the moving object are obtained, this embodiment further combines these two kinds of information to track the moving path of the moving object in the images (step S230). This embodiment tracks the moving object with a tracking method based on Bayesian decision, which considers the appearance and position of the moving object in two adjacent images and makes the best association through a Bayesian decision; this constitutes the moving object tracking.
In detail, suppose that at time t the object detection and appearance modeling yield a candidate list of n pedestrian candidate rectangles, P^t = {P_1^t, P_2^t, ..., P_n^t}, and that the history maintained by the Bayesian tracker up to time t-1 is a list of m visitor hypotheses, M^{t-1} = {H_1^{t-1}, H_2^{t-1}, ..., H_m^{t-1}}. A so-called visitor hypothesis is the chain of pedestrian candidates associated over continuous tracking, i.e. H = {P_{t-τ}, P_{t-τ+1}, ..., P_t, ρ}, where P_{t-τ} is the pedestrian candidate rectangle in which the visitor appeared for the first time, and so on for the rest. In addition, ρ is called the confidence index; it increases or decreases with the success or failure of the object tracking. When the confidence index exceeds an upper-bound threshold, the visitor hypothesis is considered confident enough and is converted into an actual visitor; conversely, when the confidence index falls below zero, the moving object is considered to have left the monitored scene, and the visitor hypothesis is removed from the list M maintained by the Bayesian tracker. The Bayesian object tracking can be divided into three stages, learning, association and update, which are described in detail in the following embodiment.
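As a rough data-structure sketch (assumed, not prescribed by the patent text), a visitor hypothesis can be kept as the list of associated candidates plus its confidence index ρ, using the ρ_max = 1 and Δρ = 0.1 values mentioned later in this embodiment:

```python
# A sketch of a visitor hypothesis H = {P_{t-tau}, ..., P_t, rho}: the chain of
# associated pedestrian candidates plus a confidence index that rises on a
# successful association and falls otherwise.
from dataclasses import dataclass, field

RHO_MAX = 1.0     # upper bound on the confidence index (embodiment uses 1)
DELTA_RHO = 0.1   # increment/decrement step (embodiment uses 0.1)

@dataclass
class VisitorHypothesis:
    candidates: list = field(default_factory=list)  # P_{t-tau} ... P_t
    rho: float = 0.0                                # confidence index

    def associate(self, candidate):
        """Successful association: append the candidate and raise confidence."""
        self.candidates.append(candidate)
        self.rho = min(self.rho + DELTA_RHO, RHO_MAX)

    def miss(self):
        """No matching candidate in this frame: lower confidence."""
        self.rho -= DELTA_RHO

    @property
    def expired(self):
        return self.rho < 0          # drop the hypothesis from the list M

    @property
    def length(self):
        return len(self.candidates)  # tracked duration, compared against L1
```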
Fig. 5 is a flowchart of the Bayesian object tracking method according to an embodiment of the invention. Referring to Fig. 5, in the learning phase, this embodiment first provides a visitor hypothesis list M (step S510), which contains a plurality of visitor hypotheses that have been tracked and associated over continuous time.
Then, for each visitor hypothesis H_i^{t-1} in the visitor hypothesis list M, the length of time it has stayed in the images (the tracked time) is inspected to see whether it exceeds a first preset time L_1 (step S520). If its length is shorter than L_1, it is considered to be still in the learning phase, and the pedestrian candidates of adjacent pictures are associated only by spatial correlation (step S530). For instance, if the rectangular area b_i^{t-1} belonging to the visitor hypothesis H_i^{t-1} spatially overlaps the rectangular area b_j^t of a pedestrian candidate P_j^t in the current picture, the visitor hypothesis H_i^{t-1} is updated to H_i^t by adding the pedestrian candidate P_j^t.
Then, in the association phase, i.e. when the length of the visitor hypothesis H_i^{t-1} is greater than the first preset time L_1, the hypothesis is considered stably tracked. At this point not only the spatial correlation but also the appearance features of the object are considered, and the Bayesian decision is used to associate the visitor hypothesis with a pedestrian candidate (step S540). In detail, this step uses the spatial information to calculate a prior probability of spatial correlation for the corresponding moving object in two adjacent images, uses the appearance information to calculate a similarity of the corresponding moving object in the two adjacent images, and then combines the prior probability and the similarity in the Bayesian tracker to judge whether the visitor hypothesis is associated with the pedestrian candidate. For instance, formula (1) is the discriminant function of the Bayesian decision:
BD(H_i^{t-1}, P_j^t) = \frac{P(C_H \mid P_j^t)}{P(\bar{C}_H \mid P_j^t)} = \frac{p(C_H)\,p(P_j^t \mid C_H)}{p(\bar{C}_H)\,p(P_j^t \mid \bar{C}_H)}    (1)
where the similarity function P(C_H | P_j^t) denotes the probability that a given P_j^t belongs to H_i^{t-1}, and P(\bar{C}_H | P_j^t) is the opposite, i.e. the probability that P_j^t does not belong to H_i^{t-1}. Therefore, if BD is greater than 1, the decision favors P_j^t belonging to H_i^{t-1}, and the two are associated. If the similarity function p(P_j^t | C_H) in formula (1) is represented by a multidimensional normal distribution N(μ, Σ), it takes the form of formula (2):
p(P_j^t \mid C_H) = \frac{1}{\sqrt{(2\pi)^d \det\Sigma}} \exp\left(-\frac{1}{2}(f_j^t - \mu)^T \Sigma^{-1} (f_j^t - \mu)\right)    (2)
where μ and Σ are the mean vector and covariance matrix of the past L_1 feature vectors (from f_{t-L_1} to f_{t-1}), computed as follows:
\mu = \frac{1}{L_1} \sum_{k=t-L_1}^{t-1} f_k    (3)
\Sigma(x,y) = \frac{1}{L_1} \sum_{k=t-L_1}^{t-1} \sigma_{xy}^{k}    (4)
where
\sigma_{xy}^{k} = (f_x^{k} - \mu_x)(f_y^{k} - \mu_y)    (5)
The similarity function p(P_j^t | \bar{C}_H) is represented here by a uniform distribution. On the other hand, the prior probabilities p(C_H) and p(\bar{C}_H) reflect the prior knowledge about the occurrence of the event, and this prior knowledge is mapped here to the spatial correlation. In other words, the closer the distance between b_i^{t-1} and b_j^t, the larger the prior probability assigned; p(C_H) and p(\bar{C}_H) can therefore be represented by an exponential function of the distance, as shown in formulas (6) and (7):
p(C_H) = \exp\left(-\frac{D(b_j^t, b_i^{t-1})}{\sigma_D^2}\right)    (6)
p(\bar{C}_H) = 1 - p(C_H)    (7)
Here σ_D is a user-controlled parameter that can be adjusted according to the speed of the moving objects in the picture. Considering the above spatial correlation and appearance features, it can be judged whether the visitor hypothesis is associated with the pedestrian candidate (step S550). In the update stage, if P_j^t and H_i^{t-1} have been judged as associated in the previous stage, P_j^t is added to H_i^{t-1} and the visitor hypothesis is updated to H_i^t (step S560). Meanwhile, a constant Δρ is added to raise the confidence index ρ_i of this hypothesis, until ρ_i reaches a preset maximum ρ_max. Conversely, if H_i^{t-1} cannot be associated with any pedestrian candidate in the picture, its confidence index ρ_i is decreased by Δρ, and when its value falls below zero the visitor hypothesis is removed from the visitor hypothesis list M, indicating that the visitor has left the monitored picture. On the other hand, if a pedestrian candidate in picture t cannot be associated with any visitor hypothesis, the pedestrian candidate represents a newly entered visitor, so a new visitor hypothesis is added to the visitor hypothesis list M (step S570) and given ρ_{m+1} = 0. In this embodiment ρ_max is set to 1 and Δρ is set to 0.1, but the invention is not limited thereto.
To identify the appearance of visitors entering and leaving the scene, so that the behavior and the entry and exit times of the same pedestrian can be analyzed, the invention configures a visitor database to record the visitors' appearance models and access times. The management flow of this visitor database is shown in Fig. 6 and described as follows:
First, a visitor hypothesis is newly added to the tracker (step S610), and it is judged whether the length of this visitor hypothesis has reached an integer multiple of a second preset time L_2 (step S620). When it reaches L_2, the mean feature vector and covariance matrix are computed from its past L_2 appearance features, and a Gaussian function is used to describe the appearance model V = {N(μ, Σ), {s}} (step S630), where {s} is a time sequence recording the times at which this appearance model was established. Next, it is judged whether the length of the visitor hypothesis equals L_2 (step S640), and its appearance model is associated with the visitor appearance models recorded in the visitor database (step S650). If there is at least one similar visitor appearance model in the database (their distance is smaller than a threshold T), the visitor has visited this scene before; the new model is then associated with the most similar model V_k, and the visitor appearance model stored in the visitor database is updated through formulas (8), (9) and (10) (step S660).
\tilde{V}_k = \{\, N(\tilde{\mu}_k, \tilde{\Sigma}_k),\ \{s_k^1, s_k^2, \ldots, s_k^u, s_i^1, s_i^2, \ldots, s_i^v\} \,\}    (8)
\tilde{\mu}_k = \frac{u\,\mu_k + v\,\mu_i}{u + v}    (9)
\tilde{\sigma}_k^2(x,y) = \frac{u\,\sigma_k^2(x,y) + v\,\sigma_i^2(x,y)}{u + v}    (10)
where σ²(x, y) denotes the (x, y) element of the covariance matrix, and u and v are the lengths of the time sequences of the two appearance models, used as the weights of the update; since V_i is a newly established appearance model at this moment, its v value is 1. Conversely, if V_i cannot be associated with any appearance model in the visitor database, it represents a newly observed visitor, and its appearance model and time mark are added to the visitor database (step S670). The distance between two appearance models (each a d-dimensional Gaussian distribution, N_1(μ_1, Σ_1) and N_2(μ_2, Σ_2)) is calculated by the following distance formulas:
D(V(N_1), V(N_2)) = \frac{D_{KL}(N_1 \| N_2) + D_{KL}(N_2 \| N_1)}{2}    (11)
where
D_{KL}(N_1 \| N_2) = \frac{1}{2}\left(\ln\frac{\det\Sigma_2}{\det\Sigma_1} + \operatorname{tr}(\Sigma_2^{-1}\Sigma_1) + (\mu_2 - \mu_1)^T \Sigma_2^{-1} (\mu_2 - \mu_1) - d\right)    (12)
It is worth mentioning that if the length of the visitor hypothesis is more than twice L_2, it has already been associated with the visitor database, and it is then only necessary to keep updating its corresponding appearance model in the visitor database through formula (8).
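Formulas (8)-(12) translate almost directly into code. The sketch below computes the symmetric KL distance between two Gaussian appearance models and performs the weighted merge; it assumes full covariance matrices and is not the patent's own implementation:

```python
# Symmetric KL distance between Gaussian appearance models (formulas 11-12)
# and the weighted merge applied when a new model matches a stored visitor
# (formulas 8-10).
import numpy as np

def kl_gauss(mu1, cov1, mu2, cov2):
    """D_KL(N1 || N2) between two d-dimensional Gaussians, formula (12)."""
    d = mu1.shape[0]
    diff = mu2 - mu1
    inv2 = np.linalg.inv(cov2)
    return 0.5 * (np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
                  + np.trace(inv2 @ cov1)
                  + diff @ inv2 @ diff
                  - d)

def appearance_distance(m1, m2):
    """Symmetric distance of formula (11); each m is a (mu, cov) pair."""
    return 0.5 * (kl_gauss(*m1, *m2) + kl_gauss(*m2, *m1))

def merge_models(mu_k, cov_k, times_k, mu_i, cov_i, times_i):
    """Update a stored model with a matching new one, formulas (8)-(10)."""
    u, v = len(times_k), len(times_i)          # time-sequence lengths as weights
    mu = (u * mu_k + v * mu_i) / (u + v)
    cov = (u * cov_k + v * cov_i) / (u + v)
    return mu, cov, times_k + times_i          # concatenated time marks {s}
```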
As for the threshold T mentioned above as the basis for judging whether two appearance models are associated: when the distance between two appearance models is greater than the threshold T, the two appearance models are judged to come from different visitors; conversely, if the distance is smaller than the threshold T, the two can be associated and concluded to belong to the same visitor.
To compute an optimal threshold T, the invention proposes an unsupervised learning strategy that lets the system learn and update automatically from the video so as to obtain the best appearance-discriminating capability. It considers the following two kinds of events. Event A occurs when the same visitor is tracked continuously and stably: as shown in Fig. 7 (a) and Fig. 7 (b), when a visitor has been stably tracked by the system for a time span of 2L_2, there is enough confidence that the two appearance models V_1' and V_1 come from the same visitor, so the distance D(V_1', V_1) between these two appearance models can be computed and used as a feature value of event A. Event B occurs when two visitors appear in the picture simultaneously and are both stably tracked: as shown in Fig. 7 (c), when two visitors appear in the same picture at the same time, there is enough confidence that their two appearance models come from different visitors, so the distance D(V_2, V_3) is computed and used as a feature value of event B.
When the numbers of events A and B collected by the system reach a certain amount, they are analyzed statistically. As shown in Fig. 8, the feature values of event A are distances between appearance models established at two different time points for the same visitor, so they concentrate near zero; the feature values of event B are distances between the appearance models of two different objects, so they lie farther from zero and are more dispersed. By computing the mean and standard deviation of each of the two kinds of events and representing the distance data with the normal distributions N_A(μ_A, σ_A^2) and N_B(μ_B, σ_B^2), a first distance distribution and a second distance distribution can be established. The best boundary between the first distance distribution and the second distance distribution can then be found with the following equation (13) and used as the threshold T for distinguishing appearance models:
\frac{1}{\sigma_A\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{\mu_A - T}{\sigma_A}\right)^2} = \frac{1}{\sigma_B\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{\mu_B - T}{\sigma_B}\right)^2}    (13)
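Equation (13) can be solved in closed form: taking logarithms of both sides yields a quadratic in T. The sketch below does this and keeps the root lying between the two means; it is one straightforward way to compute the threshold, not necessarily the one used in the embodiment:

```python
# Solve formula (13) for the threshold T: the point where the two normal
# densities N(mu_A, sigma_A^2) and N(mu_B, sigma_B^2) intersect.
import numpy as np

def optimal_threshold(mu_a, sigma_a, mu_b, sigma_b):
    # log LHS = log RHS  ->  a*T^2 + b*T + c = 0
    a = 1.0 / sigma_a**2 - 1.0 / sigma_b**2
    b = 2.0 * (mu_b / sigma_b**2 - mu_a / sigma_a**2)
    c = (mu_a**2 / sigma_a**2 - mu_b**2 / sigma_b**2
         + 2.0 * np.log(sigma_a / sigma_b))
    if abs(a) < 1e-12:                       # equal variances: midpoint
        return 0.5 * (mu_a + mu_b)
    roots = np.roots([a, b, c])
    # Keep the real root between the two means (event A is near zero, B is far).
    for t in roots:
        if np.isreal(t) and min(mu_a, mu_b) <= t.real <= max(mu_a, mu_b):
            return t.real
    return roots[np.argmin(np.abs(roots - 0.5 * (mu_a + mu_b)))].real
```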
Finally, the appearance model and the residence time of each visitor recorded in the visitor database are further applied to loitering detection. It suffices to analyze the time sequence {s} recorded with each visitor's appearance model in the visitor database and use the following formulas (14) and (15) as the conditions for judging loitering:
s_t - s_1 > \alpha    (14)
s_i - s_{i-1} < \beta, \quad 1 < i \le t    (15)
Formula (14) states that, from the first detection until now, the visitor has appeared in the picture for longer than a preset time α, and formula (15) states that every interval between consecutive detections of the visitor is shorter than a preset time β.
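The loitering conditions (14) and (15) reduce to two checks on the recorded time marks {s}. A small sketch, with α and β as assumed placeholder values:

```python
# Loitering test of formulas (14) and (15) on the time marks {s} recorded with
# a visitor's appearance model. ALPHA and BETA are illustrative values only.
ALPHA = 300.0   # minimum total presence span, in seconds (assumed)
BETA = 600.0    # maximum allowed gap between detections, in seconds (assumed)

def is_loitering(time_marks, alpha=ALPHA, beta=BETA):
    """time_marks: s_1 ... s_t, the times the visitor's model was observed."""
    if len(time_marks) < 2:
        return False
    s = sorted(time_marks)
    total_span = s[-1] - s[0]                              # formula (14): s_t - s_1 > alpha
    gaps_ok = all(b - a < beta for a, b in zip(s, s[1:]))  # formula (15)
    return total_span > alpha and gaps_ok
```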
In summary, the method for tracking a moving object of the invention combines techniques such as moving object tracking, visitor database management and adaptive threshold learning. It establishes an appearance model from the appearance features of the moving object in a plurality of images and compares it with the data in a visitor database configured in the system memory, so that the tracking of a visitor can be maintained without interruption. Even if the visitor leaves the monitored picture and then enters it again, the visitor can still be successfully associated with his or her previous behavior, which helps monitoring personnel notice abnormal behavior early and react accordingly.
Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Those skilled in the art may make some changes and modifications without departing from the spirit and scope of the invention, and the protection scope of the invention shall be defined by the appended claims.

Claims (23)

1. A method for tracking a moving object, comprising the following steps:
detecting the moving object in a plurality of continuous images to obtain spatial information of the moving object in each of the images;
extracting appearance features of the moving object in each of the images to establish an appearance model of the moving object; and
combining the spatial information and the appearance model of the moving object to track a moving track of the moving object in the images.
2. The method for tracking a moving object as claimed in claim 1, wherein the step of detecting the moving object in the continuous images comprises:
subtracting a background image from the images to detect the moving object.
3. The method for tracking a moving object as claimed in claim 2, wherein after the step of subtracting the background image from the images, the method further comprises:
labeling a plurality of connected regions in the images; and
estimating a rectangular area surrounding each of the connected regions.
4. The method for tracking a moving object as claimed in claim 3, wherein the step of detecting the moving object in the continuous images further comprises:
judging whether the moving object is a tracking target; and
filtering out the moving object that is not the tracking target.
5. The method for tracking a moving object as claimed in claim 4, wherein the step of judging whether the moving object is the tracking target comprises:
judging whether an area of the rectangular area is greater than a first preset value; and
when the area is greater than the first preset value, judging that the moving object surrounded by the rectangular area is the tracking target.
6. The method for tracking a moving object as claimed in claim 4, wherein the step of judging whether the moving object is the tracking target comprises:
judging whether an aspect ratio of the rectangular area is greater than a second preset value; and
when the aspect ratio is greater than the second preset value, judging that the moving object surrounded by the rectangular area is the tracking target.
7. The method for tracking a moving object as claimed in claim 3, wherein the step of extracting the appearance features of the moving object in each of the images to establish the appearance model of the moving object comprises:
dividing the rectangular area into a plurality of blocks, and extracting a color distribution of each of the blocks;
recursively taking a median of the color distribution in each of the blocks to build a binary tree describing the color distribution; and
choosing the color distributions of the branches of the binary tree as a feature vector serving as the appearance model of the moving object.
8. The method for tracking a moving object as claimed in claim 3, wherein the moving object is a pedestrian.
9. The method for tracking a moving object as claimed in claim 8, wherein the step of dividing the rectangular area into the blocks comprises dividing the rectangular area into a head block, a body block and a leg block in a ratio of 2:4:4.
10. The method for tracking a moving object as claimed in claim 9, wherein the step of extracting the color distribution of each of the blocks comprises ignoring the color distribution of the head block.
11. The method for tracking a moving object as claimed in claim 10, wherein the color distribution comprises color features in a red-green-blue (RGB) color space or a hue-saturation-intensity (HSI) color space.
12. The method for tracking a moving object as claimed in claim 3, wherein after the step of detecting the moving object in the continuous images to obtain the spatial information of the moving object in each of the images, the method further comprises:
tracking the moving track of the moving object by using the spatial information, and accumulating a residence time of the moving object in the images.
13. The method for tracking a moving object as claimed in claim 12, wherein after the step of accumulating the residence time of the moving object in the images, the method further comprises:
judging whether the residence time of the moving object in the images exceeds a first preset time; and
when the residence time exceeds the first preset time, starting to extract the appearance features of the moving object to establish the appearance model of the moving object, and combining the spatial information and the appearance model of the moving object to track the moving track of the moving object in the images.
14. The method for tracking a moving object as claimed in claim 13, wherein the step of combining the spatial information and the appearance model of the moving object to track the moving track of the moving object in the images comprises:
calculating a prior probability of spatial correlation of the corresponding moving object in two adjacent images by using the spatial information;
calculating a similarity of the corresponding moving object in the two adjacent images by using the appearance information; and
combining the prior probability and the similarity in a Bayesian tracker to judge the moving track of the moving object in the adjacent images.
15. The method for tracking a moving object as claimed in claim 13, wherein when the residence time is judged to exceed the first preset time, the method further comprises:
recording the residence time and the appearance model of the moving object in a database.
16. The method for tracking a moving object as claimed in claim 15, wherein the step of recording the residence time and the appearance model of the moving object in the database comprises:
associating the appearance model of the moving object with a plurality of appearance models in the database to judge whether the appearance model of the moving object has been recorded in the database;
if the appearance model of the moving object has been recorded in the database, recording only the residence time of the moving object in the database; and
if the appearance model of the moving object has not been recorded in the database, recording the residence time and the appearance model of the moving object in the database.
17. The method for tracking a moving object as claimed in claim 16, wherein the step of associating the appearance model of the moving object with the appearance models in the database to judge whether the appearance model of the moving object has been recorded in the database comprises:
calculating a distance between the appearance model of the moving object and each of the appearance models in the database, and judging whether the distance is smaller than a threshold; and
if the distance of one of the appearance models is smaller than the threshold, updating that appearance model in the database with the appearance model of the moving object.
18. The method for tracking a moving object as claimed in claim 17, wherein the step of updating the appearance model in the database with the appearance model of the moving object comprises:
selecting, among the appearance models whose distances are smaller than the threshold, the one most similar to the appearance model of the moving object, and updating that appearance model in the database.
19. The method for tracking a moving object as claimed in claim 18, wherein the step of calculating the distance between the appearance model of the moving object and each of the appearance models in the database comprises:
calculating first distances between appearance models of the same moving object established at two different time points in the images, to establish a first distance distribution;
calculating second distances between the appearance models of two different moving objects in the images, to establish a second distance distribution; and
finding a boundary between the first distance distribution and the second distance distribution as a criterion for distinguishing the appearance models.
20. The method for tracking a moving object as claimed in claim 19, wherein the step of establishing the first distance distribution and the second distance distribution comprises:
calculating a mean value and a standard deviation of the first distance distribution and of the second distance distribution respectively; and
representing the data of the first distances and the second distances as normal distributions according to the mean values and the standard deviations, so as to establish the first distance distribution and the second distance distribution.
21. The method for tracking a moving object as claimed in claim 15, wherein after the step of recording the residence time and the appearance model of the moving object in the database, the method further comprises:
analyzing a time series of the moving object in the database to judge whether the moving object conforms to a loitering event.
22. The method for tracking a moving object as claimed in claim 21, wherein the step of analyzing the time series of the moving object in the database to judge whether the moving object conforms to the loitering event comprises:
judging whether a time during which the moving object continuously appears in the images exceeds a second preset time; and
when the moving object has continuously appeared in the images for more than the second preset time, judging that the moving object conforms to the loitering event.
23. The method for tracking a moving object as claimed in claim 21, wherein the step of analyzing the time series of the moving object in the database to judge whether the moving object conforms to the loitering event comprises:
judging whether a time interval during which the moving object leaves the images is smaller than a third preset time; and
when the time interval during which the moving object leaves the images is smaller than the third preset time, judging that the moving object conforms to the loitering event.
CN200810179785.8A 2008-12-03 2008-12-03 Method for tracking moving object Active CN101751549B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200810179785.8A CN101751549B (en) 2008-12-03 2008-12-03 Method for tracking moving object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200810179785.8A CN101751549B (en) 2008-12-03 2008-12-03 Method for tracking moving object

Publications (2)

Publication Number Publication Date
CN101751549A true CN101751549A (en) 2010-06-23
CN101751549B CN101751549B (en) 2014-03-26

Family

ID=42478515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200810179785.8A Active CN101751549B (en) 2008-12-03 2008-12-03 Method for tracking moving object

Country Status (1)

Country Link
CN (1) CN101751549B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324906A (en) * 2012-03-21 2013-09-25 日电(中国)有限公司 Method and equipment for detecting abandoned object
CN103824299A (en) * 2014-03-11 2014-05-28 武汉大学 Target tracking method based on significance
CN105574511A (en) * 2015-12-18 2016-05-11 财团法人车辆研究测试中心 Adaptive object classification device having parallel framework and method
CN107305378A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 A kind of method that image procossing follows the trail of the robot of object and follows the trail of object
CN107992198A (en) * 2013-02-06 2018-05-04 原相科技股份有限公司 Optical profile type pointing system
CN108205643A (en) * 2016-12-16 2018-06-26 同方威视技术股份有限公司 Image matching method and device
CN109102669A (en) * 2018-09-06 2018-12-28 广东电网有限责任公司 A kind of transformer substation auxiliary facility detection control method and its device
CN109117721A (en) * 2018-07-06 2019-01-01 江西洪都航空工业集团有限责任公司 A kind of pedestrian hovers detection method
CN110032917A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of accident detection method, apparatus and electronic equipment
CN110717941A (en) * 2018-07-12 2020-01-21 广达电脑股份有限公司 Image object tracking system and method
CN111815671A (en) * 2019-04-10 2020-10-23 曜科智能科技(上海)有限公司 Target quantity statistical method, system, computer device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6295367B1 (en) * 1997-06-19 2001-09-25 Emtera Corporation System and method for tracking movement of objects in a scene using correspondence graphs
US6931146B2 (en) * 1999-12-20 2005-08-16 Fujitsu Limited Method and apparatus for detecting moving object

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1141427A (en) * 1995-12-29 1997-01-29 西安交通大学 Method for measuring moving articles based on pattern recognition
CN1766928A (en) * 2004-10-29 2006-05-03 中国科学院计算技术研究所 A kind of motion object center of gravity track extraction method based on the dynamic background sport video
JP4915655B2 (en) * 2006-10-27 2012-04-11 パナソニック株式会社 Automatic tracking device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6295367B1 (en) * 1997-06-19 2001-09-25 Emtera Corporation System and method for tracking movement of objects in a scene using correspondence graphs
US6931146B2 (en) * 1999-12-20 2005-08-16 Fujitsu Limited Method and apparatus for detecting moving object

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103324906A (en) * 2012-03-21 2013-09-25 日电(中国)有限公司 Method and equipment for detecting abandoned object
CN107992198A (en) * 2013-02-06 2018-05-04 原相科技股份有限公司 Optical profile type pointing system
CN103824299A (en) * 2014-03-11 2014-05-28 武汉大学 Target tracking method based on significance
CN103824299B (en) * 2014-03-11 2016-08-17 武汉大学 A kind of method for tracking target based on significance
CN105574511B (en) * 2015-12-18 2019-01-08 财团法人车辆研究测试中心 Have the adaptability object sorter and its method of parallel framework
CN105574511A (en) * 2015-12-18 2016-05-11 财团法人车辆研究测试中心 Adaptive object classification device having parallel framework and method
CN107305378A (en) * 2016-04-20 2017-10-31 上海慧流云计算科技有限公司 A kind of method that image procossing follows the trail of the robot of object and follows the trail of object
CN108205643A (en) * 2016-12-16 2018-06-26 同方威视技术股份有限公司 Image matching method and device
CN110032917A (en) * 2018-01-12 2019-07-19 杭州海康威视数字技术股份有限公司 A kind of accident detection method, apparatus and electronic equipment
CN109117721A (en) * 2018-07-06 2019-01-01 江西洪都航空工业集团有限责任公司 A kind of pedestrian hovers detection method
CN110717941A (en) * 2018-07-12 2020-01-21 广达电脑股份有限公司 Image object tracking system and method
CN109102669A (en) * 2018-09-06 2018-12-28 广东电网有限责任公司 A kind of transformer substation auxiliary facility detection control method and its device
CN111815671A (en) * 2019-04-10 2020-10-23 曜科智能科技(上海)有限公司 Target quantity statistical method, system, computer device and storage medium
CN111815671B (en) * 2019-04-10 2023-09-15 曜科智能科技(上海)有限公司 Target quantity counting method, system, computer device and storage medium

Also Published As

Publication number Publication date
CN101751549B (en) 2014-03-26

Similar Documents

Publication Publication Date Title
CN101751549B (en) Method for tracking moving object
US8243990B2 (en) Method for tracking moving object
US8213679B2 (en) Method for moving targets tracking and number counting
CN101141633B (en) Moving object detecting and tracing method in complex scene
CN101246547B (en) Method for detecting moving objects in video according to scene variation characteristic
CN102831439A (en) Gesture tracking method and gesture tracking system
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
CN101799968B (en) Detection method and device for oil well intrusion based on video image intelligent analysis
CN103854027A (en) Crowd behavior identification method
KR20080085837A (en) Object density estimation in vedio
CN111292355A (en) Nuclear correlation filtering multi-target tracking method fusing motion information
CN106355604A (en) Target image tracking method and system
Wong et al. Recognition of pedestrian trajectories and attributes with computer vision and deep learning techniques
CN105654139A (en) Real-time online multi-target tracking method adopting temporal dynamic appearance model
Khanloo et al. A large margin framework for single camera offline tracking with hybrid cues
CN113763427B (en) Multi-target tracking method based on coarse-to-fine shielding processing
CN112435276A (en) Vehicle tracking method and device, intelligent terminal and storage medium
CN104143197A (en) Detection method for moving vehicles in aerial photography scene
CN110245554A (en) A kind of method, system platform and the storage medium of the early warning of pedestrian movement's trend
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN106127798B (en) Dense space-time contextual target tracking based on adaptive model
CN101610412B (en) Visual tracking method based on multi-cue fusion
CN109977796A (en) Trail current detection method and device
CN103077533A (en) Method for positioning moving target based on frogeye visual characteristics
CN115147921B (en) Multi-domain information fusion-based key region target abnormal behavior detection and positioning method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant