CN105631895A - Temporal-spatial context video target tracking method combining particle filtering - Google Patents

Temporal-spatial context video target tracking method combining particle filtering

Info

Publication number
CN105631895A
CN105631895A
Authority
CN
China
Prior art keywords
frame
space
target
particle filter
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510956797.7A
Other languages
Chinese (zh)
Other versions
CN105631895B (en)
Inventor
朱征宇
李帅
徐强
郑加琴
袁闯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201510956797.7A priority Critical patent/CN105631895B/en
Publication of CN105631895A publication Critical patent/CN105631895A/en
Application granted granted Critical
Publication of CN105631895B publication Critical patent/CN105631895B/en
Expired - Fee Related (current legal status)
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a temporal-spatial context video target tracking method combining particle filtering. The method comprises the following steps: S1, the first frame of video data is read, a video target box is selected, and the temporal-spatial context feature model and the particle filter feature model are initialized; S2, a new frame of video data is read, and the tracking result for the specific video target is acquired through the temporal-spatial context tracking method; S3, the trend of the confidence value is used to judge whether the specific target is subjected to strong interference, and if strong interference is detected, a particle filtering method is used to recalibrate the result, so that a more accurate video target tracking result is obtained; and S4, the finally obtained tracking result is used to update the temporal-spatial context feature model and the particle filter feature model at the same time. The algorithm provided by the invention can still complete stable tracking in the case of strong interference and has higher robustness.

Description

Spatio-temporal context video target tracking method combined with particle filtering
Technical field
The present invention relates to the field of video target capture and tracking, and in particular to a spatio-temporal context video target tracking method combined with particle filtering.
Background technology
Target tracking in video is one of the basic problems in computer vision and the foundation of follow-up tasks such as target recognition and activity recognition in video; it is already widely used in many aspects of daily life. However, the tracking process routinely encounters multiple interference factors, sometimes acting on the target simultaneously, such as deformation, illumination variation, occlusion and rotation. Developing an efficient and highly robust tracking algorithm is therefore a difficult and challenging task, and one that those skilled in the art urgently need to solve.
Summary of the invention
The present invention aims to solve at least the technical problems existing in the prior art, and in particular innovatively proposes a spatio-temporal context video target tracking method combined with particle filtering.
In order to achieve the above object of the present invention, the invention provides a spatio-temporal context video target tracking method combined with particle filtering, comprising the following steps:
S1, read the first frame of video data, select the video target box, and initialize the spatio-temporal context feature model and the particle filter feature model;
S2, read a new frame of video data, and obtain the tracking result for the specific video target therein through the spatio-temporal context tracking method;
S3, use the trend of the confidence value to judge whether the specific target is subject to strong interference; if strong interference is detected, enable the particle filter method to recalibrate the result, thereby obtaining a more accurate video target tracking result;
S4, use the finally obtained tracking result to update the spatio-temporal context feature model and the particle filter feature model simultaneously; then jump to S2.
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S1 includes:
A particle filter modeled on a color histogram is used here as the standby correction algorithm for the spatio-temporal context tracking algorithm, for five reasons. First, a color histogram is simple to compute, fast and effective. Second, color histogram modeling adapts to scale changes and can therefore cooperate well with the spatio-temporal context tracking algorithm. Third, the video target tracking process encounters various interference factors, such as background clutter, deformation and scale variation, and quite commonly several interference factors act on the target to be tracked simultaneously; in such cases the particle filter has a stronger anti-interference capability than a Kalman filter. Fourth, the spatio-temporal context tracking algorithm is itself fast and fairly robust, and exhibits tracking drift or even loss only under strong interference, which typically includes motion blur, rapid movement and fast in-plane rotation; since color histogram modeling is indifferent to pixel coordinates, it has a natural resistance to these disturbances and can effectively correct the tracking result. Fifth, the spatio-temporal context tracking algorithm performs a local candidate search, whereas the particle filter performs a global candidate search, so when the video target is lost because strong interference causes a tracking failure, the particle filter can rely on the advantage of global search to recover the lost target and resume tracking.
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S1 includes:
Since this algorithm is semi-supervised, the video target in the first frame must be selected manually; the manually selected video target box is then used to build the initial spatio-temporal context feature model and the particle filter feature model.
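By way of illustration only (a sketch under assumptions, not part of the claimed method), the S1 initialization might look like the following C++/OpenCV fragment; the HSV channels, the bin counts and the cv::selectROI call are choices made here, and the API shown is OpenCV 3.x rather than the OpenCV 2.4.9 used in the experiments below:

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical sketch of step S1: manually select the first-frame target box
// (the method is semi-supervised) and build the particle filter feature model
// as a normalized HSV color histogram of the selected patch.
int main() {
    cv::VideoCapture cap("input.avi");      // assumed input video
    cv::Mat frame;
    if (!cap.read(frame)) return -1;

    // Manual selection of the video target box on the first frame.
    cv::Rect target = cv::selectROI("select target", frame);

    // Particle filter feature model: 2-D hue/saturation histogram (assumed bins).
    cv::Mat patch = frame(target), hsv, hist;
    cv::cvtColor(patch, hsv, cv::COLOR_BGR2HSV);
    int channels[] = {0, 1};
    int histSize[] = {30, 32};
    float hRange[] = {0, 180}, sRange[] = {0, 256};
    const float* ranges[] = {hRange, sRange};
    cv::calcHist(&hsv, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);  // histogram as a pdf
    return 0;
}
```

The spatio-temporal context feature model is initialized from the same box via the spatial context formulas given next.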
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S2 includes:
The spatial context of each frame is calculated by the spatio-temporal context tracking algorithm according to the formula:
$$m(x) = b\,e^{-\left|\frac{x-x^*}{\alpha}\right|^{\beta}} = h^{sc}(x) \otimes p(x), \qquad p(x) = I(x)\cdot e^{-\frac{|x-x^*|^{2}}{\sigma^{2}}},$$
where m(x) is the standard confidence map; b is a normalization constant; α is the scale constant, set to 2.25; β is a shape parameter chosen so that the confidence curve is neither too smooth nor too sharp, with β = 1 finally selected as giving the most suitable curve; h^{sc}(x) is the spatial context of the current frame to be solved; p(x) is the known context prior probability model of the current frame; x is the coordinate of a pixel and x* denotes the tracked target center coordinate, so that |x − x*| is the Euclidean distance from pixel x to the target center; I(x) is the normalized image intensity at coordinate x; σ is the target scale factor; and the exponential term is a Gaussian weighting function, modeled on the insight from the biological visual system that content closer to the target center is more important: different weights are generated according to the distance to the target center, the closer the higher. Finally, the FFT can be used to accelerate the computation of h^{sc}(x), which is ultimately calculated by the formula:
$$h^{sc}(x) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\!\left(b\,e^{-\left|\frac{x-x^*}{\alpha}\right|^{\beta}}\right)}{\mathcal{F}\!\left(p(x)\right)}\right),$$
thereby obtaining the spatial context model of the current frame, which represents the relative spatial relations between different pixels of the current frame.
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S2 also includes:
The confidence map and tracking result of the current tracking pass are calculated by the spatio-temporal context tracking algorithm with the following formula:
$$m_t(x) = H_t^{stc}(x) \otimes p_t(x),$$
where m_t(x) is the predicted confidence map of frame t to be solved, H_t^{stc} is the spatio-temporal context model predicted for frame t, namely a spatial context model that has undergone temporal low-pass filtering, and p_t(x) is the context prior probability model of frame t computed as above. Using the FFT to accelerate the computation, the final formula is:
$$m_t(x) = \mathcal{F}^{-1}\!\left(\mathcal{F}\!\left(H_t^{stc}(x)\right) \odot \mathcal{F}\!\left(p_t(x)\right)\right),$$
where ⊙ denotes point-wise multiplication. This yields the predicted confidence map matrix of frame t; the maximum confidence value of this matrix and its corresponding coordinate are taken as the predicted confidence of frame t and the tracked target center coordinate, computed as:
$$x_t^* = \arg\max_x\, m_t(x).$$
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S3 also includes:
The trend of the classifier discrimination score is adopted to judge whether the target is occluded, according to the formula:
$$\frac{H_{k-1} - H_k}{H_{k-1}} > \xi,$$
where H_{k-1} is the already-known classifier discrimination score of frame k−1, H_k likewise is the predicted classifier discrimination score of frame k, and ξ is a preset threshold.
This formula is improved by adopting the trend of the confidence value as the detection criterion, so that it can detect not only occlusion but strong interference in general. At the same time, the formula above compares only the frame k to be tracked with frame k−1; to avoid over-reporting strong interference, the formula is improved as follows:
$$C_{mean} = \frac{\sum_{i=1}^{m} C_{k-i}}{m}, \qquad \frac{C_{mean} - C_k}{C_{mean}} > \xi,$$
where C_mean is the average confidence of frames k−1, k−2, …, k−m and C_k is the confidence of frame k. Using the average confidence of the preceding m frames as the baseline to judge whether the confidence drop of frame k exceeds the threshold effectively avoids over-reporting strong interference.
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S3 also includes:
If the confidence of frame k to be tracked has dropped by more than the set threshold of 30% relative to the average confidence of the preceding m frames, the video target is considered to be under strong interference; the result obtained directly by the spatio-temporal context tracking algorithm is then untrustworthy, and the particle filter is immediately enabled to correct the result and generate the final result.
In the described spatio-temporal context video target tracking method combined with particle filtering, preferably, said S4 also includes:
To ensure that the spatio-temporal context feature model and the particle filter feature model can always learn the latest and most effective features, which benefits subsequent tracking, this algorithm uses the final result of tracking the current frame to update both feature models simultaneously, regardless of whether that result comes directly from spatio-temporal context tracking or from the particle filter correction;
Update of the feature model of the spatio-temporal context tracking algorithm: the spatial context model h_t^{sc} obtained for frame t is used to update the spatio-temporal context model of frame t+1, with the formula:
$$H_{t+1}^{stc} = (1-\rho)\,H_t^{stc} + \rho\,h_t^{sc},$$
where ρ is a learning parameter: the larger ρ is, the faster the model updates and the fewer of the previous features are retained; H_{t+1}^{stc} is the spatio-temporal context model of frame t+1 to be solved, H_t^{stc} is the spatio-temporal context model of frame t, and h_t^{sc} is the spatial context model of frame t.
In summary, owing to the adoption of the above technical scheme, the beneficial effect of the invention is:
The algorithm proposed by the present invention can still complete relatively stable tracking in the face of strong interference and has higher robustness.
Additional aspects and advantages of the present invention will be given in part in the following description, will in part become apparent from the description below, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of the overall procedure of the present invention.
Detailed description of the invention
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, where throughout the same or similar reference numbers denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are illustrative, intended only to explain the present invention, and are not to be construed as limiting it.
In the description of the present invention, it should be understood that the orientation or positional relationships indicated by terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are intended only to facilitate and simplify the description of the present invention, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it should be noted that the terms "install", "link" and "connect" are to be understood broadly: the connection may, for example, be mechanical or electrical, may be internal to two elements, and may be direct or indirect through an intermediary; those of ordinary skill in the art can understand the specific meanings of the above terms as the case may be.
In the field of target tracking, tracking methods based on online learning are a current research hotspot. Such algorithms currently divide into generative models and discriminative models. A generative model builds a model of the foreground target from its appearance information and then uses this model to find the most similar region in the next frame. A discriminative model also takes background information into account: a classifier screens the multiple candidate targets, judging whether each candidate leans more toward the foreground target or the background, and the candidate leaning most toward the foreground is finally selected as the target. Although background information is in general not directly related to deciding the target, it can contribute indirectly to the final decision by helping judge whether a candidate is more likely background. The tracking accuracy of discriminative models is therefore generally better than that of generative models.
As shown in Fig. 1, the present invention provides a spatio-temporal context video target tracking method combined with particle filtering, comprising the following steps:
S1, read the first frame of video data, select the video target box, and initialize the spatio-temporal context feature model and the particle filter feature model;
S2, read a new frame of video data, and obtain the tracking result for the specific video target therein through the spatio-temporal context tracking method;
S3, use the trend of the confidence value to judge whether the specific target is subject to strong interference; if strong interference is detected, enable the particle filter method to recalibrate the result, thereby obtaining a more accurate video target tracking result;
S4, use the finally obtained tracking result to update the spatio-temporal context feature model and the particle filter feature model simultaneously; then jump to S2.
Formulas (1) and (2) are used by the spatio-temporal context tracking algorithm to calculate the spatial context of each frame:
$$m(x) = b\,e^{-\left|\frac{x-x^*}{\alpha}\right|^{\beta}} = h^{sc}(x) \otimes p(x) \qquad (1)$$
$$p(x) = I(x)\cdot e^{-\frac{|x-x^*|^{2}}{\sigma^{2}}} \qquad (2)$$
where m(x) is the standard confidence map; b is a normalization constant; α is the scale constant, set to 2.25; β is a shape parameter chosen so that the confidence curve is neither too smooth nor too sharp, with β = 1 finally selected as giving the most suitable curve; h^{sc}(x) is the spatial context of the current frame to be solved; p(x) is the known context prior probability model of the current frame; x is the coordinate of a pixel and x* denotes the tracked target center coordinate, so that |x − x*| is the Euclidean distance from pixel x to the target center; I(x) is the normalized image intensity at coordinate x; σ is the target scale factor; and the exponential term is a Gaussian weighting function, built on the insight from the biological visual system that content closer to the target center is more important: a weight is generated for each pixel according to its Euclidean distance to the target center, the nearer the higher and the farther the lower. The FFT is used to accelerate the computation of h^{sc}(x), which is finally calculated by formula (3):
$$h^{sc}(x) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\!\left(b\,e^{-\left|\frac{x-x^*}{\alpha}\right|^{\beta}}\right)}{\mathcal{F}\!\left(p(x)\right)}\right) \qquad (3)$$
thereby obtaining the spatial context model of the current frame, which represents the relative spatial relations between different pixels of the current frame.
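Purely as an illustration, formula (3) could be computed with the DFT as in the following C++/OpenCV sketch; the helper name computeSpatialContext, the epsilon regularizing the spectral division, and the omission of the constant b (a pure scale factor) are assumptions of this sketch, not part of the patent:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Hypothetical sketch of formula (3): h_sc = F^{-1}( F(m) / F(p) ), where
// m(x) is the standard confidence map and p(x) the context prior of formula (2).
// alpha = 2.25 and beta = 1 follow the text; b is dropped as a constant scale.
cv::Mat computeSpatialContext(const cv::Mat& prior, cv::Point2f center,
                              double alpha = 2.25, double beta = 1.0) {
    CV_Assert(prior.type() == CV_32F);
    cv::Mat conf(prior.size(), CV_32F);          // confidence map m(x)
    for (int y = 0; y < conf.rows; ++y)
        for (int x = 0; x < conf.cols; ++x) {
            double d = std::hypot(x - center.x, y - center.y);
            conf.at<float>(y, x) = (float)std::exp(-std::pow(d / alpha, beta));
        }
    cv::Mat Fm, Fp, num;
    cv::dft(conf, Fm, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(prior, Fp, cv::DFT_COMPLEX_OUTPUT);
    // Complex division F(m)/F(p): multiply by conj(F(p)), divide by |F(p)|^2.
    cv::mulSpectrums(Fm, Fp, num, 0, /*conjB=*/true);
    std::vector<cv::Mat> ch;
    cv::split(Fp, ch);
    cv::Mat mag = ch[0].mul(ch[0]) + ch[1].mul(ch[1]);
    mag += 1e-6f;                                // assumed epsilon for stability
    cv::split(num, ch);
    ch[0] /= mag;
    ch[1] /= mag;
    cv::merge(ch, num);
    cv::Mat hsc;
    cv::dft(num, hsc, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
    return hsc;                                  // spatial context h^sc
}
```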
Next comes the prediction step of the spatio-temporal context tracking algorithm, with the following formula:
$$m_t(x) = H_t^{stc}(x) \otimes p_t(x) \qquad (4)$$
where m_t(x) is the predicted confidence matrix of frame t to be solved, H_t^{stc} is the spatio-temporal context model predicted for frame t, namely a spatial context model that has undergone temporal low-pass filtering, and p_t(x) is the context prior probability model of frame t obtained from formula (2). This formula can also be accelerated with the FFT; the final computation formula is:
$$m_t(x) = \mathcal{F}^{-1}\!\left(\mathcal{F}\!\left(H_t^{stc}(x)\right) \odot \mathcal{F}\!\left(p_t(x)\right)\right) \qquad (5)$$
where ⊙ denotes matrix point-wise multiplication. This yields the predicted confidence matrix of frame t; the maximum confidence value of this matrix and its corresponding coordinate are taken as the predicted confidence of frame t and the tracked target center coordinate, computed as:
$$x_t^* = \arg\max_x\, m_t(x) \qquad (6)$$
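A matching sketch of the prediction step, formulas (4) to (6): the convolution of formula (4) becomes the element-wise spectrum product of formula (5), and formula (6) is an argmax over the resulting confidence matrix. The function name and signature are assumptions of this sketch:

```cpp
// Hypothetical sketch of formulas (4)-(6): m_t = F^{-1}( F(H_stc) ⊙ F(p_t) ),
// then the maximum of m_t gives the predicted confidence and target center.
cv::Point trackStep(const cv::Mat& Hstc, const cv::Mat& prior, double* conf) {
    cv::Mat Fh, Fp, prod, mt;
    cv::dft(Hstc, Fh, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(prior, Fp, cv::DFT_COMPLEX_OUTPUT);
    cv::mulSpectrums(Fh, Fp, prod, 0);      // element-wise product, formula (5)
    cv::dft(prod, mt, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
    double maxVal;
    cv::Point maxLoc;
    cv::minMaxLoc(mt, nullptr, &maxVal, nullptr, &maxLoc);  // formula (6)
    if (conf) *conf = maxVal;   // predicted confidence C_k of this frame
    return maxLoc;              // predicted target center x_t^*
}
```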
Finally, the spatial context model of frame t obtained from formula (3) is used to update the spatio-temporal context model of frame t+1, with the formula:
$$H_{t+1}^{stc} = (1-\rho)\,H_t^{stc} + \rho\,h_t^{sc} \qquad (7)$$
where ρ is a learning parameter: the larger ρ is, the faster the model updates and the fewer of the previous features are retained; H_{t+1}^{stc} is the spatio-temporal context model of frame t+1 to be solved, H_t^{stc} is the spatio-temporal context model of frame t, and h_t^{sc} is the spatial context model of frame t.
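Formula (7) itself reduces to one line; in the same sketch style (the default value of ρ below is an illustrative assumption, as the patent does not fix it):

```cpp
// Hypothetical sketch of formula (7): low-pass update of the model; a larger
// rho updates faster but retains fewer of the previously learned features.
void updateModel(cv::Mat& Hstc, const cv::Mat& hsc, double rho = 0.075) {
    Hstc = (1.0 - rho) * Hstc + rho * hsc;
}
```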
From formula (2) it can be seen that computing the context prior probability model p(x) of the current frame involves each pixel's relative distance to the center and its relative intensity. When the target suffers only slight interference, these quantities also change only slightly; the resulting context prior probability model does not change drastically and remains fairly credible. But when the target suffers strong interference, such as motion blur, fast in-plane rotation or severe occlusion, the relative center distances or relative intensities change violently, the credibility of the resulting context prior probability model declines accordingly, and the computed target position inevitably drifts. The spatial context feature model, learning from a drifted result, mixes a small amount of background features into the target features; as background features keep being learned and accumulate, target features and background features eventually become confused with each other and the target is thoroughly lost. Fortunately, in ordinary video target tracking tasks the target experiences only mild interference most of the time, and strong interference lasts only briefly; as long as the situations in which the target suffers strong interference are detected effectively and the result is corrected, the drift and loss problems of the spatio-temporal context tracking algorithm under strong interference can be well avoided and the robustness of the algorithm enhanced.
The trend of the classifier discrimination score is adopted to judge whether the target is occluded, according to the formula:
$$\frac{H_{k-1} - H_k}{H_{k-1}} > \xi \qquad (8)$$
where H_{k-1} is the already-known classifier discrimination score of frame k−1, H_k likewise is the predicted classifier discrimination score of frame k, and ξ is the preset threshold, here 0.3. The formula expresses that if the discrimination score of frame k to be tracked has dropped by more than the set 30% threshold relative to the discrimination score of the previous frame, an occlusion is considered to have occurred, and the prediction of a Kalman filter is then used to correct the result and generate the final result.
The present invention improves formula (8) by adopting the trend of the confidence value as the detection criterion, so that it can detect not only occlusion but strong interference in general. If only the confidence of frame k to be tracked is compared with the confidence of frame k−1, a rather extreme situation is quite likely: the confidence of frame k−1 is already low, so even a slight further decline at frame k may exceed the set threshold, and strong interference may be over-reported. To address this, the present invention improves the formula; the improved formulas are:
$$C_{mean} = \frac{\sum_{i=1}^{m} C_{k-i}}{m} \qquad (9)$$
$$\frac{C_{mean} - C_k}{C_{mean}} > \xi \qquad (10)$$
where C_mean is the average confidence of frames k−1, k−2, …, k−m and C_k is the confidence of frame k. Using the average confidence of the preceding m frames as the baseline to judge whether the confidence drop of frame k exceeds the threshold effectively avoids over-reporting strong interference. Extensive experimental results show that m = 3 achieves the best effect. The present invention thus uses formulas (9) and (10) to detect strong interference.
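Formulas (9) and (10) translate directly into code; m = 3 and ξ = 0.3 follow the text, while the container choice is an assumption of this sketch:

```cpp
#include <deque>
#include <numeric>

// Sketch of formulas (9)-(10): report strong interference when the confidence
// C_k of frame k drops, relative to the mean of the previous m frames, by more
// than the threshold xi.
bool strongInterference(const std::deque<double>& history, double Ck,
                        std::size_t m = 3, double xi = 0.3) {
    if (history.size() < m) return false;   // not enough frames yet
    double Cmean = std::accumulate(history.end() - m, history.end(), 0.0) / m;
    return (Cmean - Ck) / Cmean > xi;       // formula (10)
}
```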
The present invention adopts a particle filter modeled on a color histogram as the dynamic prediction model, for five reasons. First, a color histogram is simple to compute, fast and effective. Second, color histogram modeling adapts to scale changes and cooperates well with the spatio-temporal context tracking algorithm. Third, the video target tracking process encounters various interference factors, such as background clutter, deformation, in-plane rotation and illumination changes, and quite commonly these factors act on the target to be tracked simultaneously; in the face of such compound interference the particle filter has a stronger anti-interference capability than a Kalman filter. Fourth, a color histogram is used as the particle filter feature model because it is independent of pixel coordinates and therefore has a natural resistance to strong interference such as rotation or motion blur, making it an effective supplement when the spatio-temporal context tracking algorithm is likely to drift under these strong interference factors. Fifth, since the spatio-temporal context search strategy is local, it is difficult to pick the target up again once it is lost, whereas the particle filter can rely on whole-image search to effectively recover the tracking.
The particle filter method approximates the probability density function by finding a set of random samples propagated through the state space and replaces integration with the sample mean, thereby obtaining a minimum-variance estimate of the state distribution. Obtaining a more accurate prediction requires placing more particles, which inevitably increases computational complexity and reduces tracking speed. To exploit the advantage of the particle filter while preventing its high computational complexity from impairing real-time tracking, the present invention decomposes and optimizes the algorithm flow in a modular way. First the spatio-temporal context algorithm tracks the current frame, producing a tracking result and a confidence value; the confidence is substituted into formulas (9) and (10) to judge whether strong interference has occurred. If strong interference occurs, the tracking result of the spatio-temporal context algorithm is deemed untrustworthy and the particle filter is enabled to correct the result; if not, the result of the spatio-temporal context tracking algorithm is deemed trustworthy and the particle filter need not intervene. The final result, whether it comes from the spatio-temporal context algorithm or from the particle filter correction, is used to update both the spatio-temporal context model and the particle model, ensuring that both always retain the latest features. This optimization of the tracking flow effectively retains the high speed and accuracy of the spatio-temporal context tracking algorithm, merges in the good anti-interference capability of the particle filter, and avoids the particle filter's high computational complexity, achieving a scheme in which the spatio-temporal context tracking algorithm is primary and the particle filter auxiliary, the two cooperating and complementing each other so that the tracking effect of the spatio-temporal context algorithm is markedly improved. The flowchart is shown in Fig. 1.
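The particle filter correction module itself is not spelled out in the patent; the following sketch is one plausible reading (global uniform sampling, Bhattacharyya-weighted color histograms, weighted-mean output). N = 100 follows the experiment section below; everything else here is an assumption, and the histogram binning matches the S1 sketch above:

```cpp
#include <opencv2/opencv.hpp>
#include <random>

// Hypothetical sketch of the particle filter correction: scatter N particles
// over the whole frame (global candidate search), weight each by the
// Bhattacharyya similarity between its local HSV histogram and the target
// model, and return the weighted mean position.
cv::Point2f particleFilterCorrect(const cv::Mat& frame, const cv::Mat& model,
                                  cv::Size box, int N = 100) {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<float> ux(box.width / 2.f,
                                             frame.cols - box.width / 2.f);
    std::uniform_real_distribution<float> uy(box.height / 2.f,
                                             frame.rows - box.height / 2.f);
    cv::Mat hsv;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    int channels[] = {0, 1};
    int histSize[] = {30, 32};
    float hR[] = {0, 180}, sR[] = {0, 256};
    const float* ranges[] = {hR, sR};

    cv::Point2f mean(0, 0);
    double wsum = 0;
    for (int i = 0; i < N; ++i) {
        cv::Point2f p(ux(rng), uy(rng));
        cv::Rect r(int(p.x - box.width / 2), int(p.y - box.height / 2),
                   box.width, box.height);
        cv::Mat roi = hsv(r), hist;
        cv::calcHist(&roi, 1, channels, cv::Mat(), hist, 2, histSize, ranges);
        cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1);
        // Bhattacharyya coefficient as the particle weight (1 - distance).
        double w = 1.0 - cv::compareHist(hist, model, cv::HISTCMP_BHATTACHARYYA);
        mean += w * p;     // accumulate weighted position
        wsum += w;
    }
    return wsum > 0 ? mean * (1.0 / wsum)
                    : cv::Point2f(frame.cols / 2.f, frame.rows / 2.f);
}
```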
In the experiments, the algorithm of the present invention (STC_PF) is compared with two other algorithms: the Spatio-temporal Context (STC) algorithm and the Compressive Tracking (CT) algorithm. The three algorithms are tested on 51 public benchmark video sequences (Tracker Benchmark v1.0). All three are implemented in C/C++ with OpenCV 2.4.9, on an ordinary PC with an i5 CPU and 4 GB of memory, and the particle count of the particle filter is set to 100. Since the code is not optimized, and a fixed delay is added to the processing of every frame to ease visual inspection and frame-rate calculation, the frame rates serve only as relative references, not as absolute values.
Two evaluation criteria are used to quantitatively evaluate the three tracking algorithms: the success rate (SR) and the center location error (CLE), both computed against manually annotated ground-truth data. SR is computed by first calculating a tracking score for each frame, defined as:
$$score = \frac{area(R_t \cap R_g)}{area(R_t \cup R_g)} \qquad (11)$$
where R_t is the target box computed by the algorithm at each frame, R_g is the accurately annotated ground-truth target box, and area(R) is the area of region R. When the score exceeds 0.5 the frame is considered tracked successfully; SR is obtained by dividing the number of successfully tracked frames by the total number of frames in the video and multiplying by 100%. The higher the SR (the closer to 1), the better the tracking accuracy. CLE is defined as the Euclidean distance, at each frame, between the target center produced by the algorithm and the manually annotated target center, so a lower CLE indicates a better tracking effect. The quantitative analysis of the 51 video sequences shown in Tables 1 and 2 indicates that the proposed algorithm outperforms the STC algorithm in both success rate and center location error. In success rate, which reflects tracking accuracy, the algorithm herein is best 27 times, STC 16 times, and CT only 10 times. Since CT tracks the target at a fixed size, its accuracy on sequences containing scale variation is inevitably lower than that of the scale-adaptive algorithm of the present invention.
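Formula (11) is the standard overlap (intersection-over-union) score; as a sketch:

```cpp
// Sketch of formula (11): overlap between the tracked box R_t and the
// ground-truth box R_g; a frame counts as tracked successfully when the
// score exceeds 0.5.
double overlapScore(const cv::Rect& Rt, const cv::Rect& Rg) {
    double inter = (Rt & Rg).area();             // area(R_t ∩ R_g)
    double uni = Rt.area() + Rg.area() - inter;  // area(R_t ∪ R_g)
    return uni > 0 ? inter / uni : 0.0;
}
```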
Table 1. Success rate (SR) (%)
Table 2. Average center location error (ACLE) (unit: pixels), including average frame rate (FPS)
The STC algorithm can complete the target tracking task quickly and effectively when the target faces interference of a lesser degree. In tracking the CarDark video, the target is affected mainly by illumination and position changes, and since these changes are not strong the STC algorithm completes the tracking efficiently. But in video sequences such as Boy, Freeman1, Jogging1 and Jogging2, its center location error curves show track loss. At frame 416 of Boy and frame 131 of Freeman1, the STC tracking result deviates considerably and the target is completely lost thereafter. The reason is that the target in the video undergoes fast in-plane rotation and violent motion blur; the resistance of the STC algorithm to this kind of strong interference declines sharply, making the tracking unstable and losing the target. At frame 72 of Jogging1 and frame 63 of Jogging2, the STC tracking result likewise shows a very large error and the target is finally lost, because the target in the video is severely occluded. In addition, since the STC target center search is restricted to a local range twice the width and height of the original target box, once the target reappears beyond that range it is absolutely impossible for STC to find it again.
The CT algorithm tracks well when the texture and illumination of the object change little. In tracking Freeman1, the tracked target undergoes fast in-plane rotation, but its texture features and illumination do not change drastically, and the CT algorithm completes the tracking stably; at the same time, because the tracked target is also subject to scale changes and CT does not support scaling, its success rate on this sequence is not high. If the tracked target is severely occluded, its texture is necessarily strongly affected, so when the video target suffers severe occlusion the CT algorithm is also prone to tracking drift and track loss: at frame 72 of Jogging1 and frame 63 of Jogging2 the occluding object draws the tracker away and the real target is lost. Finally, since the CT search radius is restricted to a local range of 25 pixels around the target center, after a track loss it is also very difficult to recover the target if it reappears far from the current center.
The algorithm of the present invention combines a particle filter with the spatio-temporal context tracking algorithm. Under mild interference the spatio-temporal context tracking algorithm is fully capable on its own, and the target tracking task is completed well without particle filter intervention. In tracking the CarDark sequence, although the video target is affected by brightness changes and scale changes, these disturbances are not strong, so the spatio-temporal context tracking algorithm suffices for interference of this degree and completes the task quickly and effectively. The confidence therefore does not change violently, the particle filter is rarely activated, tracking remains dominated by the spatio-temporal context, and the excellent characteristics of the spatio-temporal context tracking algorithm are largely retained. Although the particle filter is rarely activated to take over the tracking, it keeps learning from the results of the spatio-temporal context tracking algorithm, so its feature set always stays up to date, ready at any moment to take over tracking when strong interference is judged. From Table 2, the average tracking center location error on the CarDark sequence is reduced from 2.94 to 2.76, and Table 1 shows that both success rates remain the same at 100%, indicating that this algorithm not only retains the excellent characteristics of the spatio-temporal context algorithm but also optimizes the final result to a certain degree.
Strong interference generally comprises violent background clutter and the blur caused by sharp in-plane rotation and rapid movement. When facing such strong interference the resistance of the spatio-temporal context tracking algorithm declines sharply, the obtained result is no longer particularly credible, and tracking drift or even loss is likely. Against these disturbances a color histogram feature model has a natural resistance and can serve as an effective supplement when spatio-temporal context tracking fails. In the Bolt and Deer sequences the target suffers severe background clutter; the corresponding center location error curves show that both the STC and CT algorithms drift until the track is lost, whereas the algorithm proposed by the present invention effectively controls the drift and recovers the tracking. The reason is that under strong background clutter the STC tracking result drifts; the present invention detects the strong interference through formulas (9) and (10) and promptly enables the PF to correct the STC tracking result, so that tracking recovers efficiently. In the Boy sequence the target is simultaneously subject to fast in-plane rotation and motion blur, and the corresponding center location error curves show that only the algorithm proposed by the present invention completes the tracking task.
After the target is lost due to strong interference, the algorithm proposed by the present invention can effectively find it again and recover tracking once the interference ends. Because the algorithm incorporates a particle filter, which performs a global candidate search, the target can be effectively picked up again when tracking is lost to strong interference. In the Jogging1 and Jogging2 sequences the target suffers one severe occlusion in which it is completely covered; the corresponding center location error curves show that the tracking results of all three algorithms drift, and those of the STC and CT algorithms finally lose the target completely and fail to recover it (at frame 72 of Jogging1 and frame 63 of Jogging2 respectively), whereas only the algorithm proposed by the present invention, despite a brief tracking drift, finally finds the target again and recovers the tracking. The reason is that the candidate searches of the STC and CT algorithms are local, so once the target reappears outside the local search area they cannot possibly recover it, while the algorithm proposed by the present invention immediately enables the particle filter to perform a global search when strong interference, including occlusion, is detected.
The STC algorithm resists interference of ordinary degree well and computes efficiently, but its robustness is poor under strong interference. Fortunately, in ordinary target tracking strong interference lasts only briefly, and most of the time the target suffers only slight interference. As long as the moments when the target is under strong interference can be detected in time and the result corrected, the robustness of the STC algorithm can be effectively enhanced.
On the basis of the STC algorithm, the present invention proposes the strong interference detection formulas and combines in the particle filter (PF), forming the "spatio-temporal context tracking algorithm combined with particle filtering (STC_PF)". The algorithm first still uses STC to track the target, obtaining the confidence and target box of this tracking pass. The confidence obtained by STC is substituted into formulas (9) and (10) to judge whether the tracking of this frame has encountered strong interference: if not, the STC tracking result is directly adopted as the final result; if strong interference is detected, the particle filter is enabled to correct the result, which then serves as the final result. Finally, the final result is used to update the STC model and the PF model simultaneously.
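Gluing the sketches above together, the STC_PF loop just described might read as follows; makePrior (formula (2)) and updateColorHist are placeholders assumed here and not defined by the patent, and variables such as cap, frame, Hstc, colorHist and boxSize are assumed to carry over from the earlier sketches:

```cpp
// Hypothetical STC_PF main loop, reusing the sketches above: STC tracks every
// frame, the particle filter is enabled only when formulas (9)-(10) fire, and
// both models are updated from whichever result becomes final.
std::deque<double> confHistory;
cv::Point center = initialCenter;   // from the manually selected first frame
while (cap.read(frame)) {
    double conf;
    cv::Mat prior = makePrior(frame, center);          // formula (2), assumed
    cv::Point result = trackStep(Hstc, prior, &conf);  // formulas (4)-(6)
    if (strongInterference(confHistory, conf)) {       // formulas (9)-(10)
        cv::Point2f pf = particleFilterCorrect(frame, colorHist, boxSize);
        result = cv::Point(pf);                        // PF correction
    }
    confHistory.push_back(conf);
    center = result;                                   // final result
    updateModel(Hstc, computeSpatialContext(makePrior(frame, center),
                                            center));  // formula (7)
    updateColorHist(colorHist, frame, center);         // keep PF model fresh
}
```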
Tests on the Tracker Benchmark v1.0 test set show that, compared with the original STC algorithm, the average success rate is raised from 38.57% to 46.67%, the average center location error is reduced from 85.42 to 50.17, and the average frame rate only drops slightly, from 45.89 fps to 44.91 fps (for ease of visual observation the same fixed delay is added to the processing of every frame in both algorithms, so the frame rate serves only as a relative reference); the requirement of real-time tracking is thus still met. Tables 1 and 2 give the experimental data respectively, showing that the STC_PF algorithm proposed by the present invention can still complete relatively stable tracking in the face of strong interference and is more robust than the original algorithm.
Tracking drift often occurs in the face of strong interference; in view of this, adding the strong interference detection formulas and the particle filter effectively improves the tracking effect and robustness. Experimental results show that, with the target under interference of varying degrees and under occlusion, both the success rate and the tracking stability of the algorithm of the present invention increase; real-time performance is ensured, accuracy is improved, and the robustness of the algorithm is enhanced.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a specific feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and purpose of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A spatio-temporal context video target tracking method combined with particle filtering, characterized by comprising the following steps:
S1, read the first frame of video data, select the video target box, and initialize the spatio-temporal context feature model and the particle filter feature model;
S2, read a new frame of video data, and obtain the tracking result for the specific video target therein through the spatio-temporal context tracking method;
S3, use the trend of the confidence value to judge whether the specific target is subject to strong interference; if strong interference is detected, enable the particle filter method to recalibrate the result, thereby obtaining a more accurate video target tracking result;
S4, use the finally obtained tracking result to update the spatio-temporal context feature model and the particle filter feature model simultaneously; then jump to S2.
2. The spatio-temporal context video target tracking method combined with particle filtering according to claim 1, characterized in that said S1 includes:
A particle filter modeled on a color histogram is used here as the standby correction algorithm for the spatio-temporal context tracking algorithm, for five reasons. First, a color histogram is simple to compute, fast and effective. Second, color histogram modeling adapts to scale changes and can therefore cooperate well with the spatio-temporal context tracking algorithm. Third, the video target tracking process encounters various interference factors, such as background clutter, deformation and scale variation, and quite commonly several interference factors act on the target to be tracked simultaneously; in such cases the particle filter has a stronger anti-interference capability than a Kalman filter. Fourth, the spatio-temporal context tracking algorithm is itself fast and fairly robust, and exhibits tracking drift or even loss only under strong interference, which typically includes motion blur, rapid movement and fast in-plane rotation; since color histogram modeling is indifferent to pixel coordinates, it has a natural resistance to these disturbances and can effectively correct the tracking result. Fifth, the spatio-temporal context tracking algorithm performs a local candidate search, whereas the particle filter performs a global candidate search, so when the video target is lost because strong interference causes a tracking failure, the particle filter can rely on the advantage of global search to recover the lost target and resume tracking.
3. The spatio-temporal context video target tracking method combined with particle filtering according to claim 1, characterized in that said S1 includes:
Since this algorithm is semi-supervised, the video target in the first frame must be selected manually; the manually selected video target box is then used to build the initial spatio-temporal context feature model and the particle filter feature model.
4. The spatio-temporal context video target tracking method combined with particle filtering according to claim 1, characterized in that said S2 includes:
The spatial context of each frame is calculated by the spatio-temporal context tracking algorithm according to the formula:
$$m(x) = b\,e^{-\left|\frac{x-x^*}{\alpha}\right|^{\beta}} = h^{sc}(x) \otimes p(x), \qquad p(x) = I(x)\cdot e^{-\frac{|x-x^*|^{2}}{\sigma^{2}}},$$
where m(x) is the standard confidence map; b is a normalization constant; α is the scale constant, set to 2.25; β is a shape parameter chosen so that the confidence curve is neither too smooth nor too sharp, with β = 1 finally selected as giving the most suitable curve; h^{sc}(x) is the spatial context of the current frame to be solved; p(x) is the known context prior probability model of the current frame; x is the coordinate of a pixel and x* denotes the tracked target center coordinate, so that |x − x*| is the Euclidean distance from pixel x to the target center; I(x) is the normalized image intensity at coordinate x; σ is the target scale factor; and the exponential term is a Gaussian weighting function, modeled on the insight from the biological visual system that content closer to the target center is more important: different weights are generated according to the distance to the target center, the closer the higher. Finally, the FFT can be used to accelerate the computation of h^{sc}(x), which is ultimately calculated by the formula:
$$h^{sc}(x) = \mathcal{F}^{-1}\!\left(\frac{\mathcal{F}\!\left(b\,e^{-\left|\frac{x-x^*}{\alpha}\right|^{\beta}}\right)}{\mathcal{F}\!\left(p(x)\right)}\right),$$
thereby obtaining the spatial context model of the current frame, which represents the relative spatial relations between different pixels of the current frame.
5. The spatio-temporal context video target tracking method combined with particle filtering according to claim 1, characterized in that said S2 also includes:
The confidence map and tracking result of the current tracking pass are calculated by the spatio-temporal context tracking algorithm with the following formula:
$$m_t(x) = H_t^{stc}(x) \otimes p_t(x),$$
where m_t(x) is the predicted confidence map of frame t to be solved, H_t^{stc} is the spatio-temporal context model predicted for frame t, namely a spatial context model that has undergone temporal low-pass filtering, and p_t(x) is the context prior probability model of frame t computed as above. Using the FFT to accelerate the computation, the final formula is:
$$m_t(x) = \mathcal{F}^{-1}\!\left(\mathcal{F}\!\left(H_t^{stc}(x)\right) \odot \mathcal{F}\!\left(p_t(x)\right)\right),$$
where ⊙ denotes matrix point-wise multiplication. This yields the predicted confidence map matrix of frame t; the maximum confidence value of this matrix and its corresponding coordinate are taken as the predicted confidence of frame t and the tracked target center coordinate, computed as:
$$x_t^* = \arg\max_x\, m_t(x).$$
6. The spatio-temporal context video target tracking method combined with particle filtering according to claim 1, characterized in that said S3 also includes:
The trend of the classifier discrimination score is adopted to judge whether the target is occluded, according to the formula:
$$\frac{H_{k-1} - H_k}{H_{k-1}} > \xi,$$
where H_{k-1} is the already-known classifier discrimination score of frame k−1, H_k likewise is the predicted classifier discrimination score of frame k, and ξ is a preset threshold.
This formula is improved by adopting the trend of the confidence value as the detection criterion, so that it can detect not only occlusion but strong interference in general. At the same time, the formula above compares only the frame k to be tracked with frame k−1; to avoid over-reporting strong interference, the formula is improved as follows:
$$C_{mean} = \frac{\sum_{i=1}^{m} C_{k-i}}{m}, \qquad \frac{C_{mean} - C_k}{C_{mean}} > \xi,$$
where C_mean is the average confidence of frames k−1, k−2, …, k−m and C_k is the confidence of frame k. Using the average confidence of the preceding m frames as the baseline to judge whether the confidence drop of frame k exceeds the threshold effectively avoids over-reporting strong interference.
7. The spatio-temporal context video target tracking method combined with particle filtering according to claim 6, characterized in that said S3 also includes:
If the confidence of frame k to be tracked has dropped by more than the set threshold of 30% relative to the average confidence of the preceding m frames, the video target is considered to be under strong interference; the result obtained directly by the spatio-temporal context tracking algorithm is then untrustworthy, and the particle filter is immediately enabled to correct the result and generate the final result.
8. The spatio-temporal context video target tracking method combined with particle filtering according to claim 6, characterized in that said S4 also includes:
To ensure that the spatio-temporal context feature model and the particle filter feature model can always learn the latest and most effective features, which benefits subsequent tracking, this algorithm uses the final result of tracking the current frame to update both feature models simultaneously, regardless of whether that result comes directly from spatio-temporal context tracking or from the particle filter correction;
Update of the feature model of the spatio-temporal context tracking algorithm: the spatial context model h_t^{sc} obtained for frame t is used to update the spatio-temporal context model of frame t+1, with the formula:
$$H_{t+1}^{stc} = (1-\rho)\,H_t^{stc} + \rho\,h_t^{sc},$$
where ρ is a learning parameter: the larger ρ is, the faster the model updates and the fewer of the previous features are retained; H_{t+1}^{stc} is the spatio-temporal context model of frame t+1 to be solved, H_t^{stc} is the spatio-temporal context model of frame t, and h_t^{sc} is the spatial context model of frame t.
CN201510956797.7A 2015-12-18 2015-12-18 Spatio-temporal context video target tracking method combined with particle filtering Expired - Fee Related CN105631895B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510956797.7A CN105631895B (en) 2015-12-18 2015-12-18 Spatio-temporal context video target tracking method combined with particle filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510956797.7A CN105631895B (en) 2015-12-18 2015-12-18 Spatio-temporal context video target tracking method combined with particle filtering

Publications (2)

Publication Number Publication Date
CN105631895A true CN105631895A (en) 2016-06-01
CN105631895B CN105631895B (en) 2018-05-29

Family

ID=56046781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510956797.7A Expired - Fee Related CN105631895B (en) 2015-12-18 2015-12-18 Spatio-temporal context video target tracking method combined with particle filtering

Country Status (1)

Country Link
CN (1) CN105631895B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127776A (en) * 2016-06-28 2016-11-16 北京工业大学 Based on multiple features space-time context robot target identification and motion decision method
CN106127798A (en) * 2016-06-13 2016-11-16 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106446933A (en) * 2016-08-31 2017-02-22 河南广播电视大学 Multi-target detection method based on context information
CN106780539A (en) * 2016-11-30 2017-05-31 航天科工智能机器人有限责任公司 Robot vision tracking
CN106952288A (en) * 2017-03-31 2017-07-14 西北工业大学 Based on convolution feature and global search detect it is long when block robust tracking method
CN107038431A (en) * 2017-05-09 2017-08-11 西北工业大学 Video target tracking method of taking photo by plane based on local sparse and spatio-temporal context information
CN107240120A (en) * 2017-04-18 2017-10-10 上海体育学院 The tracking and device of moving target in video
CN107346548A (en) * 2017-07-06 2017-11-14 电子科技大学 A kind of tracking for electric transmission line isolator
CN107451601A (en) * 2017-07-04 2017-12-08 昆明理工大学 Moving Workpieces recognition methods based on the full convolutional network of space-time context
CN107610159A (en) * 2017-09-03 2018-01-19 西安电子科技大学 Infrared small object tracking based on curvature filtering and space-time context
CN107886525A (en) * 2017-11-28 2018-04-06 南京莱斯信息技术股份有限公司 A kind of redundant data data dictionary compressed sensing video target tracking method
CN108320300A (en) * 2018-01-02 2018-07-24 重庆信科设计有限公司 A kind of space-time context visual tracking method of fusion particle filter
CN109359663A (en) * 2018-08-22 2019-02-19 南京信息工程大学 A kind of video tracing method returned based on color cluster and space-time canonical
CN109448023A (en) * 2018-10-23 2019-03-08 武汉大学 A kind of satellite video Small object method for real time tracking of combination space confidence map and track estimation
CN110570451A (en) * 2019-08-05 2019-12-13 武汉大学 multithreading visual target tracking method based on STC and block re-detection
CN110636452A (en) * 2019-08-28 2019-12-31 福建工程学院 Wireless sensor network particle filter target tracking method
CN110738685A (en) * 2019-09-09 2020-01-31 桂林理工大学 space-time context tracking method with color histogram response fusion
CN111126127A (en) * 2019-10-23 2020-05-08 武汉大学 High-resolution remote sensing image classification method guided by multi-level spatial context characteristics
CN111563913A (en) * 2020-04-15 2020-08-21 上海摩象网络科技有限公司 Searching method and device based on tracking target and handheld camera thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110085702A1 (en) * 2009-10-08 2011-04-14 University Of Southern California Object tracking by hierarchical association of detection responses
CN103093480A (en) * 2013-01-15 2013-05-08 沈阳大学 Particle filtering video image tracking method based on dual model
CN104200495A (en) * 2014-09-25 2014-12-10 重庆信科设计有限公司 Multi-target tracking method in video surveillance

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110085702A1 (en) * 2009-10-08 2011-04-14 University Of Southern California Object tracking by hierarchical association of detection responses
CN103093480A (en) * 2013-01-15 2013-05-08 沈阳大学 Particle filtering video image tracking method based on dual model
CN104200495A (en) * 2014-09-25 2014-12-10 重庆信科设计有限公司 Multi-target tracking method in video surveillance

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KAIHUA ZHANG et al.: "Fast Visual Tracking via Dense Spatio-temporal Context Learning", ECCV 2014: Computer Vision - ECCV 2014 *
SONGMIN JIA et al.: "Target tracking for mobile robot based on Spatio-Temporal Context model", 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO) *
X.G. WEI et al.: "A Novel Visual Object Tracking Algorithm using Multiple Spatial Context Models and Bayesian Kalman Filter", 2015 IEEE International Symposium on Circuits and Systems (ISCAS) *
QIAN Kun et al.: "Infrared dim and small target tracking based on guided filtering and spatio-temporal context", Acta Photonica Sinica (光子学报) *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127798B (en) * 2016-06-13 2019-02-22 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106127798A (en) * 2016-06-13 2016-11-16 重庆大学 Dense space-time contextual target tracking based on adaptive model
CN106127776A (en) * 2016-06-28 2016-11-16 北京工业大学 Based on multiple features space-time context robot target identification and motion decision method
CN106127776B (en) * 2016-06-28 2019-05-03 北京工业大学 It is identified based on multiple features space-time context robot target and moves decision-making technique
CN106446933A (en) * 2016-08-31 2017-02-22 河南广播电视大学 Multi-target detection method based on context information
CN106446933B (en) * 2016-08-31 2019-08-02 河南广播电视大学 Multi-target detection method based on contextual information
CN106780539B (en) * 2016-11-30 2019-08-20 航天科工智能机器人有限责任公司 Robot vision tracking
CN106780539A (en) * 2016-11-30 2017-05-31 航天科工智能机器人有限责任公司 Robot vision tracking
CN106952288B (en) * 2017-03-31 2019-09-24 西北工业大学 Based on convolution feature and global search detect it is long when block robust tracking method
CN106952288A (en) * 2017-03-31 2017-07-14 西北工业大学 Based on convolution feature and global search detect it is long when block robust tracking method
CN107240120B (en) * 2017-04-18 2019-12-17 上海体育学院 Method and device for tracking moving target in video
CN107240120A (en) * 2017-04-18 2017-10-10 上海体育学院 The tracking and device of moving target in video
CN107038431A (en) * 2017-05-09 2017-08-11 西北工业大学 Video target tracking method of taking photo by plane based on local sparse and spatio-temporal context information
CN107451601A (en) * 2017-07-04 2017-12-08 昆明理工大学 Moving Workpieces recognition methods based on the full convolutional network of space-time context
CN107346548A (en) * 2017-07-06 2017-11-14 电子科技大学 A kind of tracking for electric transmission line isolator
CN107610159A (en) * 2017-09-03 2018-01-19 西安电子科技大学 Infrared small object tracking based on curvature filtering and space-time context
CN107886525A (en) * 2017-11-28 2018-04-06 南京莱斯信息技术股份有限公司 A kind of redundant data data dictionary compressed sensing video target tracking method
CN108320300A (en) * 2018-01-02 2018-07-24 重庆信科设计有限公司 A kind of space-time context visual tracking method of fusion particle filter
CN109359663A (en) * 2018-08-22 2019-02-19 南京信息工程大学 A kind of video tracing method returned based on color cluster and space-time canonical
CN109448023A (en) * 2018-10-23 2019-03-08 武汉大学 A kind of satellite video Small object method for real time tracking of combination space confidence map and track estimation
CN109448023B (en) * 2018-10-23 2021-05-18 武汉大学 Satellite video small target real-time tracking method
CN110570451A (en) * 2019-08-05 2019-12-13 武汉大学 multithreading visual target tracking method based on STC and block re-detection
CN110570451B (en) * 2019-08-05 2022-02-01 武汉大学 Multithreading visual target tracking method based on STC and block re-detection
CN110636452A (en) * 2019-08-28 2019-12-31 福建工程学院 Wireless sensor network particle filter target tracking method
CN110636452B (en) * 2019-08-28 2021-01-12 福建工程学院 Wireless sensor network particle filter target tracking method
CN110738685A (en) * 2019-09-09 2020-01-31 桂林理工大学 space-time context tracking method with color histogram response fusion
CN111126127A (en) * 2019-10-23 2020-05-08 武汉大学 High-resolution remote sensing image classification method guided by multi-level spatial context characteristics
CN111563913A (en) * 2020-04-15 2020-08-21 上海摩象网络科技有限公司 Searching method and device based on tracking target and handheld camera thereof
CN111563913B (en) * 2020-04-15 2021-12-10 上海摩象网络科技有限公司 Searching method and device based on tracking target and handheld camera thereof

Also Published As

Publication number Publication date
CN105631895B (en) 2018-05-29

Similar Documents

Publication Publication Date Title
CN105631895A (en) Temporal-spatial context video target tracking method combining particle filtering
CN108921873B (en) Markov decision-making online multi-target tracking method based on kernel correlation filtering optimization
CN102881024B (en) Tracking-learning-detection (TLD)-based video object tracking method
Bae et al. Robust online multiobject tracking with data association and track management
CN104091349B (en) robust target tracking method based on support vector machine
CN104820997B (en) A kind of method for tracking target based on piecemeal sparse expression Yu HSV Feature Fusion
CN107464256B (en) A kind of target detection and possibility differentiate modified correlating method
CN108921880B (en) Visual multi-target tracking method based on multiple single trackers
CN111368938A (en) Multi-target vehicle tracking method based on MDP
CN104616318A (en) Moving object tracking method in video sequence image
CN103632382A (en) Compressive sensing-based real-time multi-scale target tracking method
CN103793926B (en) Method for tracking target based on sample reselection procedure
Leang et al. On-line fusion of trackers for single-object tracking
CN110348332A (en) The inhuman multiple target real-time track extracting method of machine under a kind of traffic video scene
CN113628245B (en) Multi-target tracking method, device, electronic equipment and storage medium
CN108288020A (en) Video shelter detecting system based on contextual information and method
Chau et al. Robust mobile object tracking based on multiple feature similarity and trajectory filtering
CN105809718A (en) Object tracking method with minimum trajectory entropy
Luo et al. Improved GM-PHD filter based on threshold separation clusterer for space-based starry-sky background weak point target tracking
CN102800105B (en) Target detection method based on motion vector
Carson et al. Predicting to improve: Integrity measures for assessing visual localization performance
CN106127798A (en) Dense space-time contextual target tracking based on adaptive model
CN109636834A (en) Video frequency vehicle target tracking algorism based on TLD innovatory algorithm
CN104680194A (en) On-line target tracking method based on random fern cluster and random projection
CN106485283B (en) A kind of particle filter pedestrian target tracking based on Online Boosting

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180529

Termination date: 20191218