CN103413323A - Object tracking method based on component-level appearance model - Google Patents

Object tracking method based on component-level appearance model

Info

Publication number
CN103413323A
CN103413323A (application CN201310317408.7A)
Authority
CN
China
Prior art keywords
feature
frame
super pixel
cluster
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103174087A
Other languages
Chinese (zh)
Other versions
CN103413323B (en)
Inventor
王美华
梁云
刘福明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN201310317408.7A priority Critical patent/CN103413323B/en
Publication of CN103413323A publication Critical patent/CN103413323A/en
Application granted granted Critical
Publication of CN103413323B publication Critical patent/CN103413323B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an object tracking method based on a component-level appearance model, that is, an appearance model built from mid-level visual cues and updated at the component level. The method segments images into superpixels, uses the superpixels to describe the target components of the tracked object, builds an object feature pool from the component information, and establishes and updates the object's appearance model, so that the model can accurately represent the moving object under deformation, occlusion and similar conditions. When the appearance model is updated, components of the new tracked frame replace part of the component set to be replaced in the feature pool; a feature complement of the target object's components is built and added to the feature pool as the component description of the new frame; the appearance model is then rebuilt from the new feature pool, realizing the update. As tracking proceeds, the appearance model's information is updated and the tracked object's information accumulates, making the appearance model more comprehensive, so that tracking guided by it performs better under large appearance changes such as severe occlusion and deformation.

Description

Object tracking method based on a component-level appearance model
Technical field
The present invention relates to the field of computer vision, and more specifically to an object tracking method based on a component-level appearance model.
Background technology
Object tracking is an important research topic in computer vision that has attracted wide attention in recent years and become a current research hotspot. The technology has broad application prospects and plays an important role in many fields, such as security surveillance, human-computer interaction, medical diagnosis and traffic flow monitoring. Although a large number of object tracking methods have been proposed, when illumination or the object's profile changes greatly, or severe occlusion occurs, these methods often cannot provide satisfactory tracking results and frequently fail to track the target object. Proposing an effective object tracking method therefore has important application value and practical significance.
At present, object tracking based on the Bayesian filtering principle is quite mature. It involves four main parts: feature extraction, building the appearance model, searching for the target, and updating the appearance model, of which handling the appearance model is the focus and the difficulty. Although many successful tracking algorithms have been proposed, developing a robust algorithm that can handle complex, dynamic scenes remains a challenging problem: illumination changes, camera motion, object deformation, and partial or full occlusion of the target can all change the appearance of the scene considerably. Such variations can only be handled by adaptive methods that update their representations incrementally, so an appearance representation that can be continuously learned and updated online is essential for tracking.
When updating the object appearance model, existing template-based methods take a whole frame as the update unit: adding the information of one frame to the modelling feature pool removes the information of another frame from it. This lets the appearance model follow changes in the target or scene appearance, but discarding a whole frame of information loses part of the useful information. When the target deforms frequently or is partially occluded for long enough in the tracked scene, the updated appearance model is often incomplete and can only represent part of the object's appearance. In scenes with cluttered backgrounds and large appearance variation, tracking without a robust appearance model often fails to obtain valid results.
Summary of the invention
To overcome the deficiencies of the prior art, namely that a template-based update of the object appearance model easily loses part of the tracked object's information and leaves the model an incomplete representation of the tracked object, the present invention proposes an object tracking method based on a component-level appearance model that takes components as the update unit, aiming to strengthen the completeness with which the appearance model represents the tracked object.
The technical scheme of the present invention is:
An object tracking method based on a component-level appearance model comprises the following steps:
S1. Create the feature pool for modeling: track the first m frame images with a simple tracker and record the target region of every frame; centered on each target region, expand outward to obtain an extended region; segment each extended region into superpixels, the superpixels recording the information of the target object components; extract the feature of each component, and collect the features of all frames to build the feature pool;
S2. Build the appearance model of the object from the feature sets in the feature pool;
S3. Supposing the tracking of the first t frame images, t >= m, has been completed, compute from the appearance model the feature set and confidence of each superpixel in the target region of frame t+1 and its extended region, and record the superpixels describing target object components;
S4. Compute the complement of the feature set of frame t+1: when no severe occlusion has occurred, perform S5, otherwise perform S8;
S5. Take the frame in the feature pool farthest from the current time as the frame to be replaced;
S6. When the number of superpixels describing target object components in the frame to be replaced is greater than β, β being a predefined constant, select from these superpixels the β whose eigenvectors have the largest Euclidean distance to the eigenvectors of the superpixels describing the target in the current frame, take their feature set as the complement of the current frame's feature set, and proceed to S11; otherwise proceed to S7;
S7. Take the feature set of the superpixels describing target object components in the frame to be replaced as the complement of the current frame's feature set, and proceed to S11;
S8. Select the third-nearest frame to the current time in the feature pool as the frame to be replaced;
S9. When the number of superpixels describing target object components in the frame to be replaced is less than or equal to α, α being a predefined constant with α < β, take the feature set of these superpixels as the complement of the current frame's feature set and proceed to S11; otherwise, when that number is greater than α, proceed to S10;
S10. Take the feature set of the α superpixels with the highest confidence among the superpixels describing target object components in the frame to be replaced as the complement of the current frame's feature set;
S11. Merge the feature set of the current frame with its complement into the new feature set of the current frame, add the new feature set to the feature pool and delete the replaced frame's feature set from the feature pool, completing one feature pool update;
S12. If the condition for updating the appearance model is met, build the appearance model of the object from the updated feature pool, realizing the update of the appearance model;
S13. Return to step S3 until the tracking of the whole video image sequence is completed.
Further, in step S1 the first m frames are tracked without guidance from an appearance model, and the feature pool is created as follows:
Given the target region in the first frame image Frame_1, including its centre point and region size, take the target region of the first frame as a template and compute the target regions of Frame_2, …, Frame_m in turn by simple iterative matching;
Sample candidate target regions around the target region; centered on the target region, expand it by a factor λ to obtain the extended region, and segment the extended region of each of the m frames into N_i superpixels sp(i,j), i = 1, …, m, j = 1, …, N_i;
Extract the HSI colour feature of every superpixel of each frame, represent it by the eigenvector f_{t'}^r, and record whether each superpixel belongs to a target component; finally, organize the eigenvectors of the m frames in order into the feature pool F = {f_{t'}^r | t' = 1, …, m; r = 1, …, N_{t'}} used to create the component appearance model, and record it.
The above λ is a constant chosen large enough that the extended region covers every sample. The HSI colour space is close to human visual perception, which agrees with how a human eye recognises target components, so the HSI colour features of the superpixels are extracted.
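As an illustration of this stage, the sketch below segments an expanded region with SLIC and collects, for every superpixel, a colour eigenvector and its in/out-of-target pixel counts. It is a minimal sketch, assuming mean HSV values from scikit-image as a stand-in for the HSI colour feature; the image, target box and the helper name frame_features are illustrative, not from the patent.

```python
# Minimal sketch of the S1 feature extraction; mean HSV stands in for HSI.
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv

def frame_features(img, target_box, expand=1.5, n_segments=150):
    """Segment the expanded target region and describe each superpixel."""
    cx, cy, w, h = target_box                       # centre and size of target
    W, H = int(w * expand), int(h * expand)         # expanded region size
    x0, y0 = max(0, int(cx - W / 2)), max(0, int(cy - H / 2))
    region = img[y0:y0 + H, x0:x0 + W]
    labels = slic(region, n_segments=n_segments, compactness=10, start_label=0)
    hsv = rgb2hsv(region)
    mask = np.zeros(labels.shape, bool)             # target mask in the region
    tx0, ty0 = int(cx - w / 2) - x0, int(cy - h / 2) - y0
    mask[max(0, ty0):ty0 + h, max(0, tx0):tx0 + w] = True
    feats, owns, n_in, n_out = [], [], [], []
    for r in range(labels.max() + 1):
        sel = labels == r
        feats.append(hsv[sel].mean(axis=0))         # eigenvector f_{t'}^r
        n_plus, n_minus = int((sel & mask).sum()), int((sel & ~mask).sum())
        n_in.append(n_plus); n_out.append(n_minus)
        owns.append(n_plus / (n_plus + n_minus) > 0.5)   # region judgement
    return np.array(feats), np.array(owns), np.array(n_in), np.array(n_out)

# toy usage: a random frame with a nominal target box (cx, cy, w, h)
rng = np.random.default_rng(0)
img = rng.random((240, 320, 3))
F, owns, n_in, n_out = frame_features(img, target_box=(160, 120, 60, 80))
print(F.shape, int(owns.sum()))
```

With real data, frame_features would run once on each of the first m frames and the returned eigenvectors would be concatenated, in order, into the pool F.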
Further, creating the appearance model of the object in step S2 comprises two parts: clustering the eigenvectors in the feature pool by the mean-shift method, and computing the confidence of each cluster. Each cluster represents a class of components with similar features, and its confidence value expresses the probability that the components are target components. The implementation is as follows:
Cluster the eigenvectors F = {f_{t'}^r} in the feature pool into n classes clst(k), k = 1, …, n, by the mean-shift clustering algorithm, where f_c denotes the cluster centre eigenvector and r_c(k) the radius of cluster clst(k) in feature space;
Let S+(k) be the total area that the components belonging to the k-th cluster cover inside the target regions, and S-(k) the total area that they cover outside the target regions. The confidence of the cluster is expressed as C_k = (S+(k) - S-(k)) / (S+(k) + S-(k)), for all k = 1, …, n.
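To make this step concrete, the toy sketch below clusters pooled eigenvectors and computes C_k. It assumes scikit-learn's MeanShift as the mean-shift implementation (the patent names no library) and synthetic in/out pixel counts standing in for the covered areas S+(k) and S-(k).

```python
# Toy sketch of S2: mean-shift clustering plus cluster confidences C_k.
import numpy as np
from sklearn.cluster import MeanShift

rng = np.random.default_rng(1)
F = rng.random((200, 3))                  # pooled eigenvectors f_{t'}^r
n_in = rng.integers(0, 300, 200)          # N+ : superpixel pixels inside target
n_out = rng.integers(0, 300, 200)         # N- : superpixel pixels outside target

ms = MeanShift(bandwidth=0.25).fit(F)
labels, centers = ms.labels_, ms.cluster_centers_
n_clusters = centers.shape[0]

# cluster radius r_c(k): farthest member from the centre in feature space
radius = np.array([np.linalg.norm(F[labels == k] - centers[k], axis=1).max()
                   for k in range(n_clusters)])

# C_k = (S+ - S-) / (S+ + S-), a value in [-1, 1]
s_plus = np.array([n_in[labels == k].sum() for k in range(n_clusters)])
s_minus = np.array([n_out[labels == k].sum() for k in range(n_clusters)])
C = (s_plus - s_minus) / (s_plus + s_minus)
print(n_clusters, C.round(2))
```

Reading r_c(k) as the distance of the farthest member from the cluster centre is one plausible interpretation of "the radius of the cluster in feature space".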
Further, step S3 supposes the tracking of the first t frames, t >= m, has been completed and computes from the appearance model the feature set and confidence of the superpixels in the target region of frame t+1 and its extended region. The concrete method is as follows:
Using the centre point and size of the target region of frame t, segment the extended region of frame t+1 into superpixels and extract the HSI feature of each superpixel, represented by the eigenvector f_{t+1}^j;
Compare each eigenvector with the eigenvectors in the feature pool for similarity, and determine the correspondence between the superpixels of frame t+1 and the clusters through the correspondence between the eigenvectors in the feature pool and the clusters;
Let λ_d be a constant. If superpixel sp(t+1, j) belongs to cluster clst(k), its weight with respect to the cluster centre eigenvector f_c(k) is dist(j,k) = exp(-λ_d · ||f_{t+1}^j - f_c(k)|| / r_c(k)), and the confidence of superpixel sp(t+1, j) is conf(t+1, j) = dist(j,k) × C_k, for all j = 1, …, N_{t+1}; record the confidence of each superpixel. Draw the confidence map of the extended region, the value of each point on the map being the confidence value of the corresponding superpixel;
In the extended region, draw M_{t+1} samples as candidate target regions of frame t+1. The confidence of each of the M_{t+1} samples can be obtained through the correspondence between the extended region and the confidence map; take the candidate target region with the maximal confidence sum as the target region estimate according to maximum a posteriori probability, and record the superpixels describing target object components.
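The confidence map and the search over candidate regions can be sketched as follows; this assumes the exponential weight dist(j,k) = exp(-λ_d·||f - f_c(k)|| / r_c(k)) reconstructed above, and uses random cluster centres, radii, label map and candidates purely as placeholders.

```python
# Sketch of S3: per-superpixel confidence, confidence map, MAP-style search.
import numpy as np

rng = np.random.default_rng(2)
centers = rng.random((5, 3))              # f_c(k) from the appearance model
radius = rng.uniform(0.2, 0.5, 5)         # r_c(k)
C = rng.uniform(-1, 1, 5)                 # cluster confidences C_k
lambda_d = 2.0

labels = rng.integers(0, 60, (120, 160))  # superpixel label map of the region
feats = rng.random((60, 3))               # eigenvectors f_{t+1}^j

# nearest cluster, exponential weight and confidence per superpixel
d = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
k = d.argmin(axis=1)
w = np.exp(-lambda_d * d[np.arange(60), k] / radius[k])
sp_conf = w * C[k]                        # conf(t+1, j) = dist(j, k) * C_k

conf_map = sp_conf[labels]                # per-pixel confidence map

# score sampled candidate boxes by summed confidence; keep the maximum
def box_score(y, x, h, w_):
    return conf_map[y:y + h, x:x + w_].sum()

cands = [(rng.integers(0, 60), rng.integers(0, 100)) for _ in range(50)]
best = max(cands, key=lambda p: box_score(p[0], p[1], 60, 60))
print(best, round(float(box_score(best[0], best[1], 60, 60)), 1))
```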
Further, steps S4 to S11 update the feature pool by keeping part of the features of the replaced frame in the pool, adding them to the current frame as the complement of its feature set, and then replacing the replaced frame with the new feature set of the current frame. The criterion for severe occlusion in step S4 is: let θ_o be the occlusion threshold; when the confidence of the candidate target is smaller than the product of θ_o and the area of the extended region, severe occlusion is judged to have occurred.
Steps S4 and S11 address the problem that taking a whole frame as the replacement unit when updating the feature pool easily loses superpixel features that describe target components. Instead, part of the features of the replaced frame is kept in the pool and added to the current frame as the complement of its feature set, and the replaced frame is then replaced by the new feature set of the current frame. Different complement strategies are used for the severely occluded and the not severely occluded cases. This update strategy has two advantages. First, it strengthens the confidence of superpixels in the same cluster as the retained superpixels. Second, it retains features of superpixels that describe the same component differently in the replaced frame and the new frame because the target's appearance has changed, or features of superpixels that are described as target components in the replaced frame but not in the new frame because of occlusion. This enriches the description of the components in the feature pool and makes the object appearance model more comprehensive.
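A sketch of the complement-set selection of steps S5 to S10 follows. It is hedged: "largest Euclidean distance to the superpixels describing the target in the current frame" is read here as the distance to the nearest current target feature, α = 15 and β = 25 are the embodiment's values, and the helper name complement and the synthetic frames are illustrative.

```python
# Sketch of the complement selection for the two occlusion cases (S5-S10).
import numpy as np

def complement(repl_feats, repl_conf, cur_feats, occluded, alpha=15, beta=25):
    """Pick features of the replaced frame to keep as the complement."""
    if not occluded:
        if len(repl_feats) > beta:
            # keep the beta features farthest from the current target features
            d = np.linalg.norm(repl_feats[:, None] - cur_feats[None], axis=2)
            idx = d.min(axis=1).argsort()[-beta:]
            return repl_feats[idx]
        return repl_feats                 # few target features: keep them all
    if len(repl_feats) <= alpha:
        return repl_feats
    # severe occlusion: keep the alpha most confident target features
    return repl_feats[np.argsort(repl_conf)[-alpha:]]

rng = np.random.default_rng(3)
repl = rng.random((40, 3))                # target superpixels, replaced frame
conf = rng.random(40)                     # their confidences
cur = rng.random((30, 3))                 # target superpixels, current frame
new_set = np.vstack([cur, complement(repl, conf, cur, occluded=False)])
print(new_set.shape)                      # merged new feature set (S11)
```

In the full method the merged new_set replaces the chosen frame's feature set in the pool (S11), and the complement's superpixels are recorded as target-component features of the current frame.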
Further, updating the appearance model in step S12 again comprises clustering and computing confidences, modelling with the m frames in the pool. Cluster the eigenvectors F = {f_{t'}^r | t' = 1, …, m; r = 1, …, N_{t'}} in the feature pool into n classes clst(k), k = 1, …, n, by the mean-shift clustering algorithm, each cluster centre being f_c(k) and the members of each class being {f_{t'}^r ∈ clst(k)}.
For the confidence computation, the feature pool now stores for each frame both the superpixel information from segmentation and the supplementary superpixel information computed as the complement of each frame: let f_{t'}^o be any feature in a feature complement; it belongs to the k-th cluster, and the area of its corresponding superpixel is Area(t', o). Then S+(k), the total area that the superpixels belonging to the k-th cluster cover inside the target regions, is S+(k) = Σ N+(t', r) + Σ Area(t', o), summed over all t', r with f_{t'}^r ∈ clst(k) and all complement features f_{t'}^o ∈ clst(k); S-(k), the total area that they cover outside the target regions, is S-(k) = Σ N-(t', r), summed over all t', r with f_{t'}^r ∈ clst(k). The cluster confidence value is C_k = (S+(k) - S-(k)) / (S+(k) + S-(k)), for all k = 1, …, n.
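A small synthetic sketch of this modified computation, in which complement features contribute their superpixel areas Area(t', o) to S+(k) only, as stated above; all arrays are placeholders:

```python
# Sketch of the S12 confidence update with complement features included.
import numpy as np

rng = np.random.default_rng(5)
n_clusters = 4
lab_seg = rng.integers(0, n_clusters, 100)  # cluster of each segmented feature
n_in = rng.integers(0, 200, 100)            # N+(t', r)
n_out = rng.integers(0, 200, 100)           # N-(t', r)
lab_sup = rng.integers(0, n_clusters, 20)   # cluster of each complement feature
area_sup = rng.integers(50, 400, 20)        # Area(t', o)

C = np.empty(n_clusters)
for k in range(n_clusters):
    s_plus = n_in[lab_seg == k].sum() + area_sup[lab_sup == k].sum()
    s_minus = n_out[lab_seg == k].sum()
    C[k] = (s_plus - s_minus) / (s_plus + s_minus)
print(C.round(2))
```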
Compared with the prior art, the present invention has the following beneficial effects:
1) Mid-level visual cues can represent image object information more effectively and flexibly. Meaningful parts of the target with clear boundary information are segmented into numerous superpixels, and the superpixels then describe the components of the tracked object, which is more intuitive to operate on.
2) Components are the minimal operation unit: each time, the components most similar to the current frame are replaced, and the most dissimilar components are retained as the complement of the new frame's component set. This makes the appearance information in the modelling pool richer, so the appearance model describes the tracked object more comprehensively.
Description of the drawings
Fig. 1 is a schematic diagram of the execution steps of the method of the present invention.
Fig. 2 compares the effect of the method of the present invention with that of a method updating frame by frame, at the 63rd frame of the tracked image sequence "Wwoman_sequence".
Fig. 3 compares the effect of the method of the present invention with that of a method updating frame by frame, at the 78th frame of the tracked image sequence "Wwoman_sequence".
Embodiment
The present invention is further described below in conjunction with the accompanying drawings, but the embodiments of the present invention are not limited thereto.
The schematic diagram of the execution steps of the method of the present invention is shown in Fig. 1; the method comprises the following steps:
S1. Feature pool creation stage: simply track the first m frames; in this embodiment m is 7. Record the target region of every frame. First, given the target region in the first frame image Frame_1, including its centre point and region size, take the target region of the first frame as a template and compute the target regions of Frame_2, …, Frame_7 in turn by simple iterative matching. Then, centered on the target region, expand it by a factor λ, λ being a constant; in this embodiment λ is 1.5, large enough that the extended region covers every sample. This gives the extended region, and the extended region of each of the 7 frames is segmented with the SLIC algorithm into N_i superpixels sp(i,j), i = 1, …, 7, j = 1, …, N_i. Next, extract the HSI colour feature of every superpixel of each frame, represented by the eigenvector f_{t'}^r. Let N+ be the number of pixels of a superpixel inside the target region and N- the number outside it; the region a superpixel belongs to is judged by the value N+/(N- + N+): when this value is greater than 0.5 the superpixel is recorded as belonging to the target region, otherwise it is recorded as belonging outside the region. Finally, organize the eigenvectors of the 7 frames in order into the feature pool F = {f_{t'}^r | t' = 1, …, 7; r = 1, …, N_{t'}} used to create the component appearance model, and record it.
S2. Initial target appearance model creation stage: first, cluster the eigenvectors F = {f_{t'}^r} in the feature pool into n classes clst(k), k = 1, …, n, by the mean-shift clustering algorithm, where f_c denotes the cluster centre eigenvector and r_c(k) the radius of cluster clst(k) in feature space. Then let S+(k) be the total area that the components belonging to the k-th cluster cover inside the target regions, and S-(k) the total area that they cover outside the target regions; the confidence of the cluster is expressed as C_k = (S+(k) - S-(k)) / (S+(k) + S-(k)), for all k = 1, …, n.
S3. Tracking the target in a newly input image based on the appearance model: first, using the centre point and size of the target region of frame t, segment the extended region of frame t+1 into superpixels and extract the HSI feature of each superpixel, represented by the eigenvector f_{t+1}^j. Then compare each eigenvector with the eigenvectors in the feature pool for similarity, and determine the correspondence between the superpixels of frame t+1 and the clusters through the correspondence between the eigenvectors in the feature pool and the clusters. Let λ_d be a constant; in this embodiment λ_d is 2. If superpixel sp(t+1, j) belongs to cluster clst(k), its weight with respect to the cluster centre eigenvector f_c(k) is dist(j,k) = exp(-λ_d · ||f_{t+1}^j - f_c(k)|| / r_c(k)), and the confidence of superpixel sp(t+1, j) is conf(t+1, j) = dist(j,k) × C_k, for all j = 1, …, N_{t+1}; record the confidence value of each superpixel. Next, draw the confidence map of the extended region, the value of each point on the map being the confidence value of the corresponding superpixel. Finally, in the extended region draw M_{t+1} samples as candidate target regions of frame t+1; the confidence of each of the M_{t+1} samples is obtained through the correspondence between the extended region and the confidence map, the candidate target region with the maximal confidence sum is taken as the target region according to maximum a posteriori probability, and whether each superpixel belongs to the object is recorded.
S4. Stage in which the current frame's features replace a frame's features in the feature pool: first define the criterion for severe occlusion. Let θ_o be the occlusion threshold; in this embodiment, for the Wwoman_sequence image sequence, θ_o is -0.1. When the confidence of the candidate target is smaller than the product of θ_o and the area of the extended region, severe occlusion is judged to have occurred. When severe occlusion occurs, go to step S6; otherwise, go to step S5.
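The severe-occlusion test itself reduces to one comparison. A sketch under the embodiment's value θ_o = -0.1, with a random confidence map standing in for the real one produced in stage S3:

```python
# Sketch of the S4 occlusion test: candidate confidence vs. theta_o * area.
import numpy as np

theta_o = -0.1                                    # embodiment's threshold
conf_map = np.random.default_rng(4).uniform(-1, 1, (120, 160))
candidate_conf = conf_map[30:90, 50:110].sum()    # best candidate's confidence
severely_occluded = candidate_conf < theta_o * conf_map.size
print(bool(severely_occluded))
```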
S5. Non-severe-occlusion case, part of the current frame's features replaces a frame's features in the pool: take the frame in the feature pool farthest from the current time as the frame to be replaced. Let β be a predefined constant; in this embodiment β is 25. When the number of superpixels describing target object components in the frame to be replaced is greater than β, select from these superpixels the β whose eigenvectors have the largest Euclidean distance to the eigenvectors of the superpixels describing the target in the current frame, and take their feature set as the complement of the current frame's feature set; otherwise, take the feature set of the superpixels describing target object components in the frame to be replaced as the complement of the current frame's feature set. Go to step S7.
S6. Severe-occlusion case, part of the current frame's features replaces a frame's features in the pool: select the third-nearest frame to the current time in the feature pool as the frame to be replaced. Let α be a predefined constant with α < β; in this embodiment α is 15. When the number of superpixels describing target object components in the frame to be replaced is less than α, take the feature set of these superpixels as the complement of the current frame's feature set; otherwise, take the feature set of the α superpixels with the highest confidence values among the superpixels describing target object components in the frame to be replaced as the complement of the current frame's feature set.
S7. Feature pool update stage: first, merge the feature set of the current frame with its complement into the new feature set of the current frame, and record the superpixel features in the complement as superpixel features of the current frame describing target object components; then add the new feature set to the feature pool and delete the replaced frame's feature set from the feature pool.
S8. Judge whether the condition for updating the appearance model is met; when it is not met, go to step S3, otherwise continue with step S10;
S10. Appearance model update stage: first, cluster the eigenvectors F = {f_{t'}^r | t' = 1, …, m; r = 1, …, N_{t'}} in the feature pool into n classes clst(k), k = 1, …, n, by the mean-shift clustering algorithm, each cluster centre being f_c(k) and the members of each class being {f_{t'}^r ∈ clst(k)}. Then compute the cluster confidence values. Because the superpixel information of each frame stored in the feature pool now includes the complements, the computation of the cluster confidence values changes accordingly: let f_{t'}^o be any feature in a feature complement; it belongs to the k-th cluster, and the area of its corresponding superpixel is Area(t', o). Then S+(k), the total area that the superpixels belonging to the k-th cluster cover inside the target regions, is S+(k) = Σ N+(t', r) + Σ Area(t', o), summed over all t', r with f_{t'}^r ∈ clst(k) and all complement features f_{t'}^o ∈ clst(k); S-(k), the total area that they cover outside the target regions, is S-(k) = Σ N-(t', r), summed over all t', r with f_{t'}^r ∈ clst(k). The cluster confidence value is C_k = (S+(k) - S-(k)) / (S+(k) + S-(k)), for all k = 1, …, n.
S11. If the tracking of the whole video image sequence is completed, terminate the program; otherwise, go to step S3.
Fig. 2 compares the method of the present invention, which updates component by component, with a method that updates frame by frame, at the 63rd frame of the tracked image sequence "Wwoman_sequence". Fig. 2(a), Fig. 2(b) and Fig. 2(c) show the execution of the frame-wise update method; Fig. 2(d), Fig. 2(e) and Fig. 2(f) show the execution of the present method. Fig. 2(a) and Fig. 2(d) show the superpixel segmentation of the extended region of the target region. Fig. 2(b) and Fig. 2(e) show the confidence maps drawn by evaluating the superpixels in the extended region with the appearance model, displayed as grey-scale images: the dark (black) parts have confidence greater than 0 and are identified as the tracked target, while the light (grey) parts have confidence values less than 0 and are identified as background. Fig. 2(c) and Fig. 2(f) show the tracking results, where the box is the target box. In Fig. 2 the tracked target is entering a partly occluded environment; both methods model the visible part of the target well, both appearance models identify the visible part, and in the confidence maps the confidence values of the target's superpixels are greater than 0.

Fig. 3 makes the same comparison at the 78th frame of "Wwoman_sequence". Fig. 3(a), Fig. 3(b) and Fig. 3(c) show the execution of the frame-wise update method; Fig. 3(d), Fig. 3(e) and Fig. 3(f) show the execution of the present method. Fig. 3(a) and Fig. 3(d) show the superpixel segmentation of the extended region; Fig. 3(b) and Fig. 3(e) show the confidence maps, displayed as above; Fig. 3(c) and Fig. 3(f) show the tracking results, where the box is the target box. In Fig. 3 the occluded part of the target is walking out of the occlusion. After tracking over many frames, the method that updates frame by frame has lost the features of the occluded part of the target and can no longer identify it. By contrast, the method of the present invention updates the modelling information pool locally with components as the unit and retains the information of the tracked target more completely, so the appearance model represents the target more comprehensively. Although the lower part of the target has been occluded for more than m frames, it is identified again when it leaves the occlusion, as shown by the person's legs in Fig. 3(b), the dashed box in the figure.
The above-described embodiments of the present invention do not limit the protection scope of the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (6)

1. the method for the object tracking based on the component-level apparent model, is characterized in that, comprises the following steps:
S1. create the feature pool for modeling: the target area of following the tracks of front m two field picture and recording every frame, centered by target area, expand to surrounding the zone that is expanded, each extended area of super pixel segmentation, the information of super pixel record object object part, extract the feature of each parts, and collect the feature construction feature pool of all frames;
S2. based on the feature set in feature pool, create the apparent model of object;
S3. establish the tracking that has completed front t two field picture, t >=m, calculate feature set and the degree of confidence of super pixel in the target area of t+1 two field picture and extended area thereof, the super pixel of record description target object parts according to apparent model;
S4. calculate the supplementary set of t+1 frame image features collection, when seriously not blocking, carry out S5, otherwise carry out S8;
S5. using feature pool middle distance current time frame at most as being replaced frame;
When the super pixel quantity of S6. describing the target object parts in being replaced frame is greater than β, β is predefined constant, from these super pixels, selecting β the supplementary set as the present frame feature set, wherein the Euclidean distance maximum of the eigenvector of the eigenvector of β super pixel and the super pixel that present frame is described target, proceed to S11; Otherwise proceed to S7;
S7. will be replaced in frame the supplementary set of the feature set of the super pixel of describing the target object parts as the present frame feature set, proceed to S11;
S8. select nearest the 3rd frame of feature pool middle distance current time as being replaced frame;
When the super pixel quantity of S9. describing the target object parts in being replaced frame is less than or equal to α, α is predefined constant, α<β, using the supplementary set of the feature set of these super pixels as the present frame feature set, proceed to S11, otherwise, when the super pixel quantity of describing the target object parts in being replaced frame is greater than α, proceed to S10;
S10., the feature set of α super pixel of degree of confidence maximum of super pixel of target object parts is described as the supplementary set of present frame feature set in being replaced frame;
S11. merge the feature set of present frame and supplementary set thereof as the new feature collection of present frame, the new feature collection is added to feature pool and deletes in feature pool the feature set that is replaced frame, complete a feature pool and upgrade;
If S12. meet, upgrade apparent Model Condition, according to the feature pool after upgrading, build the apparent model of object, realize the renewal of apparent model;
S13. proceed to step S3, until complete the tracking of whole sequence of video images.
2. the method for the object tracking based on the component-level apparent model according to claim 1, is characterized in that, in described step S1, front m frame is based on the tracking of instructing without apparent model, and the mode that specifically creates feature pool is:
Given the first two field picture Frame 1Middle target area, comprise central point and area size, take the target area of the first two field picture to be template, with the alternative manner of simple match respectively from Frame 2..., Frame mMiddle calculating target area;
In target area peripheral extent sampling, as candidate target region, to the surrounding expansion λ zone that doubly is expanded, and respectively the extended area of m frame is surpassed to pixel segmentation and become N centered by target area iIndividual super pixel sp (i, j), i=1 ..., m, j=1 ..., N i
Extract respectively the HSI color character of the super pixel of each frame, use eigenvector
Figure FDA00003568643800024
Mean, and record each super pixel and whether belong to target component; Finally, the m frame feature vector is organized in order be used to creating the feature pool of parts apparent model F = { f t &prime; r | t &prime; = 1 , &CenterDot; &CenterDot; &CenterDot; , m ; r = 1 , &CenterDot; &CenterDot; &CenterDot; , N t &prime; } , and record.
3. the method for the object tracking based on the component-level apparent model according to claim 2, it is characterized in that, the apparent model that creates object in described step S2 comprises by means Method the feature vector cluster in feature pool and each cluster degree of confidence two parts of calculating, adopt a base part of each cluster representative feature similarity, and mean that with confidence value parts are the probability of target component; Be implemented as follows:
According to means clustering algorithm to the eigenvector in feature pool
Figure FDA00003568643800022
Be clustered into n class clst (k), k=1 ..., n, f cMean the cluster centre eigenvector, r c(k) be the radius of cluster clst (k) in feature space;
If S +(k) for the parts that belong to k cluster in feature pool, cover the area summation in target area, S -(k) for the parts that belong to k cluster in feature pool cover the area summation outside target area, the degree of confidence of cluster is expressed as: C k = S + ( k ) - S - ( k ) S + ( k ) + S - ( k ) , &ForAll; k = 1 , &CenterDot; &CenterDot; &CenterDot; , n .
4. the method for the object tracking based on the component-level apparent model according to claim 3, it is characterized in that, described step S3 establishes the tracking that completes front t two field picture, t >=m, according to apparent model, calculate feature set and the degree of confidence of super pixel in the target area of t+1 two field picture and extended area thereof, concrete grammar is as follows:
Central point and size to the target area with the t frame surpass pixel segmentation at the extended area of t+1 two field picture, and extract the HSI feature of each super pixel, use eigenvector Mean;
Eigenvector respectively with feature pool in eigenvector carry out similarity relatively, by the corresponding relation of eigenvector in feature pool and cluster, determine the super pixel of t+1 frame and the corresponding relation between cluster;
If λ dConstant, super pixel sp (t+1, j) belong to cluster clst (k),
Figure FDA00003568643800032
With cluster centre eigenvector f c(k) cluster weight is:
Figure FDA00003568643800033
, the degree of confidence of super pixel sp (t+1, j) is conf (t+1, j)=dist (j, k) * C k,
Figure FDA00003568643800034
And the degree of confidence of the super pixel of record; Draw the degree of confidence figure of extended area, the value of the each point on figure is corresponding super pixel confidence value;
In extended area, adopt M T+1Individual sample
Figure FDA00003568643800035
As t+1 frame candidate target region, can obtain M by the corresponding relation of extended area and degree of confidence figure T+1The degree of confidence of individual sample, and estimate to using degree of confidence in each candidate target region and maximum as target area according to maximum a posteriori probability, and the super pixel of record description target object parts.
5. the method for the object tracking based on the component-level apparent model according to claim 4, it is characterized in that, described step S4 adds present frame to the Partial Feature that is replaced frame during step S11 adopts the keeping characteristics pond as the supplementary set of the feature set of present frame, then with new present frame feature set replacement, is replaced the regeneration characteristics pond method of frame; The criterion of seriously blocking in described step S4 is: establish θ oFor occlusion threshold, when the candidate target degree of confidence is less than θ oDuring with the product of extended area, be judged as and occurred seriously to block.
6. the method for the object tracking based on the component-level apparent model according to claim 5, it is characterized in that, apparent model after upgrading in described step S12 comprises cluster and calculates degree of confidence, adopts the modeling of m frame: according to the mean shift clustering algorithm to the eigenvector in feature pool
Figure FDA00003568643800036
Carry out cluster, be clustered into n class clst (k) (k=1 ..., n), each cluster centre is f c(k), the membership table of each class is shown
Figure FDA00003568643800037
Calculate cluster confidence value part, calculate and in feature pool, preserve each two field picture super Pixel Information of cutting apart and the super Pixel Information of supplementing of calculating as each frame: establish
Figure FDA00003568643800041
Be any one feature in the feature supplementary set, it belongs to k cluster, and the area of its corresponding super pixel is Area (t', o), S +(k) for the super pixel that belongs to k cluster in feature pool, cover the area summation in target area: S +(k)=Σ N +(t ', r)+Σ Area (t ', o),
Figure FDA00003568643800043
For the super pixel that belongs to k cluster in feature pool covers the area summation outside target area: S - ( k ) = &Sigma; N - ( t &prime; , r ) , &ForAll; t &prime; , r &Element; { t &prime; , r | f t &prime; r &Element; clst ( k ) } , the cluster confidence value is: C k = S + ( k ) - S - ( k ) S + ( k ) + S - ( k ) , &ForAll; k = 1 , &CenterDot; &CenterDot; &CenterDot; , n .
CN201310317408.7A 2013-07-25 2013-07-25 Object tracking method based on component-level appearance model Active CN103413323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310317408.7A CN103413323B (en) 2013-07-25 2013-07-25 Object tracking method based on component-level appearance model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310317408.7A CN103413323B (en) 2013-07-25 2013-07-25 Object tracking method based on component-level appearance model

Publications (2)

Publication Number Publication Date
CN103413323A true CN103413323A (en) 2013-11-27
CN103413323B CN103413323B (en) 2016-01-20

Family

ID=49606328

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310317408.7A Active CN103413323B (en) 2013-07-25 2013-07-25 Based on the object tracking methods of component-level apparent model

Country Status (1)

Country Link
CN (1) CN103413323B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810723A (en) * 2014-02-27 2014-05-21 西安电子科技大学 Target tracking method based on inter-frame constraint super-pixel encoding
CN104298968A (en) * 2014-09-25 2015-01-21 电子科技大学 Target tracking method under complex scene based on superpixel
CN104915677A (en) * 2015-05-25 2015-09-16 宁波大学 Three-dimensional video object tracking method
CN105678338A (en) * 2016-01-13 2016-06-15 华南农业大学 Target tracking method based on local feature learning
CN106846365A (en) * 2016-12-30 2017-06-13 中国科学院上海高等研究院 Method for tracking target based on HIS space
US10121251B2 (en) 2015-07-08 2018-11-06 Thomson Licensing Method for controlling tracking using a color model, corresponding apparatus and non-transitory program storage device
CN114140501A (en) * 2022-01-30 2022-03-04 南昌工程学院 Target tracking method and device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070098239A1 (en) * 2005-08-31 2007-05-03 Siemens Corporate Research Inc Method for characterizing shape, appearance and motion of an object that is being tracked
US20090092282A1 (en) * 2007-10-03 2009-04-09 Shmuel Avidan System and Method for Tracking Objects with a Synthetic Aperture
CN102831439A (en) * 2012-08-15 2012-12-19 深圳先进技术研究院 Gesture tracking method and gesture tracking system
CN102982559A (en) * 2012-11-28 2013-03-20 大唐移动通信设备有限公司 Vehicle tracking method and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070098239A1 (en) * 2005-08-31 2007-05-03 Siemens Corporate Research Inc Method for characterizing shape, appearance and motion of an object that is being tracked
US20090092282A1 (en) * 2007-10-03 2009-04-09 Shmuel Avidan System and Method for Tracking Objects with a Synthetic Aperture
CN102831439A (en) * 2012-08-15 2012-12-19 深圳先进技术研究院 Gesture tracking method and gesture tracking system
CN102982559A (en) * 2012-11-28 2013-03-20 大唐移动通信设备有限公司 Vehicle tracking method and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BO YANG等: "《12th European Conference on Computer Vision》", 13 October 2012 *
IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV): "《IEEE International Conference on Computer Vision (ICCV)》", 31 December 2011 *
XUE ZHOU 等: "《2012 IEEE International conference on image processing(ICIP2012)》", 30 September 2012 *
王澎: "Complementary target tracking model based on mid-level visual features and high-level structural information", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810723A (en) * 2014-02-27 2014-05-21 西安电子科技大学 Target tracking method based on inter-frame constraint super-pixel encoding
CN103810723B (en) * 2014-02-27 2016-08-17 西安电子科技大学 Method for tracking target based on interframe constraint super-pixel coding
CN104298968A (en) * 2014-09-25 2015-01-21 电子科技大学 Target tracking method under complex scene based on superpixel
CN104298968B (en) * 2014-09-25 2017-10-31 电子科技大学 A kind of method for tracking target under complex scene based on super-pixel
CN104915677A (en) * 2015-05-25 2015-09-16 宁波大学 Three-dimensional video object tracking method
CN104915677B (en) * 2015-05-25 2018-01-05 宁波大学 A kind of 3 D video method for tracking target
US10121251B2 (en) 2015-07-08 2018-11-06 Thomson Licensing Method for controlling tracking using a color model, corresponding apparatus and non-transitory program storage device
CN105678338A (en) * 2016-01-13 2016-06-15 华南农业大学 Target tracking method based on local feature learning
CN105678338B (en) * 2016-01-13 2020-04-14 华南农业大学 Target tracking method based on local feature learning
CN106846365A (en) * 2016-12-30 2017-06-13 中国科学院上海高等研究院 Method for tracking target based on HIS space
CN106846365B (en) * 2016-12-30 2020-02-07 中国科学院上海高等研究院 HIS space-based target tracking method
CN114140501A (en) * 2022-01-30 2022-03-04 南昌工程学院 Target tracking method and device and readable storage medium

Also Published As

Publication number Publication date
CN103413323B (en) 2016-01-20

Similar Documents

Publication Publication Date Title
CN103413323B (en) Object tracking method based on component-level appearance model
Wegner et al. Road networks as collections of minimum cost paths
CN113963445B (en) Pedestrian falling action recognition method and equipment based on gesture estimation
US20180247126A1 (en) Method and system for detecting and segmenting primary video objects with neighborhood reversibility
CN103413120A (en) Tracking method based on integral and partial recognition of object
CN102034247B (en) Motion capture method for binocular vision image based on background modeling
CN103729858B (en) A kind of video monitoring system is left over the detection method of article
CN112950477B (en) Dual-path processing-based high-resolution salient target detection method
CN103164694A (en) Method for recognizing human motion
Montoya-Zegarra et al. Semantic segmentation of aerial images in urban areas with class-specific higher-order cliques
CN103871076A (en) Moving object extraction method based on optical flow method and superpixel division
CN103093198B (en) A kind of crowd density monitoring method and device
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN104966286A (en) 3D video saliency detection method
CN102622769A (en) Multi-target tracking method by taking depth as leading clue under dynamic scene
CN102324019A (en) Method and system for automatically extracting gesture candidate region in video sequence
CN102289948A (en) Multi-characteristic fusion multi-vehicle video tracking method under highway scene
CN102147861A (en) Moving target detection method for carrying out Bayes judgment based on color-texture dual characteristic vectors
CN112906631B (en) Dangerous driving behavior detection method and detection system based on video
CN103208115A (en) Detection method for salient regions of images based on geodesic line distance
CN103020606A (en) Pedestrian detection method based on spatio-temporal context information
CN104268520A (en) Human motion recognition method based on depth movement trail
CN103136537A (en) Vehicle type identification method based on support vector machine
CN106097385A (en) A kind of method and apparatus of target following
CN102592115A (en) Hand positioning method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant