CN103366382A - Active contour tracking method based on superpixels - Google Patents

Active contour tracking method based on superpixels

Info

Publication number
CN103366382A
Authority
CN
China
Prior art keywords
test image
superpixel
contour
pixel
training sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102774746A
Other languages
Chinese (zh)
Inventor
周雪 (Zhou Xue)
邹见效 (Zou Jianxiao)
徐红兵 (Xu Hongbing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN2013102774746A priority Critical patent/CN103366382A/en
Publication of CN103366382A publication Critical patent/CN103366382A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an active contour tracking method based on superpixels. The method comprises the following steps: performing superpixel segmentation on a training image to obtain training sample pools for the target and the background; learning a distance-metric projection matrix from the training samples with a metric learning method and building a discriminative appearance model; performing superpixel segmentation on each test frame of the image sequence; computing the confidence map of the test frame from the discriminative appearance model; constructing the velocity field of the test frame from the confidence map; and substituting the velocity field into the evolution equation of the level set method to obtain the contour tracking result of the test frame. Compared with the prior art, the method improves the efficiency of contour evolution on each test frame and thereby improves both the tracking accuracy and the tracking efficiency over the image sequence.

Description

Active contour tracking method based on superpixels
Technical field
The invention belongs to the technical field of computer vision and, more specifically, relates to an active contour tracking method based on superpixels.
Background art
Object tracking in visual surveillance scenes processes the video sequence captured by a camera in order to detect, locate and track the moving targets in it. Contour features describe the shape of a target well, and this shape information is useful for subsequent high-level behavior understanding and recognition; moreover, compared with methods designed for static cameras, active contour tracking methods are better suited to detecting, locating and tracking moving targets under a moving camera. Active contour tracking has therefore gradually become a frontier and focus of current academic research. In the top international journals TPAMI and IJCV and the conferences ICCV and CVPR in computer vision and pattern recognition, active contour tracking occupies a considerable share of the published work. As a frontier research direction at the intersection of computer vision, image processing, pattern recognition, machine learning, statistical analysis and stochastic processes, its results have wide application potential in fields such as intelligent visual surveillance, motion analysis, human-computer interaction, intelligent navigation and video retrieval.
The core idea of active contour tracking is to construct an energy functional of the contour according to the practical problem, minimize this energy functional with the calculus of variations, and finally obtain the evolution equation of the contour: the initial contour evolves along the negative gradient direction of the energy until it converges to the edge of the target. According to the way the contour is represented and the image information considered, active contour tracking methods can roughly be divided into two classes: edge-based and region-based.
Edge-based active contour tracking methods are represented by the Snakes model. The Snakes model adopts a parametric contour representation: the contour C is described directly and explicitly as a function of a parameter s and time t. The model considers the image gradient at the contour edge; the closer the contour lies to the object edge, the smaller the constructed energy functional becomes. The model is simple and easy to use, but it has a series of shortcomings: it is sensitive to contour initialization, it requires re-parameterization of the contour in cases such as self-intersection and overlap, it cannot handle topological changes, and its numerical solution is unstable.
A region-based active contour tracking method that adopts an implicit contour representation, the level set method, has gradually received wide attention. The level set method expresses an n-dimensional contour as the zero level of an (n+1)-dimensional level set function; the signed distance function is commonly used as the level set function. The advantage of region-based active contour tracking methods is that they consider region information of the image rather than being confined to the vicinity of the contour. Commonly used measures are statistical features such as the mean, variance, texture or histogram of the considered region. Yilmaz et al. model the color and texture features of a region with kernel density estimation and Gabor wavelets respectively, and the level set evolution speed of each pixel depends on the degree to which all pixels in its neighborhood belong to the target or the background; for the specific algorithm see: A. Yilmaz, X. Li and M. Shah, "Contour-based Object Tracking with Occlusion Handling in Video Acquired Using Mobile Cameras," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2004, 26(11): 1531-1536. Sun et al. proposed a supervised level set tracking method that builds a discriminative model of individual pixels based on online boosting to construct the level set energy functional; for the specific algorithm see: X. Sun, H. X. Yao and S. P. Zhang, "A Novel Supervised Level Set Method for Non-Rigid Object Tracking," IEEE Conference on Computer Vision and Pattern Recognition, 2011, 3393-3400. When constructing the energy functional, the above region-based active contour tracking methods, whether they adopt a generative or a discriminative model, all use low-level pixel features and take the pixel as the elementary unit of contour evolution, which easily makes the contour evolution susceptible to noise and inefficient.
In recent years, being rich in semantic information and flexible to process, the superpixel has been widely used as a very effective image description tool in image segmentation and object recognition. It divides the image into sets of pixels, where the pixels within each set share certain similar characteristics, such as similar color, brightness or texture. Superpixels have the advantages of high computational efficiency, rich semantics and boundary preservation. Taking superpixels as the elementary units of image processing for subsequent modeling and mining is therefore more effective than considering only low-level visual features of single pixels. In S. Wang, H. C. Lu, F. Yang and M. H. Yang, "Superpixel Tracking," IEEE International Conference on Computer Vision, 2011, 1323-1330, Wang et al. proposed a bounding-box tracking method based on superpixels that uses the mean shift algorithm to build a discriminative appearance model deciding whether each superpixel belongs to the target or the background. In a discriminative appearance model, the adopted distance metric plays an important role in its performance. The commonly used Euclidean distance ignores the statistical regularity of the data and applies the same metric to all situations indiscriminately, so it is relatively difficult to obtain satisfactory results with it. In particular, in most practical tracking scenes the target and the background both exhibit multiple color or texture appearances; even samples from the same class still show clear differences when measured by the Euclidean distance, so Euclidean measurement is unreliable. In fact, a distance metric that conforms to the distribution of the data can be learned from pre-labeled data.
It should be emphasized that although superpixel-based image description has been successfully applied to object tracking, it has so far been limited to traditional bounding-box tracking. How to introduce it into the active contour tracking framework and build a more effective appearance model still involves many difficulties and remains a challenge.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art and provide an active contour tracking method based on superpixels, which adopts metric learning to build a discriminative appearance model with the superpixel as the elementary unit, thereby improving the accuracy and robustness of contour tracking.
To achieve the above object, the active contour tracking method based on superpixels of the present invention comprises the following steps:
S1: divide the training image into a target part and a background part, perform superpixel segmentation, extract the feature vector of each superpixel, and establish the target training sample pool T_obj and the background training sample pool T_bac;
S2: learn the projection matrix L of the distance metric from the training samples with a metric learning method, the projection matrix L being updated once every m (m ≥ 1) test frames;
S3: according to the target training sample pool T_obj, the background training sample pool T_bac and the projection matrix L of the distance metric, build a discriminative appearance model based on superpixels, in which the confidence score S_c^{sp} of each superpixel is computed as:

S_c^{sp} = \frac{1 - P(sp|bac)/P(sp|obj)}{1 + P(sp|bac)/P(sp|obj)}

where P(sp|obj) and P(sp|bac) denote the likelihoods that superpixel sp belongs to the target class obj and the background class bac respectively, obtained by non-parametric kernel density estimation;
S4: select a local region containing the target in the current test frame, perform superpixel segmentation on this local region (the number of superpixels is denoted N), and extract the feature vector f_k of each superpixel sp_k, 1 ≤ k ≤ N; compute the confidence score S_c^{sp_k} of each superpixel with the formula of step S3 to obtain the confidence map of the test frame;
S5: construct the velocity field F_data^{i,j} of the test frame from the confidence map obtained in step S4:

F_{data}^{i,j} = \begin{cases} S_c^{sp_k}, & \text{if } x_{i,j} \in \{sp_k\}_{k=1}^{N} \\ -1, & \text{if } x_{i,j} \notin \{sp_k\}_{k=1}^{N} \end{cases}

where (i, j) denotes the coordinates of a pixel in the test frame;
S6: substitute the velocity field F_data^{i,j} of the test frame obtained in step S5 into the evolution equation of the level set method, perform contour evolution with the contour tracking result of the previous test frame as the initial value, and obtain the contour tracking result of the target;
S7: according to the contour tracking result obtained in step S6, put the superpixels of the target and the background into the corresponding training sample pools to update them, and return to step S2 to perform contour tracking on the next test frame of the image sequence.
Wherein the local region in step S4 is selected as follows: for the first test frame, the initial contour of the target is specified manually and the local region is determined from the initial contour; for each subsequent test frame, the local region is determined from the contour tracking result of the previous test frame.
Wherein the evolution equation in step S6 is:

\frac{\Phi_t - \Phi_{t-1}}{\Delta t} + \left(F_{data}^{i,j} + F_{curv}\right) \cdot \left|\nabla \Phi_{t-1}\right| = 0

where Φ_t is the level set function at the t-th iteration, Φ_{t-1} is the level set function at the (t-1)-th iteration, the initial level set function Φ_0 is the level set function of the contour tracking result of the previous test frame, Δt is a preset iteration step, F_curv = εκ is the internal evolution speed that depends only on the contour curvature κ, ε is a preset constant, and |∇Φ_{t-1}| is the gradient norm of Φ_{t-1}.
Wherein the training sample pools in step S7 are updated as queues: newly added samples are appended at the tail of the queue, and when the number of samples exceeds a preset queue length the oldest samples at the head of the queue are deleted.
In the active contour tracking method based on superpixels of the present invention, superpixel segmentation is performed on the training image to obtain target and background training sample pools; a projection matrix of the distance metric is learned from the training samples with a metric learning method and a discriminative appearance model is built; superpixel segmentation is performed on each test frame of the image sequence, and the confidence map of the test frame is obtained from the discriminative appearance model, from which the velocity field of the test frame is constructed; the velocity field is then substituted into the evolution equation of the level set method to obtain the contour tracking result of the test frame.
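To make the overall flow easier to follow, the outline below restates the loop of steps S1-S7 as illustrative Python. The helper functions named here (build_pools, learn_projection, local_region, extract_superpixel_feature, confidence_score, build_velocity_field, evolve_level_set) refer to the assumption-laden sketches given under the corresponding steps of the detailed description below, while signed_distance and update_pools are hypothetical glue helpers that are not spelled out; none of this code is prescribed by the patent.

import numpy as np
from skimage.segmentation import slic

def track_sequence(frames, init_mask, m=5):
    # S1: build the target/background training sample pools from the first (training) frame.
    pool_obj, pool_bac = build_pools(frames[0], init_mask)
    L = np.eye(pool_obj.shape[1])                 # first projection matrix: identity
    phi = signed_distance(init_mask)              # hypothetical: binary mask -> level set function
    for t, frame in enumerate(frames[1:], start=1):
        if t % m == 0:                            # S2: refresh the metric every m frames
            L = learn_projection(pool_obj, pool_bac)
        region, offset = local_region(frame, phi > 0)                       # S4: local region
        labels = slic(region, n_segments=200, compactness=10)               # superpixel segmentation
        feats = [extract_superpixel_feature(region, labels == k)
                 for k in np.unique(labels)]                                # feature vectors f_k
        scores = [confidence_score(f, pool_obj, pool_bac, L) for f in feats]   # S3: confidence map
        F_data = build_velocity_field(frame.shape[:2], labels, scores, offset) # S5: velocity field
        phi = evolve_level_set(phi, F_data)                                  # S6: contour evolution
        # S7: split the new superpixels by the evolved contour and append them to the pools (hypothetical glue)
        pool_obj, pool_bac = update_pools(pool_obj, pool_bac, feats, labels, phi, offset)
        yield phi > 0                             # target region of this frame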
The active contour tracking method based on superpixels of the present invention has the following beneficial effects:
1. The superpixel is taken as the elementary unit of image description and mid-level features with a certain semantic description are extracted, which facilitates the subsequent modeling and contour evolution;
2. The projection matrix of the distance metric is obtained by metric learning, projecting the original feature space into another space that better reflects the intrinsic properties of the data, so that distances computed in this space are more faithful and reliable;
3. A contour evolution method with the superpixel as the elementary unit is proposed. Because all pixels within a superpixel are similar in appearance, this is directly reflected in the level set velocity field: all pixels in the same superpixel share the same direction and magnitude of evolution speed, which improves the efficiency of contour evolution compared with considering single pixels directly.
Description of drawings
Fig. 1 is a flow chart of an embodiment of the active contour tracking method based on superpixels of the present invention;
Fig. 2 is an example of obtaining the confidence map from the superpixel-based appearance model in the active contour tracking method based on superpixels of the present invention;
Fig. 3 is a comparison diagram of the velocity fields obtained from superpixels and from pixels in the present invention;
Fig. 4 is a comparison of the number of iterations of the present invention and the prior art;
Fig. 5 is a comparison of the tracking accuracy of the present invention and the prior art.
Embodiment
The specific embodiments of the present invention are described below with reference to the accompanying drawings so that those skilled in the art can better understand the present invention. It should be particularly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they would dilute the main content of the present invention.
Fig. 1 is a flow chart of an embodiment of the active contour tracking method based on superpixels of the present invention. As shown in Fig. 1, the method comprises the following steps:
S101: build the training sample pools:
Divide the training image into a target part and a background part, perform superpixel segmentation, extract the feature vector of each superpixel, and establish the training sample pools of the target and the background, where the target pool is denoted T_obj and the background pool is denoted T_bac. In practical applications, the first few test frames are generally selected as training images.
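As a minimal illustration of this step, the sketch below splits the superpixels of a training frame into target and background pools by majority overlap with a binary target mask. The use of scikit-image's SLIC and the helper extract_superpixel_feature (a possible definition is sketched under step S104 below) are assumptions for illustration, not requirements of the patent.

import numpy as np
from skimage.segmentation import slic

def build_pools(image, target_mask, n_segments=200):
    """Split the superpixels of a training image into target/background feature pools."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    pool_obj, pool_bac = [], []
    for lab in np.unique(labels):
        sp_mask = labels == lab
        feat = extract_superpixel_feature(image, sp_mask)   # hypothetical helper, see step S104
        # assign the superpixel to the class that covers the majority of its pixels
        if target_mask[sp_mask].mean() > 0.5:
            pool_obj.append(feat)
        else:
            pool_bac.append(feat)
    return np.array(pool_obj), np.array(pool_bac)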
S102: compute the projection matrix of the distance metric:
Learn the projection matrix L of the distance metric from the training samples with a metric learning method; the projection matrix L is updated once every m (m ≥ 1) test frames.
Because the traditional Euclidean distance lacks knowledge of the statistical regularity of the data, it cannot faithfully reflect the essential characteristics of the data; in particular, when multiple appearance modes (several colors or textures) exist within the same class, a discriminative appearance model based on the Euclidean distance has difficulty producing satisfactory results. The present invention therefore introduces metric learning to compute the distance metric and builds a multi-modal discriminative appearance model with it. Metric learning essentially solves for a projection matrix L that maps the original feature space into another feature space that better reflects the intrinsic properties of the data, so that the Euclidean distance becomes a Mahalanobis distance.
The present embodiment adopts LMNN (Large Margin Nearest Neighbor) metric learning. This distance metric learning method learns a projection from labeled data that maps the original feature space to a new feature space, with the final goal that, after projection, each sample keeps only a limited number of samples of its own class as nearest neighbors while keeping a distance of at least one unit length (the margin) from samples with different labels. In the present invention, the projection is learned from training sample pools carrying the two labels "target" and "background", so that each superpixel of the test frame can be discriminated better.
In the present invention, the training sample pools are updated from the contour tracking result of every test frame. Considering computational efficiency, the distance metric is not recomputed at every pool update; instead, the strategy of updating the distance metric once every few test frames is adopted. At each update, the previously obtained projection matrix is fed in as the initial value of the update; the initial value used for the first computation of the projection matrix is the identity matrix.
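One possible realization of this step uses the third-party metric-learn package. The LMNN class, its fit(X, y) method and the components_ attribute below reflect that package's interface as understood here, and the warm start described in the text is not shown, so treat the snippet purely as an assumption-laden illustration.

import numpy as np
from metric_learn import LMNN   # third-party package, assumed available

def learn_projection(pool_obj, pool_bac):
    """Learn the distance-metric projection matrix L from the two labeled sample pools."""
    X = np.vstack([pool_obj, pool_bac])
    y = np.concatenate([np.ones(len(pool_obj)), np.zeros(len(pool_bac))])  # target = 1, background = 0
    lmnn = LMNN()                # default hyper-parameters; tune the neighbor count as needed
    lmnn.fit(X, y)
    return lmnn.components_      # projection matrix L; the Mahalanobis matrix is M = L.T @ L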
S103: build the discriminative appearance model based on superpixels:
According to the target training sample pool T_obj, the background training sample pool T_bac and the projection matrix L, build the discriminative appearance model of the test frame based on superpixels. Because the projection matrix L is obtained by metric learning, the resulting discriminative appearance model better reflects the characteristics of the image.
In the discriminative appearance model, for each superpixel sp a confidence score is needed that reflects its degree of similarity to the target or the background. In the present invention the confidence score S_c^{sp} is defined as:

S_c^{sp} = \frac{1 - P(sp|bac)/P(sp|obj)}{1 + P(sp|bac)/P(sp|obj)}    (1)

where P(sp|obj) and P(sp|bac) denote the likelihoods that superpixel sp belongs to the target class obj and the background class bac respectively, obtained by non-parametric kernel density estimation.
The confidence score S_c^{sp} lies between -1 and 1 and has the following symmetric discrimination property:

\begin{cases} -1 < S_c^{sp} < 0, & P(sp|bac) > P(sp|obj) \\ S_c^{sp} = 0, & P(sp|bac) = P(sp|obj) \\ 0 < S_c^{sp} < 1, & P(sp|bac) < P(sp|obj) \end{cases}    (2)
In the present invention, the likelihoods P(sp|obj) and P(sp|bac) are obtained by non-parametric kernel density estimation, i.e. they are approximated from the distances between superpixel sp and the other samples in the training sample pools. In the present embodiment a Gaussian kernel is selected and the k-nearest-neighbor assumption is adopted, so the likelihood ratio P(sp|bac)/P(sp|obj) can be closely approximated by:

\frac{P(sp|bac)}{P(sp|obj)} = \frac{\frac{1}{|T_{bac}|}\sum_{n=1}^{|T_{bac}|}\exp\left(-\frac{D^2(f_{sp},f_n)}{2\sigma^2}\right)}{\frac{1}{|T_{obj}|}\sum_{n=1}^{|T_{obj}|}\exp\left(-\frac{D^2(f_{sp},f_n)}{2\sigma^2}\right)} \approx \frac{\frac{1}{|T_{bac}^{NN}|}\sum_{n=1}^{|T_{bac}^{NN}|}\exp\left(-\frac{D^2(f_{sp},f_n^{NN})}{2\sigma^2}\right)}{\frac{1}{|T_{obj}^{NN}|}\sum_{n=1}^{|T_{obj}^{NN}|}\exp\left(-\frac{D^2(f_{sp},f_n^{NN})}{2\sigma^2}\right)}    (3)

where T_obj^{NN} and T_bac^{NN} denote the subsets of the target training sample pool T_obj and the background training sample pool T_bac containing the K nearest-neighbor samples of superpixel sp_k, the symbol |·| denotes the number of samples in a pool, D(f_sp, f_n) is the distance metric between the superpixel feature vector f_sp and the sample feature vector f_n, and σ is a preset parameter.
The distance metric D(f_sp, f_n) in the present invention is the Mahalanobis distance, computed as:

D(f_{sp}, f_n) = \|L(f_{sp} - f_n)\|_2 = \sqrt{(f_{sp} - f_n)^T M (f_{sp} - f_n)}    (4)

where M = L^T L.
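Putting equations (1), (3) and (4) together, a minimal NumPy sketch of the confidence-score computation might look as follows; the neighbor count K and the bandwidth sigma are plain parameters with arbitrary default values, and nothing beyond the formulas above is taken from the patent.

import numpy as np

def mahalanobis_sq(f, pool, L):
    """Squared distances D^2(f, f_n) = ||L(f - f_n)||^2 to every sample f_n in a pool."""
    diff = (pool - f) @ L.T
    return np.sum(diff ** 2, axis=1)

def confidence_score(f_sp, pool_obj, pool_bac, L, K=5, sigma=0.2):
    """Confidence score of one superpixel feature, eq. (1), with the k-NN kernel ratio of eq. (3)."""
    def knn_kernel_mean(pool):
        d2 = np.sort(mahalanobis_sq(f_sp, pool, L))[:K]     # keep the K nearest neighbors
        return np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    ratio = knn_kernel_mean(pool_bac) / (knn_kernel_mean(pool_obj) + 1e-12)
    return (1 - ratio) / (1 + ratio)                        # lies in (-1, 1), cf. eq. (2)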
S104: perform superpixel segmentation and feature extraction on the test frame and obtain the confidence map:
This step mainly pre-processes the test frame: first a local region containing the target is selected in the image, superpixel segmentation is performed on this local region (the number of superpixels is denoted N), and the feature vector f_k of each superpixel sp_k, 1 ≤ k ≤ N, is extracted. The concrete steps are as follows:
3.1. Select a local region around the target.
For the first test frame, the initial contour of the target is specified manually and the local region is determined from the initial contour, i.e. the local region encloses the initial contour; this region can be obtained quickly with the prior art, generally by dilating the initial contour by a certain proportion. For each subsequent test frame, the contour tracking result of the previous frame serves as the basis for selecting the local region.
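For example, one simple way to realize this dilation, sketched here as an assumption rather than as the patent's prescribed procedure, is binary dilation of the previous (or initial) contour mask followed by cropping its bounding box:

import numpy as np
from scipy.ndimage import binary_dilation

def local_region(image, contour_mask, margin=15):
    """Crop a local region around the contour mask, expanded outward by `margin` pixels."""
    dilated = binary_dilation(contour_mask, iterations=margin)
    ys, xs = np.nonzero(dilated)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1], (y0, x0)     # cropped region and its offset in the full frame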
3.2. Perform superpixel segmentation on the local region; the number of superpixels is denoted N.
The present embodiment adopts the SLIC (Simple Linear Iterative Clustering) superpixel segmentation method, which obtains regular superpixel segmentations of a specified number with low computational complexity. For the specific algorithm see: R. Achanta, A. Shaji, K. Smith and A. Lucchi, "SLIC Superpixels Compared to State-of-the-Art Superpixel Methods," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282.
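As an illustration, scikit-image ships an implementation of SLIC; the call below (with an arbitrary superpixel count and compactness) is one assumed way to obtain the label map of the local region and is not mandated by the patent.

from skimage.segmentation import slic

def segment_local_region(region, n_superpixels=200):
    """SLIC segmentation of the cropped local region into roughly n_superpixels superpixels."""
    return slic(region, n_segments=n_superpixels, compactness=10)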
3.3. Extract features from each superpixel obtained by the segmentation to obtain the feature vector f_k of each superpixel sp_k, 1 ≤ k ≤ N.
The features of the test frame are kept consistent with the features of the samples in the training sample pools. In the present embodiment, the color and texture features of all pixels in each superpixel are computed. The color feature is a normalized color histogram in the HSI (Hue-Saturation-Intensity) space. The texture feature uses LBP (Local Binary Pattern): within a 3 × 3 neighborhood, the gray value of each neighboring pixel is compared with that of the center pixel; if the neighboring value is greater than the center value, the corresponding neighbor position is marked 1, otherwise 0. The 8 points of the 3 × 3 neighborhood thus produce an 8-bit unsigned number, which is converted to the corresponding decimal number (0-255), so each pixel obtains one LBP value. The LBP values of all pixels within the same superpixel are collected into a histogram and normalized. The normalized color histogram and the normalized LBP histogram are concatenated to obtain the final feature vector f_k of superpixel sp_k.
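A rough sketch of such a feature vector is given below, using scikit-image for the color conversion and the LBP computation; HSV is used as a stand-in for the HSI space named above, and the histogram bin counts are arbitrary choices, so the snippet is an assumption-based illustration rather than the patent's exact feature.

import numpy as np
from skimage.color import rgb2hsv
from skimage.feature import local_binary_pattern

def extract_superpixel_feature(region_rgb, sp_mask, color_bins=8):
    """Concatenate a normalized color histogram (HSV as a proxy for HSI) with a normalized LBP histogram."""
    hsv = rgb2hsv(region_rgb)
    color_hist = np.concatenate([
        np.histogram(hsv[..., c][sp_mask], bins=color_bins, range=(0, 1))[0] for c in range(3)
    ]).astype(float)
    color_hist /= color_hist.sum() + 1e-12

    gray = (hsv[..., 2] * 255).astype(np.uint8)            # intensity channel
    lbp = local_binary_pattern(gray, P=8, R=1)             # 3x3 neighborhood, LBP values 0..255
    lbp_hist, _ = np.histogram(lbp[sp_mask], bins=256, range=(0, 256))
    lbp_hist = lbp_hist.astype(float)
    lbp_hist /= lbp_hist.sum() + 1e-12

    return np.concatenate([color_hist, lbp_hist])          # final feature vector f_k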
3.4. Substitute the feature vector f_k of each obtained superpixel into the superpixel-based appearance model to obtain the confidence score S_c^{sp_k} of each superpixel of the test frame, thereby obtaining the confidence map of the test frame.
Fig. 2 is an example of obtaining the confidence map from the superpixel-based appearance model in the active contour tracking method based on superpixels of the present invention. As shown in Fig. 2, Fig. 2(a) is a schematic diagram of the principle of LMNN metric learning, (b) is the local region around the target, (c) is the superpixel segmentation result, and (d) is the resulting confidence map. It can be seen that, owing to metric learning, a superpixel sp clusters more tightly with its neighbors of the same label while keeping a margin of one unit length from samples with different labels, and that the discriminative appearance model based on metric learning can effectively improve the discriminating power for multi-modal targets. For details of LMNN metric learning see: K. Q. Weinberger and L. K. Saul, "Distance Metric Learning for Large Margin Nearest Neighbor Classification," Journal of Machine Learning Research, 2009, 10: 207-244.
S105: level set contour evolution, obtaining the contour tracking result. The concrete steps are as follows:
5.1. Construct the velocity field:
In the level set contour evolution process, the velocity field that guides the contour evolution plays a crucial role in the accuracy and efficiency of the evolution. The confidence map obtained from the discriminative appearance model can be regarded exactly as this velocity field, because the sign of the values in the confidence map (between -1 and 1) corresponds exactly to the direction of evolution along the contour normal (inward or outward) in the level set method, while the absolute value specifies the magnitude of the speed. For example, when P(sp_k|bac) > P(sp_k|obj), the sign of the confidence score S_c^{sp_k} of superpixel sp_k is negative; if the superpixel lies inside the contour at that moment, the speed term pushes the contour inward along the normal direction, whereas if the superpixel lies outside the contour it exerts an outward pulling force instead. However, because the level set evolution is implemented per pixel, the superpixel-based confidence map of the test frame must be further expanded into a pixel-level velocity field: all pixels belonging to the same superpixel take the same evolution speed as that superpixel, and all pixels outside the target region belong to the background and are assigned the speed -1. This forms the velocity field F_data^{i,j} of the whole test frame, i.e.:

F_{data}^{i,j} = \begin{cases} S_c^{sp_k}, & \text{if } x_{i,j} \in \{sp_k\}_{k=1}^{N} \\ -1, & \text{if } x_{i,j} \notin \{sp_k\}_{k=1}^{N} \end{cases}    (5)

where (i, j) denotes the coordinates of a pixel in the test frame.
Fig. 3 is a comparison diagram of the velocity fields obtained by the present invention from superpixels and from pixels. As shown in Fig. 3, each grid point represents a pixel; Fig. 3(a) is a schematic of the pixel velocity field obtained from superpixels, and Fig. 3(b) is the pixel velocity field obtained by a conventional method. The comparison shows that the pixels belonging to the same superpixel (same symbol) stay consistent in both the magnitude and the direction of their speed, whereas the velocity field obtained per pixel lacks regularity in magnitude and direction, which reduces the efficiency and accuracy of contour evolution. In the present invention, the superpixel is the elementary unit of contour evolution, i.e. all pixels in the same superpixel keep the same direction and magnitude of evolution speed, which improves the efficiency of contour evolution compared with considering single pixels directly.
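A compact sketch of expanding the superpixel confidence map into the per-pixel velocity field of equation (5) is given below; it reuses the label map and score list produced by the earlier sketches (assuming labels numbered 0..N-1, with scores[k] the confidence of superpixel k), so all names are illustrative.

import numpy as np

def build_velocity_field(frame_shape, labels, scores, offset):
    """Per-pixel velocity field F_data: superpixel confidence inside the local region, -1 elsewhere."""
    F_data = -np.ones(frame_shape, dtype=float)     # pixels outside the local region get speed -1
    y0, x0 = offset                                 # position of the local region in the full frame
    h, w = labels.shape
    F_data[y0:y0 + h, x0:x0 + w] = np.asarray(scores)[labels]  # broadcast each score to its pixels
    return F_data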
5.2. Substitute the velocity field into the evolution equation of the level set method; the evolution equation adopted in the present embodiment is:
\frac{\Phi_t - \Phi_{t-1}}{\Delta t} + \left(F_{data}^{i,j} + F_{curv}\right) \cdot \left|\nabla \Phi_{t-1}\right| = 0    (6)

where Φ_t is the level set function at the t-th iteration, Φ_{t-1} is the level set function at the (t-1)-th iteration, the initial level set function Φ_0 is the level set function of the contour tracking result of the previous test frame, Δt is a preset iteration step, F_curv = εκ is the internal evolution speed that depends only on the contour curvature κ, ε is a preset constant and F_curv plays the role of smoothing the contour, and |∇Φ_{t-1}| is the gradient norm of Φ_{t-1}.
When contour evolution is performed on a test frame, the contour tracking result of the previous test frame is taken as the initial contour and Φ is updated by repeated iteration under the action of formula (6), yielding the contour tracking result of the test frame. The iteration termination condition is generally determined in one of two ways: either a preset number of iterations is used and the final contour is taken as the tracking result, or the difference between Φ_t and Φ_{t-1} is computed after each iteration and compared with a preset threshold, and when the difference falls below the threshold the contour represented by the current Φ_t is taken as the tracking result. For the first frame, the initial contour can be specified manually.
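The update of equation (6) can be written as a simple explicit iteration. The sketch below uses numerical gradients for |∇Φ| and for the curvature term, with step sizes, the smoothing weight and the stopping threshold chosen arbitrarily for illustration; it is a schematic of the evolution step, not the patent's reference implementation.

import numpy as np

def curvature(phi, eps=1e-12):
    """Curvature kappa = div(grad(phi) / |grad(phi)|), computed with numerical gradients."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve_level_set(phi0, F_data, epsilon=0.1, dt=0.5, max_iter=200, tol=1e-3):
    """Explicit iteration of eq. (6): Phi_t = Phi_{t-1} - dt * (F_data + epsilon*kappa) * |grad Phi_{t-1}|."""
    phi = phi0.copy()
    for _ in range(max_iter):
        gy, gx = np.gradient(phi)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2)
        phi_new = phi - dt * (F_data + epsilon * curvature(phi)) * grad_norm
        if np.abs(phi_new - phi).max() < tol:       # the difference-threshold stopping rule above
            return phi_new
        phi = phi_new
    return phi                                      # fall back to the fixed iteration budget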
S106: update the training sample pools online.
After the contour tracking result of a test frame has been obtained, the superpixels inside the contour (target) and outside the contour (background) are put into the training sample pools of the target and the background respectively, to be used for contour tracking of the next test frame. Considering computational complexity, and to prevent old samples from being replaced all at once, the training sample pools of the present embodiment are updated as queues: newly added samples are appended at the tail of the queue, and when the number of samples exceeds the preset queue length the oldest samples at the head of the queue are deleted, keeping the queue length constant. The method then returns to step S101 and rebuilds the training sample pools according to the update.
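A queue-style pool update of this kind maps naturally onto collections.deque, as in the sketch below; the maximum pool length is an arbitrary illustrative value and the function name is a placeholder.

from collections import deque

MAX_POOL = 400                               # preset queue length (illustrative value)
pool_obj = deque(maxlen=MAX_POOL)            # when full, the oldest samples at the head are dropped
pool_bac = deque(maxlen=MAX_POOL)

def append_new_samples(new_obj_feats, new_bac_feats):
    """Append the newly obtained target/background superpixel features to the tails of the queues."""
    pool_obj.extend(new_obj_feats)
    pool_bac.extend(new_bac_feats)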
Embodiment
To illustrate the concrete effect of the present invention, comparative simulation experiments were carried out on several video sequences. For ease of quantitative comparison, a tracking accuracy score is defined that reflects the degree of similarity between the ground-truth annotation C_gt and the tracking result C_t.
Fig. 4 is a comparison of the number of iterations of the evolution equation for the present invention and the prior art. As shown in Fig. 4, the active contour tracking method based on superpixels of the present invention, i.e. the superpixel-based method with metric learning (SP-based with DML), is compared by simulation on the same video sequence with the superpixel-based method without metric learning (SP-based without DML) and with the pixel-based method (Pixel-based). It can be seen that the active contour tracking method based on superpixels proposed by the present invention converges more quickly.
Fig. 5 is a comparison of the tracking accuracy of the present invention and the prior art. As shown in Fig. 5, the present embodiment performs simulations on four video sequences: Fig. 5(a) is a video sequence of a clown fish, Fig. 5(b) is a video sequence of a basketball player, Fig. 5(c) is a video sequence of a monkey, and Fig. 5(d) is a video sequence of a skier. The prior-art methods used for comparison are ADL (Adaboost-based Level set method; see X. Sun, H. X. Yao and S. P. Zhang, "A Novel Supervised Level Set Method for Non-Rigid Object Tracking," IEEE Conference on Computer Vision and Pattern Recognition, 2011, 3393-3400), SPDL (Superpixel-Driven Level set method; see X. Zhou, X. Li, T. J. Chin and D. Suter, "Superpixel-Driven Level Set Tracking," IEEE International Conference on Image Processing, 2012, 409-412) and SPT (Superpixel Tracking method; see S. Wang, H. C. Lu, F. Yang and M. H. Yang, "Superpixel Tracking," IEEE International Conference on Computer Vision, 2011, 1323-1330). ADL is a pixel-level level set tracking method based on Adaboost; SPDL is a superpixel-driven level set method that does not consider metric learning when building the appearance model; and SPT is a bounding-box tracking method based on superpixels that uses mean shift clustering to obtain the superpixel confidence map, based on which level set tracking is performed to obtain the contour result. As can be seen from Fig. 5, the active contour tracking method based on superpixels of the present invention has higher accuracy and robustness.
Although the illustrative embodiments of the present invention have been described above so that those skilled in the art can understand the present invention, it should be clear that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are obvious as long as they fall within the spirit and scope of the present invention as defined and determined by the appended claims, and all inventions and creations that make use of the concept of the present invention fall within the scope of protection.

Claims (5)

1. An active contour tracking method based on superpixels, characterized in that it comprises the following steps:
S1: divide the training image into a target part and a background part, perform superpixel segmentation, extract the feature vector of each superpixel, and establish the target training sample pool T_obj and the background training sample pool T_bac;
S2: learn the projection matrix L of the distance metric from the training samples with a metric learning method, the projection matrix L being updated once every m (m ≥ 1) test frames;
S3: according to the training sample pools and the projection matrix L of the distance metric, build a discriminative appearance model based on superpixels, in which the confidence score S_c^{sp} of each superpixel is computed as:

S_c^{sp} = \frac{1 - P(sp|bac)/P(sp|obj)}{1 + P(sp|bac)/P(sp|obj)}

where P(sp|obj) and P(sp|bac) denote the likelihoods that superpixel sp belongs to the target class obj and the background class bac respectively, obtained by non-parametric kernel density estimation;
S4: select a local region containing the target in the current test frame, perform superpixel segmentation on this local region (the number of superpixels is denoted N), and extract the feature vector f_k of each superpixel sp_k, 1 ≤ k ≤ N; compute the confidence score S_c^{sp_k} of each superpixel with the formula of step S3 to obtain the confidence map of the test frame;
S5: construct the velocity field F_data^{i,j} of the test frame from the confidence map obtained in step S4:

F_{data}^{i,j} = \begin{cases} S_c^{sp_k}, & \text{if } x_{i,j} \in \{sp_k\}_{k=1}^{N} \\ -1, & \text{if } x_{i,j} \notin \{sp_k\}_{k=1}^{N} \end{cases}

where (i, j) denotes the coordinates of a pixel in the test frame;
S6: substitute the velocity field F_data^{i,j} of the test frame obtained in step S5 into the evolution equation of the level set method, perform contour evolution with the contour tracking result of the previous test frame as the initial value, and obtain the contour tracking result of the target;
S7: according to the contour tracking result obtained in step S6, put the superpixels of the target and the background into the corresponding training sample pools to update them, and return to step S1 to rebuild the target training sample pool T_obj and the background training sample pool T_bac.
2. The active contour tracking method according to claim 1, characterized in that the metric learning method in step S2 is the Large Margin Nearest Neighbor (LMNN) metric learning method.
3. The active contour tracking method according to claim 1, characterized in that the local region in step S4 is selected as follows: for the first test frame, the initial contour of the target is specified manually and the local region is determined from the initial contour; for each subsequent test frame, the local region is determined from the contour tracking result of the previous test frame.
4. The active contour tracking method according to claim 1, characterized in that the evolution equation in step S6 is:

\frac{\Phi_t - \Phi_{t-1}}{\Delta t} + \left(F_{data}^{i,j} + F_{curv}\right) \cdot \left|\nabla \Phi_{t-1}\right| = 0

where Φ_t is the level set function at the t-th iteration, Φ_{t-1} is the level set function at the (t-1)-th iteration, the initial level set function Φ_0 is the level set function of the contour tracking result of the previous test frame, Δt is a preset iteration step, F_curv = εκ is the internal evolution speed that depends only on the contour curvature κ, and ε is a preset constant.
5. The active contour tracking method according to any one of claims 1 to 4, characterized in that the training sample pools in step S7 are updated as queues: newly added samples are appended at the tail of the queue, and when the number of samples exceeds a preset queue length the oldest samples at the head of the queue are deleted.
CN2013102774746A 2013-07-04 2013-07-04 Active contour tracing method based on superpixel Pending CN103366382A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102774746A CN103366382A (en) 2013-07-04 2013-07-04 Active contour tracing method based on superpixel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102774746A CN103366382A (en) 2013-07-04 2013-07-04 Active contour tracing method based on superpixel

Publications (1)

Publication Number Publication Date
CN103366382A true CN103366382A (en) 2013-10-23

Family

ID=49367650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102774746A Pending CN103366382A (en) 2013-07-04 2013-07-04 Active contour tracing method based on superpixel

Country Status (1)

Country Link
CN (1) CN103366382A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778439A (en) * 2014-01-23 2014-05-07 电子科技大学 Body contour reconstruction method based on dynamic time-space information digging
CN104596484A (en) * 2015-01-30 2015-05-06 黄河水利委员会黄河水利科学研究院 Method of measuring drift ice density in ice flood season of Yellow River
CN104732551A (en) * 2015-04-08 2015-06-24 西安电子科技大学 Level set image segmentation method based on superpixel and graph-cup optimizing
CN105678338A (en) * 2016-01-13 2016-06-15 华南农业大学 Target tracking method based on local feature learning
CN105809206A (en) * 2014-12-30 2016-07-27 江苏慧眼数据科技股份有限公司 Pedestrian tracking method
CN106023155A (en) * 2016-05-10 2016-10-12 电子科技大学 Online object contour tracking method based on horizontal set
CN107230219A (en) * 2017-05-04 2017-10-03 复旦大学 A kind of target person in monocular robot is found and follower method
CN107273905A (en) * 2017-06-14 2017-10-20 电子科技大学 A kind of target active contour tracing method of combination movable information
CN108629337A (en) * 2018-06-11 2018-10-09 深圳市益鑫智能科技有限公司 A kind of face recognition door control system based on block chain
CN108648212A (en) * 2018-04-24 2018-10-12 青岛科技大学 Adaptive piecemeal method for tracking target based on super-pixel model
CN108789431A (en) * 2018-06-11 2018-11-13 深圳万发创新进出口贸易有限公司 A kind of intelligently guiding robot
CN108830219A (en) * 2018-06-15 2018-11-16 北京小米移动软件有限公司 Method for tracking target, device and storage medium based on human-computer interaction
US10249046B2 (en) * 2014-05-28 2019-04-02 Interdigital Ce Patent Holdings Method and apparatus for object tracking and segmentation via background tracking
CN110688965A (en) * 2019-09-30 2020-01-14 北京航空航天大学青岛研究院 IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN111160180A (en) * 2019-12-16 2020-05-15 浙江工业大学 Night green apple identification method of apple picking robot
CN111630559A (en) * 2017-10-27 2020-09-04 赛峰电子与防务公司 Image restoration method
CN113313672A (en) * 2021-04-28 2021-08-27 贵州电网有限责任公司 Active contour model image segmentation method based on SLIC superpixel segmentation and saliency detection algorithm

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102254326A (en) * 2011-07-22 2011-11-23 西安电子科技大学 Image segmentation method by using nucleus transmission
US20120275703A1 (en) * 2011-04-27 2012-11-01 Xutao Lv Superpixel segmentation methods and systems
CN103164858A (en) * 2013-03-20 2013-06-19 浙江大学 Adhered crowd segmenting and tracking methods based on superpixel and graph model
US20130156314A1 (en) * 2011-12-20 2013-06-20 Canon Kabushiki Kaisha Geodesic superpixel segmentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120275703A1 (en) * 2011-04-27 2012-11-01 Xutao Lv Superpixel segmentation methods and systems
CN102254326A (en) * 2011-07-22 2011-11-23 西安电子科技大学 Image segmentation method by using nucleus transmission
US20130156314A1 (en) * 2011-12-20 2013-06-20 Canon Kabushiki Kaisha Geodesic superpixel segmentation
CN103164858A (en) * 2013-03-20 2013-06-19 浙江大学 Adhered crowd segmenting and tracking methods based on superpixel and graph model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KILIAN Q. WEINBERGER ET AL.: "Distance Metric Learning for Large Margin Nearest Neighbor Classification", Journal of Machine Learning Research, vol. 10, 28 February 2009 (2009-02-28) *
WEIMING HU ET AL.: "Active Contour-Based Visual Tracking by Integrating Colors, Shapes, and Motions", IEEE Transactions on Image Processing, vol. 22, no. 5, 31 May 2013 (2013-05-31), XP011497082, DOI: 10.1109/TIP.2012.2236340 *
XUE ZHOU ET AL.: "Superpixel-Driven Level Set Tracking", 2012 IEEE International Conference on Image Processing (ICIP 2012), 30 September 2012 (2012-09-30) *
ZHOU XUE ET AL.: "Object contour tracking fusing color and incremental shape priors" (in Chinese), Acta Automatica Sinica, vol. 35, no. 11, 30 November 2009 (2009-11-30) *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103778439A (en) * 2014-01-23 2014-05-07 电子科技大学 Body contour reconstruction method based on dynamic time-space information digging
CN103778439B (en) * 2014-01-23 2016-08-17 电子科技大学 Human body contour outline reconstructing method based on dynamic space-time information excavating
US10249046B2 (en) * 2014-05-28 2019-04-02 Interdigital Ce Patent Holdings Method and apparatus for object tracking and segmentation via background tracking
CN105809206A (en) * 2014-12-30 2016-07-27 江苏慧眼数据科技股份有限公司 Pedestrian tracking method
CN104596484A (en) * 2015-01-30 2015-05-06 黄河水利委员会黄河水利科学研究院 Method of measuring drift ice density in ice flood season of Yellow River
CN104732551A (en) * 2015-04-08 2015-06-24 西安电子科技大学 Level set image segmentation method based on superpixel and graph-cup optimizing
CN105678338A (en) * 2016-01-13 2016-06-15 华南农业大学 Target tracking method based on local feature learning
CN105678338B (en) * 2016-01-13 2020-04-14 华南农业大学 Target tracking method based on local feature learning
CN106023155A (en) * 2016-05-10 2016-10-12 电子科技大学 Online object contour tracking method based on horizontal set
CN106023155B (en) * 2016-05-10 2018-08-07 电子科技大学 Online target profile tracing method based on level set
CN107230219A (en) * 2017-05-04 2017-10-03 复旦大学 A kind of target person in monocular robot is found and follower method
CN107273905A (en) * 2017-06-14 2017-10-20 电子科技大学 A kind of target active contour tracing method of combination movable information
CN107273905B (en) * 2017-06-14 2020-05-08 电子科技大学 Target active contour tracking method combined with motion information
CN111630559A (en) * 2017-10-27 2020-09-04 赛峰电子与防务公司 Image restoration method
CN108648212A (en) * 2018-04-24 2018-10-12 青岛科技大学 Adaptive piecemeal method for tracking target based on super-pixel model
CN108629337A (en) * 2018-06-11 2018-10-09 深圳市益鑫智能科技有限公司 A kind of face recognition door control system based on block chain
CN108789431A (en) * 2018-06-11 2018-11-13 深圳万发创新进出口贸易有限公司 A kind of intelligently guiding robot
CN108830219A (en) * 2018-06-15 2018-11-16 北京小米移动软件有限公司 Method for tracking target, device and storage medium based on human-computer interaction
CN108830219B (en) * 2018-06-15 2022-03-18 北京小米移动软件有限公司 Target tracking method and device based on man-machine interaction and storage medium
CN110688965A (en) * 2019-09-30 2020-01-14 北京航空航天大学青岛研究院 IPT (inductive power transfer) simulation training gesture recognition method based on binocular vision
CN110688965B (en) * 2019-09-30 2023-07-21 北京航空航天大学青岛研究院 IPT simulation training gesture recognition method based on binocular vision
CN111160180A (en) * 2019-12-16 2020-05-15 浙江工业大学 Night green apple identification method of apple picking robot
CN113313672A (en) * 2021-04-28 2021-08-27 贵州电网有限责任公司 Active contour model image segmentation method based on SLIC superpixel segmentation and saliency detection algorithm

Similar Documents

Publication Publication Date Title
CN103366382A (en) Active contour tracing method based on superpixel
Wang et al. Saliency-aware geodesic video object segmentation
Vishwakarma et al. Hybrid classifier based human activity recognition using the silhouette and cells
Ma et al. Maximum weight cliques with mutex constraints for video object segmentation
Shi et al. Scene text recognition using part-based tree-structured character detection
Kumar et al. Review of lane detection and tracking algorithms in advanced driver assistance system
Zhang et al. Real-time visual tracking via online weighted multiple instance learning
Zhang et al. Learning semantic scene models by object classification and trajectory clustering
CN107273905B (en) Target active contour tracking method combined with motion information
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
Budvytis et al. Semi-supervised video segmentation using tree structured graphical models
Ye et al. Self-learning scene-specific pedestrian detectors using a progressive latent model
Timofte et al. Combining traffic sign detection with 3D tracking towards better driver assistance
CN101924871A (en) Mean shift-based video target tracking method
CN103886619A (en) Multi-scale superpixel-fused target tracking method
CN103679142A (en) Target human body identification method based on spatial constraint
Wang et al. Human action recognition based on pyramid histogram of oriented gradients
Zhang et al. Boosted exemplar learning for action recognition and annotation
CN103942563A (en) Multi-mode pedestrian re-identification technology
CN103955671A (en) Human behavior recognition method based on rapid discriminant common vector algorithm
Liu et al. A real time expert system for anomaly detection of aerators based on computer vision and surveillance cameras
Mannan et al. Classification of degraded traffic signs using flexible mixture model and transfer learning
Jiang et al. Robust visual tracking via laplacian regularized random walk ranking
CN103577804A (en) Abnormal human behavior identification method based on SIFT flow and hidden conditional random fields
Kalinin et al. A graph based approach to hierarchical image over-segmentation

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20131023