CN103854290A - Extended target tracking method based on combination of skeleton characteristic points and distribution field descriptors - Google Patents
Publication number: CN103854290A; application: CN201410115395.XA; authority: CN (China). Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed). Classification: Image Analysis.
Abstract
The invention provides an extended target tracking method that combines skeleton feature points with distribution field descriptors. The method first preprocesses the input image with Gaussian smoothing to remove the influence of noise on subsequent steps. The smoothed image is then segmented with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image. A Hough transform applied to this binary image detects the part of the target with an obvious linear feature, while skeleton extraction provides feature points for the part without one; fitting a straight line to the skeleton feature points yields the target axis of that part. The intersection of the two axis lines is taken as the initial track point. A distribution field is then computed in a neighborhood of this point, and when the next frame arrives the best position is searched within that neighborhood to obtain the current best track point. The method resolves the ambiguity problem and can stably track an extended target under large attitude changes.
Description
Technical field
The present invention relates to a maneuvering extended target tracking method, in particular one that uses the Hough transform, skeleton feature points, and distribution fields to track a maneuvering extended target. It is mainly used in image processing and computer vision, and belongs to the field of target detection and tracking in photoelectric measurement systems.
Background technology
In a photoelectric measurement system, the detector's field of view is kept small to improve tracking accuracy, while the target is relatively large, so the target appears in the detector in extended form. In long-range imaging, degrading factors such as atmospheric turbulence, system jitter, and the aberrations of the optical system make the target very blurred and of poor contrast in the system's imagery; in addition, the target lacks the texture and distinctive features that could characterize and identify it. The target's attitude also changes markedly, and as the attitude changes the track point tends to drift. Choosing stable feature points and locking onto them for tracking is therefore a major difficulty in extended target tracking.
At present, the common algorithms for extended targets are matching-based, including matching of gray levels, features, and so on. Because the target moves, its size, shape, and attitude may change; combined with interference from background and illumination and the finite measurement precision of image processing, matching-based tracking cannot find an absolutely best match position, which causes the track point to drift. Since the target has no texture or salient features and its attitude changes greatly, traditional tracking based on gray features easily loses the target under large attitude changes and cannot meet the requirements of practical applications, so new methods are urgently needed to satisfy the engineering demands of tracking.
Summary of the invention
The technical problem addressed by the present invention: to overcome the deficiencies of the prior art by providing an extended target tracking method that uses the Hough transform and skeleton-extracted feature points in combination with distribution field descriptors, abstracting the essential geometric structure of the maneuvering extended target while combining it with local features, so as to achieve stable tracking of the target under large attitude changes.
To this end, the technical solution of the present invention is an extended target tracking method using the Hough transform, skeleton feature points, and distribution fields, comprising the following steps:
Step 1, image preprocessing: apply Gaussian smoothing to the image to be processed to remove the influence of noise and obtain a filtered, smoothed image;
Step 2, segment the smoothed image from step 1 with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image;
Step 3, process the binary image from step 2 with the Hough transform to detect the straight line of the part of the aircraft with an obvious linear feature, which serves as the axis of that part;
Step 4, process the image from step 2 by skeleton extraction; after the skeleton points on the aircraft are extracted, select feature skeleton points and fit a straight line to them, which serves as the axis of the corresponding part of the aircraft;
Step 5, from the axis equations obtained in steps 3 and 4, compute the intersection of the two axis lines and take it as the coarse track point;
Step 6, choose a neighborhood around the coarse track point from step 5 and compute the distribution field descriptor of this region; then smooth the distribution field images at different scales to obtain the target model;
Step 7, when the next frame arrives, search the current frame for the best position of the tracking box using the target model obtained from the previous frame, and take the centre of the tracking box as the best track point of the current frame.
In step 2, the fuzzy C-means clustering algorithm (FCM) segments the smoothed image from step 1 into a binary image as follows:
Step (21), initialization: set the number of clusters c (c = 2 in the present invention), the iteration stop threshold ε, the initial fuzzy partition matrix U^(0), the iteration count l = 0, and the fuzzy weighting exponent m (m = 2 in the present invention);
Step (22), substitute U^(l) into formula (5) to compute the cluster-centre matrix V^(l):
v_i = Σ_{k=1..n} (u_ik)^m · x_k / Σ_{k=1..n} (u_ik)^m, i = 1, …, c (5)
where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of clusters, u_ik is the element in row i and column k of the fuzzy partition matrix U^(l) at iteration l, x_k is the value of the k-th pixel of the image to be clustered, and v_i is the i-th cluster centre in V^(l);
Step (23), update U^(l) using V^(l) according to formula (6) to obtain the new fuzzy partition matrix U^(l+1):
u_ik = 1 / Σ_{j=1..c} (d_ik / d_jk)^(2/(m−1)) (6)
where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster centre, and d_jk is that between the k-th element and the j-th cluster centre;
Step (24), if ||U^(l) − U^(l+1)|| < ε, stop iterating; otherwise set l = l + 1 and return to step (22);
Step (25), compute the Euclidean distance of each pixel of the image to the cluster centres obtained in steps (21)–(24); each pixel is set to 1 if it is nearest the target cluster centre and to 0 otherwise, yielding the segmented binary image.
In step 3, the binary image from step 2 is processed with the Hough transform to detect the straight line of the part of the aircraft with an obvious linear feature as the axis of that part:
Step (31), determine the size of the target image obtained in step 2, quantize the parameter space according to the possible ranges of the parameters, construct an accumulator array A(ρ, θ) according to the quantization, and initialize it to 0;
Step (32), for each given point in XY space, sweep θ over all its possible values, compute ρ by formula (7), and accumulate A according to the values of ρ and θ: A(ρ, θ) = A(ρ, θ) + 1;
ρ = x·cosθ + y·sinθ (7)
Step (33), from the ρ and θ corresponding to the maximum of the accumulated A, draw the line in XY space by formula (7) (i.e., the aircraft axis); the maximum of A represents the number of given points on that line.
In step 4, the image obtained in step 2 is processed by skeleton extraction; after the skeleton points on the aircraft are extracted, feature skeleton points are selected and fitted with a straight line, which serves as the axis of the corresponding part of the aircraft. The present invention extracts the skeleton with an iterative thinning algorithm that deletes boundary points layer by layer, as follows:
Suppose target points are labeled 1 and background points 0. A boundary point is defined as a point that is itself labeled 1 and has at least one point labeled 0 in its 8-connected neighborhood. The algorithm considers the 8-neighborhood centred on a boundary point: the centre point is denoted p1, and the 8 neighborhood points are denoted p2, p3, …, p9 clockwise around the centre, with p2 directly above p1.
The boundary points are processed in two passes:
(41) mark every boundary point that simultaneously satisfies:
(411) 2 ≤ N(p1) ≤ 6;
(412) S(p1) = 1;
(413) p2·p4·p6 = 0;
(414) p4·p6·p8 = 0;
where N(p1) is the number of nonzero neighbours of p1, and S(p1) is the number of 0 → 1 transitions of the values in the ordered sequence p2, p3, …, p9, p2. After all boundary points have been checked, all marked points are deleted.
(42) mark every boundary point that simultaneously satisfies:
(421) 1 ≤ N(p1) ≤ 6;
(422) S(p1) = 1;
(423) p2·p4·p8 = 0;
(424) p2·p6·p8 = 0;
The two passes above form one iteration; the algorithm iterates until no point satisfies the marking conditions, and the remaining points constitute the skeleton. After the skeleton points of the target are extracted, the junction points among them are taken as feature points, and line fitting is applied to obtain a straight line, which serves as the fuselage axis of the aircraft.
In step 5, the axis equations obtained in steps 3 and 4 are used to compute the intersection of the two axis lines as follows:
The fuselage-axis and wing-axis lines computed in steps 3 and 4 are respectively:
y1 = k1·x1 + b1 (8)
y2 = k2·x2 + b2 (9)
where k1 and b1 are the slope and intercept of the fuselage-axis line, and k2 and b2 are those of the wing-axis line. Solving this system of linear equations gives the intersection
x = (b2 − b1)/(k1 − k2), y = k1·x + b1
where (x, y) is the intersection of the wing-axis and fuselage-axis lines.
In step 7, when the next frame arrives, the current frame is searched for the best position of the tracking box using the target model obtained from the previous frame, and the centre of the tracking box is taken as the best track point of the current frame:
(71) when the next frame arrives, estimate the starting position of the tracking box in the current frame from the target location in the previous frame; then search the neighborhood around this position for the minimum Euclidean distance to the previous frame's distribution field image, move the tracking box to the position of the minimum, and continue searching until no smaller point can be found; this position is the final tracking-box position in the current frame;
(72) update the target model.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention applies the Hough transform to the whole segmented target image rather than to an edge-detected image, which effectively preserves the correlation between target pixels and strengthens the noise immunity of the algorithm.
(2) The present invention uses skeleton extraction on the segmented image to obtain the structural feature points of the aircraft target and then fits a straight line to them, solving the problem that the Hough transform cannot detect the axis of an object without an obvious linear feature.
(3) The present invention provides a maneuvering extended target tracking method combining skeleton feature points with distribution field descriptors. Compared with tracking based purely on distribution field descriptors, or on a skeleton combined with local patches, the present invention fuses skeleton feature points with distribution fields: the skeleton reflects the global geometric structure of the target and is unaffected by pixel gray values, so it is robust to illumination changes; the distribution field expresses the uncertainty of the tracked target and preserves the spatial structure of the target while capturing the rich information contained in a histogram, resolving ambiguity, but it is sensitive to illumination changes. Combining the two achieves stable tracking of an extended target under large attitude changes.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the present invention;
Fig. 2 shows the wing line detected by the present invention using the Hough transform;
Fig. 3 shows the fuselage and wing axes detected by the present invention using the Hough transform combined with skeleton extraction;
Fig. 4 illustrates the distribution field used by the present invention: (a) the source image, (b) the distribution field after expansion, and (c) the distribution field after smoothing;
Fig. 5 shows the tracking and localization result of the present invention on frame 288 of the test sequence;
Fig. 6 shows the result on frame 431;
Fig. 7 shows the result on frame 667;
Fig. 8 shows the result on frame 976.
Embodiment
Embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and concrete operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.
The present invention is an extended target tracking method combining skeleton feature points with distribution field descriptors; the input is a single-station optical-measurement model-aircraft image sequence.
As shown in Fig. 1, the extended target tracking method provided by the invention, combining skeleton feature points and distribution field descriptors, comprises the following steps:
Step 1, image preprocessing. Owing to illumination conditions or defects of the imaging system, the image to be processed is affected by noise, which in turn affects subsequent processing. Therefore, the image is preprocessed before the subsequent algorithms are run. This method uses Gaussian smoothing to remove the influence of noise and obtain a filtered, smoothed image.
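As a minimal sketch of this preprocessing step (the patent does not state the filter width; sigma = 1.0 below is an assumption for illustration), the Gaussian smoothing might look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(image, sigma=1.0):
    """Gaussian smoothing to suppress sensor noise before segmentation.

    sigma is an assumed value; the patent does not specify the kernel width.
    """
    return gaussian_filter(image.astype(np.float64), sigma=sigma)
```

A larger sigma suppresses more noise but blurs the aircraft outline, which would degrade the later Hough and skeleton steps, so the width is a trade-off.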
Step 2, segment the smoothed image from step 1 with the fuzzy C-means clustering algorithm (FCM) to obtain a binary image. In essence, image segmentation is a process of classifying pixels according to some attribute. The complex and changeable nature of natural images means that it is uncertain which cluster many pixels belong to, so it is more reasonable to consider segmentation from the viewpoint of fuzzy clustering. The fuzzy C-means clustering algorithm (FCM) developed from the hard C-means algorithm (HCM); in essence it is a nonlinear iterative optimization method based on an objective function that measures the weighted similarity between each pixel of the image and each cluster centre. The task of FCM is to select, by iteration, a reasonable fuzzy membership matrix and cluster centres that minimize the objective function, thereby obtaining the optimal segmentation.
FCM realizes the partition of the set through iterative optimization of the objective function, and expresses the degree to which each image pixel belongs to each category. Let n be the number of pixels to be clustered, c the number of categories (c = 2 in the present invention), and m the fuzzy weighting exponent (m = 2 in the present invention), which controls the degree of fuzziness of the memberships. The value of the objective function is the weighted sum of squared distances from each pixel to the c cluster centres:
J_m(U, V) = Σ_{i=1..c} Σ_{k=1..n} (u_ik)^m · (d_ik)^2
where u_ik is the membership of the k-th pixel in the i-th class, d_ik is the distance from the k-th pixel to the i-th class, U is the fuzzy partition matrix, and V is the set of cluster centres.
The clustering criterion is to seek the best pair (U, V) that minimizes J_m(U, V). The minimization of J_m can be realized by the following iterative algorithm:
(2.1) Initialization: set the number of clusters c (c = 2 in the present invention), the iteration stop threshold ε, the initial fuzzy partition matrix U^(0), the iteration count l = 0, and the fuzzy weighting exponent m (m = 2 in the present invention);
(2.2) substitute U^(l) into formula (13) to compute the cluster-centre matrix V^(l):
v_i = Σ_{k=1..n} (u_ik)^m · x_k / Σ_{k=1..n} (u_ik)^m, i = 1, …, c (13)
where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of clusters, u_ik is the element in row i and column k of the fuzzy partition matrix U^(l) at iteration l, x_k is the value of the k-th pixel of the image to be clustered, and v_i is the i-th cluster centre in V^(l);
(2.3) update U^(l) using V^(l) according to formula (14) to obtain the new fuzzy partition matrix U^(l+1):
u_ik = 1 / Σ_{j=1..c} (d_ik / d_jk)^(2/(m−1)) (14)
where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster centre, and similarly d_jk is that between the k-th element and the j-th cluster centre;
(2.4) if ||U^(l) − U^(l+1)|| < ε, stop iterating; otherwise set l = l + 1 and return to step (2.2);
(2.5) compute the Euclidean distance of each pixel of the image to the cluster centres obtained in steps (2.1)–(2.4); each pixel is set to 1 if it is nearest the target cluster centre and to 0 otherwise, yielding the segmented binary image.
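The iteration (2.1)–(2.5) can be sketched in code. This is a minimal illustration, not the patent's implementation: the random initialization, the iteration cap, and taking the brighter cluster as the target are assumptions made here.

```python
import numpy as np

def fcm_segment(image, c=2, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Fuzzy C-means segmentation into a binary image (steps (2.1)-(2.5)).

    Returns a 0/1 array; 1 marks pixels nearest the brighter cluster
    centre, which is assumed here to be the target.
    """
    x = image.reshape(-1).astype(np.float64)          # n pixel values to cluster
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                # fuzzy partition matrix U^(0)
    for _ in range(max_iter):
        um = u ** m
        v = um @ x / um.sum(axis=1)                   # cluster centres v_i, Eq. (13)
        d = np.abs(x[None, :] - v[:, None]) + 1e-12   # distances d_ik (guard /0)
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)                    # membership update, Eq. (14)
        if np.abs(u_new - u).max() < eps:             # stop: ||U^(l) - U^(l+1)|| < eps
            u = u_new
            break
        u = u_new
    labels = np.argmax(u, axis=0).reshape(image.shape)
    return (labels == np.argmax(v)).astype(np.uint8)
```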
Experiments show that fuzzy C-means clustering segments images better than threshold-based methods, especially on natural images. This is because natural images are complex and changeable, with intricate structure, and there is no definite boundary deciding which class each pixel belongs to. Fuzzy C-means expresses the degree to which each pixel belongs to each class in the form of a probability, instead of directly assigning each pixel to a single class as hard C-means (HCM) does; segmentation by fuzzy C-means therefore better reflects the complex and changeable nature of natural images.
Step 3, process the binary image from step 2 with the Hough transform to detect the straight line of the part of the aircraft with an obvious linear feature as the axis of that part. The principle of the Hough transform is to convert the line-detection problem in image space into a maximum-finding problem in a parameter accumulator space; the parameters of the cell with the largest accumulated value are the parameters of the sought line. Since the line features of the aircraft structure are obvious, the Hough transform is used to detect the aircraft axis. Line detection with the Hough transform proceeds as follows:
(3.1) determine the size of the target image obtained in step 2, quantize the parameter space according to the possible ranges of the parameters, construct an accumulator array A(ρ, θ) according to the quantization, and initialize it to 0;
(3.2) for each given point in XY space, sweep θ over all its possible values, compute ρ by formula (15), and accumulate A according to the values of ρ and θ: A(ρ, θ) = A(ρ, θ) + 1;
ρ = x·cosθ + y·sinθ (15)
(3.3) from the ρ and θ corresponding to the maximum of the accumulated A, draw the line in XY space by formula (15) (i.e., the aircraft axis); the maximum of A represents the number of given points on that line.
The binary target image obtained in step 2 is Hough-transformed with the method described in step 3; the detection result is shown in Fig. 2, in which the black line is the detected wing axis. As the figure shows, the Hough transform detects the axis of the wing, whose linear feature is obvious, very well, but it cannot detect the fuselage, whose line feature is not obvious. For the subsequent attitude-angle computation, both the wing and fuselage axes must be detected. On the other hand, a skeleton has the same topology and shape information as the original object and can describe it effectively; it is a geometric feature of excellent performance. Likewise, an axis is a geometric feature reflecting the structure of the object. Therefore, the Hough transform is combined with skeleton extraction: the Hough transform detects the line of the part with an obvious linear feature, while line fitting on skeleton points handles the part without one, yielding all axes of the object.
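Steps (3.1)–(3.3) amount to voting in a quantized (ρ, θ) accumulator. A minimal sketch, with the quantization (1-pixel ρ bins, 1° θ bins) chosen here purely for illustration:

```python
import numpy as np

def hough_peak(binary):
    """Vote rho = x*cos(theta) + y*sin(theta) for every foreground pixel
    (Eq. (15)) and return the (rho, theta) cell with the most votes."""
    ys, xs = np.nonzero(binary)
    h, w = binary.shape
    rho_max = int(np.ceil(np.hypot(h, w)))            # largest possible |rho|
    thetas = np.linspace(0.0, np.pi, 180, endpoint=False)
    acc = np.zeros((2 * rho_max + 1, thetas.size), dtype=np.int64)
    for t_idx, t in enumerate(thetas):
        rhos = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int)
        np.add.at(acc, (rhos + rho_max, t_idx), 1)    # A(rho, theta) += 1
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - rho_max, thetas[t_idx], int(acc.max())
```

Because the whole segmented silhouette votes (not just edges), the strongest cell follows the thickest elongated structure, which is the behaviour the patent relies on.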
Step 4, process the image from step 2 by skeleton extraction; after the skeleton points on the aircraft are extracted, select feature skeleton points and fit a straight line to them, which serves as the axis of the corresponding part of the aircraft. A skeleton has the same topology and shape information as the original object and can describe it effectively; it is a geometric feature of excellent performance. Skeleton extraction can be realized in several ways, of which the medial axis transform (MAT) is one of the more effective; however, it must compute the distance from every boundary point to every interior point of the region, which is computationally very expensive. The present invention therefore extracts the skeleton with an iterative thinning algorithm that deletes boundary points layer by layer.
Suppose target points are labeled 1 and background points 0. A boundary point is defined as a point that is itself labeled 1 and has at least one point labeled 0 in its 8-connected neighborhood. The algorithm considers the 8-neighborhood centred on a boundary point: the centre point is denoted p1, and the 8 neighborhood points are denoted p2, p3, …, p9 clockwise around the centre, with p2 directly above p1.
The algorithm applies two passes to the boundary points:
(41) mark every boundary point that simultaneously satisfies:
(411) 2 ≤ N(p1) ≤ 6;
(412) S(p1) = 1;
(413) p2·p4·p6 = 0;
(414) p4·p6·p8 = 0;
where N(p1) is the number of nonzero neighbours of p1, and S(p1) is the number of 0 → 1 transitions of the values in the ordered sequence p2, p3, …, p9, p2. After all boundary points have been checked, all marked points are deleted.
(42) mark every boundary point that simultaneously satisfies:
(421) 1 ≤ N(p1) ≤ 6;
(422) S(p1) = 1;
(423) p2·p4·p8 = 0;
(424) p2·p6·p8 = 0;
The two passes above form one iteration; the algorithm iterates until no point satisfies the marking conditions, and the remaining points constitute the skeleton. After the skeleton points of the target are extracted, the junction points among them are taken as feature points, and line fitting is applied to obtain a straight line, which serves as the fuselage axis of the aircraft. Tests on the model-aircraft images give the axes shown in Fig. 3, obtained by combining the Hough transform with skeleton extraction; the black line on the fuselage in the figure is the axis obtained by skeleton extraction. It can be seen that combining the Hough transform with skeleton extraction accurately extracts the fuselage and wing axes of the aircraft, laying the foundation for subsequent feature-point extraction.
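The two passes describe a Zhang–Suen-style thinning, sketched below. One assumption: the bound 2 ≤ N(p1) is used in both passes (the patent writes 1 in condition (421)), since the classic bound keeps line endpoints from being eroded away on each iteration.

```python
import numpy as np

def thin(img):
    """Layer-by-layer boundary deletion (passes (41)-(42)), Zhang-Suen style.

    Assumes 2 <= N(p1) <= 6 in both passes; the survivors form the skeleton.
    """
    img = np.pad(img.astype(np.uint8), 1)   # surround with background points
    def neighbours(y, x):
        # p2..p9, clockwise starting from the point directly above p1
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marked = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(y, x)
                    N = sum(p)                                   # N(p1)
                    S = sum(p[i] == 0 and p[(i + 1) % 8] == 1    # 0 -> 1 transitions
                            for i in range(8))
                    if step == 0:
                        ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= N <= 6 and S == 1 and ok:
                        marked.append((y, x))
            for y, x in marked:              # delete all marked points at once
                img[y, x] = 0
                changed = True
    return img[1:-1, 1:-1]
```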
Step 5, use the axis equations obtained in steps 3 and 4 to compute the intersection of the two axis lines and take it as the coarse track point, as follows:
The fuselage-axis and wing-axis lines computed in steps 3 and 4 are respectively:
y1 = k1·x1 + b1 (16)
y2 = k2·x2 + b2 (17)
where k1 and b1 are the slope and intercept of the fuselage-axis line, and k2 and b2 are those of the wing-axis line. Solving this system of linear equations gives the intersection
x = (b2 − b1)/(k1 − k2), y = k1·x + b1 (18)
where (x, y) is the intersection of the wing-axis and fuselage-axis lines, which is taken as the coarse track point. Experiments show that when the aircraft undergoes large attitude changes (such as pitch, yaw, and roll), the fuselage and wing are almost always visible (except when the nose points directly at the recording system, in which case only the wing is visible). Moreover, for aircraft of different models the characteristics of the fuselage and wing are roughly the same. It is therefore feasible to take the intersection of the fuselage and wing as the track point. Thus, after the fuselage and wing axes have been obtained, the geometric structure of the aircraft is used to derive the final feature point, which serves as the initial track point.
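Solving the system (16)–(17) is elementary; a sketch (the degenerate parallel case, which the patent does not discuss, is handled here by raising an error):

```python
def axis_intersection(k1, b1, k2, b2):
    """Intersection of the fuselage axis y = k1*x + b1 and the wing axis
    y = k2*x + b2 (Eqs. (16)-(17)); used as the coarse track point."""
    if k1 == k2:
        raise ValueError("axes are parallel: no unique intersection")
    x = (b2 - b1) / (k1 - k2)   # equate k1*x + b1 = k2*x + b2
    y = k1 * x + b1
    return x, y
```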
Step 6, choose a neighborhood around the coarse track point obtained in step 5 and compute the distribution field descriptor of this region; then smooth the distribution field images at different scales to obtain the target model. Although the track point obtained by the skeleton-extraction method above can adapt to some attitude changes, the skeleton reflects only the global characteristics of the target; to obtain more stable tracking, it must be combined with more robust local features. The distribution field descriptor (DF) can express the uncertainty of the tracked target and preserve the spatial structure of the target while capturing the rich information contained in a histogram, resolving ambiguity; it is a combination of template-based and histogram-based descriptors. On the other hand, because a DF divides the target region into multiple layers by pixel gray value, it is sensitive to illumination changes, whereas the skeleton reflects the global geometric structure of the target, is unaffected by gray values, and is thus robust to illumination changes. The two are therefore combined: in the first frame, the intersection of the skeleton-extracted fuselage and wing is used as the initial track point, a 60×60 neighborhood centred on this point is chosen, and a DF target model is built in it. The distribution field descriptor is constructed as follows:
(6.1) A distribution field (DF) is composed of the probability distributions of a set of features. The image I is first expanded into a DF using the Kronecker delta function:
d(i, j, k) = 1 if I(i, j) = k, and 0 otherwise
where i, j are pixel coordinates in the image and k is the value of the image feature; different values of k index different layers. For a gray image of size m × n (m and n are the numbers of rows and columns of the image), the image feature is the gray value, k ranges over 0–255, and a three-dimensional distribution field of size m × n × b is produced, where b is the number of possible gray values. If the image feature is high-dimensional, such as the image gradient, each layer of the DF is a two-dimensional distribution, and the complete DF descriptor has four dimensions. When the image feature is the gray value, expanding the image with the Kronecker delta loses none of the information contained in the source image, as shown in Fig. 4(b).
(6.2) The expanded DF is smoothed with Gaussian filtering, which reduces the sensitivity of the target model to illumination changes and background noise. A three-dimensional distribution field can be smoothed spatially by formula (21):
f_s(k) = f(k) * h_σs(i, j) (21)
where f_s(k) is the smoothed distribution field, f(k) is the distribution field of layer k, h_σs is a two-dimensional Gaussian kernel with standard deviation σ_s, and * denotes convolution. Ordinary image smoothing loses spatial information and corrupts the source image; smoothing each layer of a distribution field with a Gaussian kernel, by contrast, loses no pixel information — it only makes the pixel positions uncertain, as shown in Fig. 4(c). Just as this smoothing is carried out within each layer, the whole feature space can likewise be smoothed; the DF obtained after smoothing can describe sub-pixel motion and the effects of shadow and illumination changes on the target model. The whole DF is smoothed over the feature space by formula (22):
f_ss(i, j) = f_s(i, j) * h_σf(k) (22)
where f_s(i, j) is the distribution field at image position (i, j) after smoothing by formula (21), f_ss(i, j) is the result of the feature-space smoothing, and h_σf is a one-dimensional Gaussian kernel with standard deviation σ_f.
Step 7, when the next frame arrives, estimate the starting position of the tracking box in the current frame from the target location in the previous frame; then search the neighborhood around this position for the minimum Euclidean distance to the previous frame's distribution field image, move the tracking box to the position of the minimum, and continue searching until no smaller point can be found; this position is the final tracking-box position in the current frame, and the target model is updated at the same time.
(7.1) When comparing the object model with a candidate model, the invention uses the L1 distance as the measure. Suppose two images I1 and I2 have distribution fields f1 and f2 respectively; their L1 distance is:
L1(f1, f2) = Σ_{i,j,k} |f1(i, j, k) - f2(i, j, k)| (23)
where f1(i, j, k) and f2(i, j, k) are the k-th layers at position (i, j) of the distribution fields of I1 and I2 after smoothing by formula (21), and L1(f1, f2) is the L1 distance between f1 and f2.
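Formula (23) reduces to an elementwise absolute difference summed over all positions and bins; a minimal numpy sketch:

```python
import numpy as np

def l1_distance(f1, f2):
    """L1 distance between two distribution fields (formula (23)):
    the sum over all positions (i, j) and bins k of |f1 - f2|."""
    return np.abs(np.asarray(f1) - np.asarray(f2)).sum()
```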
(7.2) Object model update. During tracking, factors such as changes in motion pose and ambient illumination cause the model to change, so an effective model update is required to keep the tracking robust. The update rule is:
f_{t+1}(i, j, k) = α*f_t(i, j, k) + (1 - α)*f_{t-1}(i, j, k) (24)
where f_{t+1}(i, j, k) is the updated object model, f_t(i, j, k) is the object model of frame t, f_{t-1}(i, j, k) is the object model of frame t-1, and α is the update factor, taken as 0.95 in this invention.
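The update of formula (24) is a fixed-coefficient linear blend of the two most recent models; a minimal sketch (α = 0.95 as stated above):

```python
import numpy as np

def update_model(f_t, f_prev, alpha=0.95):
    """Linear model update (formula (24)):
    f_{t+1} = alpha * f_t + (1 - alpha) * f_{t-1}."""
    return alpha * np.asarray(f_t) + (1.0 - alpha) * np.asarray(f_prev)
```

A large α keeps the model close to the current frame while the small (1 - α) term retains a memory of the previous model, damping sudden appearance changes.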
To verify the accuracy of the method, the experiments use a model-airplane image sequence of 1116 frames; the tracking results captured at frames 288, 431, 667 and 976 are shown in Figs. 5, 6, 7 and 8 respectively. In each figure, the grey rectangle is the tracking box, and the grey cross at its center marks the finally extracted trace point. As the figures show, when the target undergoes pose changes (yaw, pitch and roll, as in Figs. 6 and 7), illumination changes or blurring (Fig. 7), the method combining skeleton feature points with distribution field descriptors still obtains accurate trace points and achieves stable tracking of the maneuvering extended target.
For a quantitative analysis, the model-airplane sequence is processed with the plain distribution field method (DFs for short), the method combining the skeleton with local patches (skeleton_patch for short), and the method combining skeleton feature points with distribution field descriptors (skeleton_DFs for short); the standard deviations of the inter-frame error along the X and Y directions obtained by each method are shown in Table 1. As the table shows, the method combining skeleton feature points with distribution field descriptors obtains the smallest standard deviations, indicating that the proposed method is the most stable.
Table 1  Standard deviation of the inter-frame error along the X and Y directions obtained by the different methods
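The stability measure reported in Table 1 — the standard deviation of the inter-frame error along X and Y — can be computed from a sequence of tracked points roughly as follows (a sketch; the exact error definition used in the experiments is not reproduced here, so the frame-to-frame difference is an assumption):

```python
import numpy as np

def interframe_error_std(track_points):
    """Given per-frame trace points as an (N, 2) array of (x, y),
    return the standard deviation of the frame-to-frame differences
    along X and Y, the stability measure of Table 1."""
    diffs = np.diff(np.asarray(track_points, dtype=float), axis=0)
    return diffs.std(axis=0)  # (std_x, std_y)
```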
Parts of the invention not described in detail belong to techniques well known to those skilled in the art.
Those of ordinary skill in the art will appreciate that the above embodiments merely illustrate the invention and are not to be taken as limiting it; any changes or modifications to the above embodiments that remain within the spirit of the invention fall within the scope of the claims of the invention.
Claims (4)
1. An extended target tracking method combining skeleton feature points and distribution field descriptors, characterized by comprising the following steps:
Step 1, image preprocessing: process the image to be processed with Gaussian smoothing filtering to remove the influence of noise, obtaining a filtered, smoothed image;
Step 2, segment the smoothed image obtained in Step 1 using the fuzzy C-means clustering algorithm FCM (Fuzzy C-Means) to obtain a binary image;
Step 3, process the binary image obtained in Step 2 with the Hough transform, detecting the straight line of the part of the aircraft with an obvious linear feature as the axis of that part, namely the wing axis;
Step 4, process the image obtained in Step 2 with a skeleton extraction method; after extracting the skeleton points of the aircraft, select some skeleton feature points and fit a straight line through them, the fitted line being the axis of that part of the aircraft, namely the fuselage axis;
Step 5, from the axis equations obtained in Steps 3 and 4, compute the intersection of the lines containing the fuselage axis and the wing axis, and take this intersection as the coarse trace point;
Step 6, select a certain neighborhood around the coarse trace point obtained in Step 5, compute the distribution field descriptor over this region, then smooth the distribution field image at different scales to obtain the object model;
Step 7, when the next frame arrives, search the current frame for the best position of the tracking box according to the object model obtained from the previous frame, and take the center of the tracking box as the best trace point of the current frame.
2. The extended target tracking method combining skeleton feature points and distribution field descriptors according to claim 1, characterized in that: in Step 5, the process of computing the intersection of the lines containing the fuselage axis and the wing axis is:
The equations of the lines containing the fuselage axis and the wing axis obtained in Steps 3 and 4 are respectively:
y1=k1*x1+b1 (1)
y2=k2*x2+b2 (2)
where k1 and b1 are the slope and intercept of the line containing the fuselage axis, and k2 and b2 are the slope and intercept of the line containing the wing axis; solving this system of linear equations gives the intersection coordinates:
x=(b2-b1)/(k1-k2), y=k1*x+b1 (3)
where (x, y) is the intersection of the lines containing the wing axis and the fuselage axis.
3. The extended target tracking method combining skeleton feature points and distribution field descriptors according to claim 1, characterized in that: the certain neighborhood in Step 6 is a region of size 60*60.
4. The extended target tracking method combining skeleton feature points and distribution field descriptors according to claim 1, characterized in that: the detailed process in Step 7 of searching the current frame for the best position of the tracking box according to the object model obtained from the previous frame, when the next frame arrives, and taking the center of the tracking box as the best trace point of the current frame, is:
(41) when the next frame arrives, estimate the starting point of the tracking box in the current frame from the previous frame's target position; then search the neighborhood around this starting point for the minimum Euclidean distance between the distribution field images of the current and previous frames, take the position with the minimum distance as the new starting point of the tracking box in the current frame, and continue searching until no smaller distance can be found; this position is the starting point of the final tracking box of the current frame;
(42) update the object model.
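The intersection computation of claim 2 (formulas (1)–(3)) amounts to solving two linear equations; a minimal sketch:

```python
def axis_intersection(k1, b1, k2, b2):
    """Intersection of the two axis lines y = k1*x + b1 and y = k2*x + b2
    (formulas (1)-(3)); undefined when the lines are parallel (k1 == k2)."""
    if k1 == k2:
        raise ValueError("parallel axes have no unique intersection")
    x = (b2 - b1) / (k1 - k2)
    y = k1 * x + b1
    return x, y
```

This point, the crossing of the fuselage-axis and wing-axis lines, serves as the coarse trace point of Step 5.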
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410115395.XA CN103854290A (en) | 2014-03-25 | 2014-03-25 | Extended target tracking method based on combination of skeleton characteristic points and distribution field descriptors |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103854290A true CN103854290A (en) | 2014-06-11 |
Family
ID=50861902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410115395.XA Pending CN103854290A (en) | 2014-03-25 | 2014-03-25 | Extended target tracking method based on combination of skeleton characteristic points and distribution field descriptors |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103854290A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104077774A (en) * | 2014-06-28 | 2014-10-01 | 中国科学院光电技术研究所 | Extended target tracking method and device combined with framework and generalized Hough transformation |
CN104881630A (en) * | 2015-03-31 | 2015-09-02 | 浙江工商大学 | Vehicle identification method based on window segmentation and fuzzy characteristics |
CN108257155A (en) * | 2018-01-17 | 2018-07-06 | 中国科学院光电技术研究所 | A kind of extension target tenacious tracking point extracting method based on part and Global-Coupling |
CN109740537A (en) * | 2019-01-03 | 2019-05-10 | 广州广电银通金融电子科技有限公司 | The accurate mask method and system of pedestrian image attribute in crowd's video image |
CN111027110A (en) * | 2019-11-27 | 2020-04-17 | 中国科学院光电技术研究所 | Comprehensive optimization method for topology and shape and size of continuum structure |
WO2021007744A1 (en) * | 2019-07-15 | 2021-01-21 | 广东工业大学 | Kernel fuzzy c-means fast clustering algorithm with integrated spatial constraints |
CN113048884A (en) * | 2021-03-17 | 2021-06-29 | 西安工业大学 | Extended target tracking experiment platform and experiment method thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103617328A (en) * | 2013-12-08 | 2014-03-05 | 中国科学院光电技术研究所 | Airplane three-dimensional attitude computation method |
CN103632381A (en) * | 2013-12-08 | 2014-03-12 | 中国科学院光电技术研究所 | Method for tracking extended targets by means of extracting feature points by aid of frameworks |
Non-Patent Citations (1)
Title |
---|
S. Laura et al.: "Distribution fields for tracking", Computer Vision and Pattern Recognition (CVPR), 31 December 2012 (2012-12-31), pages 4 *
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20140611 |