CN103632381A - Extended target tracking method for extracting feature points by using skeleton - Google Patents
Abstract
An extended target tracking method extracts feature points using the skeleton. The image to be processed is first pre-processed with Gaussian smoothing filtering to suppress the influence of noise on the subsequent steps. The smoothed image is then segmented with the Fuzzy C-Means clustering algorithm (FCM) to obtain a binary image. A Hough transform is applied to the segmented binary target image to detect the parts of the target with obvious linear features, while the skeleton is extracted for the parts whose linear features are not obvious; the feature points on the skeleton are fitted with a straight line to obtain the target axis. The intersection of the detected straight line with this axis is taken as the final tracking point, which enables stable tracking under large attitude changes.
Description
Technical field
The present invention relates to a maneuvering extended target tracking method, and in particular to a method that tracks maneuvering extended targets by extracting feature points with the Hough transform and skeleton extraction. It is mainly used in image processing and computer vision, and belongs to the field of target detection and tracking in photoelectric measurement systems.
Background art
In a photoelectric measurement system, the field of view of the detector is kept small in order to improve tracking accuracy, while the target size is comparatively large, so the target appears in the detector in extended form. In long-range imaging, degrading factors such as atmospheric turbulence, system jitter and the aberrations of the optical system make the target image blurred and of poor contrast. In addition, the target lacks texture and other distinctive information that could characterize and identify it. The target attitude also changes markedly, and as the attitude varies the tracking point drifts accordingly. Choosing a stable feature point for lock-on tracking is therefore a major difficulty faced by extended target tracking.
At present, the common algorithms for extended targets are matching methods, including matching on grey level, features and other cues. Because of target motion, the size, shape and attitude of the target may change; together with interference from background and illumination and the limited precision of image measurement, matching-based tracking cannot find an absolutely best match position, which causes the tracking point to drift. Since the target carries no distinctive characterizing information and its attitude changes greatly, traditional grey-level-based tracking easily loses the target under large attitude changes and cannot meet practical requirements. New methods are therefore urgently needed to meet the engineering demand for robust tracking.
Summary of the invention
The problem solved by the present invention: to overcome the deficiencies of the prior art, an extended target tracking method using the Hough transform and skeleton extraction of feature points is provided, which abstracts the essential geometric structure information of the maneuvering extended target and achieves stable tracking of the target under large attitude changes.
To this end, the technical solution of the present invention is an extended target tracking method that extracts feature points using the skeleton, comprising the following steps:
Step 1, image pre-processing: process the image with Gaussian smoothing filtering to remove the influence of noise and obtain a filtered, smoothed image;
Step 2, segment the smoothed image obtained in step 1 with the Fuzzy C-Means clustering algorithm (FCM) to obtain a binary image;
Step 3, process the binary image obtained in step 2 with the Hough transform to detect the straight line of the part of the aircraft with obvious linear features, which serves as the axis of that part;
Step 4, process the image obtained in step 2 with skeleton extraction; after extracting the skeleton points of the aircraft, select feature skeleton points and fit a straight line through them, which serves as the axis of the corresponding part of the aircraft;
Step 5, from the axis equations obtained in steps 3 and 4, compute the intersection of the two axis lines and take it as the coarse tracking point;
Step 6, when the next frame arrives, correct the tracking point obtained in step 5 using inter-frame continuity information, and take the corrected feature point as the final tracking point of the current frame.
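Step 1's Gaussian pre-smoothing can be sketched as follows. This is a minimal pure-Python separable convolution for illustration only; the sigma and kernel radius are illustrative choices, not values fixed by the patent, and edge pixels are handled by clamping (replication).

```python
# Sketch of step 1: Gaussian pre-smoothing of the input image.
# sigma and radius are illustrative, not values from the patent.
import math

def gaussian_kernel(sigma=1.0, radius=2):
    """1-D Gaussian kernel, normalised to sum to 1."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(image, sigma=1.0, radius=2):
    """Separable Gaussian blur of a 2-D list of grey values (borders replicated)."""
    k = gaussian_kernel(sigma, radius)
    h, w = len(image), len(image[0])

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    # horizontal pass
    tmp = [[sum(image[y][clamp(x + i, 0, w - 1)] * k[i + radius]
                for i in range(-radius, radius + 1)) for x in range(w)]
           for y in range(h)]
    # vertical pass
    return [[sum(tmp[clamp(y + i, 0, h - 1)][x] * k[i + radius]
                 for i in range(-radius, radius + 1)) for x in range(w)]
            for y in range(h)]
```

Smoothing an isolated bright pixel spreads its energy over the neighbourhood while keeping the peak at the same location, which is exactly the noise-suppression behaviour the pre-processing step relies on.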
In step 2, the smoothed image obtained in step 1 is segmented with the Fuzzy C-Means clustering algorithm (FCM) to obtain a binary image as follows:
Step (21), initialization: set the number of cluster classes c (c = 2 in the present invention), the iteration stop threshold ε, the initial fuzzy partition matrix U^(0), the iteration count l = 0, and the fuzzy weighting exponent m (m = 2 in the present invention);
Step (22), substitute U^(l) into formula (9) to compute the cluster centre matrix V^(l):

v_i = ( Σ_{k=1}^{n} u_ik^m · x_k ) / ( Σ_{k=1}^{n} u_ik^m ), i = 1, …, c    (9)

where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of cluster classes, u_ik is the element in row i and column k of the fuzzy partition matrix U^(l) at the l-th iteration, x_k is the k-th pixel value of the image to be clustered, and v_i is the i-th cluster centre value of the cluster centre matrix V^(l) at the l-th iteration;
Step (23), use V^(l) to update U^(l) according to formula (10), obtaining the new fuzzy partition matrix U^(l+1):

u_ik = 1 / Σ_{j=1}^{c} ( d_ik / d_jk )^{2/(m−1)}    (10)

where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster centre, and similarly d_jk is the Euclidean distance between the k-th element and the j-th cluster centre;
Step (24), if ||U^(l) − U^(l+1)|| < ε, stop the iteration; otherwise set l = l + 1 and return to step (22);
Step (25), for each pixel of the image to be clustered, compute the Euclidean distance to the cluster centre values obtained in steps (21)-(24); pixels nearest to the target cluster centre are set to 1, the others to 0, which yields the segmented binary image.
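The iteration in steps (21)-(25) can be sketched in Python. This is a minimal illustration for a flat list of grey values with c = 2 and m = 2 as the text specifies, not the patent's implementation; the function name, random initialization and seed are assumptions made for the example.

```python
# Minimal sketch of the FCM segmentation in steps (21)-(25), for c=2
# clusters and weighting exponent m=2, on a flat list of grey values.
import random

def fcm_binarize(pixels, c=2, m=2, eps=1e-5, max_iter=100, seed=0):
    rng = random.Random(seed)
    n = len(pixels)
    # initial fuzzy partition matrix U[i][k], columns normalised over classes
    U = [[rng.random() for _ in range(n)] for _ in range(c)]
    for k in range(n):
        s = sum(U[i][k] for i in range(c))
        for i in range(c):
            U[i][k] /= s
    for _ in range(max_iter):
        # formula (9): cluster centres from the membership-weighted pixel values
        V = [sum(U[i][k] ** m * pixels[k] for k in range(n)) /
             sum(U[i][k] ** m for k in range(n)) for i in range(c)]
        # formula (10): membership update from the distances to the centres
        newU = [[0.0] * n for _ in range(c)]
        for k in range(n):
            d = [abs(pixels[k] - V[i]) + 1e-12 for i in range(c)]  # guard d=0
            for i in range(c):
                newU[i][k] = 1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
        diff = max(abs(newU[i][k] - U[i][k]) for i in range(c) for k in range(n))
        U = newU
        if diff < eps:          # step (24): convergence test
            break
    # step (25): assign each pixel to its nearest centre; brighter class -> 1
    bright = V.index(max(V))
    return [1 if min(range(c), key=lambda i: abs(p - V[i])) == bright else 0
            for p in pixels]
```

On two well-separated grey populations the memberships converge in a few iterations and the binarization cleanly separates target from background.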
In step 3, the binary image obtained in step 2 is processed with the Hough transform to detect the straight line of the part of the aircraft with obvious linear features as the axis of that part, as follows:
Step (31), determine the size of the target image obtained in step 2, quantize the parameter space according to the possible ranges of the parameters, build an accumulator array A(ρ, θ) according to the quantization, and initialize it to 0;
Step (32), for each given point in the XY (image) space, sweep θ over all possible values, compute ρ with formula (11), and accumulate A according to the values of ρ and θ: A(ρ, θ) = A(ρ, θ) + 1;

ρ = x·cos θ + y·sin θ    (11)

where ρ and θ are the two parameters (distance and angle) of the parameter space and (x, y) are the point coordinates in image space;
Step (33), from the ρ and θ corresponding to the maximum of the accumulated array A, draw the straight line in the XY space by formula (11) (i.e. the aircraft axis); the maximum of A equals the number of given points on that straight line.
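Steps (31)-(33) can be sketched as a small ρ-θ accumulator over the foreground points of the binary image. This is an illustrative sketch, not the patent's implementation; the quantization steps (180 angle bins, unit ρ step) and the sparse-dictionary accumulator are assumptions made for the example.

```python
# Sketch of steps (31)-(33): a rho-theta Hough accumulator over the
# foreground pixels of a binary image; quantisation is illustrative.
import math

def hough_peak(points, n_theta=180, rho_step=1.0):
    """Return (rho, theta, votes) of the accumulator maximum for (x, y) points."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)  # formula (11)
            cell = (round(rho / rho_step), t)                # quantized (rho, theta)
            acc[cell] = acc.get(cell, 0) + 1                 # A(rho,theta) += 1
    (r_i, t_i), votes = max(acc.items(), key=lambda kv: kv[1])
    return r_i * rho_step, math.pi * t_i / n_theta, votes
```

For points lying on a horizontal line y = 10, the peak cell sits near θ = π/2 and ρ = 10, and its vote count equals the number of points on the line, as step (33) states.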
In step 4, the image obtained in step 2 is processed with skeleton extraction; after the skeleton points of the aircraft are extracted, feature skeleton points are selected and fitted with a straight line, which serves as the axis of the corresponding part of the aircraft. The present invention extracts the skeleton with an iterative thinning algorithm that successively deletes boundary points, as follows:
Assume that target points are labelled 1 and background points 0. A boundary point is defined as a point that is itself labelled 1 and has at least one point labelled 0 in its 8-connected neighbourhood. The algorithm considers the 8-neighbourhood centred on a boundary point; the centre point is denoted p1, and the 8 points of its neighbourhood, clockwise around the centre, are denoted p2, p3, …, p9, where p2 is directly above p1.
The algorithm applies two operations to the boundary points:
(41) mark every boundary point that simultaneously satisfies the following conditions:
(411) 2 ≤ N(p1) ≤ 6;
(412) S(p1) = 1;
(413) p2·p4·p6 = 0;
(414) p4·p6·p8 = 0;
where N(p1) is the number of non-zero neighbours of p1 and S(p1) is the number of 0 → 1 transitions in the ordered sequence p2, p3, …, p9, p2. After all boundary points have been checked, all marked points are deleted.
(42) mark every boundary point that simultaneously satisfies the following conditions:
(421) 1 ≤ N(p1) ≤ 6;
(422) S(p1) = 1;
(423) p2·p4·p8 = 0;
(424) p2·p6·p8 = 0;
The two operations above form one iteration. The algorithm iterates until no point satisfies the marking conditions any more; the remaining points form the skeleton. After the skeleton points of the target have been extracted, the junction points of the skeleton are taken as feature points and a straight line is obtained by line fitting, which is taken as the fuselage axis of the aircraft.
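The two-pass deletion iteration above can be sketched as follows. Note that this sketch follows the classic Zhang-Suen formulation, which uses 2 ≤ N(p1) ≤ 6 in *both* passes (the second-pass lower bound of 1 quoted in the text would let isolated line ends erode away); everything else matches conditions (413)/(414) and (423)/(424).

```python
# Runnable sketch of the two-pass thinning iteration described above,
# following the classic Zhang-Suen formulation (both passes use
# 2 <= N(p1) <= 6).  `img` is a list of lists of 0/1 values whose
# outermost border is assumed to be all zeros.

def thin(img):
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # p2..p9, clockwise from the point directly above p1
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    def step(first_pass):
        marked = []
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if img[y][x] != 1:
                    continue
                p = neighbours(y, x)
                n = sum(p)                                   # N(p1)
                s = sum(p[i] == 0 and p[(i + 1) % 8] == 1    # S(p1): 0->1
                        for i in range(8))                   # transitions
                if first_pass:
                    ok = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0  # (413),(414)
                else:
                    ok = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0  # (423),(424)
                if 2 <= n <= 6 and s == 1 and ok:
                    marked.append((y, x))
        for y, x in marked:       # delete marked points only after the full scan
            img[y][x] = 0
        return bool(marked)

    while step(True) | step(False):   # one iteration = both passes; repeat
        pass                          # until no point is marked any more
    return img
```

Applied to a solid rectangular blob, the iteration peels boundary layers until only a thin medial set of skeleton points remains.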
In step 5, the axis equations obtained in steps 3 and 4 are used to compute the intersection of the two axis lines as follows:
The fuselage and wing axis lines obtained in steps 3 and 4 are written as:

y1 = k1·x1 + b1    (12)
y2 = k2·x2 + b2    (13)

where k1, b1 are the slope and intercept of the fuselage axis line and k2, b2 are the slope and intercept of the wing axis line. Solving this system of linear equations gives the intersection coordinates:

x = (b2 − b1) / (k1 − k2),  y = k1·x + b1    (14)

where (x, y) are the coordinates of the intersection of the wing and fuselage axis lines.
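Step 5 reduces to solving the two line equations; a minimal sketch (function name assumed, with an explicit guard for the degenerate parallel case the text does not discuss):

```python
# Sketch of step 5: intersect the fuselage axis y = k1*x + b1 with the
# wing axis y = k2*x + b2 to obtain the coarse tracking point.

def axis_intersection(k1, b1, k2, b2):
    """Solve lines (12)-(13); returns None when the axes are parallel."""
    if k1 == k2:
        return None                    # parallel lines: no unique intersection
    x = (b2 - b1) / (k1 - k2)
    y = k1 * x + b1
    return x, y
```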
In step 6, when the next frame arrives, the intersection obtained in step 5 is corrected with inter-frame continuity information, and the corrected feature point is taken as the final tracking point, as follows:
(61) when the next frame arrives, use steps 1 to 5 to compute the intersection of the wing and fuselage axis lines in the current frame;
(62) compute the Euclidean distance between the intersection obtained in (61) and the tracking point of the previous frame; if the distance is smaller than the threshold d = 10, take the intersection obtained in (61) as the tracking point of the current frame; otherwise update the tracking point of the current frame according to formula (16):

P_n = α·P_l + (1 − α)·P    (16)

where P_n = (x_n, y_n) is the updated tracking point, P_l = (x_l, y_l) is the tracking point of the previous frame, P = (x, y) is the coarse tracking point computed in the current frame, and α is the update factor, set to 0.8 in the present invention.
Compared with the prior art, the beneficial effects of the present invention are:
(1) The present invention segments the target from the background of the smoothed image with Fuzzy C-Means clustering (FCM). Compared with traditional grey-level threshold segmentation, FCM approaches image segmentation from the viewpoint of fuzzy clustering and takes the complex and variable nature of natural images into account, so the obtained segmentation corresponds better to the true situation.
(2) The present invention applies the Hough transform to the whole segmented target image without first performing edge detection on the target, which effectively preserves the correlation between target pixels and strengthens the noise immunity of the algorithm.
(3) The present invention extracts the structural feature points of the aircraft target from the segmented image with skeleton extraction and then fits a straight line through the feature points, which solves the problem that the Hough transform cannot detect the axis of a part without obvious linear features.
(4) By combining the Hough transform and skeleton extraction, the present invention can accurately extract the axes of the fuselage and wings of the aircraft, laying the foundation for the subsequent accurate extraction of feature points.
(5) The present invention provides a maneuvering extended target tracking method using the Hough transform and skeleton extraction of feature points; compared with methods based on grey-value matching, the method extracts the geometric structure information of the extended target and can handle large attitude changes.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 shows the wing straight line detected by the present invention with the Hough transform;
Fig. 3 shows the fuselage and wing axes detected by the present invention with the combination of Hough transform and skeleton extraction;
Fig. 4 shows the tracking and localization result of the present invention on the 1st frame of the test sequence;
Fig. 5 shows the tracking and localization result of the present invention on the 61st frame of the test sequence;
Fig. 6 shows the tracking and localization result of the present invention on the 150th frame of the test sequence;
Fig. 7 shows the tracking and localization result of the present invention on the 860th frame of the test sequence.
Detailed description of the embodiments
Embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention; detailed implementations and concrete operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.
The present invention is an extended target tracking method that extracts feature points using the Hough transform and skeleton extraction; the input is a model-aircraft image sequence from a single-station optical measurement setup.
As shown in Fig. 1, the present invention provides an extended target tracking method that extracts feature points using the Hough transform and skeleton extraction, comprising the following steps:
Step 1, image pre-processing: process the image to be processed with Gaussian smoothing filtering to remove the influence of noise and obtain a filtered, smoothed image.
Step 2, segment the smoothed image obtained in step 1 with the Fuzzy C-Means clustering algorithm (FCM) to obtain a binary image. In essence, image segmentation is a process of classifying pixels according to some attribute. Because natural images are complex and variable, it is uncertain to which cluster many pixels belong, so it is more reasonable to approach image segmentation from the viewpoint of fuzzy clustering. The Fuzzy C-Means clustering algorithm (FCM) developed from the hard C-means algorithm (HCM); in essence it is a nonlinear iterative optimization method based on an objective function, which uses a weighted similarity measure between each pixel of the image and each cluster centre. The task of the FCM algorithm is to select, by iteration, a reasonable fuzzy membership matrix and cluster centres that minimize the objective function, thereby obtaining the optimal segmentation.
The Fuzzy C-Means algorithm partitions the set by iterative optimization of the objective function, and it can express the degree to which each image pixel belongs to the different classes. Let n be the number of pixels to be clustered, c the number of classes (c = 2 in the present invention), and m the fuzzy weighting exponent (m = 2 in the present invention), which controls the relative weight of the membership degrees. The objective function is the weighted sum of squared distances of every pixel in the image to the c cluster centres and can be written as:
J_m(U, V) = Σ_{i=1}^{c} Σ_{k=1}^{n} u_ik^m · d_ik^2    (17)

where u_ik is the membership degree of the k-th pixel to the i-th class, d_ik is the distance from the k-th pixel to the i-th class, U is the fuzzy partition matrix, V is the set of cluster centres, and J_m(U, V) is the weighted sum of squared distances of every pixel to the c cluster centres for fuzzy weighting exponent m.
The clustering criterion is to find the best pair (U, V) that minimizes J_m(U, V). The minimization of J_m can be realized by the following iterative algorithm:
(2.1) initialization: set the number of cluster classes c (c = 2 in the present invention), the iteration stop threshold ε, the initial fuzzy partition matrix U^(0), the iteration count l = 0, and the fuzzy weighting exponent m (m = 2 in the present invention);
(2.2) substitute U^(l) into formula (18) to compute the cluster centre matrix V^(l):

v_i = ( Σ_{k=1}^{n} u_ik^m · x_k ) / ( Σ_{k=1}^{n} u_ik^m ), i = 1, …, c    (18)

where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of cluster classes, u_ik is the element in row i and column k of the fuzzy partition matrix U^(l) at the l-th iteration, x_k is the k-th pixel value of the image to be clustered, and v_i is the i-th cluster centre value of the cluster centre matrix V^(l) at the l-th iteration;
(2.3) use V^(l) to update U^(l) according to formula (19), obtaining the new fuzzy partition matrix U^(l+1):

u_ik = 1 / Σ_{j=1}^{c} ( d_ik / d_jk )^{2/(m−1)}    (19)

where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster centre, and similarly d_jk is the Euclidean distance between the k-th element and the j-th cluster centre;
(2.4) if ||U^(l) − U^(l+1)|| < ε, stop the iteration; otherwise set l = l + 1 and return to step (2.2);
(2.5) for each pixel of the image to be clustered, compute the Euclidean distance to the cluster centre values obtained in steps (2.1)-(2.4); pixels nearest to the target cluster centre are set to 1, the others to 0, which yields the segmented binary image.
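The two update formulas used in the iteration are the standard alternating-minimisation solution of the FCM objective; a short derivation sketch (standard textbook material, not spelled out in the patent):

```latex
% Alternating minimisation of the FCM objective
%   J_m(U,V) = \sum_{i=1}^{c}\sum_{k=1}^{n} u_{ik}^{m}\, d_{ik}^{2},
% subject to \sum_{i=1}^{c} u_{ik} = 1 for every pixel k.
%
% Fixing U and setting \partial J_m / \partial v_i = 0 gives the centre update:
v_i \;=\; \frac{\sum_{k=1}^{n} u_{ik}^{m}\, x_k}{\sum_{k=1}^{n} u_{ik}^{m}}
%
% Fixing V and minimising with a Lagrange multiplier for the row-sum
% constraint gives the membership update:
u_{ik} \;=\; \Biggl[\, \sum_{j=1}^{c}
      \Bigl(\frac{d_{ik}}{d_{jk}}\Bigr)^{2/(m-1)} \Biggr]^{-1}
```

Each step can only decrease J_m, which is why the alternating iteration converges to a (local) minimum of the objective.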
Experiments show that segmenting images with fuzzy C-means clustering gives better results than threshold segmentation, especially for natural images. This is because natural images are complex and many-layered, and there is no definite boundary determining to which class each pixel belongs. Fuzzy C-means expresses the degree to which each pixel belongs to each class in the form of a probability, instead of assigning each pixel definitively to one class as hard C-means (HCM) does; fuzzy C-means segmentation therefore better reflects the complex and variable nature of natural images.
Step 3, process the binary image obtained in step 2 with the Hough transform to detect the straight line of the part of the aircraft with obvious linear features as the axis of that part. The principle of the Hough transform is to convert the straight-line detection problem in image space into a maximum-finding problem in a parameter accumulator space; the parameters of the cell with the largest accumulated value are the parameters of the sought straight line. Since the line features of the aircraft structure are obvious, the Hough transform is used to detect the axes of the aircraft. Straight-line detection with the Hough transform proceeds as follows:
(3.1) determine the size of the target image obtained in step 2, quantize the parameter space according to the possible ranges of the parameters, build an accumulator array A(ρ, θ) according to the quantization, and initialize it to 0;
(3.2) for each given point in the XY space, sweep θ over all possible values, compute ρ with formula (20), and accumulate A according to the values of ρ and θ: A(ρ, θ) = A(ρ, θ) + 1;

ρ = x·cos θ + y·sin θ    (20)

where ρ and θ are the two parameters (distance and angle) of the parameter space and (x, y) are the point coordinates in image space;
(3.3) from the ρ and θ corresponding to the maximum of the accumulated array A, draw the straight line in the XY space by formula (20) (i.e. the aircraft axis); the maximum of A equals the number of given points on that straight line.
The binary target image obtained in step 2 is Hough-transformed with the above method; the detection result is shown in Fig. 2, where the black line is the detected wing axis. As can be seen from the figure, for the aircraft wing, whose linear features are obvious, the Hough transform detects the axis well, while for the fuselage, whose line features are not obvious, the Hough transform fails. For the subsequent computation of attitude angles, both the wing and the fuselage axes must be detected. On the other hand, the skeleton carries the same topology and shape information as the original shape and describes the object effectively; it is a geometric feature of excellent performance. Likewise, the axis is also a geometric structure feature of the object. Therefore the Hough transform is combined with skeleton extraction: the Hough transform detects the straight line of the part with obvious linear features, and the skeleton points are used to fit a straight line for the part whose linear features are not obvious, thereby obtaining all axes of the object.
Step 4, process the image obtained in step 2 with skeleton extraction; after extracting the skeleton points of the aircraft, select feature skeleton points and fit a straight line, which serves as the axis of the corresponding part of the aircraft. The skeleton carries the same topology and shape information as the original shape and describes the object effectively; it is a geometric feature of excellent performance. Skeleton extraction can be realized in several ways; the medial axis transform (MAT) is a relatively effective technique, but it requires computing the distance from every boundary point to every interior point of the region, which is computationally very expensive. The present invention therefore extracts the skeleton with an iterative thinning algorithm that successively deletes boundary points.
Assume that target points are labelled 1 and background points 0. A boundary point is defined as a point that is itself labelled 1 and has at least one point labelled 0 in its 8-connected neighbourhood. The algorithm considers the 8-neighbourhood centred on a boundary point; the centre point is denoted p1, and the 8 points of its neighbourhood, clockwise around the centre, are denoted p2, p3, …, p9, where p2 is directly above p1.
The algorithm applies two operations to the boundary points:
(4.1) mark every boundary point that simultaneously satisfies the following conditions:
(4.1.1) 2 ≤ N(p1) ≤ 6;
(4.1.2) S(p1) = 1;
(4.1.3) p2·p4·p6 = 0;
(4.1.4) p4·p6·p8 = 0;
where N(p1) is the number of non-zero neighbours of p1 and S(p1) is the number of 0 → 1 transitions in the ordered sequence p2, p3, …, p9, p2. After all boundary points have been checked, all marked points are deleted.
(4.2) mark every boundary point that simultaneously satisfies the following conditions:
(4.2.1) 1 ≤ N(p1) ≤ 6;
(4.2.2) S(p1) = 1;
(4.2.3) p2·p4·p8 = 0;
(4.2.4) p2·p6·p8 = 0;
The two operations above form one iteration. The algorithm iterates until no point satisfies the marking conditions any more; the remaining points form the skeleton. After the skeleton points of the target have been extracted, the junction points of the skeleton are taken as feature points and a straight line is obtained by line fitting, which is taken as the fuselage axis of the aircraft. Tests on the model-aircraft images give the axes shown in Fig. 3 when the Hough transform and skeleton extraction are combined; the black line on the fuselage in the figure is the axis obtained by skeleton extraction. It can be seen that combining the Hough transform with skeleton extraction accurately extracts the fuselage and wing axes of the aircraft, laying the foundation for the subsequent feature point extraction.
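The line fitting applied to the selected skeleton points can be sketched as an ordinary least-squares fit; this is an illustrative choice (the patent does not specify the fitting method), and the function name is assumed:

```python
# Sketch of fitting a straight line y = k*x + b through selected skeleton
# points (e.g. the fuselage axis), by ordinary least squares.

def fit_line(points):
    """Least-squares slope/intercept for a list of (x, y) skeleton points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    denom = n * sxx - sx * sx      # zero for a perfectly vertical axis
    k = (n * sxy - sx * sy) / denom
    b = (sy - k * sx) / n
    return k, b
```

A vertical axis makes the denominator vanish; a practical implementation would fit x as a function of y in that case, or use a total-least-squares fit.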
Step 5, using the axis equations obtained in steps 3 and 4, compute the intersection of the two axis lines and take it as the coarse tracking point, as follows:
The fuselage and wing axis lines obtained in steps 3 and 4 are written as:

y1 = k1·x1 + b1    (21)
y2 = k2·x2 + b2    (22)

where k1, b1 are the slope and intercept of the fuselage axis line and k2, b2 are the slope and intercept of the wing axis line. Solving this system of linear equations gives the intersection coordinates:

x = (b2 − b1) / (k1 − k2),  y = k1·x + b1    (23)

where (x, y) are the coordinates of the intersection of the wing and fuselage axis lines; this intersection is taken as the coarse tracking point. Experiments show that when the aircraft undergoes large attitude changes (such as pitch, yaw and roll), the fuselage and wings always remain visible, except when the nose points directly at the recording system, in which case only the wings are visible. Moreover, the characteristics of fuselage and wings are roughly the same for aircraft of different models. It is therefore feasible to use the intersection of fuselage and wings as a stable tracking point. Thus, after the axes of the fuselage and wings have been obtained, the geometric structure of the aircraft is used to derive the final feature point, which serves as the stable tracking point.
Step 6, when the next frame arrives, correct the intersection obtained in step 5 with inter-frame continuity information and take the corrected feature point as the final tracking point. Because of the limited precision of target segmentation and axis extraction, the intersection computed in step 5 may deviate. To further reduce the tracking point error, the continuity between consecutive frames is used to update the tracking point obtained in step 5, as follows:
(6.1) when the next frame arrives, use steps 1 to 5 to compute the intersection of the wing and fuselage axis lines in the current frame;
(6.2) compute the Euclidean distance between the intersection obtained in (6.1) and the tracking point of the previous frame; if the distance is smaller than the threshold d = 10, take the intersection obtained in (6.1) as the tracking point of the current frame; otherwise update the tracking point of the current frame according to formula (25):

P_n = α·P_l + (1 − α)·P    (25)

where P_n = (x_n, y_n) is the updated tracking point, P_l = (x_l, y_l) is the tracking point of the previous frame, P = (x, y) is the coarse tracking point computed in the current frame, and α is the update factor, set to 0.8 in the present invention. After this correction, the influence of noise and of the limited precision of target segmentation and axis extraction on the subsequent feature point extraction is further reduced, which guarantees the stability of tracking.
To verify the accuracy of the method, experiments were run on a model-aircraft image sequence of 1116 frames; the tracking results for the 1st, 61st, 150th and 860th frames are shown in Figs. 4, 5, 6 and 7 respectively. In the figures, the black lines are the extracted wing and fuselage axes, the grey rectangle is the tracking box, and the grey cross at the centre of the rectangle marks the final extracted tracking point. As can be seen, when the target undergoes attitude changes (yaw, pitch and roll, as in Figs. 6 and 7), or under illumination change and blur (Fig. 7), the combination of Hough transform and skeleton extraction accurately extracts the wing and fuselage axes and thus an accurate tracking point, achieving stable tracking of the maneuvering extended target.
Parts of the present invention not elaborated here belong to techniques well known to those skilled in the art.
Those of ordinary skill in the art will appreciate that the above embodiments merely illustrate the present invention and are not intended to limit it; changes and modifications of the above embodiments that remain within the spirit of the present invention all fall within the scope of the claims of the present invention.
Claims (6)
1. An extended target tracking method for extracting feature points by using a skeleton, characterized by comprising the following steps:
Step 1, image pre-processing: apply Gaussian smoothing filtering to the image to be processed to remove the influence of noise, obtaining a filtered smoothed image;
Step 2, segment the smoothed image obtained in step 1 using the fuzzy C-means clustering algorithm FCM (Fuzzy C-Means), obtaining a binary image;
Step 3, process the binary image obtained in step 2 using the Hough transform, detecting the straight line of the part of the aircraft with an obvious linear feature as the axis of that part;
Step 4, process the image obtained in step 2 by the method of skeleton extraction; after the skeleton points of the aircraft have been extracted, select some feature skeleton points and fit a straight line, the resulting line serving as the axis of that part of the aircraft;
Step 5, from the axis equations obtained in steps 3 and 4, compute the intersection of the lines containing the two axes and take it as the coarse track point;
Step 6, when the next frame arrives, correct the track point obtained in step 5 using inter-frame continuity information, and take the corrected feature point as the final track point of the current frame.
2. The extended target tracking method for extracting feature points by using a skeleton according to claim 1, characterized in that the segmentation, described in step 2, of the smoothed image obtained in step 1 using the fuzzy C-means clustering algorithm FCM (Fuzzy C-Means) is implemented as follows:
Step (21), initialization: set the number of cluster classes c, set the iteration stopping threshold ε, initialize the fuzzy partition matrix U^(0), set the iteration count l = 0 and the fuzzy weighting exponent m;
Step (22), substitute U^(l) into formula (1) and compute the cluster-centre matrix V^(l):
v_i = Σ(k=1..n) (u_ik)^m * x_k / Σ(k=1..n) (u_ik)^m (1)
where n is the number of pixels to be clustered, m is the fuzzy weighting exponent, c is the number of cluster classes, u_ik is the element in row i, column k of the fuzzy partition matrix U^(l) at the l-th iteration, x_k is the k-th pixel value of the image to be clustered, and v_i is the i-th cluster-centre value of the cluster-centre matrix V^(l) at the l-th iteration;
Step (23), use V^(l) to update U^(l), obtaining the new fuzzy partition matrix U^(l+1):
u_ik = 1 / Σ(j=1..c) (d_ik / d_jk)^(2/(m-1)) (2)
where d_ik is the Euclidean distance between the k-th element of the image to be clustered and the i-th cluster centre, and d_jk is the Euclidean distance between the k-th element of the image to be clustered and the j-th cluster centre;
Step (24), if ||U^(l) - U^(l+1)|| < ε, stop the iteration; otherwise set l = l + 1 and return to step (22);
Step (25), for each pixel of the image to be clustered, compute the Euclidean distance to the cluster-centre values obtained in steps (21)-(24); the pixels nearest to the target cluster centre are set to 1 and the rest to 0, thereby obtaining the segmented binary image.
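Steps (21)-(25) can be sketched as follows (a minimal NumPy sketch under the assumption of a grayscale image and the standard FCM update formulas; the function name fcm_segment, the random initialization and the choice of the brightest cluster as target are assumptions, not part of the claim):

```python
import numpy as np

def fcm_segment(image, c=2, m=2.0, eps=1e-4, max_iter=100):
    """Fuzzy C-means segmentation of a grayscale image into a binary mask,
    alternating cluster-centre and membership updates until the partition
    matrix changes by less than eps."""
    x = image.reshape(-1).astype(float)      # n pixel values to cluster
    rng = np.random.default_rng(0)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # initial fuzzy partition matrix U(0)
    for _ in range(max_iter):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)        # formula (1): cluster centres v_i
        d = np.abs(x[None, :] - v[:, None]) + 1e-12       # distances d_ik
        u_new = 1.0 / (d ** (2 / (m - 1)) *
                       np.sum(d ** (-2 / (m - 1)), axis=0))  # formula (2)
        if np.linalg.norm(u_new - u) < eps:  # step (24) stopping test
            u = u_new
            break
        u = u_new
    labels = np.argmin(d, axis=0)            # step (25): nearest centre per pixel
    target = np.argmax(v)                    # assumption: brighter cluster is target
    return (labels == target).astype(np.uint8).reshape(image.shape)
```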
3. The extended target tracking method for extracting feature points by using a skeleton according to claim 1, characterized in that the processing, described in step 3, of the binary image obtained in step 2 using the Hough transform, detecting the straight line of the part of the aircraft with an obvious linear feature as the axis of that part, is implemented as follows:
Step (31), determine the size of the target image obtained in step 2, quantize the parameter space according to the possible ranges of the parameters, and construct an accumulator array A(ρ, θ) according to the quantization result, initialized to 0;
Step (32), for each given point in the XY space, let θ run over all its possible values, compute ρ with formula (3), and accumulate A according to the values of ρ and θ: A(ρ, θ) = A(ρ, θ) + 1;
ρ = x cos θ + y sin θ (3)
where ρ and θ are the two parameters of the parameter space, the magnitude and the angle respectively, and (x, y) are the coordinates of a point in the image space;
Step (33), using the ρ and θ corresponding to the maximum of A after accumulation, draw the straight line in the XY space by formula (3), i.e. the aircraft axis; the maximum value in A represents the number of given points lying on this line.
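Steps (31)-(33) can be sketched as follows (an illustrative sketch; the 1-degree angular quantization, the ρ offset by the image diagonal, and the function name hough_line are assumptions):

```python
import numpy as np

def hough_line(binary, rho_step=1.0, theta_steps=180):
    """Steps (31)-(33): accumulate A(rho, theta) over all foreground points
    and return the (rho, theta, votes) of the strongest line
    rho = x*cos(theta) + y*sin(theta)."""
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))                  # bound on |rho|
    thetas = np.deg2rad(np.arange(theta_steps))          # quantized angles
    n_rho = int(2 * diag / rho_step) + 1
    A = np.zeros((n_rho, theta_steps), dtype=int)        # accumulator, init 0
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):                             # each given point in XY space
        rhos = x * np.cos(thetas) + y * np.sin(thetas)   # formula (3)
        idx = np.round((rhos + diag) / rho_step).astype(int)
        A[idx, np.arange(theta_steps)] += 1              # A(rho, theta) += 1
    r_idx, t_idx = np.unravel_index(np.argmax(A), A.shape)
    return r_idx * rho_step - diag, thetas[t_idx], A.max()
```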
4. The extended target tracking method for extracting feature points by using a skeleton according to claim 1, characterized in that the processing, described in step 4, of the image obtained in step 2 by the method of skeleton extraction, in which after the skeleton points of the aircraft are extracted some feature skeleton points are selected for straight-line fitting and the resulting line serves as the axis of that part of the aircraft, adopts an iterative thinning algorithm that successively removes boundary points to extract the skeleton, implemented as follows:
Suppose target points are labelled 1 and background points are labelled 0, and define a boundary point as a point that is itself labelled 1 and has at least one point labelled 0 in its 8-connected neighbourhood. The algorithm considers the 8-neighbourhood centred on a boundary point; denote the centre point p1, and denote the 8 points of its neighbourhood, clockwise around the centre, p2, p3, ..., p9, where p2 is above p1.
The boundary points are subjected to a two-step operation:
(41) mark every boundary point that simultaneously satisfies the following conditions:
(411) 2 ≤ N(p1) ≤ 6;
(412) S(p1) = 1;
(413) p2·p4·p6 = 0;
(414) p4·p6·p8 = 0;
where N(p1) is the number of non-zero neighbours of p1, and S(p1) is the number of 0 → 1 transitions of the values of the points taken in the order p2, p3, ..., p9, p2; after all boundary points have been checked, remove all marked points;
(42) mark every boundary point that simultaneously satisfies the following conditions:
(421) 1 ≤ N(p1) ≤ 6;
(422) S(p1) = 1;
(423) p2·p4·p8 = 0;
(424) p2·p6·p8 = 0;
The above two steps form one iteration; the iteration is repeated until no point satisfies the marking conditions, at which time the remaining points form the skeleton points. After the skeleton points of the target have been extracted, the junction points of the skeleton are taken as feature points and a straight line is obtained by straight-line fitting; this line is taken as the fuselage axis of the aircraft.
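The two-subiteration thinning of (41)/(42) can be sketched as follows (an illustrative sketch of the Zhang-Suen-style procedure; for simplicity both subiterations here use the bound 2 ≤ N(p1) ≤ 6, and border pixels are assumed to be background):

```python
import numpy as np

def thin(img):
    """Iterative boundary-point removal per (41)/(42).
    img: 2-D 0/1 array with background on the border; returns the skeleton."""
    img = img.astype(np.uint8).copy()

    def neighbours(y, x):
        # p2..p9, clockwise, starting above the centre p1
        return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            marked = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    p = neighbours(y, x)
                    N = sum(p)  # non-zero neighbours N(p1)
                    # 0 -> 1 transitions S(p1) in the cycle p2, p3, ..., p9, p2
                    S = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))
                    if step == 0:   # conditions (411)-(414)
                        cond = (2 <= N <= 6 and S == 1 and
                                p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0)
                    else:           # conditions (421)-(424)
                        cond = (2 <= N <= 6 and S == 1 and
                                p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0)
                    if cond:
                        marked.append((y, x))
            for y, x in marked:     # remove marked points after the full scan
                img[y, x] = 0
                changed = True
    return img
```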
5. The extended target tracking method for extracting feature points by using a skeleton according to claim 1, characterized in that the use of the axis equations obtained in steps 3 and 4 to compute the intersection of the lines containing the two axes, taken as the coarse track point, is implemented as follows:
The straight-line equations of the fuselage and wing axes obtained by steps 3 and 4 are respectively:
y1=k1*x1+b1 (4)
y2=k2*x2+b2 (5)
where k1, b1 are the slope and intercept of the line containing the fuselage axis, and k2, b2 are the slope and intercept of the line containing the wing axis; solving this system of linear equations gives the intersection coordinates:
x = (b2-b1)/(k1-k2)
y = (k1*b2-k2*b1)/(k1-k2)
where (x, y) are the coordinates of the intersection of the lines containing the wing and fuselage axes.
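The intersection of formulas (4)-(5) can be sketched as follows (a trivial sketch; it assumes the two fitted lines are not parallel, i.e. k1 ≠ k2):

```python
def axis_intersection(k1, b1, k2, b2):
    """Coarse track point: intersection of y = k1*x + b1 (fuselage axis)
    and y = k2*x + b2 (wing axis); requires k1 != k2."""
    x = (b2 - b1) / (k1 - k2)  # equate the two right-hand sides, solve for x
    y = k1 * x + b1            # substitute back into either line
    return x, y
```

Note that a near-vertical axis makes the slope-intercept form ill-conditioned; the (ρ, θ) form of formula (3) does not have this problem.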
6. The extended target tracking method for extracting feature points by using a skeleton according to claim 1, characterized in that the correction, described in step 6, of the intersection point obtained in step 5 using inter-frame continuity information when the next frame arrives, the corrected feature point being taken as the final track point, is implemented as follows:
(61) when the next frame arrives, use steps 1 to 5 of claim 1 to compute the intersection of the lines containing the wing and fuselage axes in the current frame;
(62) compute the Euclidean distance between the intersection point obtained in step (61) and the track point of the previous frame; if the distance is less than the threshold d = 10, take the intersection point obtained in (61) as the track point of the current frame; otherwise update the current-frame track point according to formula (8):
Pn = α*Pl + (1-α)*P (8)
where Pn = (xn, yn) is the updated track-point coordinate, Pl = (xl, yl) is the track-point coordinate of the previous frame, P = (x, y) is the coarse track-point coordinate computed in the current frame in step (61), and α is the update factor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310656022.9A CN103632381B (en) | 2013-12-08 | Extended target tracking method for extracting feature points by using Hough transformation and skeleton |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103632381A true CN103632381A (en) | 2014-03-12 |
CN103632381B CN103632381B (en) | 2016-11-30 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103136525A (en) * | 2013-02-28 | 2013-06-05 | 中国科学院光电技术研究所 | High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation |
CN103247032A (en) * | 2013-04-26 | 2013-08-14 | 中国科学院光电技术研究所 | Weak extended target positioning method based on attitude compensation |
CN103400388A (en) * | 2013-08-06 | 2013-11-20 | 中国科学院光电技术研究所 | Method for eliminating Brisk key point error matching point pair by using RANSAC |
Non-Patent Citations (5)
Title |
---|
XIANG BAI: "Skeleton Pruning by Contour Partitioning with Discrete Curve Evolution", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
LIU CHUNGE: "Straight-line extraction and matching based on the Hough transform", 《China Master's Theses Full-text Database, Information Science and Technology》 * |
LIN LIN: "Aircraft-part recognition and tracking algorithm based on geometric features", 《Infrared》 * |
SUI XINGUANG: "Linear-target feature extraction and its application to airport target recognition", 《China Doctoral and Master's Theses Full-text Database (Master), Information Science and Technology》 * |
CHEN XIAOFEI: "Research on skeleton-based target representation and recognition", 《China Doctoral and Master's Theses Full-text Database (Doctor), Information Science and Technology》 * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103854290A (en) * | 2014-03-25 | 2014-06-11 | 中国科学院光电技术研究所 | Extended target tracking method combining skeleton characteristic points and distribution field descriptors |
CN103839274A (en) * | 2014-03-25 | 2014-06-04 | 中国科学院光电技术研究所 | Extended target tracking method based on geometric proportion relation |
CN103839274B (en) * | 2014-03-25 | 2016-09-21 | 中国科学院光电技术研究所 | Extended target tracking method based on geometric proportion relation |
CN103927541B (en) * | 2014-04-21 | 2017-02-22 | 西北工业大学 | Target edge line detection method applicable to space tethered robot capture process |
CN103927541A (en) * | 2014-04-21 | 2014-07-16 | 西北工业大学 | Target edge line detection method applicable to space tethered robot capture process |
CN105046229A (en) * | 2015-07-27 | 2015-11-11 | 浙江理工大学 | Crop row identification method and apparatus |
CN105046229B (en) * | 2015-07-27 | 2018-11-02 | 浙江理工大学 | A kind of recognition methods of crops row and device |
CN106443661A (en) * | 2016-09-08 | 2017-02-22 | 河南科技大学 | Maneuvering extended target tracking method based on unscented Kalman filter |
CN106443661B (en) * | 2016-09-08 | 2019-07-19 | 河南科技大学 | Motor-driven extension method for tracking target based on Unscented kalman filtering |
CN107607993A (en) * | 2017-09-07 | 2018-01-19 | 中国石油大学(北京) | A kind of method, apparatus and system for determining stack velocity |
CN107607993B (en) * | 2017-09-07 | 2019-05-31 | 中国石油大学(北京) | A kind of method, apparatus and system of determining stack velocity |
CN108257155B (en) * | 2018-01-17 | 2022-03-25 | 中国科学院光电技术研究所 | Extended target stable tracking point extraction method based on local and global coupling |
CN108257155A (en) * | 2018-01-17 | 2018-07-06 | 中国科学院光电技术研究所 | Extended target stable tracking point extraction method based on local and global coupling |
CN108256258B (en) * | 2018-01-31 | 2019-01-25 | 西安科技大学 | Cemented fill mechanical response characteristic prediction technique based on SEM image |
CN108256258A (en) * | 2018-01-31 | 2018-07-06 | 西安科技大学 | Cemented fill mechanical response characteristic Forecasting Methodology based on SEM image |
CN108711131A (en) * | 2018-04-28 | 2018-10-26 | 北京溯斐科技有限公司 | Water mark method based on Image Feature Matching and device |
CN108711131B (en) * | 2018-04-28 | 2022-08-16 | 北京数科网维技术有限责任公司 | Watermark method and device based on image feature matching |
CN108663026B (en) * | 2018-05-21 | 2020-08-07 | 湖南科技大学 | Vibration measuring method |
CN108663026A (en) * | 2018-05-21 | 2018-10-16 | 湖南科技大学 | A kind of vibration measurement method |
CN108828583A (en) * | 2018-06-15 | 2018-11-16 | 西安电子科技大学 | One kind being based on fuzzy C-mean algorithm point mark cluster-dividing method |
CN108828583B (en) * | 2018-06-15 | 2022-06-28 | 西安电子科技大学 | Point trace clustering method based on fuzzy C mean value |
CN110738214A (en) * | 2019-09-26 | 2020-01-31 | 天津大学 | Fuzzy C-means clustering light interference fringe pattern binarization method |
CN111028271A (en) * | 2019-12-06 | 2020-04-17 | 浩云科技股份有限公司 | Multi-camera personnel three-dimensional positioning and tracking system based on human skeleton detection |
CN111080712A (en) * | 2019-12-06 | 2020-04-28 | 浩云科技股份有限公司 | Multi-camera personnel positioning, tracking and displaying method based on human body skeleton detection |
CN111028271B (en) * | 2019-12-06 | 2023-04-14 | 浩云科技股份有限公司 | Multi-camera personnel three-dimensional positioning and tracking system based on human skeleton detection |
CN111080712B (en) * | 2019-12-06 | 2023-04-18 | 浩云科技股份有限公司 | Multi-camera personnel positioning, tracking and displaying method based on human body skeleton detection |
CN112465832A (en) * | 2020-11-25 | 2021-03-09 | 重庆大学 | Single-sided tree point cloud skeleton line extraction method and system based on binocular vision |
CN112465832B (en) * | 2020-11-25 | 2024-04-16 | 重庆大学 | Single-side tree point cloud skeleton line extraction method and system based on binocular vision |
CN112801075A (en) * | 2021-04-15 | 2021-05-14 | 速度时空信息科技股份有限公司 | Automatic rural road boundary line extraction method based on aerial image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111091105B (en) | Remote sensing image target detection method based on new frame regression loss function | |
CN109684921B (en) | Road boundary detection and tracking method based on three-dimensional laser radar | |
CN103617328B (en) | Aircraft three-dimensional attitude calculation method | |
CN105046688B (en) | A kind of many plane automatic identifying methods in three-dimensional point cloud | |
CN103727930B (en) | A kind of laser range finder based on edge matching and camera relative pose scaling method | |
CN103714541B (en) | Method for identifying and positioning building through mountain body contour area constraint | |
CN103854290A (en) | Extended target tracking method combining skeleton characteristic points and distribution field descriptors | |
CN102622587B (en) | Hand back vein recognition method based on multi-scale second-order differential structure model and improved watershed algorithm | |
CN104751187A (en) | Automatic meter-reading image recognition method | |
CN103136525B (en) | High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation | |
CN105740798A (en) | Structure analysis based identification method for object in point cloud scene | |
CN102521597B (en) | Hierarchical strategy-based linear feature matching method for images | |
CN103839265A (en) | SAR image registration method based on SIFT and normalized mutual information | |
CN104933709A (en) | Automatic random-walk CT lung parenchyma image segmentation method based on prior information | |
CN102254319A (en) | Method for carrying out change detection on multi-level segmented remote sensing image | |
CN103632363A (en) | Object-level high-resolution remote sensing image change detection method based on multi-scale fusion | |
CN102136155A (en) | Object elevation vectorization method and system based on three dimensional laser scanning | |
CN103839274B (en) | Extended target tracking method based on geometric proportion relation | |
CN103150731B (en) | A kind of fuzzy clustering image partition method | |
CN105574527A (en) | Quick object detection method based on local feature learning | |
CN111145228A (en) | Heterogeneous image registration method based on local contour point and shape feature fusion | |
CN103530590A (en) | DPM (direct part mark) two-dimensional code recognition system | |
CN103886619A (en) | Multi-scale superpixel-fused target tracking method | |
CN104504709A (en) | Feature ball based classifying method of three-dimensional point-cloud data of outdoor scene | |
CN104156976A (en) | Multiple characteristic point tracking method for detecting shielded object |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |