CN108256394A - Target tracking method based on contour gradients - Google Patents

Target tracking method based on contour gradients

Info

Publication number
CN108256394A
CN108256394A (application CN201611239192.7A)
Authority
CN
China
Prior art keywords
target, image, similarity, point, sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201611239192.7A
Other languages
Chinese (zh)
Other versions
CN108256394B (en)
Inventor
李波
左春婷
蔡宇
黄艳金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sino Forest Xinda (beijing) Science And Technology Information Co Ltd
Original Assignee
Sino Forest Xinda (beijing) Science And Technology Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sino Forest Xinda (beijing) Science And Technology Information Co Ltd
Priority to CN201611239192.7A priority Critical patent/CN108256394B/en
Publication of CN108256394A publication Critical patent/CN108256394A/en
Application granted granted Critical
Publication of CN108256394B publication Critical patent/CN108256394B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; scene-specific elements
    • G06V20/40 — Scenes; scene-specific elements in video content
    • G06V20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a target tracking method based on contour gradients. A candidate target is segmented from the initial frame of an input video or image sequence, and a reference image is extracted. The contour of the reference image is extracted as a standard feature template, which is then scaled and rotated to obtain a multi-scale, multi-angle template sequence. For the next frame of the video or image sequence, the gradient features of the target image are extracted; the multi-scale, multi-angle template sequence is scanned over the feature image of the target image with a sliding window of step 1, and the matching similarity of the two is computed. According to the matching position, scale factor, angle factor and related information, the subregion containing the target is segmented from the target image and used to update the reference image for the next detection, until all frames of the video or image sequence have been processed. The purpose of the invention is to provide a target tracking method based on contour gradients that adapts to scale and/or angle changes of the target and improves the accuracy of target tracking.

Description

Target tracking method based on contour gradients
Technical field
The invention belongs to the field of computer image processing, and specifically discloses a target tracking method based on contour gradients.
Background technology
In security monitoring tasks, automatic target tracking requires determining a region of interest around the target in a video or image sequence and locating the target in every frame.
Research on target tracking covers two main aspects. First, detecting, tracking and identifying a moving target in the captured video sequence and extracting the required information, such as the target's trajectory and associated motion parameters (speed, acceleration, position, etc.). Second, using the acquired motion parameters of the target to predict, estimate and aid decision-making. Accurate extraction of moving-target features is therefore a prerequisite for improving the accuracy of tracking, recognition and classification, and the accuracy of tracking in turn affects the correctness and difficulty of higher-level decisions.
A traditional target tracking scheme can be described as follows:
(1) Select a template from the acquired image sequence, commonly called the reference image, which records the target to be tracked;
(2) Take every pixel of the reference image as a feature point to form the primitive feature point set; alternatively, to improve computational efficiency, sample pixels uniformly at equal intervals from the reference image (a process that may be called equidistant sampling) to form the primitive feature point set;
(3) Operate on the primitive feature point set and its neighborhood to obtain a new feature point set, and determine the matching region on the image to be matched according to the new feature point set;
(4) Compute the grayscale or texture information between the matching region and the reference template; using an error-minimization method and iteration, obtain the matching-coefficient matrix between the matching region and the reference image, where the region corresponding to the maximum matching coefficient is the tracked target;
(5) Repeat steps (3)-(4) over the acquired image sequence; through template matching between frames, continuous tracking of the target is finally achieved.
Traditional target tracking methods have the following shortcomings:
(1) Once the reference image is obtained, it serves as a fixed standard template and is never updated. In a practical tracking system, however, as the camera or other image acquisition device moves, or the target moves, scales or rotates, part of the reference image may leave the camera's field of view, so that a subregion of the reference image no longer appears in subsequent frames of the image sequence. Tracking based on the initial reference image then fails and the target is lost. A tracking method with strong anti-interference capability therefore has great application value.
(2) The feature points obtained from the reference image are chosen rather arbitrarily and generally carry little image information, so they cannot characterize the image well; their reliability and stability are low, and the tracking algorithm consequently lacks robustness.
(3) Feature-matching computation is hard to perform in real time, which limits the response speed of target tracking.
Summary of the invention
The purpose of the present invention is to provide a target tracking method based on contour gradients that adapts to scale and/or angle changes of the target and improves the accuracy of target tracking. First, the contour of the specified target and its gradient vectors are extracted with the Sobel operator as the standard feature template, which is then sampled over scales and angles to obtain a multi-scale, multi-angle template sequence. Second, the best-match information (position, scale factor and angle factor) is obtained by template matching. Finally, the best-match region is cut out of the target image according to this information, and its contour and gradient vectors become the updated standard feature template.
The purpose of the present invention is achieved through the following technical solution:
A target tracking method based on contour gradients comprises the following steps:
Step 1: Segment a candidate target from the initial frame of the input video or image sequence, and extract a reference image containing the candidate target;
Step 2: Extract the contour of the target reference image, obtaining for each pixel of the reference image the coordinate sequence p_i = (x_i, y_i)^T and the corresponding horizontal/vertical gradient sequence d_i = (t_i, u_i)^T, which together form the standard feature template;
If the size of the target reference image is m × n and the number of edge points detected under the preset threshold condition T is L, then i = 1, 2, 3, ..., m × n, of which only L points are Sobel edge points; the gradient of every non-edge point is defined as (0, 0);
Sobel edge detection convolves the Sobel operators with every pixel of the reference image and its neighborhood, and then determines the edge points according to the preset threshold condition T;
Through Sobel edge detection, the points p_i = (x_i, y_i)^T and their corresponding gradient directions d_i = (t_i, u_i)^T are obtained;
Step 3: Apply scale and angle transformations to the standard feature template to obtain the multi-scale, multi-angle template sequence (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T, which enhances the robustness of matching;
Step 4: Input the next frame of the video or image sequence and extract the gradient features g_r = (v_r, w_r)^T of the target image;
For a target image of size M × N, perform Sobel edge detection with the same method and preset threshold condition as in step 2 to obtain the gradient features g_r = (v_r, w_r)^T of the target image, where r = 1, 2, 3, ..., M × N and L' is the number of edge points detected in the target image; the gradient has a value where the pixel is an edge point and is (0, 0) otherwise;
Step 5: Scan each of the M multi-scale, multi-angle templates (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T over the feature image of the target image with a sliding window of fixed step, and compute the matching similarity of the two; compare the similarities over all templates and all window positions, and take the window position with the maximum similarity as the best-match position, which is the final tracking result for the current frame;
Step 6: Segment the subregion containing the target from the target image according to the best-match position, scale factor and angle factor, and use this subregion to update the reference image for the next detection;
Step 7: Repeat steps 2 to 6 until all frames of the video or image sequence have been processed.
Further, the method of applying the scale and angle transformations to the standard feature template in step 3 comprises:
a) Scale the abscissa x_i of every point p_i = (x_i, y_i)^T of the standard feature template sequence (p_i, d_i)^T by a factor s_x and the ordinate y_i by a factor s_y;
b) After processing with different transformation factors s_x, s_y, multiple multi-scale feature template sequences (P_i1, d_i)^T, (P_i2, d_i)^T, (P_i3, d_i)^T, ..., (P_ik, d_i)^T are obtained, where k is the number of different scale transformations;
c) Apply angle transformations to all multi-scale feature templates obtained in b): rotate each multi-scale feature template sequence (P_ij, d_i)^T, (j = 1, 2, 3, ..., k) by an angle θ, taking rotation to the right as the positive direction; every point P_ij = (X_ij, Y_ij)^T of the template sequence (P_ij, d_i)^T is rotated clockwise by θ about (0, 0) to give the point P_ij' = (X_ij', Y_ij')^T, expressed mathematically as X_ij' = X_ij·cosθ + Y_ij·sinθ, Y_ij' = -X_ij·sinθ + Y_ij·cosθ;
d) The M multi-scale, multi-angle template sequences (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T are obtained.
Further, the scale transformation factors s_x, s_y range over 0.9 to 1.1, and the angle θ ranges over -30° to 30°.
Further, the method of computing the similarity in step 5 comprises:
1) Denote the gradient direction at any point (x, y) of the target image by g_{x,y} = (v_{x,y}, w_{x,y})^T;
2) When the feature template window of the reference image is matched against an equally sized region of the target image to be detected, the matching similarity s of the two can be defined as the sum, over points at the same coordinate positions in the feature template matrix and the feature matrix of the target image, of the normalized cosines of the angles between the corresponding gradient vectors;
3) Since the gradient vector of every non-edge point is (0, 0), the similarity reduces to the sum over edge points at the same coordinate positions of the normalized cosines of the angles between the corresponding gradient vectors: s(x, y) = (1/L) Σ_i (t_i·v_{x+x_i, y+y_i} + u_i·w_{x+x_i, y+y_i}) / (‖d_i‖ · ‖g_{x+x_i, y+y_i}‖), the sum running over the L edge points;
In the above formula, (x, y) is the coordinate of the top-left corner of the sliding-window position in the target image; either this point or the coordinate of the window center can be used to represent the window position, and the value of the similarity s ranges over 0 to 1;
4) Scan each of the M multi-scale, multi-angle templates (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T over the feature matrix of the target image with a sliding window of step 1, compare the similarities over all templates and all window positions, and take the window position with the maximum similarity as the best-match position, which is the final tracking result for the current frame.
Further, the fast computation of the similarity in step 5 uses the fast Fourier transform (FFT) for the convolution operation, transforming the similarity computation to the frequency domain, comprising:
(1) For any multi-scale, multi-angle feature template (p_ik, d_ik)^T, define two gradient-component matrices T_x, T_y of size m × n, where m, n are determined by the maximum and minimum of the horizontal and vertical coordinates of p_ik; with p_ik = (x_ik, y_ik)^T and d_ik = (t_ik, u_ik)^T, assign values to T_x, T_y according to the coordinates in p_ik: set element (x_ik, y_ik) of T_x to t_ik and element (x_ik, y_ik) of T_y to u_ik;
(2) In the same manner as (1), assign values to obtain the two gradient-component matrices O_x, O_y of the target image;
(3) Realize the correlation of T_x with O_x through the FFT, obtaining C_x = IFFT(conj(FFT(T_x)) · FFT(O_x)); similarly, C_y = IFFT(conj(FFT(T_y)) · FFT(O_y)) is computed;
(4) According to the equivalence between convolution and the fast Fourier transform (FFT), the above results are combined and normalized by the gradient magnitudes to obtain the similarity matrix, i.e. the set of matching similarities s.
The beneficial effects of the present invention are as follows:
First, the invention extracts contour gradients from the target as features and applies gradient thresholding, yielding a reasonable and effective feature template characterized by edge-point gradient vectors, which enables effective target tracking.
Second, the invention applies appropriate scale and rotation transformations to the feature template to obtain multi-scale, multi-angle feature templates, avoiding target loss caused by movement of the camera or other image acquisition device, movement of the target, or changes in the target's scale or rotation.
Third, the invention segments the target and updates the template in real time, avoiding the tracking failures caused by fixed templates in traditional approaches.
Fourth, compared with the prior art, this method uses feature templates based on edge-point gradient vectors; non-edge points do not participate in the feature-matching computation, which greatly reduces the amount of calculation.
Fifth, the method also provides a fast computation scheme for feature matching, reducing the amount of calculation by orders of magnitude, which meets the real-time requirements of security monitoring systems well.
Description of the drawings
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings, in which:
Fig. 1 is the flow chart of the target tracking method based on contour gradients of the present invention;
Fig. 2 is a schematic diagram of segmenting the candidate target from the initial frame;
Fig. 3 is a schematic diagram of the Sobel operators;
Fig. 4 is a schematic diagram of an image pixel neighborhood;
Fig. 5 is a schematic diagram of the standard feature template.
Detailed description of the embodiments
The preferred embodiments of the present invention are described in detail below with reference to the drawings. It should be understood that the preferred embodiments only illustrate the invention and are not intended to limit its scope of protection.
As shown in Figs. 1-5, Embodiment 1 provides a target tracking method based on contour gradients, covering three aspects: establishing the feature template, matching the feature template against the target image, and updating the feature template.
Step 1. Segment a candidate target from the initial frame of the input video or image sequence and extract a reference image containing the candidate target, as illustrated in Fig. 2. As the first step of the whole workflow, this can be done by manual annotation or by running a complete segmentation algorithm on the target, depending on the requirements of the application.
Step 2. Extract the contour of the target reference image, obtaining for each pixel the coordinate sequence p_i = (x_i, y_i)^T and the corresponding horizontal/vertical gradient sequence d_i = (t_i, u_i)^T, which together form the standard feature template.
Note that if the size of the target reference image is m × n and the number of edge points detected under the preset threshold condition T is L, then i = 1, 2, 3, ..., m × n, of which only L points are Sobel edge points; the gradient of every non-edge point is defined as (0, 0).
Sobel edge detection convolves the Sobel operators (Gx, Gy, shown in Fig. 3) with every pixel of the reference image and its neighborhood, and then determines the edge points according to the preset threshold condition T.
Suppose the value distribution of a pixel (x, y) of the reference image and its neighborhood is as shown in Fig. 4.
Through Sobel edge detection, the points p_i = (x_i, y_i)^T and their corresponding gradient directions d_i = (t_i, u_i)^T are obtained, where the two components of the gradient direction are the responses of the Gx and Gy operators at the pixel.
In this way the skeleton of the feature template is obtained: the set/sequence composed of the gradient features d_i^T of the reference image, referred to here as the standard feature template. It has the same size m × n as the reference image; an example of the distribution of gradient feature values is shown in Fig. 5.
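As a concrete illustration of step 2, the following sketch (a minimal, hypothetical implementation assuming NumPy and a grayscale image array; function names are mine, not the patent's) convolves every pixel's 3×3 neighborhood with the Sobel operators Gx, Gy and keeps gradients only at points that pass the threshold condition T:

```python
import numpy as np

def sobel_gradients(img):
    """Sobel responses t (horizontal) and u (vertical) at every pixel,
    computed by convolving each 3x3 neighborhood with Gx and Gy."""
    img = img.astype(float)
    p = np.pad(img, 1, mode="edge")  # replicate border so output keeps size
    # Gx: (right column, weights 1,2,1) minus (left column, weights 1,2,1)
    t = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
         - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    # Gy: (bottom row, weights 1,2,1) minus (top row, weights 1,2,1)
    u = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
         - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return t, u

def standard_feature_template(ref_img, T=100.0):
    """Keep gradients only at edge points (magnitude > T); the gradient of
    every non-edge point is defined as (0, 0), as in the text."""
    t, u = sobel_gradients(ref_img)
    edge = np.hypot(t, u) > T
    return np.where(edge, t, 0.0), np.where(edge, u, 0.0), edge
```

The threshold value T = 100 is an assumed default; the patent leaves the threshold condition unspecified.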
Step 3. Apply scale and angle transformations to the standard feature template to obtain the multi-scale, multi-angle template sequence (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T, which enhances the robustness of matching.
Apply an appropriate scale transformation to the standard feature template: scale the abscissa x_i of every point p_i = (x_i, y_i)^T of the standard feature template sequence (p_i, d_i)^T by a factor s_x and the ordinate y_i by a factor s_y, with the scale transformation factors s_x, s_y ranging over 0.9 to 1.1. Denoting the result of the scale transformation by P_i = (X_i, Y_i)^T, the process can be expressed mathematically as X_i = s_x·x_i, Y_i = s_y·y_i.
After processing with different transformation factors s_x, s_y, multiple multi-scale feature template sequences (P_i1, d_i)^T, (P_i2, d_i)^T, (P_i3, d_i)^T, ..., (P_ik, d_i)^T are obtained, where k is the number of different scale transformations.
Further, apply appropriate angle transformations to all the multi-scale feature templates obtained. Taking the top-left corner of the reference image as the center, rotate each multi-scale feature template sequence (P_ij, d_i)^T, (j = 1, 2, 3, ..., k) by an angle θ, taking rotation to the right as the positive direction, with θ ranging over -30° to 30°. Every point P_ij = (X_ij, Y_ij)^T of the template sequence (P_ij, d_i)^T is rotated by θ about (0, 0) to give the point P_ij' = (X_ij', Y_ij')^T, expressed mathematically as X_ij' = X_ij·cosθ + Y_ij·sinθ, Y_ij' = -X_ij·sinθ + Y_ij·cosθ.
The rotation center is not limited to the top-left corner of the reference image; choosing another fixed point, such as the center of the reference image, as the rotation center is essentially the same as this embodiment.
Through the above processing, the M multi-scale, multi-angle template sequences (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T are finally obtained.
The multi-scale, multi-angle feature templates obtained by the above process overcome the target loss caused by movement of the camera or other image acquisition device, movement of the target, or changes in the target's scale or angle; they give good robustness, and the transformation process is simple, convenient and well suited to real-time operation.
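The scale and rotation sampling of step 3 can be sketched as follows (a hypothetical helper assuming NumPy; the gradient vectors d_i are carried along unchanged, as in the text, although a full implementation might also rotate them):

```python
import numpy as np

def transform_templates(points, grads, scales, angles_deg):
    """Generate the multi-scale, multi-angle template sequence from the
    standard feature template (edge-point coordinates plus gradients).

    points: (L, 2) array of edge-point coordinates (x_i, y_i)
    grads:  (L, 2) array of gradient vectors (t_i, u_i)
    """
    templates = []
    for sx, sy in scales:
        # scale the abscissa by sx and the ordinate by sy
        scaled = points * np.array([sx, sy])
        for theta in np.deg2rad(angles_deg):
            c, s = np.cos(theta), np.sin(theta)
            # rotate every point by theta about (0, 0):
            # X' = X*cos(theta) + Y*sin(theta), Y' = -X*sin(theta) + Y*cos(theta)
            rot = scaled @ np.array([[c, -s], [s, c]])
            templates.append((rot, grads))
    return templates
```

With s_x, s_y sampled from 0.9 to 1.1 and θ from -30° to 30°, the list holds M = (number of scales) × (number of angles) templates.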
Step 4. Input the next frame of the video or image sequence and extract the gradient features g_r = (v_r, w_r)^T of the target image:
For a target image of size M × N, perform Sobel edge detection with the same method and preset threshold condition as in step 2 to obtain the gradient features g_r = (v_r, w_r)^T of the target image, where r = 1, 2, 3, ..., M × N and L' is the number of edge points detected in the target image; the gradient has a value where the pixel is an edge point and is (0, 0) otherwise.
The concrete implementation of this step is the same as in step 2 and is not repeated here.
Step 5. Scan each of the M multi-scale, multi-angle templates (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T over the feature image of the target image with a sliding window of step 1, and compute the matching similarity of the two. The position where the similarity reaches its maximum is the best-match position, and the multi-scale, multi-angle template that produced the match supplies the corresponding scale factor, angle factor and related information. The concrete matching algorithm is as follows.
Denote the gradient direction at any point (x, y) of the target image by g_{x,y} = (v_{x,y}, w_{x,y})^T. When the feature template window of the reference image is matched against an equally sized region of the target image to be detected, the matching similarity s of the two can be defined as the sum, over points at the same coordinate positions in the feature template matrix and the feature matrix of the target image, of the normalized cosines of the angles between the corresponding gradient vectors.
In fact, because the gradient vector of every non-edge point is (0, 0), the similarity reduces to the sum over edge points at the same coordinate positions of the normalized cosines of the angles between the corresponding gradient vectors, expressed mathematically as s(x, y) = (1/L) Σ_i (t_i·v_{x+x_i, y+y_i} + u_i·w_{x+x_i, y+y_i}) / (‖d_i‖ · ‖g_{x+x_i, y+y_i}‖), the sum running over the L edge points.
In the above formula, (x, y) is the coordinate of the top-left corner of the sliding-window position in the target image; either this point or the coordinate of the window center can be used to represent the window position, and the value of the similarity s ranges over 0 to 1.
Scan each of the M multi-scale, multi-angle templates (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T over the feature matrix of the target image with a sliding window of step 1, compare the similarities over all templates and all window positions, and take the window position with the maximum similarity as the best-match position, which is the final tracking result for the current frame.
It can be seen that, by extracting contour gradient vectors with the Sobel operator as the matching feature, only the edge points need to be computed; compared with traditional feature templates, this greatly reduces the amount of calculation and improves the real-time performance of the algorithm.
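The exhaustive matching of step 5 can be sketched as follows (a minimal, hypothetical implementation assuming NumPy; the template is given as integer edge-point coordinates plus gradient vectors, and the target as its two Sobel gradient component fields):

```python
import numpy as np

def match_similarity(tmpl_pts, tmpl_grads, target_t, target_u):
    """Slide the template over the target gradient field with step 1 and
    return the similarity map s(x, y) plus the best window position."""
    H, W = target_t.shape
    xs = tmpl_pts[:, 0].astype(int)
    ys = tmpl_pts[:, 1].astype(int)
    norm_d = np.hypot(tmpl_grads[:, 0], tmpl_grads[:, 1])
    L = len(tmpl_pts)
    h, w = ys.max() + 1, xs.max() + 1          # template extent
    sim = np.zeros((H - h + 1, W - w + 1))
    for y in range(sim.shape[0]):
        for x in range(sim.shape[1]):
            v = target_t[y + ys, x + xs]
            u = target_u[y + ys, x + xs]
            norm_g = np.hypot(v, u)
            ok = (norm_d > 0) & (norm_g > 0)   # only edge points contribute
            # mean normalized cosine over the L edge points, as in the text
            cosines = (tmpl_grads[ok, 0] * v[ok] + tmpl_grads[ok, 1] * u[ok]) \
                      / (norm_d[ok] * norm_g[ok])
            sim[y, x] = cosines.sum() / L
    best = np.unravel_index(np.argmax(sim), sim.shape)
    return sim, best
```

The text states s ranges over 0 to 1; strictly, opposing gradient directions would give negative terms, so that range is attained when matched gradients are aligned.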
Step 6. Segment the subregion containing the target from the target image according to the best-match position, scale factor, angle factor and related information, and use this subregion to update the reference image for the next detection.
Step 7. Repeat steps 2 to 6 until all frames of the video or image sequence have been processed.
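Step 6 (cutting out the matched subregion as the new reference image) can be sketched as follows (a hypothetical helper; the function name and the exact handling of the matched scale factors are mine, not the patent's):

```python
import numpy as np

def update_reference(frame, best_xy, tmpl_shape, sx, sy):
    """Cut the best-match subregion out of the current frame; it becomes
    the reference image for the next detection."""
    x, y = best_xy          # top-left corner of the best-match window
    h, w = tmpl_shape       # height/width of the unscaled template
    # the matched region is the template size rescaled by the matched factors
    h2, w2 = int(round(h * sy)), int(round(w * sx))
    return frame[y:y + h2, x:x + w2].copy()
```

A fresh standard feature template (step 2) is then extracted from the returned subregion before processing the next frame.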
Embodiment 2: The similarity computation of Embodiment 1 involves a convolution of the feature template of the reference image with the feature matrix of the target image over the gradient feature values within each window. Although defining the gradient of every non-edge point as (0, 0) greatly reduces the amount of computation relative to traditional feature matching, the mechanical, lengthy computation of the convolution still consumes a large amount of time, so the above matching similarity is hard to compute in real time.
Therefore, a fast computation method is provided here: the convolution is carried out with the fast Fourier transform (FFT), transforming the similarity computation to the frequency domain, as follows:
Step 1. For any multi-scale, multi-angle feature template (p_ik, d_ik)^T, define two gradient-component matrices T_x, T_y of size m × n, where m, n are determined by the maximum and minimum of the horizontal and vertical coordinates of p_ik. With p_ik = (x_ik, y_ik)^T and d_ik = (t_ik, u_ik)^T, assign values to T_x, T_y according to the coordinates in p_ik: set element (x_ik, y_ik) of T_x to t_ik and element (x_ik, y_ik) of T_y to u_ik.
Step 2. In the same manner, assign values to obtain the two gradient-component matrices O_x, O_y of the target image.
Step 3. Realize the correlation of T_x with O_x through the FFT, obtaining C_x = IFFT(conj(FFT(T_x)) · FFT(O_x)); similarly, C_y = IFFT(conj(FFT(T_y)) · FFT(O_y)) is computed.
Step 4. According to the equivalence between convolution and the fast Fourier transform (FFT), the above results are combined and normalized by the gradient magnitudes to obtain the similarity matrix, i.e. the set of matching similarities s.
The position of the maximum similarity in the similarity matrix gives the best-match position of the feature template in the target image, i.e. the target tracking result.
By using this fast computation, the amount of feature-matching calculation is reduced by orders of magnitude, achieving the goal of real-time computation.
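The FFT equivalence used in Embodiment 2 can be checked with a short sketch (assuming NumPy; the cross-correlation of a template component with an image component equals the inverse FFT of conj(FFT(T)) times FFT(O), whose valid top-left block matches the direct sliding-window sums):

```python
import numpy as np

def corr_fft(T, O):
    """Cross-correlation of template component T with image component O via
    FFT; rows/cols beyond the valid sliding-window range are discarded."""
    H, W = O.shape
    h, w = T.shape
    spec = np.conj(np.fft.fft2(T, (H, W))) * np.fft.fft2(O)
    full = np.fft.ifft2(spec).real
    return full[:H - h + 1, :W - w + 1]

def corr_direct(T, O):
    """Reference implementation: direct sum over every window position."""
    h, w = T.shape
    H, W = O.shape
    return np.array([[(T * O[y:y + h, x:x + w]).sum()
                      for x in range(W - w + 1)]
                     for y in range(H - h + 1)])
```

Running `corr_fft` once per gradient component (T_x with O_x, T_y with O_y) and summing gives the unnormalized numerator of the similarity map in two FFT passes instead of a per-window loop.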
Finally, it is noted that the above embodiments only illustrate the technical solution of the present invention and do not limit it. Although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that the technical solution of the invention may be modified or equivalently replaced without departing from the objective and scope of this technical solution, and all such changes shall be covered by the claims of the present invention.

Claims (5)

1. A target tracking method based on contour gradients, characterized by comprising the following steps:
Step 1: Segment a candidate target from the initial frame of the input video or image sequence, and extract a reference image containing the candidate target;
Step 2: Extract the contour of the target reference image, obtaining for each pixel of the reference image the coordinate sequence p_i = (x_i, y_i)^T and the corresponding horizontal/vertical gradient sequence d_i = (t_i, u_i)^T, which together form the standard feature template;
If the size of the target reference image is m × n and the number of edge points detected under the preset threshold condition T is L, then i = 1, 2, 3, ..., m × n, of which only L points are Sobel edge points; the gradient of every non-edge point is defined as (0, 0);
Sobel edge detection convolves the Sobel operators with every pixel of the reference image and its neighborhood, and then determines the edge points according to the preset threshold condition T;
Through Sobel edge detection, the points p_i = (x_i, y_i)^T and their corresponding gradient directions d_i = (t_i, u_i)^T are obtained;
Step 3: Apply scale and angle transformations to the standard feature template to obtain the multi-scale, multi-angle template sequence (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T, which enhances the robustness of matching;
Step 4: Input the next frame of the video or image sequence and extract the gradient features g_r = (v_r, w_r)^T of the target image;
For a target image of size M × N, perform Sobel edge detection with the same method and preset threshold condition as in step 2 to obtain the gradient features g_r = (v_r, w_r)^T of the target image, where r = 1, 2, 3, ..., M × N and L' is the number of edge points detected in the target image; the gradient has a value where the pixel is an edge point and is (0, 0) otherwise;
Step 5: Scan each of the M multi-scale, multi-angle templates (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T over the feature image of the target image with a sliding window of fixed step, and compute the matching similarity of the two; compare the similarities over all templates and all window positions, and take the window position with the maximum similarity as the best-match position, which is the final tracking result for the current frame;
Step 6: Segment the subregion containing the target from the target image according to the best-match position, scale factor and angle factor, and use this subregion to update the reference image for the next detection;
Step 7: Repeat steps 2 to 6 until all frames of the video or image sequence have been processed.
2. a kind of method for tracking target based on profile gradients according to claim 1, it is characterised in that:The step 3 In standard feature template is carried out appropriate scale, angular transformation method, including:
A) by standard scale template sequence (pi,di)TEvery bit pi=(xi,yi)TAbscissa xiAmplification/diminution sxTimes, ordinate yiAmplification/diminution syTimes;
B) pass through different transformation factor sx、syAfter processing, multiple Analysis On Multi-scale Features template sequence (P are can obtaini1,di)T, (Pi2,di )T, (Pi3,di)T..., (Pik,di)T, wherein k is the number for carrying out different scale transformation;
C) appropriate angular transformation is carried out to all Analysis On Multi-scale Features templates b) obtained, to Analysis On Multi-scale Features template sequence (Pij, di)T, (j=1,2,3 ..., k) carry out angle be θ rotation, if rotating to be positive direction to the right, template sequence (Pij,di)TIt is every One point Pij=(Xij,Yij)T, θ is rotated clockwise centered on (0,0), obtains point Pij'=(Xij',Yij')T, can with mathematical formulae To be expressed as:
D) Obtaining the M groups of multi-scale, multi-angle template sequences (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T.
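Steps A) to D) amount to generating scaled and rotated copies of the template point set. A minimal NumPy sketch, assuming a small illustrative grid of scale factors and rotation angles (the function name and the sample template are mine, not the patent's):

```python
import numpy as np

def transform_template(points, sx, sy, theta_deg):
    """Scale template points by (sx, sy) about the origin, then rotate
    them clockwise by theta_deg around (0, 0)."""
    pts = np.asarray(points, dtype=float) * np.array([sx, sy])
    t = np.deg2rad(theta_deg)
    # Clockwise rotation: X' = X*cos(t) + Y*sin(t),  Y' = -X*sin(t) + Y*cos(t)
    rot = np.array([[np.cos(t),  np.sin(t)],
                    [-np.sin(t), np.cos(t)]])
    return pts @ rot.T

# M = (#scales) * (#angles) template variants, here 3 * 3 = 9
template = [(10.0, 0.0), (0.0, 10.0)]
variants = [transform_template(template, s, s, a)
            for s in (0.9, 1.0, 1.1)   # scale factors within 0.9-1.1
            for a in (-30, 0, 30)]     # angles within -30 to 30 degrees
```

In practice the gradient directions d_i would be rotated by the same angle; the sketch transforms only the point coordinates, as in the claim's notation.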
3. The target tracking method based on contour gradients according to claim 2, characterized in that the scale transformation factors s_x, s_y range from 0.9 to 1.1, and the angle θ ranges from -30° to 30°.
4. The target tracking method based on contour gradients according to claim 1, characterized in that the method of computing the similarity in Step 5 comprises:
1) Denoting the gradient direction at any point (x, y) on the target image as g_{x,y} = (v_{x,y}, w_{x,y})^T;
2) When a feature template window of the reference image is matched against a region of equal size in the target image to be detected, the matching similarity s of the two can be defined as the normalized sum of the cosines of the angles between corresponding gradient vectors at the same coordinate positions in the feature template matrix and the feature matrix of the target image;
3) Since the gradient vector of a non-edge point is (0, 0), the similarity can be reduced to the normalized sum of the cosines of the angles between corresponding gradient vectors at edge points of the same coordinate positions:
s(x, y) = (1/n) Σ_i (d_i · g_{x+x_i, y+y_i}) / (|d_i| |g_{x+x_i, y+y_i}|)
In the above formula, (x, y) is the coordinate of the upper-left corner of the sliding window in the target image; in practice, either this point or the window-center coordinate derived from it is used to represent the window position. The value range of the similarity s is 0 to 1;
4) Scanning the feature matrix of the target image with each of the M groups of multi-scale, multi-angle templates (P_i1', d_i)^T, (P_i2', d_i)^T, (P_i3', d_i)^T, ..., (P_iM', d_i)^T, using a sliding window with step length 1; comparing the similarities obtained under all templates and all window positions; the window position at which the similarity reaches its maximum is the best match position, which is the final target-tracking result for the current frame.
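The edge-point similarity of claim 4 — a normalized sum of cosines between template edge directions d_i and the image gradients at the corresponding window offsets — might look as follows in NumPy. Taking the absolute cosine is my assumption to keep s within the stated 0-to-1 range; the claim text itself does not spell this out.

```python
import numpy as np

def gradient_similarity(points, dirs, gx, gy, x, y, eps=1e-12):
    """s = (1/n) * sum_i |cos angle(d_i, g_(x+xi, y+yi))|, in [0, 1]."""
    s = 0.0
    for (xi, yi), (ti, ui) in zip(points, dirs):
        vx, vy = gx[y + yi, x + xi], gy[y + yi, x + xi]
        num = abs(ti * vx + ui * vy)                # |dot product|
        den = np.hypot(ti, ui) * np.hypot(vx, vy)   # product of vector norms
        s += num / den if den > eps else 0.0        # non-edge points contribute 0
    return s / len(points)

# Perfectly aligned gradients yield s = 1
points = [(0, 0), (1, 0)]
dirs = [(1.0, 0.0), (0.0, 1.0)]
gx = np.array([[1.0, 0.0]])
gy = np.array([[0.0, 1.0]])
s = gradient_similarity(points, dirs, gx, gy, 0, 0)
```

Because each term depends only on gradient direction, not magnitude, the measure is robust to illumination changes, which is the usual motivation for this kind of contour-gradient matching.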
5. The target tracking method based on contour gradients according to claim 1, characterized in that the fast computation method of the similarity in Step 5 performs the convolution operation by means of the Fast Fourier Transform (FFT), transforming the similarity computation into the frequency domain, comprising:
(1) For any multi-scale, multi-angle feature template (p_ik, d_ik)^T, defining two gradient-direction component matrices T_x, T_y of size m × n, where the values of m and n are determined by the maxima and minima of the horizontal and vertical coordinates of p_ik, respectively; with p_ik = (x_ik, y_ik)^T and d_ik = (t_ik, u_ik)^T, assigning values to T_x and T_y according to the coordinates in p_ik: the element in column x_ik, row y_ik of T_x is assigned the value t_ik, and the element in column x_ik, row y_ik of T_y is assigned the value u_ik;
(2) In the same manner as (1), assigning values to obtain the two gradient-direction component matrices O_x, O_y of the target image;
(3) Realizing the convolutional calculation of T_x and O_x by FFT, and calculating the convolution of T_y and O_y in the same way;
(4) According to the equivalence between convolution and the Fast Fourier Transform (FFT), processing the above calculation results to obtain the similarity matrix, i.e. the set of matching similarities s.
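Claim 5's frequency-domain shortcut rests on the convolution theorem: the sliding-window sums of products over T_x, O_x (and likewise T_y, O_y) become element-wise products of their FFTs. A sketch of that correlation via NumPy's FFT follows; the zero-padding scheme and the correlation (conjugate) convention are my assumptions, not taken from the patent.

```python
import numpy as np

def fft_correlate(t, o):
    """Cross-correlate template component matrix t with image component
    matrix o via FFT; returns one value per valid window position."""
    H, W = o.shape
    h, w = t.shape
    size = (H + h - 1, W + w - 1)            # zero-pad to avoid wrap-around
    Ft = np.fft.rfft2(t, s=size)
    Fo = np.fft.rfft2(o, s=size)
    # conj(Ft) * Fo in the frequency domain <=> correlation in space
    full = np.fft.irfft2(np.conj(Ft) * Fo, s=size)
    return full[:H - h + 1, :W - w + 1]      # keep valid positions only

# Matches the direct sliding-window sum of products
t = np.array([[1.0, 2.0], [3.0, 4.0]])
o = np.arange(16.0).reshape(4, 4)
res = fft_correlate(t, o)
```

For large images this replaces the O(HW·hw) spatial scan with O(N log N) transforms, which is exactly the speedup the claim is after; the two correlations for T_x and T_y are then combined per window to form the similarity matrix.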
CN201611239192.7A 2016-12-28 2016-12-28 Target tracking method based on contour gradient Active CN108256394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611239192.7A CN108256394B (en) 2016-12-28 2016-12-28 Target tracking method based on contour gradient


Publications (2)

Publication Number Publication Date
CN108256394A true CN108256394A (en) 2018-07-06
CN108256394B CN108256394B (en) 2020-09-25

Family

ID=62720243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611239192.7A Active CN108256394B (en) 2016-12-28 2016-12-28 Target tracking method based on contour gradient

Country Status (1)

Country Link
CN (1) CN108256394B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218616A (en) * 2013-05-05 2013-07-24 西安电子科技大学 Image outline characteristic extraction method based on Gauss-Hermite special moment
CN105678806A (en) * 2016-01-07 2016-06-15 中国农业大学 Fisher discrimination-based automatic tracking method for tracking behavior trace of live pig


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Petr Dokladal et al.: "Contour-based object tracking with gradient-based contour attraction field", 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) *
朱仲杰 et al.: "A new algorithm for moving object extraction and tracking in object-based video coding", Acta Electronica Sinica *
黄海赟 et al.: "Active contour model based on multi-scale images", Journal of Computer Research and Development *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584250A * 2018-11-28 2019-04-05 哈工大机器人(合肥)国际创新研究院 Robust method for automatic division and marking of visual regions
CN109584250B (en) * 2018-11-28 2022-09-20 哈工大机器人(合肥)国际创新研究院 Robust method for automatically dividing and marking visual region
CN111292350A (en) * 2018-12-10 2020-06-16 北京京东尚科信息技术有限公司 Optimization algorithm, system, electronic device and storage medium of target orientation
CN111292350B (en) * 2018-12-10 2024-03-01 北京京东乾石科技有限公司 Optimization algorithm, system, electronic device and storage medium for target orientation
CN111369599A (en) * 2018-12-25 2020-07-03 阿里巴巴集团控股有限公司 Image matching method, device and apparatus and storage medium
CN111369599B (en) * 2018-12-25 2024-04-16 阿里巴巴集团控股有限公司 Image matching method, device, apparatus and storage medium
CN110070557A * 2019-04-07 2019-07-30 西北工业大学 Target recognition and localization method based on edge feature detection
CN111951211A (en) * 2019-05-17 2020-11-17 株式会社理光 Target detection method and device and computer readable storage medium
CN111951211B (en) * 2019-05-17 2024-05-14 株式会社理光 Target detection method, device and computer readable storage medium
CN110210565A (en) * 2019-06-05 2019-09-06 中科新松有限公司 Normalized crosscorrelation image template matching implementation method
CN112435211B (en) * 2020-09-03 2022-04-26 北京航空航天大学 Method for describing and matching dense contour feature points in endoscope image sequence
CN112435211A (en) * 2020-09-03 2021-03-02 北京航空航天大学 Method for describing and matching dense contour feature points in endoscope image sequence
CN112184785A (en) * 2020-09-30 2021-01-05 西安电子科技大学 Multi-mode remote sensing image registration method based on MCD measurement and VTM
CN112184785B (en) * 2020-09-30 2023-03-24 西安电子科技大学 Multi-mode remote sensing image registration method based on MCD measurement and VTM
CN112014409A (en) * 2020-10-25 2020-12-01 西安邮电大学 Method and system for detecting defects of semiconductor etching lead frame die
CN113091759A (en) * 2021-03-11 2021-07-09 安克创新科技股份有限公司 Pose processing and map building method and device
CN113091759B (en) * 2021-03-11 2023-02-28 安克创新科技股份有限公司 Pose processing and map building method and device
CN113112516A (en) * 2021-04-01 2021-07-13 广东拓斯达科技股份有限公司 Image edge feature library construction method and device, computer equipment and storage medium
CN113139988B (en) * 2021-05-17 2023-02-14 中国科学院光电技术研究所 Image processing method for efficiently and accurately estimating target scale change
CN113139988A (en) * 2021-05-17 2021-07-20 中国科学院光电技术研究所 High-efficiency high-accuracy image processing method for estimating target scale change
CN113538340A (en) * 2021-06-24 2021-10-22 武汉中科医疗科技工业技术研究院有限公司 Target contour detection method and device, computer equipment and storage medium
CN113450378A (en) * 2021-06-28 2021-09-28 河北工业大学 Method for judging contact group difference plane height data matching degree
CN113486769B (en) * 2021-07-01 2024-04-26 珍岛信息技术(上海)股份有限公司 Quick image matching method in high-definition video
CN113486769A (en) * 2021-07-01 2021-10-08 珍岛信息技术(上海)股份有限公司 Method for rapidly matching images in high-definition video
CN113689397A (en) * 2021-08-23 2021-11-23 湖南视比特机器人有限公司 Workpiece circular hole feature detection method and workpiece circular hole feature detection device
CN114842212A (en) * 2022-04-19 2022-08-02 湖南大学 Rapid multi-target multi-angle template matching method
CN114663316A (en) * 2022-05-17 2022-06-24 深圳市普渡科技有限公司 Method for determining an edgewise path, mobile device and computer storage medium
CN115223240B (en) * 2022-07-05 2023-07-07 北京甲板智慧科技有限公司 Motion real-time counting method and system based on dynamic time warping algorithm
CN115223240A (en) * 2022-07-05 2022-10-21 北京甲板智慧科技有限公司 Motion real-time counting method and system based on dynamic time warping algorithm
CN115082472B (en) * 2022-08-22 2022-11-29 江苏东跃模具科技有限公司 Quality detection method and system for hub mold casting molding product
CN115082472A (en) * 2022-08-22 2022-09-20 江苏东跃模具科技有限公司 Quality detection method and system for hub mold casting molding product
CN115131587A (en) * 2022-08-30 2022-09-30 常州铭赛机器人科技股份有限公司 Template matching method of gradient vector features based on edge contour

Also Published As

Publication number Publication date
CN108256394B (en) 2020-09-25

Similar Documents

Publication Publication Date Title
CN108256394A (en) A kind of method for tracking target based on profile gradients
CN104268857B (en) A kind of fast sub-picture element rim detection and localization method based on machine vision
CN109816673B (en) Non-maximum value inhibition, dynamic threshold value calculation and image edge detection method
CN108876816B (en) Target tracking method based on self-adaptive target response
CN108805904B (en) Moving ship detection and tracking method based on satellite sequence image
CN103218605B (en) A kind of fast human-eye positioning method based on integral projection and rim detection
CN105574527B (en) A kind of quick object detecting method based on local feature learning
CN110349207A (en) A kind of vision positioning method under complex environment
CN105261022B (en) PCB board matching method and device based on outer contour
CN103886325B (en) Cyclic matrix video tracking method with partition
CN105279769B (en) A kind of level particle filter tracking method for combining multiple features
CN107451999A (en) foreign matter detecting method and device based on image recognition
CN105447888A (en) Unmanned plane maneuvering target detection method detecting based on effective target
CN104598936A (en) Human face image face key point positioning method
CN106682678B (en) Image corner detection and classification method based on support domain
CN104484868B (en) The moving target of a kind of combination template matches and image outline is taken photo by plane tracking
CN107203973A (en) A kind of sub-pixel positioning method of three-dimensional laser scanning system center line laser center
CN109410248B (en) Flotation froth motion characteristic extraction method based on r-K algorithm
CN109060290B (en) Method for measuring wind tunnel density field based on video and sub-pixel technology
CN106127205A (en) A kind of recognition methods of the digital instrument image being applicable to indoor track machine people
CN104899888A (en) Legemdre moment-based image subpixel edge detection method
CN106529548A (en) Sub-pixel level multi-scale Harris corner detection algorithm
CN108257153B (en) Target tracking method based on direction gradient statistical characteristics
CN107808524A (en) A kind of intersection vehicle checking method based on unmanned plane
Lin et al. A new prediction method for edge detection based on human visual feature

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant