CN109934853A - Correlation filtering tracking based on the fusion of response diagram confidence region self-adaptive features - Google Patents
- Publication number: CN109934853A (application CN201910215879.4A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a correlation filtering tracking method based on adaptive feature fusion over response-map confidence regions. Built on a correlation filtering tracking framework, the method extracts two complementary features, the histogram of oriented gradients (HOG) and the colour histogram, and adaptively sets the fusion parameters of the two features according to the confidence region of the response map under the specific scene of each video frame. By adapting the multi-feature fusion parameters to the response-map confidence region of each frame, the stability of the tracking system is improved.
Description
Technical field
The present invention relates to the field of video filtering and tracking, and in particular to a correlation filtering tracking method based on adaptive feature fusion over response-map confidence regions.
Background technique
Video object tracking is one of the research hotspots of computer vision; its purpose is to estimate the position of a target in a video image sequence. The technology plays a highly important role in video surveillance, human-computer interaction, robotics, driverless cars, and many other fields. Real-time performance and stability are the two main goals of a target tracking system. The convolution theorem shows that a time-consuming convolution operation can be converted into an elementwise product in the Fourier domain. Correlation filtering techniques based on this principle have been introduced into target tracking, and their high processing speed meets the real-time requirement of tracking systems. However, owing to interference from many factors in complex scenes, such as illumination variation, appearance deformation, partial occlusion, fast motion, motion blur, and similar backgrounds, video object tracking remains a challenging task.
Feature extraction is one of the vital links in a target tracking system. To cope with the interference of various complex-scene factors, multi-feature fusion has become the mainstream approach to feature extraction, and the way multiple features are fused affects the stability of the tracking system. At present, mainstream correlation filtering trackers based on multi-feature fusion mostly use fixed weights to fuse the features. However, in real complex scenes the target appearance and the surrounding environment are constantly changing, and a fixed-weight fusion scheme cannot adapt to the particular scene of each video frame.
Shortcomings of the existing fixed-weight fusion scheme:
At present, mainstream correlation filtering trackers based on multi-feature fusion mostly use fixed weights to fuse the features. Taking the Staple tracker as an example, the following experiment was performed on its fixed weights: fusing a particular video sequence with different fixed weight ratios produces different tracking results. The Shaking sequence was tracked with different fixed fusion weights, taking colour-histogram weights of 0.3 and 0.2 as examples. At frame 55, where the illumination change is small, both fusion weights track accurately. However, at frame 60 the tracked target undergoes a drastic illumination change; since the colour histogram is very sensitive to such changes, a fusion ratio of 0.3 lets the colour histogram accumulate a large effect on the fused response map, which then exhibits a multi-peak shape, the position of the maximum response no longer coincides with the correct target position, and tracking fails. When the colour-histogram fusion ratio is reduced to 0.2, the fused response map still shows a single sharp peak under the drastic illumination change of frame 60, and the position of the maximum response remains the correct position of the tracked target.
Within the same video sequence, different video frames also have different scene characteristics. When the target is under illumination variation, the colour histogram is sensitive to light while the HOG feature is not, so the fusion ratio of the HOG response map can be increased; when the target undergoes severe deformation, the colour histogram, being a global feature, is affected very little, while the confidence of the HOG response map drops, so the fusion ratio of the colour histogram can be increased.
Therefore, in a target tracking system for real complex scenes, where the target and the background change constantly, a fixed-weight fusion scheme is not only hard to adapt uniformly to different video sequences, but also hard to adapt uniformly to different frames of the same sequence. For different sequences, and for different frames within one sequence, multi-feature fusion should use an adaptive scheme suited to the specific scene. Accordingly, the present invention, based on a correlation filtering tracking framework, proposes a method for adaptive multi-feature fusion based on response-map confidence regions, so as to improve the stability of correlation filtering tracking systems.
Summary of the invention
Addressing the shortcomings of the existing fixed-weight fusion scheme, the present invention, based on a correlation filtering tracking framework, proposes a method for adaptive multi-feature fusion that sets the fusion parameters of multiple features according to the response-map confidence region of each video frame, so as to improve the stability of the tracking system.
To achieve the above objectives, the technical solution of the present invention is as follows:
A correlation filtering tracking method based on adaptive feature fusion over response-map confidence regions: based on a correlation filtering tracking framework, feature extraction is performed with two complementary features, the histogram of oriented gradients (HOG) and the colour histogram, and the fusion parameters of the two features are set adaptively according to the confidence region of the response map under the specific scene of each video frame.
Further, the method includes the following steps:
Step 1: input the first frame
Set the upper-left corner of each frame in the video sequence as the coordinate origin (1, 1); the width and height are Width and Height respectively. Manually or automatically select the rectangular region (x0, y0, w0, h0) of the target to be tracked in the first frame, i.e. the selected tracking target, where (x0, y0) is the upper-left corner of the rectangle and w0, h0 are its width and height. The first-frame selection is also called the current-frame tracking result (x1, y1, w1, h1) = (x0, y0, w0, h0), where the subscript denotes the current frame number;
Step 2: initialize the target template
2.1 Calculate the search window;
2.2 Generate the standard Gaussian response map;
2.3 Extract the histogram of oriented gradients (HOG) feature;
2.4 Calculate the correlation filter template of the HOG feature;
2.5 Extract the colour histogram feature templates;
Step 3: input the next frame and extract features
Calculate the current-frame search window Search(t) by the method of step 2.1, and extract the frequency-domain representation F_t of the current-frame histogram of oriented gradients (HOG) feature as described in step 2.3;
Extract the current-frame colour histogram features bg_hist_t and fg_hist_t by the method of step 2.5; map each pixel of the search-window image to its histogram bin value, and, combining the standard target window size with the previous-frame colour histogram features bg_hist_{t-1} and fg_hist_{t-1}, calculate the similarity map L_t between the current-frame colour histogram features and the colour histogram templates; its size is the same as that of the response map G;
Step 4: adaptive feature fusion
4.1 Calculate the adaptive fusion parameters
Let E(G_t) denote the expectation of the HOG response map G_t, i.e. the mean of its elements, computed as in formula (1):
Let Gh_t denote the confidence region of the HOG response map G_t; the element Gh_t(i, j) at position (i, j) is computed as in formula (2):
The adaptive fusion parameter α of the HOG response map G_t is computed as in formula (3):
The adaptive fusion parameter of the colour-histogram similarity map L_t is 1 − α;
4.2 Adaptive feature fusion
Let GL denote the result of the adaptive fusion of the HOG response map G_t and the colour-histogram similarity map L_t; it is computed as in formula (4):
GL = α × G_t + (1 − α) × L_t    (4)
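Formulas (1)–(3) appear only as images in the source text, so the α computation below is one plausible reading, not the patent's: E(G_t) is taken as the mean of the response map, the confidence region is assumed to keep the above-mean responses, and α is assumed to be the share of response mass inside it. Only formula (4) is implemented exactly as stated:

```python
import numpy as np

def adaptive_fuse(G, L):
    """Fuse HOG response map G and colour-histogram similarity map L.

    Formulas (1)-(3) are not reproduced in the source text; the alpha
    computation here is a hypothetical reading (assumptions): confidence
    region = above-mean responses, alpha = its share of the response mass.
    Formula (4) is the fusion rule given in the patent.
    """
    EG = G.mean()                        # expectation of the response map (formula (1))
    Gh = np.where(G > EG, G, 0.0)        # assumed confidence region (formula (2))
    alpha = Gh.sum() / G.sum()           # assumed fusion weight (formula (3))
    GL = alpha * G + (1.0 - alpha) * L   # formula (4), as given
    return GL, alpha
```

With a higher-confidence HOG map (a single dominant peak), α grows and the fusion leans on the HOG response; a flat map shrinks α toward the colour-histogram map.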
Step 5: determine the tracking result
Step 6: update the target template
Step 7: if the current frame is the last frame, tracking ends; otherwise, go to step 3.
Further, step 2 specifically comprises the following steps:
2.1 Calculate the search window
From the rectangular region corresponding to the previous-frame (frame t−1) tracking result (x_{t-1}, y_{t-1}, w_{t-1}, h_{t-1}), the search window Search(t) of the current-frame (frame t) candidate targets can be calculated; in particular, the first-frame search window is calculated from (x0, y0, w0, h0). The centre of the search window is (x_s_t, y_s_t), where x_s_t = x_{t-1} + w_{t-1}/2 and y_s_t = y_{t-1} + h_{t-1}/2, and its width and height are w_s_t = 1.5 × w_{t-1} + 0.5 × h_{t-1} and h_s_t = 1.5 × h_{t-1} + 0.5 × w_{t-1}. To ensure the search region lies within the video frame, the width and height of the search window are further corrected by its intersection with the current-frame region. To facilitate the subsequent colour histogram computation, the distance between the search-window boundary and the true target boundary is constrained to be even, and the width and height of the search window are corrected accordingly.
Let the width and height of the standardization window NormWin be w_n and h_n respectively; the transformation factor of the search window is then γ = sqrt((w_n × h_n)/(w_s_t × h_s_t)). The search-window image can be standardized by this factor to form the standard search window, with width and height w_sn_t = w_s_t × γ and h_sn_t = h_s_t × γ; the width and height of the current-frame standard target window are w_on_t = w_sn_t × 0.75 − h_sn_t × 0.25 and h_on_t = h_sn_t × 0.75 − w_sn_t × 0.25.
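The window arithmetic of step 2.1 can be checked numerically. The transformation factor γ appears only as an image in the source; γ = sqrt((w_n × h_n)/(w_s_t × h_s_t)) is inferred from the numbers of embodiment 1 (it reproduces γ ≈ 1.1372 for the Shaking first frame) and should be read as an assumption. A minimal sketch:

```python
import math

def search_window(x, y, w, h, w_n=150, h_n=150):
    """Search window and standardised sizes for a previous-frame box (x, y, w, h).

    gamma = sqrt(w_n*h_n / (w_s*h_s)) is an assumption inferred from the
    numerical example in embodiment 1 (not shown explicitly in the text).
    """
    x_s, y_s = x + w / 2, y + h / 2     # search-window centre
    w_s = 1.5 * w + 0.5 * h             # search-window width
    h_s = 1.5 * h + 0.5 * w             # search-window height
    gamma = math.sqrt((w_n * h_n) / (w_s * h_s))
    w_sn, h_sn = round(w_s * gamma), round(h_s * gamma)   # standard search window
    w_on = w_sn * 0.75 - h_sn * 0.25    # standard target window width
    h_on = h_sn * 0.75 - w_sn * 0.25    # standard target window height
    return (x_s, y_s, w_s, h_s), gamma, (w_sn, h_sn), (w_on, h_on)
```

For the embodiment's first frame (225, 135, 61, 71) this yields w_s = 127, h_s = 137, γ ≈ 1.1372, standard window 144 × 156 and standard target window 69 × 81, matching the values quoted in embodiment 1.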
2.2 Generate the standard Gaussian response map
The standard Gaussian response map g is a two-dimensional matrix of width w_g = w_sn_t / cell and height h_g = h_sn_t / cell; its elements are values of the probability density function of the two-dimensional Gaussian distribution N(0, 0, δ, δ, 0), where δ is the standard deviation of the distribution, cell denotes that each grid in the HOG feature extraction is of size cell × cell, and (i, j) is the element coordinate in the Gaussian response matrix, with the origin at the matrix centre. Applying the Fourier transform to the standard Gaussian response map yields its frequency-domain representation G, of the same size as g.
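Step 2.2 can be sketched as follows, under stated assumptions: the δ formula and the Gaussian density appear only as images in the source, so δ = sqrt(w_on_t × h_on_t)/(16 × cell) is inferred from the embodiment value δ = 1.168 (it also coincides with Staple's output_sigma_factor of 1/16), and the density of N(0, 0, δ, δ, 0) is written out as the standard bivariate normal pdf:

```python
import numpy as np

def gaussian_response(w_sn, h_sn, w_on, h_on, cell=4):
    """Standard Gaussian response map g and its frequency-domain form G.

    Assumptions: delta = sqrt(w_on*h_on)/(16*cell), inferred from the
    embodiment value 1.168; matrix elements are the N(0,0,delta,delta,0)
    pdf with the origin at the matrix centre, as the text states.
    """
    w_g, h_g = w_sn // cell, h_sn // cell
    delta = np.sqrt(w_on * h_on) / (16 * cell)
    i = np.arange(w_g) - w_g // 2          # coordinates centred on the matrix
    j = np.arange(h_g) - h_g // 2
    ii, jj = np.meshgrid(i, j, indexing="ij")   # shape (w_g, h_g)
    g = np.exp(-(ii**2 + jj**2) / (2 * delta**2)) / (2 * np.pi * delta**2)
    G = np.fft.fft2(g)                     # frequency-domain representation
    return g, G, delta
```

With the embodiment sizes (w_sn = 144, h_sn = 156, w_on = 69, h_on = 81, cell = 4) this gives a 36 × 39 map peaked at the centre and δ ≈ 1.168.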
2.3 Extract the histogram of oriented gradients (HOG) feature
With cell as the HOG grid size parameter, a 2 × 2 grid as the block size, and the histogram bin spacing set to 2π/7, the HOG feature f_t, of size w_g × h_g × 28, is extracted in the current-frame standard search window. A cosine window of size w_g × h_g is used to smooth f_t, and a Fourier transform then yields the frequency-domain representation F_t of the HOG feature, of the same size as f_t.
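The cosine-window smoothing of step 2.3 can be sketched as follows. The text only says "cosine window of size w_g × h_g", so the separable Hann window used here is an assumption, and the all-ones array stands in for a real 28-channel HOG feature map:

```python
import numpy as np

def smooth_hog(f):
    """Apply a cosine window to a HOG feature map f of shape
    (w_g, h_g, channels), then Fourier-transform each channel (step 2.3).

    Assumption: 'cosine window' is taken as a separable Hann window,
    applied identically to all channels.
    """
    w_g, h_g = f.shape[:2]
    win = np.outer(np.hanning(w_g), np.hanning(h_g))  # w_g x h_g cosine window
    f_s = f * win[:, :, None]                         # damp borders, keep centre
    F = np.fft.fft2(f_s, axes=(0, 1))                 # per-channel frequency domain
    return f_s, F
```

The window drives boundary values to zero while leaving the centre almost untouched, which suppresses the wrap-around artefacts of the circular correlation.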
2.4 Calculate the correlation filter template of the HOG feature
Given the frequency-domain representation F_t of the standard-search-window HOG feature and the frequency-domain representation G of the standard Gaussian response map, the frequency-domain representation H_t of the HOG correlation filter template can be calculated by the formula H_t = G/F_t.
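Steps 2.4 and 3 are inverse operations in the frequency domain: dividing G by F gives the template H, and detection recovers the response via the elementwise product F ⊙ H. A single-channel numpy sketch with random stand-ins (shapes from the embodiment, w_g = 36, h_g = 39; practical correlation filters regularise the denominator, which the text does not mention):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: one HOG channel f and a Gaussian-like response map g.
f = rng.standard_normal((36, 39))
g = rng.random((36, 39))

F = np.fft.fft2(f)        # frequency-domain feature
G = np.fft.fft2(g)        # frequency-domain Gaussian response
H = G / F                 # filter template, H_t = G / F_t (step 2.4)
response = F * H          # detection, G_t = F_t ⊙ H_{t-1} (step 3)
```

By construction the detection response on the training frame equals G exactly, which is the self-consistency the two formulas encode.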
2.5 Extract the colour histogram feature templates
Within the search window Search(t) = (x_s_t, y_s_t, w_s_t, h_s_t), the region outside (x_{t-1}, y_{t-1}, w_{t-1}, h_{t-1}) is defined as the background region, and the target region shrunk inward by a certain amount is defined as the foreground region, with the same centre as the target region and both width and height shrunk by (w_{t-1} + h_{t-1})/10. The background colour histogram bg_hist_t and the foreground colour histogram fg_hist_t are extracted in the background and foreground regions respectively and serve as the current-frame colour histogram feature templates.
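A sketch of step 2.5, under stated assumptions: pixel coordinates are treated as 0-based, the background is taken over the whole image passed in (standing in for the search-window crop), and the bin layout (32 bins per colour channel) is not specified in the text:

```python
import numpy as np

def colour_hist_templates(frame, target, shrink, n_bins=32):
    """Foreground/background colour histogram templates (step 2.5).

    frame: H x W x 3 uint8 image (stand-in for the search-window crop);
    target: (x, y, w, h), 0-based here (assumption); shrink: total inward
    shrink of width and height for the foreground region. The 32-bins-per-
    channel quantisation is an assumption, not stated in the text.
    """
    x, y, w, h = target
    # quantise each pixel's (R, G, B) triple to a single bin index
    q = (frame // (256 // n_bins)).astype(np.int64)
    idx = q[..., 0] * n_bins * n_bins + q[..., 1] * n_bins + q[..., 2]
    fg_mask = np.zeros(frame.shape[:2], dtype=bool)
    bg_mask = np.ones(frame.shape[:2], dtype=bool)
    s = int(round(shrink / 2))                        # shrink per side
    fg_mask[y + s:y + h - s, x + s:x + w - s] = True  # shrunken target = foreground
    bg_mask[y:y + h, x:x + w] = False                 # outside the target = background
    fg_hist = np.bincount(idx[fg_mask], minlength=n_bins**3).astype(float)
    bg_hist = np.bincount(idx[bg_mask], minlength=n_bins**3).astype(float)
    return fg_hist / fg_hist.sum(), bg_hist / bg_hist.sum()
```

Shrinking the foreground box keeps mixed boundary pixels out of the foreground model, which is why the text defines the foreground as an indented copy of the target region.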
Further, in step 3, the response map G_t of the current-frame HOG feature is calculated by the formula G_t = F_t ⊙ H_{t-1}.
Further, step 5 specifically includes: each element of the adaptive fusion result matrix GL represents the probability that the corresponding candidate target in the search window is the tracking result, so the candidate target corresponding to the largest element is the tracking result.
The number of candidate targets in the current-frame search window is (w_sn_t − w_on_t) × (h_sn_t − h_on_t). Let GL_max, x_GLmax and y_GLmax denote the largest element of the matrix GL and its horizontal and vertical coordinates; the current-frame tracking result is then (x_t, y_t, w_t, h_t), where w_t = w_{t-1}, h_t = h_{t-1}, x_t = x_{t-1} + (x_GLmax − (w_sn_t − w_on_t)/2)/γ − w_t/2 and y_t = y_{t-1} + (y_GLmax − (h_sn_t − h_on_t)/2)/γ − h_t/2, with γ the transformation factor of the search window.
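The localisation formula of step 5 can be sketched directly; the only assumption is that the first matrix axis of GL corresponds to the horizontal coordinate x_GLmax:

```python
import numpy as np

def locate_target(GL, prev, gamma, w_sn, h_sn, w_on, h_on):
    """Map the peak of the fused map GL back to image coordinates (step 5).

    Implements the displacement formula given in the text; width and height
    are carried over unchanged from the previous frame. Axis convention
    (first axis = x) is an assumption.
    """
    x_prev, y_prev, w, h = prev
    x_gl, y_gl = np.unravel_index(np.argmax(GL), GL.shape)  # peak position
    x = x_prev + (x_gl - (w_sn - w_on) / 2) / gamma - w / 2
    y = y_prev + (y_gl - (h_sn - h_on) / 2) / gamma - h / 2
    return (x, y, w, h)
```

A peak exactly at the centre of the candidate grid, (w_sn − w_on)/2, leaves the box centre where it was, shifted only by the −w/2 corner correction.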
Further, step 6 specifically includes:
Calculate the search window Search'(t) from the position of the current-frame tracking result (x_t, y_t, w_t, h_t) by the method of step 2.1, extract the frequency-domain representation F_t' of the histogram of oriented gradients (HOG) feature within Search'(t) by the method of step 2.3, and calculate H_t' = G/F_t' by the method of step 2.4. Let η be the update parameter; the current-frame HOG correlation filter template H_t is updated as in formula (5):
H_t = (1 − η)H_{t-1} + ηH_t'    (5)
Extract the colour histogram features bg_hist_t' and fg_hist_t' from the position of the current-frame tracking result (x_t, y_t, w_t, h_t) by the method of step 2.5. Let θ and β be update parameters; the current-frame background and foreground colour histogram templates are updated as in formulas (6) and (7):
bg_hist_t = (1 − θ) × bg_hist_{t-1} + θ × bg_hist_t'    (6)
fg_hist_t = (1 − β) × fg_hist_{t-1} + β × fg_hist_t'    (7)
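The linear-interpolation updates of formulas (5)–(7) are straightforward; a sketch with the update rates of embodiment 1 (η = 0.01, θ = β = 0.04) as defaults:

```python
import numpy as np

def update_templates(H_prev, H_new, bg_prev, bg_new, fg_prev, fg_new,
                     eta=0.01, theta=0.04, beta=0.04):
    """Running-average template updates, formulas (5)-(7).

    eta, theta, beta are the update rates; the defaults are the values
    used in embodiment 1. Works elementwise on scalars or numpy arrays.
    """
    H = (1 - eta) * H_prev + eta * H_new          # formula (5)
    bg = (1 - theta) * bg_prev + theta * bg_new   # formula (6)
    fg = (1 - beta) * fg_prev + beta * fg_new     # formula (7)
    return H, bg, fg
```

Small rates make the templates an exponential moving average over past frames, so a single bad frame perturbs them only slightly.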
Compared with the prior art, the invention has the following benefits:
The invention proposes a correlation filtering tracking method based on adaptive feature fusion over response-map confidence regions. Based on a correlation filtering tracking framework, the method extracts two complementary features, the histogram of oriented gradients (HOG) and the colour histogram, and adaptively sets the fusion parameters of the two features according to the confidence region of the response map under the specific scene of each video frame. Compared with correlation filtering trackers that fuse features with preset parameters, it achieves a more stable tracking effect.
Detailed description of the invention
Fig. 1 is the flow diagram of the method for the present invention.
Fig. 2 shows the selection of the first-frame tracking region for the skiing sequence.
Fig. 3 is a schematic diagram of the tracking results of embodiment 1 of the present invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments:
Embodiment 1: as shown in Figs. 1-3.
According to the technical solution of the present invention, a correlation filtering tracking method based on adaptive feature fusion over response-map confidence regions is applied to the Shaking video sequence, which carries five challenge attributes: illumination variation, scale variation, background clutter, out-of-plane rotation and in-plane rotation. The steps are as follows:
Step 1. Input the first frame
The Shaking video is chosen; the width and height of its frames are Width = 624 and Height = 352. The rectangular region (225, 135, 61, 71) of the target to be tracked is selected in the first frame; the selected tracking target is shown as the blue rectangle in Fig. 2, where (225, 135) is the upper-left corner of the rectangle and (61, 71) its width and height.
Step 2. Initialize the target template
2.1 Calculate the search window
From the rectangular region corresponding to the previous-frame (frame t−1) tracking result (x_{t-1}, y_{t-1}, w_{t-1}, h_{t-1}), the search window Search(t) of the current-frame (frame t) candidate targets can be calculated; in particular, the first-frame search window is calculated from (225, 135, 61, 71). Taking the first frame as an example, the centre of the search window is (256, 171) and its width and height are w_s1 = 127 and h_s1 = 137. To ensure the search region lies within the video frame, the width and height of the search window are further corrected by its intersection with the current-frame region. To facilitate the subsequent colour histogram computation, the distance between the search-window boundary and the true target boundary is constrained to be even, and the width and height of the search window are corrected accordingly.
Let the width and height of the standardization window NormWin be w_n = 150 and h_n = 150; the transformation factor of the search window is then γ = sqrt((150 × 150)/(127 × 137)) ≈ 1.1372. The search-window image can be standardized by this factor to form the standard search window. Taking the first frame as an example, its width and height are w_sn1 = 144 and h_sn1 = 156, and the width and height of the current-frame standard target window are w_on1 = 69 and h_on1 = 81.
2.2 Generate the standard Gaussian response map
The standard Gaussian response map g is a two-dimensional matrix; with cell = 4, its width and height are w_g = 36 and h_g = 39, and its elements are values of the probability density function of the two-dimensional Gaussian distribution N(0, 0, δ, δ, 0), where δ = 1.168 is the standard deviation of the distribution, cell denotes that each grid in the HOG feature extraction is of size cell × cell, and (i, j) is the element coordinate in the Gaussian response matrix, with the origin at the matrix centre. Applying the Fourier transform to the standard Gaussian response map yields its frequency-domain representation G, of the same size as g.
2.3 Extract the histogram of oriented gradients (HOG) feature
With cell as the HOG grid size parameter, a 2 × 2 grid as the block size, and the histogram bin spacing set to 2π/7, the HOG feature f1, of size 36 × 39 × 28, is extracted in the current-frame standard search window. A cosine window of size 36 × 39 is used to smooth f1, and a Fourier transform then yields the frequency-domain representation F1 of the HOG feature, of the same size as f1.
2.4 Calculate the correlation filter template of the HOG feature
Given the frequency-domain representation F1 of the standard-search-window HOG feature and the frequency-domain representation G of the standard Gaussian response map, the frequency-domain representation H1 of the HOG correlation filter template can be calculated by the formula H1 = G/F1.
2.5 Extract the colour histogram feature templates
Taking the first frame as an example, within the search window Search(1) = (256, 171, 127, 137) the region outside (225, 135, 61, 71) is defined as the background region, and the target region shrunk inward by a certain amount is defined as the foreground region, with the same centre as the target region and both width and height shrunk inward by 6.6. The background colour histogram bg_hist1 and the foreground colour histogram fg_hist1 are extracted in the background and foreground regions respectively and serve as the current-frame colour histogram feature templates.
Step 3. Input the next frame and extract features
Calculate the current-frame search window Search(2) by the method of step 2.1, and extract the frequency-domain representation F2 of the current-frame histogram of oriented gradients (HOG) feature by the method of step 2.3; the response map G2 of the current-frame HOG feature can then be calculated by the formula G2 = F2 ⊙ H1.
Extract the current-frame colour histogram features bg_hist2 and fg_hist2 by the method of step 2.5; map each pixel of the search-window image to its histogram bin value, and, combining the standard target window size with the previous-frame colour histogram features bg_hist1 and fg_hist1, calculate the similarity map L2 between the current-frame colour histogram features and the colour histogram templates; its size is the same as that of the response map G.
Step 4. Adaptive feature fusion
4.1 Calculate the adaptive fusion parameters
Let E(G2) denote the expectation of the HOG response map G2; it is calculated by the following formula as E(G2) = 0.1168:
Let Gh2 denote the confidence region of the HOG response map G2; the element Gh2(i, j) at position (i, j) is computed by the following formula:
The adaptive fusion parameter of the HOG response map G2 is computed by the following formula as α = 0.6845:
The adaptive fusion parameter of the colour-histogram similarity map L2 is 1 − α = 0.3155.
4.2 Adaptive feature fusion
Let GL denote the result of the adaptive fusion of the HOG response map G2 and the colour-histogram similarity map L2; it is computed as:
GL = 0.6845 × G2 + 0.3155 × L2
Step 5. Determine the tracking result
Each element of the adaptive fusion result matrix GL represents the probability that the corresponding candidate target in the search window is the tracking result, so the candidate target corresponding to the largest element is the tracking result.
The number of candidate targets in the current-frame search window is 75 × 75. Let GL_max, x_GLmax = 42 and y_GLmax = 38 denote the largest element of the matrix GL and its horizontal and vertical coordinates; the current-frame tracking result is then (225, 139, 61, 71), where x2 = 225, y2 = 139, w2 = 61, h2 = 71, and γ = 1.1372 is the transformation factor of the search window.
Step 6. Update the target template
Calculate the search window Search'(2) from the position of the current-frame tracking result (225, 139, 61, 71) by the method of step 2.1, extract the frequency-domain representation F2' of the histogram of oriented gradients (HOG) feature within Search'(2) by the method of step 2.3, and calculate H2' = G/F2' by the method of step 2.4. With update parameter η = 0.01, the current-frame HOG correlation filter template H2 is updated as:
H2 = (1 − 0.01) × H1 + 0.01 × H2'
Extract the colour histogram features bg_hist2' and fg_hist2' from the position of the current-frame tracking result (225, 139, 61, 71) by the method of step 2.5. With update parameters θ = β = 0.04, the current-frame background and foreground colour histogram templates are updated as:
bg_hist2 = (1 − 0.04) × bg_hist1 + 0.04 × bg_hist2'
fg_hist2 = (1 − 0.04) × fg_hist1 + 0.04 × fg_hist2'
Step 7. If the current frame is the last frame, tracking ends; otherwise, go to step 3.
Finally, the hardware environment of the embodiment of the present invention is a computer with an Intel Core i7-6700 CPU at 3.4 GHz and 8 GB of memory. The success rate of the final tracking results reaches 97.8%, and screenshots of some tracking results are shown in Fig. 3, where the red (dark) box indicates the ground-truth position marked in the video, the green (light) box indicates the tracking result of the present invention, and the blue dashed box indicates the search region.
The above is only a specific embodiment of the present invention, but the scope of protection of the present invention is not limited thereto; any change or replacement conceivable without creative work shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be determined by the scope of protection defined in the claims.
Claims (6)
1. A correlation filtering tracking method based on adaptive feature fusion over response-map confidence regions, characterized in that, based on a correlation filtering tracking framework, feature extraction is performed with two complementary features, the histogram of oriented gradients (HOG) and the colour histogram, and the fusion parameters of the two features are set adaptively according to the confidence region of the response map under the specific scene of each video frame.
2. The method according to claim 1, characterized by comprising the following steps:
Step 1: input the first frame
Set the upper-left corner of each frame in the video sequence as the coordinate origin (1, 1); the width and height are Width and Height respectively. Manually or automatically select the rectangular region (x0, y0, w0, h0) of the target to be tracked in the first frame, i.e. the selected tracking target, where (x0, y0) is the upper-left corner of the rectangle and w0, h0 are its width and height. The first-frame selection is also called the current-frame tracking result (x1, y1, w1, h1) = (x0, y0, w0, h0), where the subscript denotes the current frame number;
Step 2: initialize the target template
2.1 Calculate the search window;
2.2 Generate the standard Gaussian response map;
2.3 Extract the histogram of oriented gradients (HOG) feature;
2.4 Calculate the correlation filter template of the HOG feature;
2.5 Extract the colour histogram feature templates;
Step 3: input the next frame and extract features
Calculate the current-frame search window Search(t) by the method of step 2.1, and extract the frequency-domain representation F_t of the current-frame histogram of oriented gradients (HOG) feature as described in step 2.3;
Extract the current-frame colour histogram features bg_hist_t and fg_hist_t by the method of step 2.5; map each pixel of the search-window image to its histogram bin value, and, combining the standard target window size with the previous-frame colour histogram features bg_hist_{t-1} and fg_hist_{t-1}, calculate the similarity map L_t between the current-frame colour histogram features and the colour histogram templates, whose size is the same as that of the response map G;
Step 4: adaptive feature fusion
4.1 Calculate the adaptive fusion parameters
Let E(G_t) denote the expectation of the HOG response map G_t, i.e. the mean of its elements, computed as in formula (1):
Let Gh_t denote the confidence region of the HOG response map G_t; the element Gh_t(i, j) at position (i, j) is computed as in formula (2):
The adaptive fusion parameter α of the HOG response map G_t is computed as in formula (3):
The adaptive fusion parameter of the colour-histogram similarity map L_t is 1 − α;
4.2 Adaptive feature fusion
Let GL denote the result of the adaptive fusion of the HOG response map G_t and the colour-histogram similarity map L_t; it is computed as in formula (4):
GL = α × G_t + (1 − α) × L_t    (4)
Step 5: determine the tracking result
Step 6: update the target template
Step 7: if the current frame is the last frame, tracking ends; otherwise, go to step 3.
3. method according to claim 2, which is characterized in that the step 2 specifically comprises the following steps:
1 calculates search window
According to previous frame, that is, t-1 frame tracking result (xt-1,yt-1,wt-1,ht-1) corresponding rectangular area can calculate present frame
That is the search window Search (t) of t frame candidate target, particularly, first frame search window is then according to (x0,y0,w0,h0) meter
It calculates;The central point of search window is (x_st,y_st), wherein x_st=xt-1+wt-1/2、y_st=yt-1+ht-1/ 2, wide high difference
For w_st=1.5 × wt-1+0.5×ht-1、h_st=1.5 × ht-1+0.5×wt-1;In order to ensure search area is in video frame range
It is interior, it is high according further to the width of the intersection amendment search window in the search area and present frame region;For convenient for subsequent calculating face
Color Histogram feature, limiting the distance between boundary and true object boundary of search window is even number, and further amendment is searched
The width of window is high;
Enabling the width of standardization window NormWin high is respectively w_n and h_n, then the transformation factor of search window isSearch window image can be standardized transformation according to search window transformation factor and form standard
Search window, a height of w_sn of widtht=w_st×γ、h_snt=h_stThe width of × γ, the standard target window of present frame are a height of
w_ont=w_snt×0.75-h_snt×0.25、h_ont=h_snt×0.75-w_snt×0.25;
2 generate standard gaussian response diagram
Standard gaussian response diagram g is a two-dimensional matrix, a height of w_g=w_sn of widtht/ cell, h_g=h_snt/ cell, square
Battle array element value is the probability density function for meeting (0,0, δ, δ, 0) dimensional gaussian distribution N, can be according toFormula into
Row calculates;Wherein, δ indicates that the standard deviation of dimensional gaussian distribution, calculation method areCell is indicated
The size of each grid is cell × cell in HOG characteristic extraction procedure, and (i, j) indicates the element coordinate of Gaussian response figure matrix
Position, origin are located at the central point of matrix;Standard gaussian response diagram is subjected to available its frequency domain representation G of Fourier transform,
Itself and the same size of g;
3 extract histograms of oriented gradients (HOG) feature
Using cell as HOG feature grid dimensional parameters, 2 × 2 grid as block size, set of histograms away from bin be set as 2 π/
7, HOG feature f is extracted in present frame standardization search windowt, having a size of w_g × h_g × 28;Using having a size of w_g ×
The Cosine Window of h_g is to feature ftIt is smoothed, then carries out Fourier transformation and obtain the frequency domain representation F of HOG featuret, with ft
Same size;
2.4 Calculate the correlation-filter template of the HOG feature
Given the frequency-domain representation Ft of the standardized search-window HOG feature and the frequency-domain representation G of the standard Gaussian response map, the frequency-domain representation Ht of the HOG-feature correlation-filter template is calculated according to the formula Ht=G/Ft;
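Step 2.4 is a single element-wise division in the frequency domain. The small eps guard below is our own addition to avoid division by zero in this sketch, not part of the claim.

```python
import numpy as np

def filter_template(G, F, eps=1e-8):
    """Correlation-filter template H_t = G / F_t (element-wise), step 2.4.

    eps is an illustrative regularizer, not part of the claimed formula.
    """
    return G / (F + eps)
```

By construction, correlating the training feature with this template reproduces the Gaussian response: F ⊙ H ≈ G.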
2.5 Extract the color-histogram feature templates
Within the search window Search(t)=(x_st, y_st, w_st, h_st), the region outside the target region (xt-1, yt-1, wt-1, ht-1) is defined as the background region, and the target region shrunk inward by a certain amount is defined as the foreground region, whose centre point coincides with that of the target region and whose width and height are each indented by (wt-1+ht-1)/10; extract the background color histogram bg_histt and the foreground color histogram fg_histt in the background region and the foreground region respectively, as the current-frame color-histogram feature templates.
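A sketch of the foreground/background histogram extraction of step 2.5, assuming a single-channel search-window crop and an illustrative bin count; the shrink amount (w+h)/10 follows the claim, while the bin count and single-channel simplification are our assumptions.

```python
import numpy as np

def fg_bg_histograms(image, target, shrink, n_bins=32):
    """Foreground/background colour-histogram templates (step 2.5).

    image: search-window crop (2-D array, one channel for simplicity);
    target: (x, y, w, h) target rectangle inside the crop;
    shrink: per-side inward indent, (w + h) / 10 per the claim.
    """
    x, y, w, h = target
    s = int(round(shrink))
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[y:y + h, x:x + w] = True                     # target region
    fg = np.zeros_like(mask)
    fg[y + s:y + h - s, x + s:x + w - s] = True       # shrunken foreground
    bins = np.linspace(0, 256, n_bins + 1)
    fg_hist = np.histogram(image[fg], bins=bins)[0].astype(float)
    bg_hist = np.histogram(image[~mask], bins=bins)[0].astype(float)
    fg_hist /= max(fg_hist.sum(), 1)                  # normalize to pdfs
    bg_hist /= max(bg_hist.sum(), 1)
    return fg_hist, bg_hist
```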
4. The method according to claim 2, wherein in step 3 the response map Gt of the current-frame HOG feature is calculated according to the formula Gt=Ft⊙Ht-1.
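A minimal sketch of this detection formula: the element-wise product in the frequency domain, followed by an inverse FFT to obtain the spatial response. The channel summation and the real-part extraction are our reading of how a multi-channel response map is used, not spelled out in this claim.

```python
import numpy as np

def hog_response(F_t, H_prev):
    """Current-frame HOG response map G_t = F_t elementwise-times H_{t-1}
    (claim 4), returned in the spatial domain."""
    G_t = F_t * H_prev
    if G_t.ndim == 3:
        G_t = G_t.sum(axis=2)        # sum feature channels (assumption)
    return np.real(np.fft.ifft2(G_t))
```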
5. The method according to claim 2, wherein step 5 specifically comprises: each element value of the adaptive-feature-fusion result matrix GL represents the probability that the corresponding candidate target in the search window is the tracking result, so the candidate target corresponding to the largest element value is the tracking result;
The number of candidate targets in the current-frame search window is (w_snt-w_ont)×(h_snt-h_ont); let GLmax, x_GLmax and y_GLmax denote the largest element value in the adaptive-feature-fusion result matrix GL and its horizontal and vertical coordinate positions respectively; the current-frame tracking result is then (xt, yt, wt, ht), where wt=wt-1, ht=ht-1, xt=xt-1+(x_GLmax-(w_snt-w_ont)/2)/γ-wt/2, yt=yt-1+(y_GLmax-(h_snt-h_ont)/2)/γ-ht/2, and γ is the transformation factor of the search window.
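The coordinate-recovery formulas of this claim can be sketched as follows (the argmax search and the row/column-to-(x, y) convention are our interpretation; the arithmetic matches the claimed formulas).

```python
import numpy as np

def locate_target(GL, prev, w_sn, h_sn, w_on, h_on, gamma):
    """Recover the current-frame box (x_t, y_t, w_t, h_t) from the
    adaptive-feature-fusion result GL, following claim 5."""
    x_prev, y_prev, w_prev, h_prev = prev
    # Largest element of GL and its coordinates (rows = vertical axis).
    y_max, x_max = np.unravel_index(np.argmax(GL), GL.shape)
    x_t = x_prev + (x_max - (w_sn - w_on) / 2) / gamma - w_prev / 2
    y_t = y_prev + (y_max - (h_sn - h_on) / 2) / gamma - h_prev / 2
    return x_t, y_t, w_prev, h_prev   # size is kept: w_t = w_{t-1}, h_t = h_{t-1}
```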
6. The method according to claim 2, wherein step 6 specifically comprises:
Calculate the search window Search'(t) from the position of the current-frame tracking result (xt, yt, wt, ht) by the method of step 2.1, extract the frequency-domain representation Ft' of the histogram-of-oriented-gradients (HOG) feature within the range of Search'(t) by the method of step 2.3, and calculate Ht'=G/Ft' by the method of step 2.4; let η be the update parameter, then the current-frame HOG-feature correlation-filter template Ht is updated as shown in formula (5):
Ht=(1-η)Ht-1+ηHt' (5)
Extract the color-histogram features bg_histt' and fg_histt' at the position of the current-frame tracking result (xt, yt, wt, ht) by the method of step 2.5; let θ and β be update parameters, then the current-frame background color histogram and foreground color histogram templates are updated as shown in formulas (6) and (7):
bg_histt=(1-θ)×bg_histt-1+θ×bg_histt' (6)
fg_histt=(1-β)×fg_histt-1+β×fg_histt' (7)
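The linear-interpolation updates of formulas (5)-(7) can be sketched as below; the default rates are illustrative values, since the patent only names η, θ and β as update parameters.

```python
import numpy as np

def update_templates(H_prev, H_new, bg_prev, bg_new, fg_prev, fg_new,
                     eta=0.01, theta=0.04, beta=0.04):
    """Template updates of claim 6; eta/theta/beta defaults are assumptions."""
    H = (1 - eta) * H_prev + eta * H_new          # formula (5)
    bg = (1 - theta) * bg_prev + theta * bg_new   # formula (6)
    fg = (1 - beta) * fg_prev + beta * fg_new     # formula (7)
    return H, bg, fg
```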
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910215879.4A CN109934853B (en) | 2019-03-21 | 2019-03-21 | Correlation filtering tracking method based on response image confidence region adaptive feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109934853A true CN109934853A (en) | 2019-06-25 |
CN109934853B CN109934853B (en) | 2023-04-07 |
Family
ID=66987904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910215879.4A Active CN109934853B (en) | 2019-03-21 | 2019-03-21 | Correlation filtering tracking method based on response image confidence region adaptive feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109934853B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110026770A1 (en) * | 2009-07-31 | 2011-02-03 | Jonathan David Brookshire | Person Following Using Histograms of Oriented Gradients |
US20170180680A1 (en) * | 2015-12-21 | 2017-06-22 | Hai Yu | Object following view presentation method and system |
CN107545582A (en) * | 2017-07-04 | 2018-01-05 | 深圳大学 | Video multi-target tracking and device based on fuzzy logic |
CN107622229A (en) * | 2017-08-29 | 2018-01-23 | 中山大学 | A kind of video frequency vehicle based on fusion feature recognition methods and system again |
CN107784663A (en) * | 2017-11-14 | 2018-03-09 | 哈尔滨工业大学深圳研究生院 | Correlation filtering tracking and device based on depth information |
CN107798686A (en) * | 2017-09-04 | 2018-03-13 | 华南理工大学 | A kind of real-time modeling method method that study is differentiated based on multiple features |
CN108053419A (en) * | 2017-12-27 | 2018-05-18 | 武汉蛋玩科技有限公司 | Inhibited and the jamproof multiscale target tracking of prospect based on background |
CN108986140A (en) * | 2018-06-26 | 2018-12-11 | 南京信息工程大学 | Target scale adaptive tracking method based on correlation filtering and color detection |
Non-Patent Citations (4)
Title |
---|
KANNAPPAN PALANIAPPAN et al.: "Efficient feature extraction and likelihood fusion for vehicle tracking in low frame rate airborne video", 2010 13th International Conference on Information Fusion * |
FENG Chunlai et al.: "A face tracking algorithm based on multi-information integration", Computer Engineering and Applications * |
CHENG Yue et al.: "Adaptive feature fusion correlation filter tracking algorithm with learning-rate adjustment", Application Research of Computers * |
GAO Yun et al.: "Correlation filter tracking using response-map confidence-region adaptive feature fusion", Optics and Precision Engineering * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113379802A (en) * | 2021-07-01 | 2021-09-10 | 昆明理工大学 | Multi-feature adaptive fusion related filtering target tracking method |
CN113379802B (en) * | 2021-07-01 | 2024-04-16 | 昆明理工大学 | Multi-feature adaptive fusion related filtering target tracking method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105405154B (en) | Target object tracking method based on color-structure features | |
CN109977997B (en) | Image target detection and segmentation method based on a fast and robust convolutional neural network | |
CN111161313B (en) | Multi-target tracking method and apparatus in video streams | |
CN110321937B (en) | Moving human body tracking method combining fast-RCNN with Kalman filtering | |
CN110084850A (en) | Dynamic-scene visual localization method based on image semantic segmentation | |
CN108876820B (en) | Mean-shift-based moving target tracking method under occlusion | |
CN105046721B (en) | Camshift algorithm with a centroid-correction model based on Grabcut and LBP tracking | |
CN111161325B (en) | Three-dimensional multi-target tracking method based on Kalman filtering and LSTM | |
CN112364865B (en) | Method for detecting small moving targets in complex scenes | |
CN106097385A (en) | Target tracking method and apparatus | |
CN106952294A (en) | Video tracking method based on RGB-D data | |
CN107657626A (en) | Moving-target detection method and apparatus | |
CN108596920A (en) | Target segmentation method and apparatus based on color images | |
CN110163132A (en) | Correlation filter tracking method based on a maximum-response-change-rate update strategy | |
Alvarado-Robles et al. | An approach for shadow detection in aerial images based on multi-channel statistics | |
CN113763427A (en) | Multi-target tracking method based on coarse-to-fine occlusion handling | |
CN103578121B (en) | Motion detection method based on a shared Gaussian model in disturbed motion environments | |
CN103985139B (en) | Particle filter target tracking method based on fusion of color-model and prediction-vector-cluster-model information | |
CN113689459B (en) | Real-time tracking and mapping method based on GMM and YOLO in dynamic environments | |
CN109934853A (en) | Correlation filter tracking based on response-map confidence-region adaptive feature fusion | |
CN110544267A (en) | Correlation filter tracking method with adaptive feature selection | |
CN109658441A (en) | Foreground detection method and apparatus based on depth information | |
CN108469729A (en) | Human target recognition and following method based on RGB-D information | |
Mao et al. | Design of a visual navigation system for a farmland tracked robot based on Raspberry Pi | |
CN105118071B (en) | Video tracking method based on adaptive block partitioning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||