CN102930558B - Real-time tracking method for infrared image target with multi-feature fusion - Google Patents

Real-time tracking method for infrared image target with multi-feature fusion

Info

Publication number
CN102930558B
CN102930558B (application CN201210397686.3A)
Authority
CN
China
Legal status
Active (granted)
Application number
CN201210397686.3A
Other languages
Chinese (zh)
Other versions
CN102930558A (en)
Inventor
白俊奇
赵春光
王寿峰
翟尚礼
汪洋
Current Assignee
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 28 Research Institute filed Critical CETC 28 Research Institute
Priority to CN201210397686.3A
Publication of CN102930558A
Application granted
Publication of CN102930558B


Abstract

The invention discloses a real-time tracking method for an infrared image target with multi-feature fusion. The method comprises the following steps: initializing the target tracking point position; initializing the target model; calculating the target candidate model; calculating the joint-feature Bhattacharyya coefficient and the weight coefficients between features; calculating the new tracking position of the target in the current frame; estimating the joint-feature Bhattacharyya coefficient at the new position; and comparing the two joint-feature Bhattacharyya coefficients and outputting the result. The method adaptively calculates the weight coefficients between multiple features, enhances the robustness of target tracking, ensures the stability of target tracking, solves the problem of tracking-point drift caused by an unstable single feature, and effectively improves the accuracy of target tracking.

Description

Multi-feature fusion infrared image target real-time tracking method
Technical Field
The invention relates to a multi-feature fusion infrared image target tracking method, and in particular to an infrared image target tracking method suitable for real-time implementation in hardware.
Background
In recent years, with the development of integrated-circuit processes and infrared materials, infrared imaging technology has made great progress and is widely applied in national defense and the national economy. However, an infrared image has a relatively low signal-to-noise ratio compared with a visible-light image and therefore provides limited information for infrared target detection and tracking. Because target features in infrared images are not distinctive and background clutter is strong, accurate tracking of infrared image targets is difficult.
Current target tracking algorithms fall into two categories: model-based tracking methods and appearance-based tracking methods. Compared with model-based tracking, appearance-based tracking avoids the complex process of building a model and has wider practical engineering value. The mean-shift tracking algorithm is widely used in target tracking because it is simple, robust, and has good real-time performance. Mean shift is a non-parametric density estimation method that searches, through repeated iterations, for the distribution mode most similar to the sample distribution. Comaniciu et al. proposed a mean-shift target tracking algorithm that finds the maximum of the similarity between the target color histogram and the candidate-target color histogram. Chu et al. use a Kalman filter to predict the initial iteration position of mean shift, but when the target is severely occluded, the target location found by the mean-shift algorithm is inaccurate and the prediction deviates. Collins et al. proposed an adaptive tracking method that selects easily discriminable color features, where the candidate feature set contains 49 features computed from linear combinations of the R, G and B values. Existing target tracking algorithms therefore have the following shortcomings: (1) classical target tracking algorithms describe the target with a single feature and have poor anti-interference capability; (2) most existing multi-feature tracking algorithms compute the inter-feature weight coefficients from the current frame only, so their robustness is poor when the target undergoes complex changes; (3) under non-rigid deformation, partial occlusion, and overlap of the target, the tracking precision of most existing algorithms degrades and the target may even be lost; (4) most existing algorithms greatly increase algorithmic complexity while improving tracking precision and are hard to implement in real time in hardware.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to solve the technical problem of providing a real-time tracking method of an infrared image target with multi-feature fusion aiming at the defects of the prior art.
In order to solve the technical problem, the invention discloses a real-time tracking method of an infrared image target with multi-feature fusion, which comprises the following steps:
(1) Initializing the target tracking point position y0; the initial tracking point is manually designated.
(2) Initializing the target model: with the initial tracking point y0 as the center, establish a target gray model q1 and a target LBP (local binary pattern) texture model q2.
(3) Calculating the target candidate model: according to the target tracking point position y0, calculate a candidate target gray model p1(y0) and a candidate target LBP texture model p2(y0).
(4) Using the Bhattacharyya coefficient ρ1 of the gray feature (for the Bhattacharyya coefficient, see Visual C++ Digital Image Processing, page 466, author Xiancheng, first edition, 2008, Electronics Industry Press), the Bhattacharyya coefficient ρ2 of the LBP texture feature, the weight coefficient α1 of the gray feature and the weight coefficient α2 of the LBP texture feature, calculate the joint-feature Bhattacharyya coefficient ρ at position y0:

ρ = α1·ρ1 + α2·ρ2

(5) Calculating the new position y1 of the target in the current frame.
(6) Using the Bhattacharyya coefficient ρ′1 of the gray feature, the Bhattacharyya coefficient ρ′2 of the LBP texture feature, the weight coefficient α′1 of the gray feature and the weight coefficient α′2 of the LBP texture feature, calculate the joint-feature Bhattacharyya coefficient ρ′ at position y1:

ρ′ = α′1·ρ′1 + α′2·ρ′2

(7) When ρ′ < ρ, set y1 ← (y0 + y1)/2; otherwise y1 remains unchanged.
(8) If ‖y0 − y1‖ < ε, stop the calculation; otherwise assign y1 to y0 and return to step (3), where ε is an error constant coefficient.
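Before the individual models are detailed, the flow of steps (3) through (8) can be summarized in a short Python sketch. This is an illustrative skeleton only: the four callables stand in for the model, Bhattacharyya, weight-update and mean-shift formulas given below, and eps and max_iter are assumed values, not fixed by the patent.

```python
import numpy as np

def track_frame(frame, y0, q1, q2, alpha, models, bhatta, weights, shift,
                eps=0.01, max_iter=20):
    """Generic loop over steps (3)-(8).  models(frame, y) returns the
    candidate gray/LBP models, bhatta(p, q) the Bhattacharyya coefficient,
    weights(alpha, r1, r2) the iterative weight update, and
    shift(frame, y, q1, q2, alpha) the mean-shift position step."""
    y0 = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        p1, p2 = models(frame, y0)                        # step (3)
        rho = alpha[0] * bhatta(p1, q1) \
            + alpha[1] * bhatta(p2, q2)                   # step (4): joint rho
        y1 = np.asarray(shift(frame, y0, q1, q2, alpha), dtype=float)  # (5)
        p1n, p2n = models(frame, y1)                      # step (6)
        r1, r2 = bhatta(p1n, q1), bhatta(p2n, q2)
        alpha = weights(alpha, r1, r2)                    # iterative weights
        rho_new = alpha[0] * r1 + alpha[1] * r2
        if rho_new < rho:                                 # step (7): back off
            y1 = 0.5 * (y0 + y1)
        if np.linalg.norm(y0 - y1) < eps:                 # step (8): converged
            return y1
        y0 = y1
    return y0
```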
In step (2), the target gray model q1 is:

q1 = {q_u}, u = 1…m1,  Σ(u=1…m1) q_u = 1,

and the target LBP texture model q2 is:

q2 = {q_v}, v = 1…m2,  Σ(v=1…m2) q_v = 1,

where q_u is the probability density of each level of the gray feature of the target gray model, q_v is the probability density of each level of the LBP texture feature, m1 is the maximum quantization level range of the gray feature of the target gray model, m2 is the maximum quantization level range of the LBP texture feature, u denotes a gray quantization level, and v denotes a texture quantization level.
In step (3), the candidate target gray model p1 is:

p1 = {p_u}, u = 1…m1,  Σ(u=1…m1) p_u = 1,

and the candidate target LBP texture model p2 is:

p2 = {p_v}, v = 1…m2,  Σ(v=1…m2) p_v = 1,

where p_u is the probability density of each level of the gray feature of the candidate target gray model, p_v is the probability density of each level of the LBP texture feature of the candidate target, m1 is the maximum quantization level range of the gray feature, m2 is the maximum quantization level range of the LBP texture feature, u denotes a gray quantization level, and v denotes a texture quantization level.
The weight coefficient α1 of the gray feature and the weight coefficient α2 of the LBP texture feature in step (4), and the weight coefficient α′1 of the gray feature and the weight coefficient α′2 of the LBP texture feature in step (6), are updated iteratively:

α1 = (1−λ)·α1,old + λ·α1,cur,

α2 = (1−λ)·α2,old + λ·α2,cur,

α′1 = (1−λ)·α′1,old + λ·α′1,cur,

α′2 = (1−λ)·α′2,old + λ·α′2,cur,

where α1,old and α2,old are the weight coefficients of the gray feature and the LBP texture feature of the previous frame in step (4), α1,cur and α2,cur are the weight coefficients of the gray feature and the LBP texture feature of the current frame in step (4), α′1,old and α′2,old are the weight coefficients of the gray feature and the LBP texture feature of the previous frame in step (6), α′1,cur and α′2,cur are the weight coefficients of the gray feature and the LBP texture feature of the current frame in step (6), and λ is a proportionality coefficient with 0 ≤ λ ≤ 1 that determines the convergence speed of the weight coefficients: the larger λ is, the faster the convergence and the stronger the tracking agility; the smaller λ is, the slower the convergence and the better the tracking stability.
The weight coefficient α1,cur of the gray feature and the weight coefficient α2,cur of the LBP texture feature of the current frame in step (4), and the weight coefficients α′1,cur and α′2,cur of the current frame in step (6), are calculated as:

α1,cur = ρ1/√(ρ1² + ρ2²),

α2,cur = ρ2/√(ρ1² + ρ2²),

α′1,cur = ρ′1/√(ρ′1² + ρ′2²),

α′2,cur = ρ′2/√(ρ′1² + ρ′2²),

where ρ1 is the Bhattacharyya coefficient of the gray feature in step (4), ρ2 is the Bhattacharyya coefficient of the LBP texture feature in step (4), ρ′1 is the Bhattacharyya coefficient of the gray feature in step (6), and ρ′2 is the Bhattacharyya coefficient of the LBP texture feature in step (6).
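As a concrete illustration, a short Python sketch of this update follows; the two Bhattacharyya coefficients are assumed already computed, and the default λ = 0.5 is an illustrative choice, not fixed by the patent.

```python
import math

def update_weights(alpha_old, rho1, rho2, lam=0.5):
    """alpha_i = (1 - lam) * alpha_i,old + lam * alpha_i,cur, where
    alpha_i,cur = rho_i / sqrt(rho1^2 + rho2^2).  lam in [0, 1] trades
    convergence speed (large lam) against tracking stability (small lam)."""
    norm = math.sqrt(rho1 * rho1 + rho2 * rho2)
    a1 = (1.0 - lam) * alpha_old[0] + lam * rho1 / norm
    a2 = (1.0 - lam) * alpha_old[1] + lam * rho2 / norm
    return (a1, a2)
```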
The gray-feature Bhattacharyya coefficient ρ′1, the LBP-texture-feature Bhattacharyya coefficient ρ′2, the weight coefficient α′1 of the gray feature and the weight coefficient α′2 of the LBP texture feature in step (6) are obtained from:

ρ′1 = Σ(u=1…m1) √(p′_u·q_u),

ρ′2 = Σ(v=1…m2) √(p′_v·q_v),

where p′_u is the probability density of each level of the gray feature of the target gray model at position y1, and p′_v is the probability density of each level of the LBP texture feature at position y1;

α′1 = (1−λ)·α′1,old + λ·ρ′1/√(ρ′1² + ρ′2²),

α′2 = (1−λ)·α′2,old + λ·ρ′2/√(ρ′1² + ρ′2²),

where α′1,old is the weight coefficient of the gray feature of the previous frame, α′2,old is the weight coefficient of the LBP texture feature of the previous frame, and λ is a proportionality coefficient with 0 ≤ λ ≤ 1 that determines the convergence speed of the weight coefficients: the larger λ, the faster the convergence and the stronger the tracking agility; the smaller λ, the slower the convergence and the better the tracking stability.
In the multi-feature fusion infrared image target real-time tracking method, an Epanechnikov kernel function is used to calculate the gray-feature probability histogram and the LBP-texture-feature probability histogram.
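For reference, a minimal sketch of the Epanechnikov profile as it would weight pixels when building these histograms; the kernel's normalization constant is omitted, since the histogram coefficient C absorbs it.

```python
def epanechnikov_profile(x):
    """Epanechnikov kernel profile k(x), up to a constant factor:
    k(x) = 1 - x for 0 <= x <= 1, and 0 otherwise.
    The argument is x = ||(y0 - xi) / h||^2."""
    return 1.0 - x if 0.0 <= x <= 1.0 else 0.0
```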
Compared with the prior art, the invention has the following remarkable advantages: (1) the weight coefficients among multiple features are calculated adaptively according to the feature saliency and the similarity between target and background, which enhances the robustness of target tracking; (2) the weight coefficients among multiple features are updated iteratively, which ensures the stability of target tracking; (3) the infrared image target is tracked with a multi-feature fusion method, which solves the problem of tracking-point drift caused by an unstable single feature and effectively improves the tracking precision; (4) the proposed multi-feature fusion tracking method involves no high-order operations or complex structures, has a small computational load, and is easy to implement in real time in hardware.
Drawings
The above and other advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1 is a flow chart of the present invention.
Fig. 2a to 2d are conventional single-feature (gray scale) infrared image target tracking results.
Fig. 3a to 3d are the target tracking results of the infrared image with multi-feature fusion according to the present invention.
Detailed Description
In the multi-feature fusion infrared image target real-time tracking method, the infrared image target is described using the gray feature and the LBP texture feature.
The eight-neighborhood LBP texture feature LBP8,1 is expressed as:

LBP8,1 = Σ(n=0…7) s(g_n − g_c)·2^n,

s(x) = 1 if x ≥ 0, 0 if x < 0,

where g_c is the current (center) point, g_n are the surrounding neighborhood points, and n = 0…7.
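As an illustration, a minimal Python sketch of LBP8,1 for one interior pixel of a 2-D image array follows; the clockwise bit ordering of the eight neighbors is an assumption, since the text does not fix which neighbor carries which power of two.

```python
def lbp_8_1(img, r, c):
    """Eight-neighbor LBP_{8,1} code of pixel (r, c): threshold the eight
    neighbors g_n against the center g_c and pack s(g_n - g_c) into bits.
    img is a 2-D array (nested lists or a numpy array)."""
    gc = img[r][c]
    # clockwise neighbor offsets (assumed ordering)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for n, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= gc:   # s(x) = 1 if x >= 0 else 0
            code |= 1 << n              # weight 2^n
    return code
```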
According to the infrared image target real-time tracking method based on multi-feature fusion, the similarity between a target model and a target candidate model is described by using a Bhattacharyya coefficient.
The Bhattacharyya coefficient ρBha is expressed as:

ρBha = Σ(u=1…m) √(p_u·q_u),

where p is the target candidate model and q is the target model.
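In code, the coefficient reduces to a one-liner over two normalized histograms; a minimal sketch:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient rho = sum_u sqrt(p_u * q_u) between two
    normalized histograms; rho = 1 when the distributions are identical."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))
```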
In the multi-feature fusion infrared image target real-time tracking method, the weight coefficient α1 of the gray feature and the weight coefficient α2 of the LBP texture feature are updated iteratively:

α1 = (1−λ)·α1,old + λ·α1,cur,

α2 = (1−λ)·α2,old + λ·α2,cur,

where α1,old and α2,old are the weight coefficients of the gray feature and the LBP texture feature of the previous frame, α1,cur and α2,cur are the weight coefficients of the gray feature and the LBP texture feature of the current frame, and λ is the proportionality coefficient.

The current-frame weight coefficients α1,cur of the gray feature and α2,cur of the LBP texture feature are:

α1,cur = ρ1/√(ρ1² + ρ2²),

α2,cur = ρ2/√(ρ1² + ρ2²),

where ρ1 is the Bhattacharyya coefficient of the gray feature and ρ2 is the Bhattacharyya coefficient of the LBP texture feature.
Example 1
Referring to Fig. 1, the multi-feature fusion infrared image target real-time tracking method of the invention is described below by example. The infrared image is 320 × 240 pixels at a frame rate of 25 Hz. The output of the thermal infrared imager is transmitted over optical fiber to a dedicated image processing board with a DSP + FPGA architecture; the multi-feature fusion tracking is implemented on the DSP processor and meets the real-time processing requirement. The specific implementation steps are as follows:
(1) Initialize the target tracking point position y0; the initial tracking point is manually specified.

The initial target tracking point position (i, j) is manually specified as i = 80, j = 100 (shown in Fig. 2), and the Epanechnikov kernel bandwidth is set to h = 10.
(2) Initialize the target model: establish the target gray model q1 from the gray feature, compute the LBP texture feature of the target, and establish the target texture model q2 from the LBP texture feature.

The LBP texture feature image I_LBP over the window of bandwidth h = 10 centered on the initial tracking point position (80, 100) is calculated as:

I_LBP = Σ(k1=75…85) Σ(k2=95…105) LBP8,1(k1, k2)

The eight-neighborhood LBP texture feature LBP8,1 is:

LBP8,1(k1, k2) = Σ(n=0…7) s(g_n − g_c)·2^n,

s(x) = 1 if x ≥ 0, 0 if x < 0,

where g_c is the current target point with linear index c = k2 × 320 + k1, g_n are the neighborhood points around g_c, and n = 0…7.
The target gray model q1 is:

q1 = {q_u}, u = 1…m1,  Σ(u=1…m1) q_u = 1,

q_u = C·Σ(i=1…n) k(‖(y0 − x_i)/h‖²)·δ[b1(x_i) − μ],

and the target LBP texture model q2 is:

q2 = {q_v}, v = 1…m2,  Σ(v=1…m2) q_v = 1,

q_v = C·Σ(i=1…n′) k(‖(y0 − x_i)/h‖²)·δ[b2(x_i) − v],

where q_u and q_v are the probability densities of each level of the gray feature and the LBP texture feature of the target model, m1 = 255 and m2 = 255 are the quantization levels of the gray feature and the LBP texture feature, the function b1 maps the pixel at x_i to its gray-feature index, the function b2 maps the pixel at x_i to its LBP-texture-feature index, δ is the Kronecker delta function, C is a normalization coefficient, μ = 1…255, and v = 1…255.
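A minimal Python sketch of this kernel-weighted histogram follows, assuming `feat` is a 2-D numpy array of quantized feature indices (gray level or LBP code, 0-255) and omitting image-boundary checks.

```python
import numpy as np

def feature_model(feat, y0, h=10, m=255):
    """q_u = C * sum_i k(||(y0 - x_i)/h||^2) * delta[b(x_i) - u]:
    an Epanechnikov-weighted histogram over the (2h+1)x(2h+1) window
    centered at y0, normalized so that sum_u q_u = 1."""
    i0, j0 = y0
    q = np.zeros(m + 1)
    for i in range(i0 - h, i0 + h + 1):
        for j in range(j0 - h, j0 + h + 1):
            d2 = ((i - i0) ** 2 + (j - j0) ** 2) / float(h * h)
            if d2 <= 1.0:                    # Epanechnikov profile k(d2)
                q[feat[i, j]] += 1.0 - d2    # delta selects bin b(x_i)
    return q / q.sum()                       # C normalizes the histogram
```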
(3) Calculate the target candidate model. According to the tracking point position y0, calculate the candidate target gray model p1(y0) and the candidate target texture model p2(y0).

The candidate target gray model p1 is:

p1 = {p_u}, u = 1…m1,  Σ(u=1…m1) p_u = 1,

p_u = C·Σ(i=1…n) k(‖(y0 − x_i)/h‖²)·δ[b1(x_i) − μ],

and the candidate target LBP texture model p2 is:

p2 = {p_v}, v = 1…m2,  Σ(v=1…m2) p_v = 1,

p_v = C·Σ(i=1…n′) k(‖(y0 − x_i)/h‖²)·δ[b2(x_i) − v],

where p_u and p_v are the probability densities of each level of the gray feature and the LBP texture feature of the candidate target model, m1 = 255 and m2 = 255 are the quantization levels of the gray feature and the LBP texture feature, the function b1 maps the pixel at x_i to its gray-feature index, the function b2 maps the pixel at x_i to its LBP-texture-feature index, δ is the Kronecker delta function, C is a normalization coefficient, μ = 1…255, and v = 1…255.
(4) Calculate the Bhattacharyya coefficients ρ1 and ρ2 of the gray feature and the LBP texture feature respectively, and the weight coefficients α1 and α2; using ρ1, α1, ρ2, α2, calculate the joint-feature Bhattacharyya coefficient ρ at position y0.

The joint-feature Bhattacharyya coefficient ρ is:

ρ = α1·ρ1 + α2·ρ2,

ρ1 = Σ(u=1…m1) √(p_u·q_u),

ρ2 = Σ(v=1…m2) √(p_v·q_v),

The gray-feature weight coefficient α1 and the LBP-texture-feature weight coefficient α2 are updated as follows:

α1 = (1−λ)·α1,old + λ·α1,cur = (1−λ)·α1,old + λ·ρ1/√(ρ1² + ρ2²),

α2 = (1−λ)·α2,old + λ·α2,cur = (1−λ)·α2,old + λ·ρ2/√(ρ1² + ρ2²),

where α1,old and α2,old are the previous-frame weight coefficients, α1,cur and α2,cur are the current-frame weight coefficients, and λ is the proportionality coefficient.
(5) Calculate the new target tracking position y1 of the current frame:

y1 = α1·[Σ(i=1…n′) x_i·w_i,1]/[Σ(i=1…n′) w_i,1] + α2·[Σ(i=1…n′) x_i·w_i,2]/[Σ(i=1…n′) w_i,2],

w_i,1 = Σ(u=1…m1) √(q_u/p_u(y0))·δ[b1(x_i) − u],

w_i,2 = Σ(v=1…m2) √(q_v/p_v(y0))·δ[b2(x_i) − v],

where n′ is the number of candidate target pixels, h is the kernel bandwidth, and q_u, q_v, p_u, p_v, α1, α2, x_i, b1(·), b2(·) have the same meanings as defined in steps (2), (3) and (4).
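A compact sketch of this position update, assuming `pix` is an array of the n′ candidate pixel coordinates x_i and `w1`, `w2` hold the per-pixel weights w_i,1 and w_i,2 from the formulas above:

```python
import numpy as np

def new_position(pix, w1, w2, a1, a2):
    """y1 = a1 * sum(x_i * w_i1) / sum(w_i1)
          + a2 * sum(x_i * w_i2) / sum(w_i2)."""
    pix = np.asarray(pix, dtype=float)       # shape (n', 2)
    w1 = np.asarray(w1, dtype=float)
    w2 = np.asarray(w2, dtype=float)
    y_gray = (pix * w1[:, None]).sum(axis=0) / w1.sum()
    y_lbp = (pix * w2[:, None]).sum(axis=0) / w2.sum()
    return a1 * y_gray + a2 * y_lbp
```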
(6) Using the Bhattacharyya coefficients ρ′1 and ρ′2 of the gray feature and the LBP texture feature and the weight coefficients α′1 and α′2, calculate the joint-feature Bhattacharyya coefficient ρ′ at position y1:

ρ′ = α′1·ρ′1 + α′2·ρ′2,

ρ′1 = Σ(u=1…m1) √(p′_u·q_u),

ρ′2 = Σ(v=1…m2) √(p′_v·q_v),

The gray-feature weight coefficient α′1 and the LBP-texture-feature weight coefficient α′2 are updated as follows:

α′1 = (1−λ)·α′1,old + λ·ρ′1/√(ρ′1² + ρ′2²),

α′2 = (1−λ)·α′2,old + λ·ρ′2/√(ρ′1² + ρ′2²),

where α′1,old and α′2,old are the previous-frame weight coefficients and λ is the proportionality coefficient.
(7) When ρ′ < ρ, set y1 ← (y0 + y1)/2; otherwise y1 remains unchanged.

(8) If |y0 − y1| < 0.01, stop; otherwise assign y1 to y0 (y0 ← y1) and go to step (3).
Fig. 2 shows the result of conventional single-feature (gray-feature) tracking, and Fig. 3 shows the infrared image target tracking result obtained with the multi-feature fusion method of this embodiment; the gray-scale rendering is inherent to the infrared imagery. Figs. 2a, 2b, 2c and 2d show the 20th, 80th, 140th and 200th frames, respectively, and Figs. 3a, 3b, 3c and 3d likewise show the 20th, 80th, 140th and 200th frames. Comparing Fig. 2 with Fig. 3 shows that tracking the target with a single feature makes the tracking process unstable and the tracking precision poor (the tracking gate in Fig. 2 swings randomly), whereas the multi-feature fusion tracking method effectively improves the tracking precision (the tracking gate in Fig. 3 stays near the target centroid throughout).
The present invention provides a real-time tracking method for infrared image targets with multi-feature fusion; there are many methods and approaches for implementing this technical solution. The above description is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these improvements and modifications should also be regarded as falling within the protection scope of the invention. All components not specified in this embodiment can be realized with the prior art.

Claims (5)

1. A real-time tracking method of an infrared image target with multi-feature fusion is characterized by comprising the following steps:
(1) giving the initial target tracking point position y0;
(2) initializing the target model: with the initial tracking point y0 as the center, establishing a target gray model q1 and a target LBP texture model q2;
(3) calculating the target candidate model: according to the target tracking point position y0, calculating a candidate target gray model p1(y0) and a candidate target LBP texture model p2(y0);
(4) using the Bhattacharyya coefficient ρ1 of the gray feature, the Bhattacharyya coefficient ρ2 of the LBP texture feature, the weight coefficient α1 of the gray feature and the weight coefficient α2 of the LBP texture feature, calculating the joint-feature Bhattacharyya coefficient ρ at position y0:

ρ = α1·ρ1 + α2·ρ2;

(5) calculating the new position y1 of the target in the current frame;
(6) using the Bhattacharyya coefficient ρ′1 of the gray feature, the Bhattacharyya coefficient ρ′2 of the LBP texture feature, the weight coefficient α′1 of the gray feature and the weight coefficient α′2 of the LBP texture feature, calculating the joint-feature Bhattacharyya coefficient ρ′ at position y1:

ρ′ = α′1·ρ′1 + α′2·ρ′2;

(7) when ρ′ < ρ, setting y1 ← (y0 + y1)/2; otherwise y1 remains unchanged;
(8) if ‖y0 − y1‖ < ε, stopping the calculation; otherwise assigning y1 to y0 and executing step (3), where ε is an error constant coefficient;
the weight coefficient α1 of the gray feature and the weight coefficient α2 of the LBP texture feature in step (4), and the weight coefficient α′1 of the gray feature and the weight coefficient α′2 of the LBP texture feature in step (6), being updated iteratively as:

α1 = (1−λ)·α1,old + λ·α1,cur,

α2 = (1−λ)·α2,old + λ·α2,cur,

α′1 = (1−λ)·α′1,old + λ·α′1,cur,

α′2 = (1−λ)·α′2,old + λ·α′2,cur,

where α1,old and α2,old are the weight coefficients of the gray feature and the LBP texture feature of the previous frame in step (4), α1,cur and α2,cur are the weight coefficients of the gray feature and the LBP texture feature of the current frame in step (4), α′1,old and α′2,old are the weight coefficients of the gray feature and the LBP texture feature of the previous frame in step (6), α′1,cur and α′2,cur are the weight coefficients of the gray feature and the LBP texture feature of the current frame in step (6), and λ is a proportionality coefficient.
2. The multi-feature fusion infrared image target real-time tracking method according to claim 1, wherein in step (2) the target gray model q1 is:

q1 = {q_u}, u = 1…m1,  Σ(u=1…m1) q_u = 1,

and the target LBP texture model q2 is:

q2 = {q_v}, v = 1…m2,  Σ(v=1…m2) q_v = 1,

where q_u is the probability density of each level of the gray feature of the target gray model, q_v is the probability density of each level of the LBP texture feature, m1 is the maximum quantization level range of the gray feature of the target gray model, m2 is the maximum quantization level range of the LBP texture feature, u denotes a gray quantization level, and v denotes a texture quantization level.
3. The multi-feature fusion infrared image target real-time tracking method according to claim 1, wherein in step (3) the candidate target gray model p1 is:

p1 = {p_u}, u = 1…m1,  Σ(u=1…m1) p_u = 1,

and the candidate target LBP texture model p2 is:

p2 = {p_v}, v = 1…m2,  Σ(v=1…m2) p_v = 1,

where p_u is the probability density of each level of the gray feature of the candidate target gray model, p_v is the probability density of each level of the LBP texture feature of the candidate target, m1 is the maximum quantization level range of the gray feature, m2 is the maximum quantization level range of the LBP texture feature, u denotes a gray quantization level, and v denotes a texture quantization level.
4. The multi-feature fusion infrared image target real-time tracking method according to claim 1, wherein the weight coefficient α1,cur of the gray feature and the weight coefficient α2,cur of the LBP texture feature of the current frame in step (4), and the weight coefficient α′1,cur of the gray feature and the weight coefficient α′2,cur of the LBP texture feature of the current frame in step (6), are calculated as:

α1,cur = ρ1/√(ρ1² + ρ2²),

α2,cur = ρ2/√(ρ1² + ρ2²),

α′1,cur = ρ′1/√(ρ′1² + ρ′2²),

α′2,cur = ρ′2/√(ρ′1² + ρ′2²),

where ρ1 is the Bhattacharyya coefficient of the gray feature in step (4), ρ2 is the Bhattacharyya coefficient of the LBP texture feature in step (4), ρ′1 is the Bhattacharyya coefficient of the gray feature in step (6), and ρ′2 is the Bhattacharyya coefficient of the LBP texture feature in step (6).
5. The multi-feature fusion infrared image target real-time tracking method according to claim 4, wherein in step (6) the gray-feature Bhattacharyya coefficient ρ′1, the LBP-texture-feature Bhattacharyya coefficient ρ′2, the weight coefficient α′1 of the gray feature and the weight coefficient α′2 of the LBP texture feature are obtained from:

ρ′1 = Σ(u=1…m1) √(p′_u·q_u),

ρ′2 = Σ(v=1…m2) √(p′_v·q_v),

α′1 = (1−λ)·α′1,old + λ·ρ′1/√(ρ′1² + ρ′2²),

α′2 = (1−λ)·α′2,old + λ·ρ′2/√(ρ′1² + ρ′2²),

where p′_u is the probability density of each level of the gray feature of the target gray model at position y1, p′_v is the probability density of each level of the LBP texture feature at position y1, α′1,old and α′2,old are the previous-frame weight coefficients of the gray feature and the LBP texture feature, and λ is the proportionality coefficient.
CN201210397686.3A 2012-10-18 2012-10-18 Real-time tracking method for infrared image target with multi-feature fusion Active CN102930558B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210397686.3A CN102930558B (en) 2012-10-18 2012-10-18 Real-time tracking method for infrared image target with multi-feature fusion

Publications (2)

Publication Number Publication Date
CN102930558A CN102930558A (en) 2013-02-13
CN102930558B true CN102930558B (en) 2015-04-01

Family

ID=47645348


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI628624B (en) * 2017-11-30 2018-07-01 國家中山科學研究院 Improved thermal image feature extraction method

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109215062B (en) * 2017-06-29 2022-02-08 沈阳新松机器人自动化股份有限公司 Motion capture method based on image vision, binocular positioning device and system
US10304207B2 (en) * 2017-07-07 2019-05-28 Samsung Electronics Co., Ltd. System and method for optical tracking
CN109902578B (en) * 2019-01-25 2021-01-08 南京理工大学 Infrared target detection and tracking method
CN113379789B (en) * 2021-06-11 2022-12-27 天津大学 Moving target tracking method in complex environment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6590999B1 (en) * 2000-02-14 2003-07-08 Siemens Corporate Research, Inc. Real-time tracking of non-rigid objects using mean shift

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Shoukun, Guo Junjie, Wang Junzheng. Mean-shift target tracking based on adaptive feature fusion. Transactions of Beijing Institute of Technology, 2011, Vol. 31, No. 7, pp. 804-805, 807. *


Also Published As

Publication number Publication date
CN102930558A (en) 2013-02-13


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant