CN109472813A - Occlusion tracking method based on background weighting and based on Mean Shift algorithm and Kalman prediction fusion - Google Patents


Info

Publication number
CN109472813A
CN109472813A (application CN201811247771.5A)
Authority
CN
China
Prior art keywords
background
value
target
histogram
mean shift
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811247771.5A
Other languages
Chinese (zh)
Inventor
吴水琴
任维
毛耀
刘琼
李志俊
周翕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Optics and Electronics of CAS
Original Assignee
Institute of Optics and Electronics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Optics and Electronics of CAS filed Critical Institute of Optics and Electronics of CAS
Priority to CN201811247771.5A priority Critical patent/CN109472813A/en
Publication of CN109472813A publication Critical patent/CN109472813A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an occlusion tracking method that fuses a histogram-ratio background-weighted Mean Shift (HRBW Mean Shift) algorithm with Kalman prediction. It addresses the problem that the Bhattacharyya coefficient in the Mean Shift algorithm is influenced by background pixels inside the initial target box, so that the occlusion state of the target cannot be judged accurately: the Mean Shift algorithm fails before occlusion is detected, and the Kalman prediction filter either cannot be started or, once started, cannot accurately predict the target trajectory. The invention improves the target model by adding to the target histogram a background weighting factor computed from the target histogram and the background histogram; this refines the Bhattacharyya coefficient value so that the Bhattacharyya coefficient method detects the occlusion state of the target more accurately. The invention enlarges the difference between the Bhattacharyya coefficients in the normal tracking state and the occlusion state, which benefits occlusion judgment, and improves tracking performance during occlusion by improving the accuracy of occlusion judgment.

Description

An occlusion tracking method based on a background-weighted Mean Shift algorithm fused with Kalman prediction
Technical field
The present invention relates to an occlusion tracking method that fuses a background-weighted Mean Shift algorithm with Kalman prediction, and belongs to the field of video image tracking. It is mainly intended to solve the occlusion problem during video image tracking: it can accurately detect the occlusion state and prevent the tracking algorithm from failing while the target is occluded.
Background technique
With the development of computer vision, a large number of tracking algorithms have emerged. However, tracking through occlusion remains a difficult point in tracking research. When the target passes behind an occluder, its feature information is reduced; especially when the target is completely occluded, the tracking algorithm may lock onto the occluder or other decoys, and may continue tracking the wrong object even after the target reappears.
Current algorithms that address occlusion include the TLD tracking algorithm and sub-block matching methods. TLD adds a tracking-failure detection mechanism to its tracking module: it judges from the continuity of the target trajectory across adjacent video frames whether the tracker has failed after occlusion. After a tracking failure, a detection module is enabled, which performs a global search in the image to re-detect the target. However, this algorithm has a low frame rate and often cannot meet real-time requirements. Sub-block matching divides the target into multiple sub-blocks, matches each sub-block by its features, and integrates the per-block tracking results to determine the final target position. Such methods can handle partial occlusion but not full occlusion; moreover, the performance is sensitive to the partitioning: with too many sub-blocks the anti-occlusion ability declines, and with too few the tracking performance declines.
Because the Mean Shift algorithm has a simple structure and converges quickly, it is often fused with a Kalman prediction filter for occlusion tracking, using the Bhattacharyya coefficient in the Mean Shift algorithm to determine the occlusion state. When the target is in a normal state, the Mean Shift algorithm performs the tracking; if the Bhattacharyya coefficient indicates that the target is occluded, the Kalman prediction filter is enabled to predict the trajectory, guaranteeing the correctness and continuity of tracking. Because the Kalman filter predicts the target's trajectory from its historical motion data, and those data are provided by the Mean Shift algorithm, any error in position or occlusion-state information propagates into a Kalman prediction error. The Mean Shift algorithm is vulnerable to background pixels inside the initial target box: when the initial tracking box is much larger than the target area, the box contains considerable redundant background information and the algorithm may drift. In addition, the Bhattacharyya coefficient method may then fail to detect the target's occlusion state accurately, so the Kalman prediction filter is started incorrectly or not started in time, and tracking fails.
Summary of the invention
To overcome the above deficiencies of the prior art, the present invention provides an occlusion tracking method that fuses a background-weighted Mean Shift algorithm with Kalman prediction. It reduces the influence of background pixels inside the initial box, improves tracking accuracy and speed, and at the same time improves the sensitivity of occlusion detection, so that the target's occlusion state can be determined more accurately and the algorithm's anti-occlusion tracking ability is improved.
To realize the purpose of the invention, the present invention provides an occlusion tracking method fusing a background-weighted Mean Shift algorithm with Kalman prediction, with the following specific steps:
Step (1): establish the histogram-ratio background-weighted (HRBW) target histogram model:
Establishing this model requires a weight derived from the log-likelihood ratio of the target histogram and the background histogram to serve as the background weighting factor. Therefore the target histogram model and the background histogram model must first be established; the background weighting factor is computed from them, and the HRBW-based target histogram model is then built from that factor.
A) Target histogram modeling
An initial target is selected with a rectangular box on the initial video frame, and the target is then modeled. Suppose the target area contains m pixels and the feature space has n feature values; the model of feature value u of the target area is defined as:
In formula 1, C is a normalization constant, x_i is the coordinate of the i-th pixel, the function b(x_i) gives the feature value at position x_i, and δ[b(x_i) - u] tests whether b(x_i) equals u (1 if equal, 0 otherwise). k(·) is the kernel function, which strengthens pixels at the center of the initial rectangle and weakens edge pixels.
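The equation images for formulas 1 and 2 did not survive extraction. A standard form consistent with the variable definitions above (the kernel-weighted color histogram usual in Mean Shift tracking; the box center x_0 and bandwidth h are assumptions not stated in the extracted text) is:

```latex
q_u = C \sum_{i=1}^{m} k\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right)\delta\left[b(x_i) - u\right] \qquad \text{(formula 1)}

C = \left[\sum_{i=1}^{m} k\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right)\right]^{-1} \qquad \text{(formula 2)}
```

Under this form, C normalizes the histogram so that the q_u sum to 1, matching its description as a normalization constant.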
B) Background histogram modeling
A background histogram is established over the region of an outer rectangular box; the model of feature value u of the background area is defined as:
In formula 3, n_k denotes the total number of pixels in the background area.
C) Calculation of the background weighting factor
The log-likelihood ratio of feature value u is computed from the target histogram and the background histogram; it is defined as:
In formula 4, a nonlinear transformation makes feature values that coincide with background colors carry a smaller weight in the target histogram. η prevents the cases where the feature value q_u in the target histogram or the feature value b_u in the background histogram is 0. From L_u the transformed background weighting coefficient is computed; it is defined as:
Here L_max is the maximum of L_u and L_min the minimum; μ_u is a piecewise function denoting the weight of feature value u, with values in [0, 1].
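Formulas 4 and 5 are image-only in the source. A sketch under stated assumptions: the log-likelihood ratio L_u = log((q_u + η)/(b_u + η)) matches the variable descriptions, but the exact piecewise form of μ_u is not recoverable from the text, so a min-max normalization of L_u, which satisfies the stated properties (values in [0, 1], smaller for background-like features), stands in for it here:

```python
import numpy as np

def background_weights(q, b, eta=0.1e-6):
    """Background weighting factor mu_u (sketch of formulas 4-5).

    q : normalized target histogram, b : normalized background histogram.
    eta guards against q_u = 0 or b_u = 0, per the text.
    """
    L = np.log((q + eta) / (b + eta))          # log-likelihood ratio (formula 4)
    # Assumed form of formula 5: scale L_u into [0, 1] via (L - Lmin)/(Lmax - Lmin)
    return (L - L.min()) / (L.max() - L.min() + 1e-12)
```

Bins dominated by background color get weights near 0, shrinking their contribution to the weighted model of formula 6.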
D) Establishing the background-weighted target model based on the histogram ratio
From the background weighting coefficient, the background-weighted target histogram model based on the histogram ratio is obtained; its formula is defined as:
where μ_u is the background weighting coefficient and C' is a normalization factor.
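The image for formula 6 is likewise missing. A form consistent with formulas 1 and 5 (the per-bin weight μ_u scales the kernel-weighted histogram, with C' renormalizing so the bins sum to 1; x_0 and h are the same assumed center and bandwidth as above) would be:

```latex
\hat{q}_u = C'\,\mu_u \sum_{i=1}^{m} k\!\left(\left\|\frac{x_i - x_0}{h}\right\|^2\right)\delta\left[b(x_i) - u\right] \qquad \text{(formula 6)}
```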
Step (2): establish the candidate target model:
The next frame of the video is selected and a candidate target model is established for the target in the current image; the model of feature value u is defined as:
Step (3): determine the occlusion state
The occlusion state of the target is judged by the Bhattacharyya coefficient method, defined as:
Here ρ(y) denotes the Bhattacharyya coefficient; its value lies in [0, 1], and a larger ρ(y) indicates higher similarity between the two templates. An occlusion threshold B_h is set; if the Bhattacharyya coefficient falls below B_h, the target's state is judged to be the occlusion state.
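The image for formula 8 is missing; the standard Bhattacharyya coefficient is ρ(y) = Σ_u sqrt(p_u(y) · q_u). A minimal sketch of the step (3) occlusion test (function names are illustrative, not from the source):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms (formula 8)."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))

def is_occluded(p, q, threshold=0.5):
    """Step (3) test with the occlusion threshold B_h = 0.5 given in the text."""
    return bhattacharyya(p, q) <= threshold
```

Identical histograms give ρ = 1 (normal tracking); disjoint histograms give ρ = 0 (occluded).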
Step (4): select the tracking algorithm according to the occlusion state
If the target is occluded, the Kalman prediction filter is selected to perform predictive tracking. Otherwise the HRBW Mean Shift algorithm performs the tracking; the target position y_1 it obtains is defined as:
g(x) = -k'(x) (formula 10)
where y_0 denotes the initial value of the current target position and y_1 is the target position computed by the HRBW Mean Shift algorithm. If ||y_1 - y_0|| falls below the convergence threshold, the HRBW Mean Shift iteration terminates and y_1 is the final position of the target.
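Formulas 9 and 11 are image-only. With an Epanechnikov kernel, g(x) = -k'(x) is constant, so the standard Mean Shift position update reduces to a weighted mean of the candidate pixels; a sketch under that assumption (the per-pixel weights w_i would come from the background-weighted histograms, as in standard Mean Shift tracking):

```python
import numpy as np

def mean_shift_step(positions, weights):
    """One position update y1 (formula 9, Epanechnikov-kernel case):
    y1 = sum_i(w_i * x_i) / sum_i(w_i)."""
    w = np.asarray(weights, dtype=float)
    x = np.asarray(positions, dtype=float)
    return (w[:, None] * x).sum(axis=0) / w.sum()
```

The iteration repeats this update from y_0 until ||y_1 - y_0|| drops below the convergence threshold (0.5 in the embodiment) or the iteration cap (20) is reached.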
The η in formula 4 of step (1) prevents the error cases in which q_u or b_u is 0, so η is assigned a very small value (0.1 × 10^-6).
The occlusion threshold B_h in step (3) takes the value 0.5; when the Bhattacharyya coefficient is less than or equal to 0.5, the target state is judged to be the occlusion state.
Compared with the prior art, the invention has the following advantages:
(1) The algorithm structure is simple and the computational load is small, so it is easy to implement in hardware; video data can be processed in real time on a DSP, FPGA, or GPU.
(2) Relative to the plain Mean Shift algorithm, the invention reduces the influence of background pixels in the initial box, improves tracking accuracy, and reduces the number of iterations.
(3) The invention increases the difference between the Bhattacharyya coefficient under normal tracking and under occlusion, which makes occlusion easier to judge. By improving the accuracy of occlusion judgment it improves the performance of the Kalman prediction filter, and thus the tracking performance during occlusion.
Detailed description of the invention
Fig. 1 is the overall framework of the algorithm;
Fig. 2 is a schematic diagram of target histogram modeling and background histogram modeling;
Fig. 3 shows the tracking effect of the present invention on a target passing behind cloud cover in an RGB image (shown as a grayscale image);
Fig. 4 shows the tracking effect of the present invention on a target passing behind a utility-pole occluder in a grayscale image.
Specific embodiment
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The data processed in this example are video stream data; the pixel size of the video images is 768*976. The algorithm flow of this example is shown in Fig. 1. To verify the universality of the invention, it is verified with RGB images and grayscale images separately. The specific implementation steps are as follows:
Step (1): read the video stream, and in the first frame manually select the target to be tracked with a rectangular box. In Fig. 2, the inner black rectangle is the target tracking box chosen with the mouse, and the outer grey rectangle is the background box, generated automatically by the code. The target histogram model is established according to formulas 1 and 2. The region between the inner black rectangle and the outer grey rectangle is the background area, and its histogram model is established according to formula 3. The background weighting factor is then computed according to formulas 4 and 5 and added into the target histogram, and the background-weighted target model based on the histogram ratio is obtained according to formula 6.
Step (2): read the next frame of the video stream and establish the candidate target model according to formula 7;
Step (3): set the occlusion threshold B_h = 0.5 and compute the Bhattacharyya coefficient ρ(y) of the current target according to formula 8. If ρ(y) > B_h, the target is in the normal tracking state; if ρ(y) ≤ B_h, the target is occluded;
Step (4): if the target is occluded, start the Kalman prediction filter to predict the target's trajectory, output the target's position, and repeat steps (2) to (4) until the video data have been read completely. If the target is in the normal tracking state, track it with the HRBW Mean Shift algorithm and obtain the target position according to formulas 9, 10, and 11; to meet real-time requirements, the main loop of the HRBW Mean Shift tracker terminates once the number of iterations exceeds 20 or ||y_1 - y_0|| < 0.5. The position of the current target is output and passed to the Kalman filter, and steps (2) to (4) are repeated; the loop ends when the video data have been read completely.
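The patent does not spell out the Kalman filter's state model. A minimal sketch of the step (4) control flow, under the common assumption of a constant-velocity state [x, y, vx, vy] (the class name `Kalman2D` and helper `step4` are illustrative, not from the source):

```python
import numpy as np

class Kalman2D:
    """Constant-velocity Kalman filter over state [x, y, vx, vy].

    An assumed state model: the patent only says the filter predicts the
    trajectory from the target's historical motion data.
    """
    def __init__(self, x0, dt=1.0):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0                      # initial uncertainty
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                      # process noise
        self.R = np.eye(2)                             # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def step4(kf, rho, mean_shift_pos, B_h=0.5):
    """One frame of the step (4) decision: under occlusion (rho <= B_h) the
    Kalman prediction is the output; otherwise the HRBW Mean Shift position
    is the output and is fed back into the filter."""
    if rho <= B_h:
        return kf.predict()
    kf.predict()
    kf.update(mean_shift_pos)
    return np.asarray(mean_shift_pos, dtype=float)
```

Under this sketch, occluded frames advance the filter with `predict()` only, matching the text's requirement that the filter keep producing positions while Mean Shift is suspended.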
Fig. 3 shows the RGB-image tracking effect. When processing the RGB video stream, to improve the running speed of the algorithm, the RGB feature space (256*256*256) is quantized to a 16*16*16 feature space; the tracking effect under occlusion is shown in Fig. 3.
Fig. 4 shows the infrared-image tracking effect; the feature used is the gray value (range 0-255), and the tracking effect under occlusion is shown in Fig. 4.

Claims (3)

1. An occlusion tracking method based on a background-weighted Mean Shift algorithm (HRBW Mean Shift) fused with Kalman prediction, characterized by comprising the following steps:
Step (1): establish the histogram-ratio background-weighted (HRBW) target histogram model:
establishing this model requires a weight derived from the log-likelihood ratio of the target histogram and the background histogram to serve as the background weighting factor; the target histogram model and the background histogram model are established, the background weighting factor is computed from them, and the HRBW-based target histogram model is then built from that factor;
A) Target histogram modeling
An initial target is selected with a rectangular box on the initial video frame and the target is then modeled; supposing the target area contains m pixels and the feature space has n feature values, the model of feature value u of the target area is defined as:
In formula 1, C is a normalization constant, x_i is the coordinate of the i-th pixel, the function b(x_i) gives the feature value at position x_i, δ[b(x_i) - u] tests whether b(x_i) equals u (1 if equal, 0 otherwise), and k(·) is the kernel function, which strengthens pixels at the center of the initial rectangle and weakens edge pixels;
B) Background histogram modeling
A background histogram is established over the region of an outer rectangular box; the model of feature value u of the background area is defined as:
In formula 3, n_k denotes the total number of pixels in the background area;
C) Calculation of the background weighting factor
The log-likelihood ratio of feature value u is computed from the target histogram and the background histogram and defined as:
In formula 4, a nonlinear transformation makes feature values that coincide with background colors carry a smaller weight in the target histogram; η prevents the cases where the feature value q_u in the target histogram or the feature value b_u in the background histogram is 0; from L_u the transformed background weighting coefficient is computed and defined as:
Here L_max is the maximum of L_u and L_min the minimum; μ_u is a piecewise function denoting the weight of feature value u, with values in [0, 1];
D) Establishing the background-weighted target model based on the histogram ratio
From the background weighting coefficient, the background-weighted target histogram model based on the histogram ratio is obtained; its formula is defined as:
where μ_u is the background weighting coefficient, C' is a normalization factor, x_i is the coordinate of the i-th pixel, the function b(x_i) gives the feature value at position x_i, δ[b(x_i) - u] tests whether b(x_i) equals u (1 if equal, 0 otherwise), and k(·) is the kernel function;
Step (2): establish the candidate target model:
The next frame of the video is selected and a candidate target model is established for the target in the current image; the model of feature value u is defined as:
p_u(y) is the similarity value, x_i is the coordinate of the i-th pixel, y is the initial position of the candidate target, the function b(x_i) gives the feature value at position x_i, δ[b(x_i) - u] tests whether b(x_i) equals u (1 if equal, 0 otherwise), k(·) is the kernel function, and C_h is a normalization factor;
Step (3): determine the occlusion state
The occlusion state of the target is judged by the Bhattacharyya coefficient method, defined as:
Here ρ(y) denotes the Bhattacharyya coefficient; its value lies in [0, 1], and a larger ρ(y) indicates higher similarity between the two templates; an occlusion threshold B_h is set, and if the Bhattacharyya coefficient falls below B_h, the target's state is judged to be the occlusion state;
Step (4): select the tracking algorithm according to the occlusion state
If the target is occluded, the Kalman prediction filter is selected to perform predictive tracking; otherwise the HRBW Mean Shift algorithm performs the tracking, and the target position y_1 it obtains is defined as:
g(x) = -k'(x) (formula 10)
where g(x) is the negative derivative of the kernel function, w_i is a weight, y_0 denotes the initial value of the current target position, and y_1 is the target position computed by the HRBW Mean Shift algorithm; if ||y_1 - y_0|| falls below the convergence threshold, the HRBW Mean Shift iteration terminates and y_1 is the final position of the target.
2. The occlusion tracking method based on a background-weighted Mean Shift algorithm fused with Kalman prediction according to claim 1, characterized in that: the η in formula 4 of step (1) prevents the error cases in which q_u or b_u is 0, so η is assigned a very small value (0.1 × 10^-6).
3. The occlusion tracking method based on a background-weighted Mean Shift algorithm fused with Kalman prediction according to claim 1, characterized in that: the occlusion threshold B_h in step (3) takes the value 0.5; when the Bhattacharyya coefficient is less than or equal to 0.5, the target state is judged to be the occlusion state.
CN201811247771.5A 2018-10-25 2018-10-25 Occlusion tracking method based on background weighting and based on Mean Shift algorithm and Kalman prediction fusion Pending CN109472813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811247771.5A CN109472813A (en) 2018-10-25 2018-10-25 Occlusion tracking method based on background weighting and based on Mean Shift algorithm and Kalman prediction fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811247771.5A CN109472813A (en) 2018-10-25 2018-10-25 Occlusion tracking method based on background weighting and based on Mean Shift algorithm and Kalman prediction fusion

Publications (1)

Publication Number Publication Date
CN109472813A true CN109472813A (en) 2019-03-15

Family

ID=65665932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811247771.5A Pending CN109472813A (en) 2018-10-25 2018-10-25 Occlusion tracking method based on background weighting and based on Mean Shift algorithm and Kalman prediction fusion

Country Status (1)

Country Link
CN (1) CN109472813A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414535A (en) * 2019-07-02 2019-11-05 绵阳慧视光电技术有限责任公司 A kind of manual initial block modification method and system based on background differentiation
CN110458862A (en) * 2019-05-22 2019-11-15 西安邮电大学 A kind of motion target tracking method blocked under background
CN112884814A (en) * 2021-03-15 2021-06-01 南通大学 Anti-shielding action tracking method and device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992451A (en) * 2015-06-25 2015-10-21 河海大学 Improved target tracking method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104992451A (en) * 2015-06-25 2015-10-21 河海大学 Improved target tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Xiaowei et al., "Background-weighted Mean Shift target tracking algorithm based on histogram ratio", High Power Laser and Particle Beams *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110458862A (en) * 2019-05-22 2019-11-15 西安邮电大学 A kind of motion target tracking method blocked under background
CN110414535A (en) * 2019-07-02 2019-11-05 绵阳慧视光电技术有限责任公司 A kind of manual initial block modification method and system based on background differentiation
CN110414535B (en) * 2019-07-02 2023-04-28 绵阳慧视光电技术有限责任公司 Manual initial frame correction method and system based on background distinction
CN112884814A (en) * 2021-03-15 2021-06-01 南通大学 Anti-shielding action tracking method and device and storage medium

Similar Documents

Publication Publication Date Title
CN112883819A (en) Multi-target tracking method, device, system and computer readable storage medium
CN110276264B (en) Crowd density estimation method based on foreground segmentation graph
CN110766058B (en) Battlefield target detection method based on optimized RPN (resilient packet network)
CN109816692A A moving-target tracking method based on the Camshift algorithm
CN109472813A (en) Occlusion tracking method based on background weighting and based on Mean Shift algorithm and Kalman prediction fusion
CN101883209B (en) Method for integrating background model and three-frame difference to detect video background
CN111160212B (en) Improved tracking learning detection system and method based on YOLOv3-Tiny
CN108182695B (en) Target tracking model training method and device, electronic equipment and storage medium
CN112949508A (en) Model training method, pedestrian detection method, electronic device and readable storage medium
CN105335701A (en) Pedestrian detection method based on HOG and D-S evidence theory multi-information fusion
CN110070565A A ship trajectory prediction method based on image superposition
CN105374049B (en) Multi-corner point tracking method and device based on sparse optical flow method
CN116524062B (en) Diffusion model-based 2D human body posture estimation method
CN115063454B (en) Multi-target tracking matching method, device, terminal and storage medium
CN106846373B A mutual-occlusion handling method for video objects that fuses target appearance models and game theory
CN116645396A (en) Track determination method, track determination device, computer-readable storage medium and electronic device
CN110648351B (en) Multi-appearance model fusion target tracking method and device based on sparse representation
CN112164093A (en) Automatic person tracking method based on edge features and related filtering
CN110472608A (en) Image recognition tracking processing method and system
CN116630367B (en) Target tracking method, device, electronic equipment and storage medium
CN117315547A (en) Visual SLAM method for solving large duty ratio of dynamic object
CN106447692A (en) Sample self-adaptive immune genetic particle filter weak and small target tracking method
CN116630989A (en) Visual fault detection method and system for intelligent ammeter, electronic equipment and storage medium
CN110349178A A human abnormal-behavior detection and identification system and method
CN107067411B (en) Mean-shift tracking method combined with dense features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190315