CN108320300A - A spatio-temporal context visual tracking method fused with particle filtering - Google Patents

A spatio-temporal context visual tracking method fused with particle filtering

Info

Publication number
CN108320300A
CN108320300A (application number CN201810002288.4A)
Authority
CN
China
Prior art keywords
target
particle
indicate
weights
particle filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810002288.4A
Other languages
Chinese (zh)
Inventor
文武
伍立志
廖新平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHONGQING XINKE DESIGN Co Ltd
Original Assignee
CHONGQING XINKE DESIGN Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHONGQING XINKE DESIGN Co Ltd filed Critical CHONGQING XINKE DESIGN Co Ltd
Priority to CN201810002288.4A priority Critical patent/CN108320300A/en
Publication of CN108320300A publication Critical patent/CN108320300A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A spatio-temporal context visual tracking method fused with particle filtering is claimed in the present invention. First, the first-frame image information is initialized and, by setting the experiment parameters, the rectangular region containing the target in the first frame is selected automatically. Second, the Bhattacharyya coefficient is used as the criterion for judging whether the target is occluded. Finally, when the target is occluded, a particle filter algorithm is introduced to estimate and predict the position and motion trajectory of the target in subsequent image frames, realizing accurate tracking of the target. The present invention is applicable not only to visual target tracking under complex backgrounds such as illumination variation, target rotation and background-region interference, but is also robust to target occlusion and meets real-time requirements.

Description

A spatio-temporal context visual tracking method fused with particle filtering
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a spatio-temporal context visual tracking method fused with particle filtering.
Background art
Visual tracking is one of the research hotspots in areas such as computer vision, image processing and pattern recognition, and is widely applied in video surveillance, human-machine interfaces, medical imaging and other fields. In recent years, visual tracking algorithms have attracted broad attention from academia and industry, and through sustained effort researchers at home and abroad have proposed many outstanding visual tracking algorithms, including Tracking-Learning-Detection (TLD), Compressive Tracking (CT), the Kernelized Correlation Filter (KCF) and Spatio-Temporal Context (STC). However, during actual tracking the target may deform or rotate, and external factors such as illumination changes or occluders may prevent the above algorithms from tracking the target accurately. It is therefore of great importance to design an effective visual tracking algorithm.
The STC visual tracking algorithm establishes a spatio-temporal relationship model between the target and its context region and obtains the target location in a new frame by computing a confidence map; since both the learning of the spatio-temporal model and the detection of the target are realized through the Fast Fourier Transform (FFT), the algorithm runs fast. Current improvements built on the STC algorithm include: (1) processing the target context region with a dynamic partitioning method and assigning weights to the partitions so that the original algorithm achieves a better tracking effect; (2) replacing the relatively coarse pixel-level grayscale low-level features of the STC algorithm with mid-level features from weighted superpixels, and using an adaptive parameter-update method to solve the problem that the parameters of the spatio-temporal context model are fixed. Although these methods can track the target effectively to a certain extent, in practice tracking still fails when the target is occluded in a complex background.
To handle the target occlusion problem of the STC algorithm, some researchers have considered solutions based on the Kalman filter, but little research has applied the particle filter to the tracking drift that the STC algorithm produces when the target is occluded during video target tracking.
Summary of the invention
The object of the present invention is to provide a spatio-temporal context visual tracking method fused with particle filtering, mainly addressing the problem that the STC algorithm is prone to tracking drift when the target is occluded or set against a complex background. The method is not only suitable for visual target tracking under complex backgrounds, but is also robust to target occlusion and meets real-time requirements. The technical scheme of the present invention is as follows:
A spatio-temporal context visual tracking method fused with particle filtering, comprising the following steps:
Step 1: Obtain the target tracking video and initialize the first-frame image information of the video. Set the experiment parameters, including the parameters α and β of the target confidence-map function (α denotes the scale parameter, β the shape parameter), the learning-rate factor ρ, and the variance parameter σ of the particle weights in the particle filter algorithm. Automatically select the rectangular region containing the target in the first-frame image.
Step 2: For the rectangular region in the first-frame target image obtained in Step 1, use the Bhattacharyya coefficient to judge whether the target is occluded.
Step 3: When the target is occluded, introduce an improved particle filter algorithm to estimate and predict the position and motion trajectory of the target in subsequent image frames, realizing accurate tracking of the occluded target. The main improvement to the particle filter algorithm is that an importance-sampling density function is introduced into the particle weight update to update the particle weights at frame t+1.
Further, in Step 1 the parameters of the target confidence-map function are α = 2.25 and β = 1, the learning-rate factor is ρ = 0.075 (the same value as in the STC algorithm), and the variance parameter of the particle weights in the particle filter algorithm is σ = 2.
Further, in Step 2 the Bhattacharyya coefficient is used as the criterion for judging whether the target is occluded, specifically:
The color histogram of the target and its context region is defined as:
q_μ = c Σ_{i=1..n} ω(‖y - x_i‖²) δ[b(x_i) - μ]
where q is the color histogram, c is a normalization coefficient, y denotes the center pixel of the target, x_i denotes the i-th pixel, i is an index, n denotes the number of pixels, ω(·) is a Gaussian weighting function, δ(·) is the Dirac function, b(x_i) is the color-histogram mapping function of pixel x_i, μ is the color-pixel probability feature, and m_i denotes the probability that the i-th pixel belongs to the target region. The context-region pixel value around the target is then defined as:
I_i = m_i O_i + (1 - m_i) L_i
where O_i is the target image and L_i is the local context-region image;
The Bhattacharyya coefficient is defined as:
ρ = Σ_μ √( q_i(μ) · q_i*(μ) )
where q_i(μ) is the target histogram and q_i*(μ) is the tracking-region histogram.
Further, in Step 3 an improved particle filter algorithm is introduced to estimate and predict the position and motion trajectory of the target in subsequent image frames, realizing accurate tracking of the occluded target, specifically:
(1) Initialization. Let the particle set randomly sampled at time t from the tracked target and its surrounding context region be {X_t^i, i = 1, 2, …, M}, with corresponding weights {w_t^i, i = 1, 2, …, M}, where M denotes the number of particles, w_t^i denotes the weight of the i-th particle at time t, and X_t^i denotes the i-th randomly sampled particle, satisfying Σ_{i=1..M} w_t^i = 1.
(2) State prediction of the target. At each time interval t, take any two neighboring points among the m sampled points; the vector between them is (x_i denotes the abscissa and y_i the ordinate of the i-th particle):
R_i = (dx_i, dy_i) = (x_i - x_{i-1}, y_i - y_{i-1})
From the above formula the moving direction at time t is predicted; the state at time t+1 is then predicted by substituting into the state prediction equation below, from which the state-noise covariance matrix Q_t is calculated:
X_{t+1} = A X_t + W_t
where X_{t+1} is the state prediction vector, A is the state-transition matrix, and W_t is the system Gaussian white-noise vector with W_t ∈ N(0, Q_t);
(3) Weight update of the particles. The similarity between the particle model and the target model is:
ρ_i* = Σ_μ √( p_μ^i · q_μ )
where ρ_i* denotes the Bhattacharyya coefficient, p_μ^i is the feature information of the i-th particle, and q_μ is the target feature information. Their degree of similarity is expressed through the distance between the two:
d_i = √(1 - ρ_i*)
The particle weight at frame t is then calculated as:
w_t^i = exp(-d_i² / (2σ²)) / (√(2π) σ)
where σ denotes the variance of the particle weights;
At this point an importance-sampling density function is introduced to update the particle weights at frame t+1:
w_{t+1}^i = w_t^i · p(Z_{t+1} | X_{t+1}^i) · p(X_{t+1}^i | X_t^i) / q(X_{t+1}^i | X_t^i, Z_{t+1})
where p(Z_{t+1} | X_{t+1}^i) denotes the likelihood function, p(X_{t+1}^i | X_t^i) denotes the probability transition density function, Z_{t+1} denotes the target observation variable at time t+1, X_{t+1}^i denotes the state variable of the i-th particle at time t+1, and q(X_{t+1}^i | X_t^i, Z_{t+1}) denotes the importance-sampling density function;
(4) The particle weights are normalized to:
w̃_{t+1}^i = w_{t+1}^i / Σ_{j=1..M} w_{t+1}^j
where i = 1, 2, …, M and w_{t+1}^j denotes the weight value of the j-th particle at time t+1;
(5) Resampling of the particles. Before resampling starts, a threshold sample number N_th is set; the effective particle number N_eff is given by:
N_eff = 1 / Σ_{i=1..M} (w̃_{t+1}^i)²
When N_eff < N_th, the particles are resampled: particles whose weight is greater than the set value are retained, particles whose weight is less than the set value are discarded, and all resulting particle weights are set equal;
(6) State estimation of the target. The observation vector at time t+1 is given by the following formula, from which the observation-noise covariance matrix R_{t+1} is calculated:
Y_{t+1} = B X_{t+1} + V_{t+1}
where Y_{t+1} is the observation vector, B is the observation matrix, and V_{t+1} is the process Gaussian white-noise vector with V_{t+1} ∈ N(0, R_{t+1});
Through the above steps, the center position of the final target estimate is obtained as:
X̂_{t+1} = Σ_{i=1..M} w̃_{t+1}^i X_{t+1}^i
The above formula is iterated into the spatio-temporal context model, realizing accurate tracking of the target when it is occluded in a complex background.
Advantages and beneficial effects of the present invention:
The present invention studies the case where the target is set against a complex background and occluded, and solves the drift problem of the STC algorithm in video tracking by proposing a spatio-temporal context visual tracking method fused with particle filtering. The innovation of the method is that the Bhattacharyya coefficient is computed over the local context region of the STC algorithm to determine whether the target is occluded. When the target is judged not to be occluded, the STC algorithm is executed directly and tracks the target effectively. When the target is judged to be occluded, directly executing the STC algorithm would cause the tracked target to drift; instead, a particle filter is introduced to predict and estimate the target's motion trajectory, so that the target is tracked in real time, accurately and robustly, solving the tracking drift of the STC algorithm under occlusion.
Description of the drawings
Fig. 1 is a flow diagram of the preferred embodiment provided by the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of the present invention.
The technical solution of the present invention for solving the above technical problem is:
The invention discloses a spatio-temporal context visual tracking method fused with particle filtering; as shown in Fig. 1, the specific steps are as follows:
Step 1: In video target tracking, initialize the first-frame image information and, by setting the experiment parameters, automatically select the rectangular region containing the target in the first frame, specifically:
The parameters of the target confidence-map function are α = 2.25 and β = 1; the learning-rate factor is ρ = 0.075, the same value as in the STC algorithm; the variance parameter of the particle weights in the particle filter algorithm is σ = 2.
Step 2: Use the Bhattacharyya coefficient as the criterion for judging whether the target is occluded, specifically:
The color histogram of the target and its context region is defined as:
q_μ = c Σ_{i=1..n} ω(‖y - x_i‖²) δ[b(x_i) - μ]
where q is the color histogram, c is a normalization coefficient, ω(·) is a Gaussian weighting function, δ(·) is the Dirac function, b(x_i) is the color-histogram mapping function of pixel x_i, μ is the color-pixel probability feature, and m_i denotes the probability that the i-th pixel belongs to the target region. The context-region pixel value around the target is then defined as:
I_i = m_i O_i + (1 - m_i) L_i
where O_i is the target image and L_i is the local context-region image.
The Bhattacharyya coefficient is defined as:
ρ = Σ_μ √( q_i(μ) · q_i*(μ) )
where q_i(μ) is the target histogram and q_i*(μ) is the tracking-region histogram.
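As a concrete illustration, the occlusion criterion above can be sketched in Python. The histograms are assumed to be normalized NumPy arrays, and the occlusion threshold value is an assumption for illustration; the patent does not state one:

```python
import numpy as np

def bhattacharyya(q, q_star):
    # rho = sum over bins of sqrt(q(mu) * q*(mu)); 1.0 means identical histograms
    return float(np.sum(np.sqrt(np.asarray(q) * np.asarray(q_star))))

def is_occluded(q, q_star, threshold=0.8):
    # Flag occlusion when the similarity between the target histogram and the
    # tracking-region histogram drops below a threshold (value assumed here).
    return bhattacharyya(q, q_star) < threshold
```

With identical histograms the coefficient is 1, and it falls toward 0 as the tracking-region histogram diverges from the target histogram.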
Step 3: Visual tracking algorithm. When the target is occluded, introduce the particle filter algorithm to estimate and predict the position and motion trajectory of the target in subsequent image frames, realizing accurate tracking of the target, specifically:
(1) Initialization. Let the particle set randomly sampled at time t from the tracked target and its surrounding context region be {X_t^i, i = 1, 2, …, M}, with corresponding weights {w_t^i, i = 1, 2, …, M}, satisfying Σ_{i=1..M} w_t^i = 1.
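A minimal sketch of this initialization, assuming the particles are 2-D image positions drawn with Gaussian spread around the target center (the function name, spread and particle count are illustrative, not from the patent):

```python
import numpy as np

def init_particles(center, spread=10.0, m=100, seed=0):
    # Randomly sample M particles around the target center in its context
    # region; every particle starts with equal weight 1/M, so weights sum to 1.
    rng = np.random.default_rng(seed)
    particles = rng.normal(loc=center, scale=spread, size=(m, 2))
    weights = np.full(m, 1.0 / m)
    return particles, weights
```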
(2) State prediction of the target. At each time interval t, take any two neighboring points among the m sampled points; the vector between them is:
R_i = (dx_i, dy_i) = (x_i - x_{i-1}, y_i - y_{i-1})
From the above formula the moving direction at time t can be predicted; the state at time t+1 is then predicted by substituting into the state prediction equation below, from which the state-noise covariance matrix Q_t is calculated:
X_{t+1} = A X_t + W_t
where X_{t+1} is the state prediction vector, A is the state-transition matrix, and W_t is the system Gaussian white-noise vector with W_t ∈ N(0, Q_t).
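The displacement vectors R_i and a state prediction of the form X_{t+1} = A X_t + W_t can be sketched as follows; the constant-velocity 4-D state layout [x, y, dx, dy] is an assumption for illustration, since the patent does not spell out A:

```python
import numpy as np

def displacement_vectors(points):
    # R_i = (dx_i, dy_i) = (x_i - x_{i-1}, y_i - y_{i-1}) between neighboring samples
    return np.diff(np.asarray(points, dtype=float), axis=0)

def predict_state(state, q_std=0.0, seed=0):
    # X_{t+1} = A @ X_t + W_t with W_t ~ N(0, Q_t); state = [x, y, dx, dy]
    A = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    noise = np.random.default_rng(seed).normal(0.0, q_std, size=4)
    return A @ np.asarray(state, dtype=float) + noise
```

With zero noise, a particle at (0, 0) moving with velocity (1, 2) is predicted at (1, 2) in the next frame.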
(3) Weight update of the particles. The similarity between the particle model and the target model is:
ρ_i* = Σ_μ √( p_μ^i · q_μ )
where ρ_i* denotes the Bhattacharyya coefficient, p_μ^i is the feature information of the i-th particle, and q_μ is the target feature information. Their degree of similarity can be expressed through the distance between the two:
d_i = √(1 - ρ_i*)
The particle weight at frame t is then:
w_t^i = exp(-d_i² / (2σ²)) / (√(2π) σ)
where σ denotes the variance of the particle weights.
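A sketch of this Gaussian weighting, with σ = 2 as set in Step 1; the input is assumed to be an array of per-particle Bhattacharyya coefficients:

```python
import numpy as np

def particle_weights(rhos, sigma=2.0):
    # d_i^2 = 1 - rho_i; w_i = exp(-d_i^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
    d2 = 1.0 - np.asarray(rhos, dtype=float)
    return np.exp(-d2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
```

Particles whose features are more similar to the target (ρ_i* closer to 1) receive larger weights.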
At this point an importance-sampling density function is introduced to update the particle weights at frame t+1:
w_{t+1}^i = w_t^i · p(Z_{t+1} | X_{t+1}^i) · p(X_{t+1}^i | X_t^i) / q(X_{t+1}^i | X_t^i, Z_{t+1})
where p(Z_{t+1} | X_{t+1}^i) denotes the likelihood function, p(X_{t+1}^i | X_t^i) denotes the probability transition density function, and q(X_{t+1}^i | X_t^i, Z_{t+1}) denotes the importance-sampling density function.
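The importance-weight update can be sketched per particle as below; all inputs are assumed to be per-particle arrays of evaluated densities. Note that with the common choice of the transition prior as the proposal density q(·), the ratio reduces to the likelihood alone:

```python
import numpy as np

def update_weights(w_t, likelihood, transition, proposal):
    # w_{t+1}^i = w_t^i * p(Z_{t+1}|X_{t+1}^i) * p(X_{t+1}^i|X_t^i) / q(X_{t+1}^i|X_t^i, Z_{t+1})
    return (np.asarray(w_t, dtype=float)
            * np.asarray(likelihood, dtype=float)
            * np.asarray(transition, dtype=float)
            / np.asarray(proposal, dtype=float))
```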
(4) Normalization of the particle weights. At this moment the normalized particle weights are:
w̃_{t+1}^i = w_{t+1}^i / Σ_{j=1..M} w_{t+1}^j
where i = 1, 2, …, M.
(5) Resampling of the particles. Before resampling starts, a threshold sample number N_th is set; the effective particle number N_eff is given by:
N_eff = 1 / Σ_{i=1..M} (w̃_{t+1}^i)²
When N_eff < N_th, the particles are resampled: particles with larger weights are retained, particles with smaller weights are discarded, and all resulting particle weights are set equal.
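A sketch of the effective-particle test together with a resampling step. Multinomial resampling by weight is used here as a standard stand-in for the patent's threshold rule of keeping large-weight particles; both variants reset the weights to the equal value 1/M afterwards:

```python
import numpy as np

def effective_particles(weights):
    # N_eff = 1 / sum_i w_i^2, computed on normalized weights
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / float(np.sum(w ** 2))

def resample(particles, weights, seed=0):
    # Draw M particle indices with probability proportional to their weights
    # (large-weight particles survive, small-weight ones tend to be dropped),
    # then reset all weights to the equal value 1/M.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    m = len(w)
    idx = np.random.default_rng(seed).choice(m, size=m, p=w)
    return np.asarray(particles)[idx], np.full(m, 1.0 / m)
```

N_eff equals M for uniform weights and approaches 1 when a single particle dominates, which is what makes it a useful degeneracy trigger.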
(6) State estimation of the target. The observation vector at time t+1 is given by the following formula, from which the observation-noise covariance matrix R_{t+1} is calculated:
Y_{t+1} = B X_{t+1} + V_{t+1}
where Y_{t+1} is the observation vector, B is the observation matrix, and V_{t+1} is the process Gaussian white-noise vector with V_{t+1} ∈ N(0, R_{t+1}).
Through the above steps, the center position of the final target estimate is obtained as:
X̂_{t+1} = Σ_{i=1..M} w̃_{t+1}^i X_{t+1}^i
The above formula is iterated into the spatio-temporal context model, realizing accurate tracking of the target when it is occluded in a complex background.
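The final weighted-mean estimate of the target center can be sketched as:

```python
import numpy as np

def estimate_center(particles, weights):
    # x_hat_{t+1} = sum_i w_i * X_{t+1}^i, with the weights normalized to sum to 1
    w = np.asarray(weights, dtype=float)
    return np.average(np.asarray(particles, dtype=float), axis=0, weights=w / w.sum())
```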
The above embodiments should be understood as merely illustrating the present invention and not limiting its scope. After reading the contents recorded herein, those skilled in the art can make various changes or modifications to the present invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (4)

1. A spatio-temporal context visual tracking method fused with particle filtering, characterized by comprising the following steps:
Step 1: Obtain the target tracking video and initialize the first-frame image information of the video; set the experiment parameters, including the parameters α and β of the target confidence-map function (α denotes the scale parameter, β the shape parameter), the learning-rate factor ρ, and the variance parameter σ of the particle weights in the particle filter algorithm; automatically select the rectangular region containing the target in the first-frame image;
Step 2: For the rectangular region in the first-frame target image obtained in Step 1, use the Bhattacharyya coefficient to judge whether the target is occluded;
Step 3: When the target is occluded, introduce an improved particle filter algorithm to estimate and predict the position and motion trajectory of the target in subsequent image frames, realizing accurate tracking of the occluded target; the main improvement of the particle filter algorithm is that an importance-sampling density function is introduced into the particle weight update to update the particle weights at frame t+1.
2. The spatio-temporal context visual tracking method fused with particle filtering according to claim 1, characterized in that in Step 1 the parameters of the target confidence-map function are α = 2.25 and β = 1, the learning-rate factor is ρ = 0.075 (the same value as in the STC algorithm), and the variance parameter of the particle weights in the particle filter algorithm is σ = 2.
3. The spatio-temporal context visual tracking method fused with particle filtering according to claim 1, characterized in that in Step 2 the Bhattacharyya coefficient is used as the criterion for judging whether the target is occluded, specifically:
The color histogram of the target and its context region is defined as:
q_μ = c Σ_{i=1..n} ω(‖y - x_i‖²) δ[b(x_i) - μ]
where q is the color histogram, c is a normalization coefficient, y denotes the center pixel of the target, x_i denotes the i-th pixel, i is an index, n denotes the number of pixels, ω(·) is a Gaussian weighting function, δ(·) is the Dirac function, b(x_i) is the color-histogram mapping function of pixel x_i, μ is the color-pixel probability feature, and m_i denotes the probability that the i-th pixel belongs to the target region; the context-region pixel value around the target is then defined as:
I_i = m_i O_i + (1 - m_i) L_i
where O_i is the target image and L_i is the local context-region image;
The Bhattacharyya coefficient is defined as:
ρ = Σ_μ √( q_i(μ) · q_i*(μ) )
where q_i(μ) is the target histogram and q_i*(μ) is the tracking-region histogram.
4. The spatio-temporal context visual tracking method fused with particle filtering according to claim 3, characterized in that in Step 3 an improved particle filter algorithm is introduced to estimate and predict the position and motion trajectory of the target in subsequent image frames, realizing accurate tracking of the occluded target, specifically:
(1) Initialization. Let the particle set randomly sampled at time t from the tracked target and its surrounding context region be {X_t^i, i = 1, 2, …, M}, with corresponding weights {w_t^i, i = 1, 2, …, M}, where M denotes the number of particles, w_t^i denotes the weight of the i-th particle at time t, and X_t^i denotes the i-th randomly sampled particle, satisfying Σ_{i=1..M} w_t^i = 1;
(2) State prediction of the target. At each time interval t, take any two neighboring points among the m sampled points; the vector between them is (x_i denotes the abscissa and y_i the ordinate of the i-th particle):
R_i = (dx_i, dy_i) = (x_i - x_{i-1}, y_i - y_{i-1})
From the above formula the moving direction at time t is predicted; the state at time t+1 is then predicted by substituting into the state prediction equation below, from which the state-noise covariance matrix Q_t is calculated:
X_{t+1} = A X_t + W_t
where X_{t+1} is the state prediction vector, A is the state-transition matrix, and W_t is the system Gaussian white-noise vector with W_t ∈ N(0, Q_t);
(3) Weight update of the particles. The similarity between the particle model and the target model is:
ρ_i* = Σ_μ √( p_μ^i · q_μ )
where ρ_i* denotes the Bhattacharyya coefficient, p_μ^i is the feature information of the i-th particle, and q_μ is the target feature information; their degree of similarity is expressed through the distance between the two:
d_i = √(1 - ρ_i*)
The particle weight at frame t is then calculated as:
w_t^i = exp(-d_i² / (2σ²)) / (√(2π) σ)
where σ denotes the variance of the particle weights;
At this point an importance-sampling density function is introduced to update the particle weights at frame t+1:
w_{t+1}^i = w_t^i · p(Z_{t+1} | X_{t+1}^i) · p(X_{t+1}^i | X_t^i) / q(X_{t+1}^i | X_t^i, Z_{t+1})
where p(Z_{t+1} | X_{t+1}^i) denotes the likelihood function, p(X_{t+1}^i | X_t^i) denotes the probability transition density function, Z_{t+1} denotes the target observation variable at time t+1, X_{t+1}^i denotes the state variable of the i-th particle at time t+1, and q(X_{t+1}^i | X_t^i, Z_{t+1}) denotes the importance-sampling density function;
(4) The particle weights are normalized to:
w̃_{t+1}^i = w_{t+1}^i / Σ_{j=1..M} w_{t+1}^j
where i = 1, 2, …, M and w_{t+1}^j denotes the weight value of the j-th particle at time t+1;
(5) Resampling of the particles. Before resampling starts, a threshold sample number N_th is set; the effective particle number N_eff is given by:
N_eff = 1 / Σ_{i=1..M} (w̃_{t+1}^i)²
When N_eff < N_th, the particles are resampled: particles whose weight is greater than the set value are retained, particles whose weight is less than the set value are discarded, and all resulting particle weights are set equal;
(6) State estimation of the target. The observation vector at time t+1 is given by the following formula, from which the observation-noise covariance matrix R_{t+1} is calculated:
Y_{t+1} = B X_{t+1} + V_{t+1}
where Y_{t+1} is the observation vector, B is the observation matrix, and V_{t+1} is the process Gaussian white-noise vector with V_{t+1} ∈ N(0, R_{t+1});
Through the above steps, the center position of the final target estimate is obtained as:
X̂_{t+1} = Σ_{i=1..M} w̃_{t+1}^i X_{t+1}^i
The above formula is iterated into the spatio-temporal context model, realizing accurate tracking of the target when it is occluded in a complex background.
CN201810002288.4A 2018-01-02 2018-01-02 A spatio-temporal context visual tracking method fused with particle filtering Pending CN108320300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810002288.4A CN108320300A (en) 2018-01-02 2018-01-02 A spatio-temporal context visual tracking method fused with particle filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810002288.4A CN108320300A (en) 2018-01-02 2018-01-02 A spatio-temporal context visual tracking method fused with particle filtering

Publications (1)

Publication Number Publication Date
CN108320300A (en) 2018-07-24

Family

ID=62892945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810002288.4A Pending CN108320300A (en) A spatio-temporal context visual tracking method fused with particle filtering

Country Status (1)

Country Link
CN (1) CN108320300A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109308457A (en) * 2018-08-31 2019-02-05 华南理工大学 Multiparticle three-dimensional method for tracing under a kind of high concentration
CN110135314A (en) * 2019-05-07 2019-08-16 电子科技大学 A kind of multi-object tracking method based on depth Trajectory prediction
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110363113A (en) * 2019-06-28 2019-10-22 华南理工大学 VLC-ITS information source based on particle filter tracks extracting method
CN110580711A (en) * 2019-08-23 2019-12-17 天津大学 video tracking method adopting particle filtering
CN110910416A (en) * 2019-11-20 2020-03-24 河北科技大学 Moving obstacle tracking method and device and terminal equipment
CN111862157A (en) * 2020-07-20 2020-10-30 重庆大学 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN112053385A (en) * 2020-08-28 2020-12-08 西安电子科技大学 Remote sensing video shielding target tracking method based on deep reinforcement learning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
CN107038431A (en) * 2017-05-09 2017-08-11 西北工业大学 Video target tracking method of taking photo by plane based on local sparse and spatio-temporal context information

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105631895A (en) * 2015-12-18 2016-06-01 重庆大学 Temporal-spatial context video target tracking method combining particle filtering
CN107038431A (en) * 2017-05-09 2017-08-11 西北工业大学 Video target tracking method of taking photo by plane based on local sparse and spatio-temporal context information

Non-Patent Citations (7)

Title
KAIHUA ZHANG et al.: "Fast Tracking via Spatio-Temporal Context Learning", arXiv:1311.1939v1 [cs.CV] *
TING LIU et al.: "Real-time part-based visual tracking via adaptive correlation filters", 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
WU WEN et al.: "Tracking Algorithm of Improved Spatio-Temporal Context with Particle Filter", 2017 3rd IEEE International Conference on Computer and Communications *
ZIYANG WANG et al.: "Particle Filter Based on Context Tracking Algorithm for Real-world Hazy Scenes", 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference *
LIU WEITING et al.: "Maneuvering target tracking method based on an importance-resampling particle filter", Journal of Jiangsu University of Science and Technology (Natural Science Edition) *
XIA YU et al.: "An occlusion-resistant adaptive particle filter target tracking method", Journal of Optoelectronics·Laser *
HE YONGFENG: "Research on the application of particle filter algorithms in video tracking", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN109308457A (en) * 2018-08-31 2019-02-05 华南理工大学 Multiparticle three-dimensional method for tracing under a kind of high concentration
CN109308457B (en) * 2018-08-31 2022-03-29 华南理工大学 Multi-particle three-dimensional tracking method under high concentration
CN110135314A (en) * 2019-05-07 2019-08-16 电子科技大学 A kind of multi-object tracking method based on depth Trajectory prediction
CN110244322A (en) * 2019-06-28 2019-09-17 东南大学 Pavement construction robot environment sensory perceptual system and method based on Multiple Source Sensor
CN110363113A (en) * 2019-06-28 2019-10-22 华南理工大学 VLC-ITS information source based on particle filter tracks extracting method
CN110363113B (en) * 2019-06-28 2021-09-21 华南理工大学 VLC-ITS information source tracking extraction method based on particle filtering
CN110580711A (en) * 2019-08-23 2019-12-17 天津大学 video tracking method adopting particle filtering
CN110910416A (en) * 2019-11-20 2020-03-24 河北科技大学 Moving obstacle tracking method and device and terminal equipment
CN111862157A (en) * 2020-07-20 2020-10-30 重庆大学 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN111862157B (en) * 2020-07-20 2023-10-10 重庆大学 Multi-vehicle target tracking method integrating machine vision and millimeter wave radar
CN112053385A (en) * 2020-08-28 2020-12-08 西安电子科技大学 Remote sensing video shielding target tracking method based on deep reinforcement learning
CN112053385B (en) * 2020-08-28 2023-06-02 西安电子科技大学 Remote sensing video shielding target tracking method based on deep reinforcement learning

Similar Documents

Publication Publication Date Title
CN108320300A (en) A spatio-temporal context visual tracking method fused with particle filtering
CN109344725B (en) Multi-pedestrian online tracking method based on space-time attention mechanism
CN110009665B (en) Target detection tracking method in shielding environment
CN107452015B (en) Target tracking system with re-detection mechanism
CN103886325B (en) Cyclic matrix video tracking method with partition
CN105469054B (en) The model building method of normal behaviour and the detection method of abnormal behaviour
CN108521554A (en) Large scene multi-target cooperative tracking method, intelligent monitor system, traffic system
CN109359549A (en) A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP
CN104036526A (en) Gray target tracking method based on self-adaptive window
CN110717934A (en) Anti-occlusion target tracking method based on STRCF
Qi et al. Small infrared target detection utilizing local region similarity difference map
CN102509414B (en) Smog detection method based on computer vision
CN117011381A (en) Real-time surgical instrument pose estimation method and system based on deep learning and stereoscopic vision
CN109102520A (en) The moving target detecting method combined based on fuzzy means clustering with Kalman filter tracking
CN106485283B (en) A kind of particle filter pedestrian target tracking based on Online Boosting
Wang et al. A real-time active pedestrian tracking system inspired by the human visual system
CN112733770A (en) Regional intrusion monitoring method and device
CN117252908A (en) Anti-occlusion multi-target tracking method based on attention
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
Ren et al. A multiCell visual tracking algorithm using multi-task particle swarm optimization for low-contrast image sequences
CN116193103A (en) Video picture jitter level assessment method
CN114429489A (en) Mobile robot target occlusion tracking method based on multi-core correlation filtering fusion
CN110322474B (en) Image moving target real-time detection method based on unmanned aerial vehicle platform
Zhou et al. Real-time detection and spatial segmentation of difference image motion changes
JPH07320068A (en) Method and processor for image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180724