CN104200494B - Real-time visual target tracking method based on optical flow - Google Patents

Real-time visual target tracking method based on optical flow

Info

Publication number
CN104200494B
CN104200494B (application CN201410458973.XA)
Authority
CN
China
Prior art keywords
point
target
image
time
tracking
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410458973.XA
Other languages
Chinese (zh)
Other versions
CN104200494A (en)
Inventor
梁建宏
高涵
张以成
管沁朴
刘淼
孙安琦
杨兴帮
吴耀
王田苗
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN201410458973.XA
Publication of CN104200494A
Application granted
Publication of CN104200494B
Expired - Fee Related

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a real-time visual target tracking method based on optical flow. The method screens feature points rich in texture and tracks them between every two frames of images to obtain a feature point matching relation; feature points with large matching errors are filtered out by several screening algorithms, including the normalized correlation coefficient, the forward-backward tracking error and random consensus detection, so that the most reliable feature points are retained and the motion velocity of the target in the image is obtained; taking this motion velocity as the observation, the target position is estimated by Kalman filtering. Compared with existing tracking algorithms, the method is insensitive to illumination and can stably track the target under conditions such as target motion, camera motion and short-term occlusion of the target. The method is computationally efficient, and the tracking algorithm is verified for the first time on an ARM platform.

Description

Real-time visual target tracking method based on optical flow
Technical Field
The invention relates to a real-time visual target tracking method based on optical flow, and belongs to the technical field of target tracking.
Background
Visual target tracking is an important research direction in the field of machine vision. It detects, extracts, identifies and tracks a target of interest in an image sequence so as to provide the motion state parameters of the target, such as position, velocity, acceleration and motion trajectory; these can be further processed and analyzed to understand the behavior of the moving target and to provide raw data for subsequent applications (such as visual navigation, pose estimation and motion analysis). It is widely applied in fields such as intelligent surveillance, human-computer interaction and robot navigation. Beyond the civil field, visual target tracking also receives great attention in the military field, in scenarios such as battlefield reconnaissance, cruise missile guidance, ground-attack helicopters and main battle tank fire control systems. In the above applications, target tracking is the basis for a robot to perceive the external environment and react, and is the key to understanding the image.
Currently, the following methods are commonly used for visual tracking:
(1) The Meanshift algorithm matches images of a fixed-size area using color information. It is a data-driven, non-parametric estimation algorithm, also called kernel density estimation, which finds a local optimum of the posterior probability through the mean shift vector; that is, it determines the matching region as the region with the most similar color. The Meanshift algorithm does not support updating the target.
(2) The template matching algorithm compares each region of an image with a fixed template. A target template is selected at initialization; during tracking, image regions are continuously compared with the template, and the image region with the smallest distance to the template is selected as the position of the target in the current image. The algorithm is simple in structure, but the computation load is large and target changes cannot be handled: when the target changes greatly or rotates by more than 45 degrees, the algorithm fails.
(3) Machine learning methods train a classifier on features extracted separately from the target and the background, and the classifier evaluates the image. During tracking, the position of the target is determined from the posterior probability given by the classifier. Meanwhile, the classifier can be retrained on the tracking result, so that tracking continues even as the target changes. Such algorithms are computationally expensive, their tracking is unstable, and they diverge easily.
In summary, the problems of the prior art are as follows: tracking of the target is greatly influenced by the environment (such as illumination changes, target motion and camera motion), and updating of the target's appearance is not supported. Meanwhile, the computation load is large, making real-time requirements difficult to meet.
Disclosure of Invention
The invention aims to solve the above problems and provides a real-time visual target tracking method based on optical flow. At initialization, feature points rich in texture are screened out, and the feature points are tracked between every two frames of images to obtain a feature point matching relation. Feature points with large matching errors are filtered out by several screening algorithms, including the normalized correlation coefficient, the forward-backward tracking error and random consensus detection, retaining the most reliable feature points and thus yielding the motion velocity of the target in the image. Taking this motion velocity as the observation, the target position is generated by Kalman filtering estimation.
A real-time visual target tracking method based on optical flow comprises the following steps:
Step one: acquiring the initial position of the tracking target in the first frame image;
Step two: acquiring a feature point sequence;
Step three: acquiring an image I_{t+1} of the tracked target at the new time;
Step four: tracking the feature points;
Step five: screening the point pairs;
Step six: eliminating the influence of background feature points;
Step seven: estimating the system state.
The invention has the advantages that:
(1) target tracking is carried out by tracking feature points, so the computation load is small and the running speed is high;
(2) with Kalman filtering, the tracking error can be controlled, and the method can withstand short-term occlusion;
(3) a background elimination method based on random consensus detection is provided, which can eliminate the influence of background feature points on the tracking result.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic illustration of background elimination of the present invention;
FIG. 3 is a schematic illustration of face tracking;
FIG. 4 is a schematic view of tracking a large, slow-moving vehicle;
FIG. 5 is a schematic view of tracking an aerial highway vehicle.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
The invention relates to a real-time visual target tracking method based on optical flow; the flow of the method is shown in FIG. 1, and the method comprises the following steps:
Step one: acquiring the initial position of the tracking target in the first frame image;
The initial position of the tracking target is known and is given as a rectangular frame representing the position of the target; during target tracking, the position of the target in each frame image is represented by this rectangular frame.
Step two: acquiring a feature point sequence
The optical flow field of a general solution is a dense motion field and is typically used in navigation applications such as obstacle avoidance. In the target tracking problem, however, a dense optical flow field is not required: the motion velocity of the target can be obtained from the optical flow of only some of the pixels on the target. Moreover, solving for dense optical flow consumes a significant amount of time, which is too heavy a burden for a real-time algorithm. Furthermore, not all points are suitable for tracking; for points lacking texture information, the tracking error is larger and causes tracking failures. Therefore, the invention first screens feature points rich in texture information inside the rectangular frame giving the position of the target in image I_t, as follows:
Let p = [p_x, p_y] be any point in the rectangular frame, where p_x, p_y are the coordinates of the point p in the pixel coordinate system. Define the search area as the window of radius w centered at p, of size (2w+1) × (2w+1), and define the matrix M as:

M = \sum_{x=p_x-w}^{p_x+w} \sum_{y=p_y-w}^{p_y+w} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}

wherein I_x and I_y are the derivatives, in the pixel coordinate system, of all points in the search area in the x and y directions.
For each point in the rectangular frame, compute the two eigenvalues λ_1, λ_2 of the matrix M, and let the minimum eigenvalue be λ_min. Set an eigenvalue threshold λ_threshold, 1 ≤ λ_threshold ≤ 10, and discard points with λ_min < λ_threshold. Sort the remaining points by λ_min in descending order to form a pre-selected point set. The positions of some points in the pre-selected set generally cluster together; to balance the distribution of the points, select the point with the largest λ_min in the pre-selected set, set a minimum distance, and discard the points whose distance to that point is less than the minimum distance. Add the point with the largest λ_min to the feature point sequence, and let the remaining points form a new pre-selected set. Again select the point with the largest λ_min in the new pre-selected set, discard the points whose distance to it is less than the minimum distance, and add it to the feature point sequence; the remaining points once more form a new pre-selected set. Repeat until the pre-selected set is empty; the result is the feature point sequence.
If the minimum distance is too large, few points remain; if it is too small, the points are too numerous and too concentrated. The number of feature points finally retained can thus be controlled by adjusting the minimum distance. After the above steps, the remaining points are rich in texture information and suitable for tracking, while being fairly evenly distributed within the rectangular frame.
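By way of illustration, the screening of step two can be sketched with OpenCV's goodFeaturesToTrack, which implements the same minimum-eigenvalue criterion on M together with a minimum-distance constraint; the function name, the qualityLevel value and the default spacing below are illustrative assumptions, not values fixed by the invention.

```python
import cv2
import numpy as np

def select_feature_points(frame_gray, bbox, max_corners=100, min_distance=7):
    """Step two sketch: screen texture-rich feature points inside the
    target rectangle using the minimum-eigenvalue (Shi-Tomasi) criterion
    with a minimum spacing between accepted points."""
    x, y, w, h = bbox
    roi = frame_gray[y:y + h, x:x + w]
    # qualityLevel plays the role of the eigenvalue threshold lambda_threshold,
    # expressed as a fraction of the strongest response (illustrative value).
    pts = cv2.goodFeaturesToTrack(roi, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=min_distance,
                                  useHarrisDetector=False)
    if pts is None:
        return np.empty((0, 1, 2), dtype=np.float32)
    pts[:, 0, 0] += x   # shift ROI coordinates back to full-image coordinates
    pts[:, 0, 1] += y
    return pts
```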
Step three: acquiring an image I_{t+1} of the tracked target at the new time;
Step four: tracking the feature points;
Using the Lucas-Kanade method, the feature point sequence in image I_t is tracked to the corresponding points in image I_{t+1}, forming N point pairs.
In the invention, the Lucas-Kanade method matches the feature points and thereby obtains the motion of the feature points between the two images, i.e., the feature points are tracked.
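A minimal sketch of steps three and four, assuming OpenCV's pyramidal Lucas-Kanade implementation (cv2.calcOpticalFlowPyrLK); the window size and pyramid depth are illustrative choices.

```python
def track_points(img_t, img_t1, pts_t):
    """Steps three and four sketch: track the feature points from I_t to
    I_{t+1} with the pyramidal Lucas-Kanade method, keeping only points
    whose status flag indicates a successful track."""
    pts_t1, status, err = cv2.calcOpticalFlowPyrLK(
        img_t, img_t1, pts_t, None,
        winSize=(21, 21), maxLevel=3)   # illustrative window and pyramid depth
    ok = status.ravel() == 1
    return pts_t[ok], pts_t1[ok]
```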
Step five: screening the point pairs;
After tracking the feature point sequence, N point pairs are obtained; some are reliable while others carry large tracking errors. The point pairs are screened using the forward-backward tracking error and the normalized correlation coefficient, respectively.
(1) Screening the N point pairs by the forward-backward tracking error.
The forward-backward tracking error evaluates the error of feature point tracking through a forward-backward temporal consistency constraint; the essential idea is that correct feature point tracking should be independent of the direction in which time flows.
Let p_t be a point in image I_t at time t that is matched to a point in image I_{t+1} at time t+1, denoted p_{t+1}. Track backward from the point p_{t+1} to image I_t and denote the corresponding point p'_t. Ideally, p'_t and p_t should be the same point; define the error as the distance between these two points:
e = dis(p'_t, p_t),
Compute the errors of all N point pairs, sort them from large to small, and keep the point pairs with the smaller errors.
(2) Screening the remaining point pairs by the normalized correlation coefficient.
Each feature point may be represented by a small image block centered on it. Under the smooth motion assumption, the appearance of the object does not change drastically between two adjacent images. Therefore, if a point match is correct, the image blocks corresponding to the pair of feature points should be similar, and the degree of similarity between a pair of image blocks can be measured by the normalized correlation coefficient.
Let a feature point be represented by a w × w image block, represented by a vector q ∈ R^{w×w}. At time t, take the image block A_t of the point p_t in image I_t to obtain the vector q_t; at time t+1, take the image block A_{t+1} of the point p_{t+1} in image I_{t+1} to obtain the vector q_{t+1}. The normalized correlation coefficient of the two vectors is:

v_1 = q_t - \bar{q}_t,
v_2 = q_{t+1} - \bar{q}_{t+1},
ncc(q_t, q_{t+1}) = \left\langle \frac{v_1}{\|v_1\|}, \frac{v_2}{\|v_2\|} \right\rangle

wherein \bar{q}_t denotes the mean of the vector q_t, \bar{q}_{t+1} denotes the mean of the vector q_{t+1}, ⟨·,·⟩ denotes the inner product of two vectors, and ncc(q_t, q_{t+1}) denotes the normalized correlation coefficient. Set a normalized correlation coefficient threshold ncc_threshold, 0.7 ≤ ncc_threshold ≤ 0.9, and discard the point pairs whose normalized correlation coefficient is less than ncc_threshold.
Screening the point pairs obtained in step four by these two methods yields the screened, high-reliability point pairs; both filters are illustrated in the sketch below.
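The two step-five filters might be sketched as follows, assuming grayscale NumPy images and point arrays from the Lucas-Kanade sketch above; the keep ratio of the forward-backward filter and the patch size are illustrative assumptions, while the patent fixes only 0.7 ≤ ncc_threshold ≤ 0.9.

```python
def forward_backward_filter(img_t, img_t1, pts_t, pts_t1, keep_ratio=0.5):
    """Step five (1) sketch: re-track from I_{t+1} back to I_t and keep
    the pairs with the smallest round-trip error e = dis(p'_t, p_t)."""
    if len(pts_t) == 0:
        return pts_t, pts_t1
    pts_back, status, _ = cv2.calcOpticalFlowPyrLK(
        img_t1, img_t, pts_t1, None, winSize=(21, 21), maxLevel=3)
    e = np.linalg.norm((pts_back - pts_t).reshape(-1, 2), axis=1)
    e[status.ravel() != 1] = np.inf          # failed back-tracks: infinite error
    finite = e[np.isfinite(e)]
    if finite.size == 0:
        return pts_t[:0], pts_t1[:0]
    keep = e <= np.quantile(finite, keep_ratio)  # keep_ratio is an assumption
    return pts_t[keep], pts_t1[keep]

def ncc_filter(img_t, img_t1, pts_t, pts_t1, w=10, ncc_threshold=0.8):
    """Step five (2) sketch: compare the w x w patch around p_t with the
    patch around p_{t+1}; discard pairs whose normalized correlation
    coefficient falls below ncc_threshold."""
    keep = []
    for (x0, y0), (x1, y1) in zip(pts_t.reshape(-1, 2), pts_t1.reshape(-1, 2)):
        q_t = cv2.getRectSubPix(img_t, (w, w), (float(x0), float(y0)))
        q_t1 = cv2.getRectSubPix(img_t1, (w, w), (float(x1), float(y1)))
        v1 = q_t.astype(np.float32).ravel() - q_t.mean()
        v2 = q_t1.astype(np.float32).ravel() - q_t1.mean()
        denom = np.linalg.norm(v1) * np.linalg.norm(v2)
        keep.append(denom > 0 and float(v1 @ v2) / denom >= ncc_threshold)
    keep = np.array(keep, dtype=bool)
    return pts_t[keep], pts_t1[keep]
```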
Step six: eliminating the influence of background feature points;
In addition to the target being tracked, the rectangular frame often contains background, and feature points from the background may adversely affect the tracking result.
The interference of background feature points is eliminated by a method based on random sample consensus (RANSAC). Let P = (p_1, p_2, ..., p_m, p_{m+1}, ..., p_n) be the screened high-reliability point-pair set, where the first m point pairs come from the background and m ≤ n/2 (in a correct tracking procedure, the target should occupy the main part of the rectangular frame, so we stipulate m ≤ n/2). Let D = (d_1, d_2, ..., d_m, d_{m+1}, ..., d_n) denote the displacement of each point pair in P between the two frame images, so d_i represents the relative displacement of the i-th point pair. In the image coordinate system, suppose the velocity of the background is V_b and the velocity of the target is V_o; then:
d_i ~ N(V_b, σ_b), i ∈ [1, m]
d_j ~ N(V_o, σ_o), j ∈ [m+1, n]
wherein d_i ~ N(V_b, σ_b) denotes that d_i obeys a normal distribution with mean V_b and variance σ_b, and d_j ~ N(V_o, σ_o) denotes that d_j obeys a normal distribution with mean V_o and variance σ_o.
The method for eliminating the background feature points comprises the following steps:
(1) randomly draw K elements from D to form an array V;
(2) take the median of V and record it as EV_i; compute the error of all elements of V with respect to EV_i, recorded as ε_i;
(3) execute steps (1) and (2) repeatedly, T times in total, obtaining T errors;
(4) record the EV_i with the smallest error as EV; EV is the displacement of the target;
(5) compute the error between all elements of D and EV, and remove the point pairs whose error exceeds a threshold ε, where 1 ≤ ε ≤ 3.
Through the above steps, the point pairs with the influence of background feature points eliminated are obtained.
As shown in FIG. 2, when tracking a car on a highway, two points in the background are eliminated (gray), and the remaining feature points (white) are all located on the car.
The main advantage of the background feature point elimination method in step six is that it is easy to apply: it suits a range of feature-point-based tracking algorithms, and the velocity estimate and the error computation can be adapted to the application.
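A sketch of the step-six elimination under the model above; the values of K, T and the error threshold (the patent allows 1 ≤ ε ≤ 3) are illustrative, and taking the consensus error as the summed distance of the sample to its median is an assumption.

```python
def eliminate_background(pts_t, pts_t1, K=10, T=20, err_threshold=2.0, rng=None):
    """Step six sketch: random-consensus background elimination. Sample K
    displacements, take their median EV_i, keep the hypothesis with the
    smallest consensus error over the sample, then drop pairs farther
    than err_threshold from the winning displacement EV."""
    rng = rng or np.random.default_rng()
    D = (pts_t1 - pts_t).reshape(-1, 2)          # displacement of each pair
    best_ev, best_err = None, np.inf
    for _ in range(T):
        idx = rng.choice(len(D), size=min(K, len(D)), replace=False)
        V = D[idx]
        ev_i = np.median(V, axis=0)              # median value EV_i
        err_i = np.linalg.norm(V - ev_i, axis=1).sum()
        if err_i < best_err:
            best_ev, best_err = ev_i, err_i
    keep = np.linalg.norm(D - best_ev, axis=1) <= err_threshold
    return pts_t[keep], pts_t1[keep], best_ev    # best_ev: target displacement
```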
Step seven: estimating the system state
Although the remaining feature points are highly reliable after the repeated screening, the error of the algorithm still accumulates gradually during operation. Therefore, Kalman filtering is adopted: the displacement of the target from step six is received as the observation to estimate the position of the target while controlling noise. In addition, feature-point-based tracking algorithms can fail when the tracked object disappears from view or its features are lost. With Kalman filtering, when the tracker fails and outputs no velocity, the algorithm can maintain the state of the object by skipping the measurement update of the tracking model.
Let the state vector at time t be X_t = [P_t, V_t]^T, where P_t denotes the position of the rectangular frame and V_t denotes the displacement of the rectangular frame between the current frame image and the previous frame image. Because the motion of the target between any two frames is not large, the motion of the target between frames can be taken as a constant velocity model. The state transition of the target is thus:

X_{t+1} = \begin{bmatrix} x_{t+1} \\ y_{t+1} \\ u_{t+1} \\ v_{t+1} \end{bmatrix} = F \cdot X_t = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_t \\ y_t \\ u_t \\ v_t \end{bmatrix}

wherein X_{t+1} represents the state vector at time t+1, X_t the state vector at time t; x_{t+1}, y_{t+1} represent the coordinates of the target in the image coordinate system at time t+1, and x_t, y_t the coordinates at time t; u_{t+1}, v_{t+1} represent the velocities of the target in the two directions of the image coordinate system from time t to time t+1, and u_t, v_t the velocities from time t-1 to time t, with u_0 = v_0 = 0; Δt represents the time interval between the acquisition of two frames of images.
The observation vector of the Kalman filter is represented as:

Z = \begin{bmatrix} u_t \\ v_t \end{bmatrix} = H \cdot X_t + V,

wherein

H = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},

V ~ N(0, R)

and V ~ N(0, R) denotes that V obeys a normal distribution with mean 0 and variance R.
The observation vector is fused by Kalman filtering to obtain the position of the target in the image at time t+1, expressed as a rectangular frame; then return to step two and repeat until target tracking stops.
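A sketch of the step-seven filter using OpenCV's cv2.KalmanFilter with the F and H matrices given above; the process and measurement noise magnitudes are illustrative assumptions.

```python
def make_kalman(dt=1.0):
    """Step seven sketch: constant-velocity Kalman filter with the F and H
    given above; state [x, y, u, v], measurement [u, v]."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[0, 0, 1, 0],
                                     [0, 0, 0, 1]], dtype=np.float32)
    # Noise covariances are illustrative; R is the measurement noise of V.
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    return kf
```

When the tracker yields no velocity, calling predict() without a subsequent correct() lets the filter coast, which is how the method rides out short occlusions.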
The method of the invention was applied to tracking human faces and automobiles. The face tracking result is shown in FIG. 3, in which the subject moves from a dark environment into a bright one, changes the position of the face, removes glasses, and so on.
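Putting the pieces together, a minimal driver under the assumptions of the sketches above (grayscale frames, a rectangle of fixed size, feature points re-seeded every frame); none of these simplifications is prescribed by the patent.

```python
def track(video_path, init_bbox):
    """Minimal end-to-end loop over steps one to seven, using the sketches above."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    img_t = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    x, y, bw, bh = init_bbox                     # step one: initial rectangle
    kf = make_kalman()
    kf.statePost = np.float32([x, y, 0, 0]).reshape(4, 1)   # u0 = v0 = 0
    while True:
        pts_t = select_feature_points(img_t, (x, y, bw, bh))     # step two
        ok, frame = cap.read()                   # step three: image I_{t+1}
        if not ok or len(pts_t) == 0:
            break
        img_t1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        p0, p1 = track_points(img_t, img_t1, pts_t)              # step four
        p0, p1 = forward_backward_filter(img_t, img_t1, p0, p1)  # step five (1)
        p0, p1 = ncc_filter(img_t, img_t1, p0, p1)               # step five (2)
        kf.predict()                             # step seven: time update
        if len(p0) > 0:
            p0, p1, ev = eliminate_background(p0, p1)            # step six
            kf.correct(np.float32(ev).reshape(2, 1))             # measurement update
        x, y = int(kf.statePost[0, 0]), int(kf.statePost[1, 0])
        img_t = img_t1
    cap.release()
```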

Claims (4)

1. A real-time visual target tracking method based on optical flow comprises the following steps:
step one: acquiring the initial position of the tracking target in the first frame image;
the initial position of the tracking target is known, and a rectangular frame is given to represent the position of the target;
step two: acquiring a feature point sequence;
in the rectangular frame giving the position of the target in image I_t, screening feature points rich in texture information, as follows:
let p = [p_x, p_y] be any point in the rectangular frame, where p_x, p_y are the coordinates of the point p in the pixel coordinate system; define the search area as the window of radius w centered at p, of size (2w+1) × (2w+1), and define the matrix M as:
M = \sum_{x=p_x-w}^{p_x+w} \sum_{y=p_y-w}^{p_y+w} \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
wherein: i isxAnd IyThe differential of all points in the search area in the x and y directions under the pixel coordinate system is obtained;
for each point in the rectangular frame, compute the two eigenvalues λ_1, λ_2 of the matrix M, and let the minimum eigenvalue be λ_min; set an eigenvalue threshold λ_threshold and discard points with λ_min < λ_threshold; sort the remaining points by λ_min in descending order to form a pre-selected point set; select the point with the largest λ_min in the pre-selected set, set a minimum distance, and discard the points whose distance to that point is less than the minimum distance; add the point with the largest λ_min to the feature point sequence, and let the remaining points form a new pre-selected set; select the point with the largest λ_min in the new pre-selected set, discard the points whose distance to it is less than the minimum distance, and add it to the feature point sequence; the remaining points again form a new pre-selected set; repeat until the pre-selected set is empty, finally obtaining the feature point sequence;
step three: acquiring an image I_{t+1} of the tracked target at the new time;
step four: tracking the feature points;
using the Lucas-Kanade method, tracking the feature point sequence in image I_t to the corresponding points in image I_{t+1}, forming N point pairs;
step five: screening the point pairs;
(1) screening the N point pairs by the forward-backward tracking error;
let p_t be a point in image I_t at time t that is matched to a point in image I_{t+1} at time t+1, denoted p_{t+1}; track backward from the point p_{t+1} to image I_t and let the corresponding point be p'_t; let the distance between these two points be the error:
e = dis(p'_t, p_t),
compute the errors of the N point pairs, sort them from large to small, and keep the point pairs with the smaller errors;
(2) screening the remaining point pairs by the normalized correlation coefficient;
at time t, take the image block A_t of the point p_t in image I_t to obtain the vector q_t; at time t+1, take the image block A_{t+1} of the point p_{t+1} in image I_{t+1} to obtain the vector q_{t+1}; the normalized correlation coefficient of the two vectors is:
v_1 = q_t - \bar{q}_t,
v_2 = q_{t+1} - \bar{q}_{t+1},
ncc(q_t, q_{t+1}) = \left\langle \frac{v_1}{\|v_1\|}, \frac{v_2}{\|v_2\|} \right\rangle
wherein \bar{q}_t denotes the mean of the vector q_t, \bar{q}_{t+1} denotes the mean of the vector q_{t+1}, ⟨·,·⟩ denotes the inner product of two vectors, and ncc(q_t, q_{t+1}) denotes the normalized correlation coefficient; set a normalized correlation coefficient threshold ncc_threshold, and discard the point pairs whose normalized correlation coefficient is less than ncc_threshold;
screening the point pairs obtained in step four by the above two methods to obtain the screened high-reliability point pairs;
step six: eliminating the influence of background feature points;
let P = (p_1, p_2, ..., p_m, p_{m+1}, ..., p_n) be the screened high-reliability point-pair set, where the first m point pairs come from the background, m ≤ n/2; let D = (d_1, d_2, ..., d_m, d_{m+1}, ..., d_n) denote the displacement of each point pair in P between the two frame images, so d_i represents the relative displacement of the i-th point pair; in the image coordinate system, suppose the velocity of the background is V_b and the velocity of the target is V_o, obtaining:
d_i ~ N(V_b, σ_b), i ∈ [1, m]
d_j ~ N(V_o, σ_o), j ∈ [m+1, n]
wherein d_i ~ N(V_b, σ_b) denotes that d_i obeys a normal distribution with mean V_b and variance σ_b, and d_j ~ N(V_o, σ_o) denotes that d_j obeys a normal distribution with mean V_o and variance σ_o;
the method for eliminating the background feature points comprises the following steps:
(1) randomly draw K elements from D to form an array V;
(2) take the median of V and record it as EV_i; compute the error of all elements of V with respect to EV_i, recorded as ε_i;
(3) execute steps (1) and (2) repeatedly, T times in total, obtaining T errors;
(4) record the EV_i with the smallest error as EV; EV is the displacement of the target;
(5) compute the error between all elements of D and EV, and remove the point pairs whose error exceeds a threshold;
through the above steps, obtaining the point pairs with the influence of background feature points eliminated;
step seven: estimating the state of the system;
let the state vector at time t be X_t = [P_t, V_t]^T, where P_t denotes the position of the rectangular frame and V_t denotes the displacement of the rectangular frame between the current frame image and the previous frame image; taking the motion of the target between frames as a constant velocity model, the state transition matrix of the target is:
X_{t+1} = \begin{bmatrix} x_{t+1} \\ y_{t+1} \\ u_{t+1} \\ v_{t+1} \end{bmatrix} = F \cdot X_t = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_t \\ y_t \\ u_t \\ v_t \end{bmatrix}
wherein: xt+1Representing the state vector at time t +1, XtRepresenting the state vector at time t, xt+1、yt+1Representing the coordinates, x, of the object in the image coordinate system at time t +1t、ytRepresenting the coordinates of the object in the image coordinate system at time t, ut+1、vt+1Representing the speed of the target in two directions in the image coordinate system from time t to time t +1, ut、vtRepresents the speed of the target from t-1 time to t time in two directions of the image coordinate system, and u is set0=v0When the image is obtained, Δ t represents a time interval between two frames of images;
the observation vector of the Kalman filter is represented as:
Z = \begin{bmatrix} u_t \\ v_t \end{bmatrix} = H \cdot X_t + V,
wherein,
H = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
V ~ N(0, R)
wherein V ~ N(0, R) denotes that V obeys a normal distribution with mean 0 and variance R;
and estimating the observation vector through Kalman filtering to obtain the position of the target in the image at time t+1, the position being a rectangular frame; returning to step two and looping until target tracking stops.
2. The method for optical flow-based real-time visual target tracking as claimed in claim 1, wherein in said step two, 1 ≤ λ_threshold ≤ 10.
3. The method for optical flow-based real-time visual target tracking as claimed in claim 1, wherein in said step five, 0.7 ≤ ncc_threshold ≤ 0.9.
4. The method for optical flow-based real-time visual target tracking as claimed in claim 1, wherein in said step six, the error threshold ε satisfies 1 ≤ ε ≤ 3.
CN201410458973.XA 2014-09-10 2014-09-10 Real-time visual target tracking method based on optical flow Expired - Fee Related CN104200494B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410458973.XA CN104200494B (en) 2014-09-10 2014-09-10 Real-time visual target tracking method based on optical flow

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410458973.XA CN104200494B (en) 2014-09-10 2014-09-10 Real-time visual target tracking method based on optical flow

Publications (2)

Publication Number Publication Date
CN104200494A CN104200494A (en) 2014-12-10
CN104200494B true CN104200494B (en) 2017-05-17

Family

ID=52085780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410458973.XA Expired - Fee Related CN104200494B (en) Real-time visual target tracking method based on optical flow

Country Status (1)

Country Link
CN (1) CN104200494B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105796053B (en) * 2015-02-15 2018-11-20 执鼎医疗科技(杭州)有限公司 Utilize the method for OCT measurement dynamic contrast and the lateral flow of estimation
CN104680194A (en) * 2015-03-15 2015-06-03 西安电子科技大学 On-line target tracking method based on random fern cluster and random projection
CN105354857B (en) * 2015-12-07 2018-09-21 北京航空航天大学 A kind of track of vehicle matching process for thering is viaduct to block
CN106023262B (en) * 2016-06-06 2018-10-12 深圳市深网视界科技有限公司 A kind of crowd flows principal direction method of estimation and device
CN106570861A (en) * 2016-10-25 2017-04-19 深圳市高巨创新科技开发有限公司 Optical flow velocity measurement method and system for unmanned plane
CN106778478A (en) * 2016-11-21 2017-05-31 中国科学院信息工程研究所 A kind of real-time pedestrian detection with caching mechanism and tracking based on composite character
CN106803265A (en) * 2017-01-06 2017-06-06 重庆邮电大学 Multi-object tracking method based on optical flow method and Kalman filtering
CN109559330B (en) * 2017-09-25 2021-09-10 北京金山云网络技术有限公司 Visual tracking method and device for moving target, electronic equipment and storage medium
CN107862704B (en) * 2017-11-06 2021-05-11 广东工业大学 Target tracking method and system and holder camera used by same
JP7353747B2 (en) * 2018-01-12 2023-10-02 キヤノン株式会社 Information processing device, system, method, and program
CN111768428B (en) * 2019-04-02 2024-03-19 智易联(上海)工业科技有限公司 Method for enhancing image tracking stability based on moving object
CN110263733B (en) * 2019-06-24 2021-07-23 上海商汤智能科技有限公司 Image processing method, nomination evaluation method and related device
CN110415277B (en) * 2019-07-24 2022-03-08 中国科学院自动化研究所 Multi-target tracking method, system and device based on optical flow and Kalman filtering
CN113129333B (en) * 2020-01-16 2023-06-16 舜宇光学(浙江)研究院有限公司 Multi-target real-time tracking method and system and electronic equipment thereof
CN111829521B (en) * 2020-06-23 2022-05-03 浙江工业大学 Consistent target tracking method based on data driving
CN111783611B (en) * 2020-06-28 2023-12-29 阿波罗智能技术(北京)有限公司 Unmanned vehicle positioning method and device, unmanned vehicle and storage medium
CN111998853A (en) * 2020-08-27 2020-11-27 西安达升科技股份有限公司 AGV visual navigation method and system
CN113610134B (en) * 2021-07-29 2024-02-23 Oppo广东移动通信有限公司 Image feature point matching method, device, chip, terminal and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005165791A (en) * 2003-12-03 2005-06-23 Fuji Xerox Co Ltd Object tracking method and tracking system
CN101916446A (en) * 2010-07-23 2010-12-15 北京航空航天大学 Gray level target tracking algorithm based on marginal information and mean shift
CN102222214A (en) * 2011-05-09 2011-10-19 苏州易斯康信息科技有限公司 Fast object recognition algorithm
CN102646279A (en) * 2012-02-29 2012-08-22 北京航空航天大学 Anti-shielding tracking method based on moving prediction and multi-sub-block template matching combination

Also Published As

Publication number Publication date
CN104200494A (en) 2014-12-10

Similar Documents

Publication Publication Date Title
CN104200494B (en) Real-time visual target tracking method based on optical flow
Wu et al. Vision-based real-time aerial object localization and tracking for UAV sensing system
CN109800689B (en) Target tracking method based on space-time feature fusion learning
CN110232350B (en) Real-time water surface multi-moving-object detection and tracking method based on online learning
CN104463191A (en) Robot visual processing method based on attention mechanism
JP7263216B2 (en) Object Shape Regression Using Wasserstein Distance
CN111376273B (en) Brain-like inspired robot cognitive map construction method
EP3752955A1 (en) Image segmentation
CN106023257A (en) Target tracking method based on rotor UAV platform
CN108537181A (en) A kind of gait recognition method based on the study of big spacing depth measure
CN116343330A (en) Abnormal behavior identification method for infrared-visible light image fusion
CN106023155A (en) Online object contour tracking method based on horizontal set
CN104794737A (en) Depth-information-aided particle filter tracking method
CN111462191A (en) Non-local filter unsupervised optical flow estimation method based on deep learning
CN105913455A (en) Local image enhancement-based object tracking method
CN108288038A (en) Night robot motion's decision-making technique based on scene cut
CN111105439A (en) Synchronous positioning and mapping method using residual attention mechanism network
CN111724411A (en) Multi-feature fusion tracking method based on hedging algorithm
CN105913459A (en) Moving object detection method based on high resolution continuous shooting images
CN111027586A (en) Target tracking method based on novel response map fusion
Pham et al. Pencilnet: Zero-shot sim-to-real transfer learning for robust gate perception in autonomous drone racing
Ding et al. Machine learning model for feature recognition of sports competition based on improved TLD algorithm
CN102663773A (en) Dual-core type adaptive fusion tracking method of video object
CN114492634A (en) Fine-grained equipment image classification and identification method and system
CN109887004A (en) A kind of unmanned boat sea area method for tracking target based on TLD algorithm

Legal Events

C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20170517
Termination date: 20180910