CN112614153A - Ground moving target tracking method based on differential forward and backward optical flows - Google Patents

Ground moving target tracking method based on differential forward and backward optical flows

Info

Publication number
CN112614153A
Authority
CN
China
Prior art keywords
target
tracking
frame
tracker
target tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011353920.3A
Other languages
Chinese (zh)
Inventor
杨宇
王振北
李杰
杨成伟
刘畅
张晟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202011353920.3A priority Critical patent/CN112614153A/en
Publication of CN112614153A publication Critical patent/CN112614153A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a ground moving target tracking method based on differential forward and backward optical flows, which comprises the following steps: step 1, extracting a target frame from an image containing the target, using twice the size of the target frame as the initial target tracking frame, and tracking the target with a tracker that performs target tracking by optical flow field calculation; step 2, inputting the optical flow vector field of the actual pixels within the target tracking frame obtained by the tracker into a K-means algorithm to separate the optical flow vector field of the moving target from that of the static background; and step 3, correcting the tracker's target tracking frame with the moving target segmented by the K-means algorithm, continuing to track the target, and repeating step 2 and step 3. The invention can distinguish the moving foreground from the static background, improves the tracking stability of the algorithm, and can run on an embedded platform.

Description

Ground moving target tracking method based on differential forward and backward optical flows
Technical Field
The invention belongs to the technical field of moving target tracking by unmanned aerial vehicles, and in particular relates to a ground moving target tracking method based on differential forward and backward optical flows.
Background
With the development of unmanned aerial vehicle (UAV) technology, the demand for reconnaissance of ground targets, especially ground moving targets, has grown increasingly strong. This demand mainly comprises two stages: discovering the ground target and locking onto it for a long time. In the long-term locking stage, because the UAV platform operates in an environment that is affected by external conditions and not fully controllable, long-term tracking of ground targets, and of ground moving targets in particular, has always been technically difficult, and a method capable of continuously and accurately tracking a ground moving target over a long period is urgently needed.
For visual tracking of a ground moving target by an unmanned aerial vehicle, the following technical routes are mainly used:
The first is a visual tracking route based on deep learning. This route requires extensive pre-training before use so that the visual appearance changes of the moving target can be learned by deep learning, which achieves a good tracking effect. However, it requires a large training set, involves a complex algorithm, and is difficult to deploy on a low-cost embedded platform.
The second is a visual target tracking algorithm based on the similarity between consecutive frames. It has a simple structure and high running speed, which makes it well suited to embedded platforms.
The third is a visual target tracking algorithm based on online learning, which lies between the first two in complexity but is still difficult to run on an embedded platform.
In view of this situation, the invention provides a ground moving target tracking method based on differential forward and backward optical flows.
Disclosure of Invention
In view of this, the invention provides a ground moving target tracking method based on differential forward and backward optical flows, which can distinguish the moving foreground from the static background, improves the tracking stability of the algorithm, and can run on an embedded platform.
To solve the above technical problems, the present invention is realized as follows.
A ground moving target tracking method based on differential forward and backward optical flows comprises the following steps:
step 1, extracting a target frame from an image containing the target, using twice the size of the target frame as the initial target tracking frame, and tracking the target with a tracker that performs target tracking by optical flow field calculation;
step 2, inputting the optical flow vector field of the actual pixels within the target tracking frame obtained by the tracker into a K-means algorithm to separate the optical flow vector field of the moving target from that of the static background;
and step 3, correcting the tracker's target tracking frame with the moving target segmented by the K-means algorithm, continuing to track the target, and repeating step 2 and step 3.
Preferably, the target tracking frame used by the tracker is denoted target tracking frame A, and the minimum bounding rectangle of the moving target segmented by the K-means algorithm is denoted target tracking frame B;
the correction is: integrating the size and position of target tracking frame B and target tracking frame A, and updating the tracking frame actually used by the tracker.
Preferably, integrating the size and position of target tracking frame B and target tracking frame A and updating the tracking frame actually used by the tracker comprises:
taking the overlapping region of target tracking frame B and target tracking frame A as the new target tracking frame of the tracker.
Preferably, step 2 and step 3 are performed once during the tracking of each frame to correct the target tracking frame.
Preferably, the tracker is a Lucas-Kanade tracker.
The invention has the following beneficial effects:
(1) The invention enlarges the initial target tracking frame of the tracker and retains more effective tracking points on the moving target, thereby improving the tracking effect. During tracking, the moving target within the initial tracking frame can be located adaptively and its precise position given.
(2) Once a tracking result is obtained, the position of the foreground moving target is determined by K-means clustering and used to correct the tracking result, improving the tracking precision. The algorithm has high tracking stability and good real-time performance, is suitable for tracking different types of targets, and can be used on a low-cost embedded platform.
(3) During tracking, the method draws on the idea of online learning tracking and corrects the target tracking frame in every frame, improving tracking accuracy while reducing the required computing resources.
Drawings
FIG. 1 is a flow diagram of an overall method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the forward and backward tracking error provided by an embodiment of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The basic idea of the ground moving target tracking method based on differential forward and backward optical flows provided by the invention is as follows: first, an accurate optical flow vector field around the target is acquired using the forward and backward optical flow technique; then the optical flow vector field of the moving target and that of the static background are classified by K-means, thereby segmenting the moving foreground from the background; finally, the segmented foreground moving target is used to correct the tracking result, improving the tracking precision of the system.
In addition, during tracking the method draws on the idea of online learning tracking and corrects the target tracking frame in every frame, reducing the required computing resources while improving tracking accuracy.
The following describes the implementation process of the present invention in detail with reference to the flowchart of fig. 1.
Step 1: tracking the moving target based on forward and backward optical flows.
In this step, once the target has been found, a target frame is extracted from an image containing the target, a frame twice the size of this target frame is used as the initial target tracking frame, and the target is tracked with a tracker that performs target tracking by optical flow field calculation.
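For illustration only, a minimal sketch (not taken from the patent text) of how the initial tracking frame might be obtained by doubling a detected target frame is given below; the (x, y, w, h) box format, the helper name enlarge_box and the clamping to the image bounds are assumptions.

```python
# Hedged sketch: double the detected target frame to obtain the initial tracking frame.
# Box format (x, y, w, h), the helper name and the clamping behaviour are assumptions.
def enlarge_box(box, img_w, img_h, scale=2.0):
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0            # centre of the detected frame
    nw, nh = w * scale, h * scale                # doubled width and height
    nx = int(max(0, round(cx - nw / 2.0)))       # clamp the enlarged frame to the image
    ny = int(max(0, round(cy - nh / 2.0)))
    return nx, ny, int(min(img_w - nx, round(nw))), int(min(img_h - ny, round(nh)))
```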
A good tracking algorithm should exhibit forward-backward continuity: whether tracking is run forward or backward in time, the generated trajectories should be the same. Based on this property, the forward-backward tracking error of any tracker is defined as shown in fig. 2: starting from the initial position X_t at time t, forward tracking produces the position X_{t+k} at time t+k; backward tracking from X_{t+k} then yields a predicted position X̂_t for time t. The Euclidean distance between the initial position X_t and the predicted position X̂_t is taken as the forward-backward tracking error of the tracker at time t.
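A minimal sketch of this forward-backward error computation, assuming OpenCV's pyramidal Lucas-Kanade routine (cv2.calcOpticalFlowPyrLK) with illustrative window-size and termination parameters, might look as follows:

```python
import numpy as np
import cv2

def forward_backward_error(prev_gray, curr_gray, points):
    """Track `points` (float32, shape (N, 1, 2)) forward from the previous frame to the
    current frame, track the results backward again, and return the forward positions,
    the Euclidean forward-backward error per point, and a validity mask."""
    lk_params = dict(winSize=(15, 15), maxLevel=2,
                     criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
    fwd, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, points, None, **lk_params)
    bwd, st_b, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, fwd, None, **lk_params)
    fb_err = np.linalg.norm(points.reshape(-1, 2) - bwd.reshape(-1, 2), axis=1)
    ok = (st_f.reshape(-1) == 1) & (st_b.reshape(-1) == 1)   # tracked successfully both ways
    return fwd, fb_err, ok
```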
The tracking algorithm of the invention is based on a Lucas-Kanade tracker: given a set of tracking points, their positions in the next frame are determined from the motion of the pixels. The tracking points are the best points screened within the initial frame according to an error map built from the forward-backward errors. The initial target tracking frame is usually chosen to be about the same size as the tracked target, whereas the present invention selects an initial frame twice that size. In practice, initializing the tracking frame of a moving target is difficult and the target often cannot be contained accurately in the initial frame. Enlarging the initial target tracking frame reduces the initialization difficulty and keeps more effective tracking points, thereby improving the tracking effect for the moving target.
The tracker uses the previous frame image, the current frame image and the generated sequence of sampling points to perform tracking, forward-backward error calculation and similarity matching. Tracking points whose matching degree is below the median matching degree and tracking points whose tracking error is above the median error are eliminated, so that feature points with poor tracking quality are removed and no more than 50% of the feature points remain in the tracker's two point sets, Point1 and Point2. For each point in Point1, forward tracking is applied first: point A in the previous frame is tracked to point B in the current frame. Backward tracking is then applied: point B in the current frame is tracked back to point C in the previous frame. This yields a forward trajectory and a backward trajectory which should ideally coincide, i.e. A and C should coincide, so the distance between A and C is computed as the forward-backward error, giving an error array. The similarity between A and B is then computed: 10 × 10 regions centred on A and B are cropped from the previous frame and the current frame respectively, their matching degree is computed, and this value is assigned as the similarity, giving a similarity array. The median of the similarity array is computed and the tracking feature points whose similarity falls below this median are removed. The remaining points are used to predict the position and size of the target tracking frame in the current frame.
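The point filtering and frame prediction described above could be sketched as follows. The use of normalised cross-correlation (cv2.matchTemplate with TM_CCOEFF_NORMED) as the matching measure on the 10 × 10 patches, the median-displacement frame update, and the omission of a scale update are assumptions rather than details stated in the patent.

```python
import numpy as np
import cv2

def filter_and_predict(prev_gray, curr_gray, pts_prev, pts_curr, fb_err, box):
    """Discard points whose patch similarity is below the median or whose forward-backward
    error is above the median, then shift the previous tracking frame by the median
    displacement of the surviving points."""
    sims = []
    for (xa, ya), (xb, yb) in zip(pts_prev.reshape(-1, 2), pts_curr.reshape(-1, 2)):
        pa = cv2.getRectSubPix(prev_gray, (10, 10), (float(xa), float(ya)))
        pb = cv2.getRectSubPix(curr_gray, (10, 10), (float(xb), float(yb)))
        sims.append(cv2.matchTemplate(pa, pb, cv2.TM_CCOEFF_NORMED)[0, 0])
    sims = np.asarray(sims)

    keep = (sims >= np.median(sims)) & (fb_err <= np.median(fb_err))
    p0 = pts_prev.reshape(-1, 2)[keep]
    p1 = pts_curr.reshape(-1, 2)[keep]

    dx, dy = np.median(p1 - p0, axis=0)          # median shift of the surviving points
    x, y, w, h = box
    return (int(round(x + dx)), int(round(y + dy)), w, h), p0, p1
```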
Step 2: classifying the moving foreground and the static background based on K-means.
In this step, the optical flow vector field of the actual pixels within the target tracking frame obtained by the tracker is fed to a K-means algorithm, which separates the optical flow vector field of the moving target from that of the static background.
K-means is an unsupervised classification method: it measures the similarity between samples with a chosen metric and updates the cluster centres iteratively; when the cluster centres no longer move, or move by less than a threshold, the samples are assigned to their respective classes.
To separate the foreground from the background, the image is first converted to a grey-scale image, and the grey values together with the output of the tracker are used for the segmentation. Cluster centre points are selected randomly within the target frame given by the tracker; all sample points are classified with the chosen metric according to the current cluster centres; the mean of the sample points in each class is computed as the cluster centre for the next iteration; and the difference between the next cluster centres and the current ones is calculated. If this difference is less than a given iteration threshold, the iteration ends; otherwise, the next iteration proceeds.
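As an illustration, the clustering might be implemented with OpenCV's cv2.kmeans on per-point features that combine the optical-flow displacement and the grey value; the feature scaling, the choice K = 2, and the rule that the cluster with the larger mean flow magnitude is the moving foreground are assumptions.

```python
import numpy as np
import cv2

def segment_moving_foreground(gray, pts_prev, pts_curr):
    """Cluster the tracked points into two groups using their flow vectors and grey values,
    and return the minimum bounding rectangle (x, y, w, h) of the cluster treated as the
    moving foreground (tracking frame B)."""
    pts = pts_prev.reshape(-1, 2)
    disp = pts_curr.reshape(-1, 2) - pts                         # per-point optical-flow displacement
    grey = np.array([gray[int(py), int(px)] for px, py in pts], dtype=np.float32) / 255.0
    feats = np.hstack([disp.astype(np.float32), grey[:, None]])

    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 20, 0.1)
    _, labels, centers = cv2.kmeans(feats, 2, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)

    fg = int(np.argmax(np.linalg.norm(centers[:, :2], axis=1)))  # larger mean flow = foreground
    fg_pts = pts[labels.ravel() == fg]
    (x1, y1), (x2, y2) = fg_pts.min(axis=0), fg_pts.max(axis=0)  # minimum bounding rectangle
    return int(x1), int(y1), int(x2 - x1), int(y2 - y1)
```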
Step 3: correcting the target tracking frame of the tracker with the moving target segmented by the K-means algorithm, continuing to track the target, and repeating step 2 and step 3 to improve the tracking precision.
The correction is performed as follows: the target tracking frame used by the tracker is denoted target tracking frame A, and the minimum bounding rectangle of the moving target segmented by the K-means algorithm is denoted target tracking frame B. The size and position of target tracking frame B and target tracking frame A are then integrated to update the target tracking frame actually used by the tracker.
The specific update operation may be: taking the overlapping region of target tracking frame B and target tracking frame A as the new target tracking frame of the tracker; or comparing the centre coordinates, length and width of the two frames, taking the mean of the two frames as the new target tracking frame if the differences are smaller than a set threshold, and taking the data of target tracking frame B as the new target tracking frame if any difference is greater than or equal to the threshold (the centre coordinates, length and width may be compared and updated separately).
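Both update rules could be sketched as below; the pixel tolerance value and the fallback to frame B when the two frames do not overlap are assumptions.

```python
def intersect_boxes(box_a, box_b):
    """First rule: use the overlapping region of frames A and B as the new tracking frame."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x2 <= x1 or y2 <= y1:
        return box_b                              # no overlap: fall back to frame B (assumption)
    return x1, y1, x2 - x1, y2 - y1

def fuse_boxes(box_a, box_b, tol=20):
    """Second rule: average the two frames when their centres and sizes are close,
    otherwise adopt frame B. `tol` (in pixels) is an illustrative threshold."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    close = (abs((ax + aw / 2) - (bx + bw / 2)) < tol and
             abs((ay + ah / 2) - (by + bh / 2)) < tol and
             abs(aw - bw) < tol and abs(ah - bh) < tol)
    if close:
        return (ax + bx) // 2, (ay + by) // 2, (aw + bw) // 2, (ah + bh) // 2
    return box_b
```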
The tracking operations of step 2 and step 3 can be performed once per frame, which improves tracking accuracy while reducing the required computing resources.
The above embodiments only describe the design principle of the present invention; the shapes and names of the components involved may differ and are not limiting. A person skilled in the art may therefore modify or substitute the technical solutions described in the foregoing embodiments, and such modifications and substitutions do not depart from the spirit and scope of the present invention.

Claims (5)

1. A ground moving target tracking method based on differential forward and backward optical flows, characterized by comprising the following steps:
step 1, extracting a target frame from an image containing the target, using twice the size of the target frame as the initial target tracking frame, and tracking the target with a tracker that performs target tracking by optical flow field calculation;
step 2, inputting the optical flow vector field of the actual pixels within the target tracking frame obtained by the tracker into a K-means algorithm to separate the optical flow vector field of the moving target from that of the static background;
and step 3, correcting the tracker's target tracking frame with the moving target segmented by the K-means algorithm, continuing to track the target, and repeating step 2 and step 3.
2. The method of claim 1, wherein the target tracking frame used by the tracker is denoted target tracking frame A, and the minimum bounding rectangle of the moving target segmented by the K-means algorithm is denoted target tracking frame B;
the correction is: integrating the size and position of target tracking frame B and target tracking frame A, and updating the tracking frame actually used by the tracker.
3. The method of claim 2, wherein integrating the size and position of target tracking frame B and target tracking frame A and updating the tracking frame actually used by the tracker comprises:
taking the overlapping region of target tracking frame B and target tracking frame A as the new target tracking frame of the tracker.
4. The method of claim 1, wherein step 2 and step 3 are performed once during the tracking of each frame to correct the target tracking frame.
5. The method of claim 1, wherein the tracker is a Lucas-Kanade tracker.
CN202011353920.3A 2020-11-26 2020-11-26 Ground moving target tracking method based on differential forward and backward optical flows Pending CN112614153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011353920.3A CN112614153A (en) 2020-11-26 2020-11-26 Ground moving target tracking method based on differential forward and backward optical flows

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011353920.3A CN112614153A (en) 2020-11-26 2020-11-26 Ground moving target tracking method based on differential forward and backward optical flows

Publications (1)

Publication Number Publication Date
CN112614153A true CN112614153A (en) 2021-04-06

Family

ID=75225522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011353920.3A Pending CN112614153A (en) 2020-11-26 2020-11-26 Ground moving target tracking method based on differential forward and backward optical flows

Country Status (1)

Country Link
CN (1) CN112614153A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180365843A1 (en) * 2015-07-01 2018-12-20 China University Of Mining And Technology Method and system for tracking moving objects based on optical flow method
CN106296742A (en) * 2016-08-19 2017-01-04 华侨大学 A kind of online method for tracking target of combination Feature Points Matching
CN109785363A (en) * 2018-12-29 2019-05-21 中国电子科技集团公司第五十二研究所 A kind of unmanned plane video motion Small object real-time detection and tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Jing et al., "Improved TLD real-time target tracking algorithm incorporating the CN tracking algorithm", Computer Engineering & Science *
Yang Yong et al., "Deformable target tracking combining image saliency and feature point matching", Journal of Image and Graphics *

Similar Documents

Publication Publication Date Title
CN111814654B (en) Markov random field-based remote tower video target tagging method
CN110738673A (en) Visual SLAM method based on example segmentation
CN109785385B (en) Visual target tracking method and system
CN110287826B (en) Video target detection method based on attention mechanism
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN111340855A (en) Road moving target detection method based on track prediction
CN111160212B (en) Improved tracking learning detection system and method based on YOLOv3-Tiny
CN111209920B (en) Airplane detection method under complex dynamic background
CN113516664A (en) Visual SLAM method based on semantic segmentation dynamic points
CN111931720B (en) Method, apparatus, computer device and storage medium for tracking image feature points
CN111192294B (en) Target tracking method and system based on target detection
CN108734109B (en) Visual target tracking method and system for image sequence
CN112541491A (en) End-to-end text detection and identification method based on image character region perception
CN112037268B (en) Environment sensing method based on probability transfer model in dynamic scene
CN111931571B (en) Video character target tracking method based on online enhanced detection and electronic equipment
CN114708293A (en) Robot motion estimation method based on deep learning point-line feature and IMU tight coupling
CN110176022B (en) Tunnel panoramic monitoring system and method based on video detection
CN111462132A (en) Video object segmentation method and system based on deep learning
CN113092807A (en) Urban elevated road vehicle speed measuring method based on multi-target tracking algorithm
Gökçe et al. Recognition of dynamic objects from UGVs using Interconnected Neuralnetwork-based Computer Vision system
Zhang et al. Small target detection based on squared cross entropy and dense feature pyramid networks
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN113838072B (en) High-dynamic star map image segmentation method
CN112614153A (en) Ground moving target tracking method based on differential forward and backward optical flows
CN116129386A (en) Method, system and computer readable medium for detecting a travelable region

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210406