CN111680713B - Unmanned aerial vehicle ground target tracking and approaching method based on visual detection - Google Patents


Info

Publication number
CN111680713B
CN111680713B (application CN202010337110.2A)
Authority
CN
China
Prior art keywords
target
aerial vehicle
unmanned aerial
classifier
tracking
Prior art date
Legal status
Active
Application number
CN202010337110.2A
Other languages
Chinese (zh)
Other versions
CN111680713A (en)
Inventor
王贺
谭冲
卜智勇
Current Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Original Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shanghai Institute of Microsystem and Information Technology of CAS filed Critical Shanghai Institute of Microsystem and Information Technology of CAS
Priority to CN202010337110.2A priority Critical patent/CN111680713B/en
Publication of CN111680713A publication Critical patent/CN111680713A/en
Application granted granted Critical
Publication of CN111680713B publication Critical patent/CN111680713B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an unmanned aerial vehicle ground target tracking and approaching method based on visual detection, which comprises the following steps: a frame of the image returned by the unmanned aerial vehicle is selected as the initial frame, dense samples are constructed with a circulant matrix, the HOG, Lab and LBP features of the target are extracted, and a classifier is trained by ridge regression. For each subsequent frame, the currently required scale factor is first calculated from the relative speed of the unmanned aerial vehicle and the target and the target scale is adjusted accordingly; the features of the target are extracted, the response of the classifier to the target is solved by correlation filtering, the peak-to-sidelobe ratio of the response is calculated, the center position of the target is taken at the maximum response value, and the classifier is selectively updated according to the comparison of the peak-to-sidelobe ratio with a set threshold. If the target deviates from the horizontal center of the field of view, the yaw of the unmanned aerial vehicle is first controlled to re-center it, and then the proportional guidance law is applied. The application improves robustness to target scale change and partial occlusion of the target during tracking.

Description

Unmanned aerial vehicle ground target tracking and approaching method based on visual detection
Technical Field
The application relates to the technical field of unmanned aerial vehicle tracking, in particular to an unmanned aerial vehicle ground target tracking and approaching method based on visual detection.
Background
The unmanned aerial vehicle is highly flexible and maneuverable, and once fitted with a camera it offers a larger monitoring range and a wider range of application scenarios than traditional surveillance equipment. Ground target tracking based on unmanned aerial vehicles is widely used in fields such as security inspection, smart cities and disaster rescue, and approaching the target enables military tasks such as close reconnaissance and precision strike, so it has become one of the current research hotspots.
Tracking and approaching a ground target with an unmanned aerial vehicle first estimates the target position and then performs unmanned aerial vehicle control, guiding the vehicle to complete the target task. Currently, a target tracking algorithm is mainly used to detect the position of the target in the video sequence, and a corresponding control algorithm then completes the guidance of the unmanned aerial vehicle.
The correlation-filtering target tracking algorithms represented by the KCF algorithm are widely applied in engineering because they balance real-time performance and robustness. The method mainly comprises the following steps: (1) classifier initialization: a target region is selected in the first frame of the video sequence, positive and negative samples are constructed by dense sampling with a circulant matrix, target features are extracted, and a classifier is trained; (2) target position detection: for each subsequent frame, target features of the region to be detected are extracted, and the target position is obtained from the convolution response of the classifier over the region to be detected, the position with the maximum response value being taken as the center of the target; (3) classifier update: the new target position is densely sampled again as a sample, the template matrix is retrained, and the classifier is updated by linear interpolation to improve its robustness. The KCF algorithm uses FHOG and CIELab to extract the gradient histogram and the Lab color histogram of the target. The KCF algorithm has strong robustness and real-time performance, but target scale change and target occlusion during ground target tracking by an unmanned aerial vehicle remain difficult problems.
For the problem of target scale change, the SAMF algorithm fuses color-attribute features with HOG features and applies the translation filter to image blocks scaled to multiple sizes, taking the translation position with the largest response and the scale at which it occurs, thereby addressing the multi-scale problem. The DSST algorithm uses two independent filters for position estimation and scale estimation of the target, respectively. However, both add complexity and redundancy to the algorithm, reducing real-time performance.
For the target occlusion problem, the TLD tracking algorithm divides the tracking problem into three parts: tracking, learning and detection. By continuously updating the model parameters through an online learning mechanism, it effectively handles partial occlusion and target deformation during tracking and has high robustness. Zhang Yubing et al. partition the target into blocks and track each component separately, using the kernel gray-level histogram as the feature; this not only increases robustness to occlusion but also addresses non-rigid deformation of the target. The above algorithms can handle partial occlusion of the target, but cannot guarantee real-time performance.
For guiding the unmanned aerial vehicle, the Lyapunov vector field method constructs, by means of a Lyapunov function, a vector field matched to the desired trajectory; every point in the field has a matching velocity vector, and accurate guidance is achieved as long as the unmanned aerial vehicle flies within the vector field at the matching velocity. However, finding an ideal Lyapunov function is a major difficulty. An image servo controller for a quadrotor has also been designed: a dynamic model is built for the unmanned aerial vehicle, and the pseudo-inverse of the Jacobian matrix is solved to obtain the control quantity, enabling autonomous landing of the quadrotor. For a quadrotor, which is underactuated, a complex dynamics model must be built for the drone.
Disclosure of Invention
The application aims to solve the technical problem of providing an unmanned aerial vehicle ground target tracking and approaching method based on visual detection, which can improve robustness to target scale change and partial occlusion of the target during tracking.
The technical scheme adopted for solving the technical problems is as follows: the unmanned aerial vehicle ground target tracking and approaching method based on visual detection comprises the following steps:
(1) Selecting a frame as an initial frame in an image returned by the unmanned aerial vehicle, densely sampling by using a cyclic matrix, extracting HOG, Lab and LBP characteristics of the target, and training a classifier by using ridge regression;
(2) For the subsequent frames, firstly calculating a currently required scale factor by using the relative speed of the unmanned aerial vehicle and the target, then scaling the target scale, extracting the characteristics of the target, solving the response value of the classifier and the target by using a correlation filtering method, calculating the peak side lobe ratio of the response value, determining the center position of the target according to the position of the maximum response value, and selectively updating the classifier according to the comparison result of the peak side lobe ratio and the set threshold;
(3) If the target is positioned in the center of the visual field, a two-dimensional proportional guidance method is used to obtain the speed instruction required by the unmanned aerial vehicle and control the attitude of the unmanned aerial vehicle; if the target deviates from the horizontal center of the visual field, the yaw of the unmanned aerial vehicle is first controlled so that the target is centered, and then the proportional guidance law is used.
The step (1) of extracting HOG, Lab and LBP features comprises the following steps:
(a) Dividing the sample image into cells of 5×5 pixels and blocks of 2×2 cells;
(b) Extracting the gray gradient direction histogram feature of the target by using an FHOG method to obtain a 31-dimensional feature;
(c) Quantizing the Lab color space to 15 centroids by using a K-means method, replacing the colors of all pixels in the area by using the centroids according to the nearest neighbor principle, and counting the color histogram in the unit to obtain 15-dimensional characteristics;
(d) Extracting LBP characteristics of a target by using an equivalent mode LBP, wherein the radius is 2, the number of neighborhood pixel points is 8, and 59-dimensional characteristics are obtained;
(e) All features are connected in series to form 105-dimensional features of the multi-feature fusion.
When the classifier is trained by ridge regression in step (1), the ridge regression function in the nonlinear space is defined as f(z) = Σ_i α_i K(z, x_i), wherein α_i are the coefficients of the nonlinear mapping, z is the variable, x_i are the samples, and K(·,·) is the kernel correlation operation. Solving by least squares and the diagonalization property of the circulant matrix yields the classifier α̂ = ŷ*/(k̂^{xx} + λ), wherein ŷ* is the conjugate of the Fourier transform of the sample labels, k̂^{xx} is the Fourier transform of the sample correlation matrix, and λ is the regularization coefficient.
The calculation formula of the scale factor in step (2) is: L = 1 + 0.01V, where L is the scale factor and V is the relative speed of the unmanned aerial vehicle and the target along the line-of-sight direction.
The calculation formula adopted when the response value of the classifier and the target is solved by the correlation filtering method in step (2) is f̂(z) = k̂^{x'z} ⊙ α̂, wherein α̂ is the trained classifier, x' is the target model learned in the previous frame, z is the sample obtained in the new frame, and k̂^{x'z} is the Fourier transform of the kernel cross-correlation of x' and z.
In step (2), when the peak-to-sidelobe ratio falls below the threshold, the classifier is not updated.
The calculation formula of the proportional guidance law in step (3) is: η_c = N·V_c·λ′, where η_c is the commanded acceleration, N is the effective guidance coefficient, V_c is the missile-target relative velocity, and λ′ is the rate of change of the line-of-sight angle.
Advantageous effects
Due to the adoption of the above technical scheme, compared with the prior art, the application has the following advantages and positive effects: the application extracts and fuses the HOG, Lab and LBP features of the target, uses an adaptive scale factor to address target scale change, and uses an adaptive template-update strategy to address partial occlusion of the target, improving robustness to target scale change and partial occlusion during tracking; meanwhile, a proportional guidance method is applied to the unmanned aerial vehicle and combined with the target tracking algorithm to complete tracking and approaching of the ground target.
Drawings
FIG. 1 is a flow chart of the present application;
FIG. 2 is a schematic diagram of the unmanned aerial vehicle proportional guidance law of the present application;
FIG. 3 is a graph of the results of a comparative experiment of the present application with the prior art;
FIG. 4 is a comparison of the present application before and after modification;
fig. 5 is a real-time log of a ground station of the present application.
Detailed Description
The application will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present application and are not intended to limit the scope of the present application. Furthermore, it should be understood that various changes and modifications can be made by one skilled in the art after reading the teachings of the present application, and such equivalents are intended to fall within the scope of the application as defined in the appended claims.
The embodiment of the application relates to a ground target tracking and approaching method for an unmanned aerial vehicle based on visual detection. Considering target scale change, partial occlusion and similar conditions during tracking, it detects the target position with an improved kernel correlation filtering algorithm and controls the motion of the unmanned aerial vehicle with the proportional guidance law to complete the target tracking and approaching task. The overall scheme, whose flow is shown in figure 1, comprises the following steps:
step one, selecting a frame as an initial frame in an image returned by an unmanned aerial vehicle, densely sampling by using a cyclic matrix, extracting HOG, lab and LBP characteristics of the target, and training a classifier by using ridge regression;
in this embodiment, three features HOG, lab, LBP are fused together, and the specific extraction process is as follows:
a. dividing the sample image into cells of 5×5 pixels and blocks of 2×2 cells;
b. extracting the gray gradient direction histogram feature of the target by using an FHOG method to obtain a 31-dimensional feature;
c. quantizing the Lab color space to 15 centroids by the K-means method, replacing the color of each pixel in the region with its nearest centroid according to the nearest-neighbor principle, and counting the color histogram within each cell to obtain a 15-dimensional feature;
d. extracting LBP characteristics of a target by using an equivalent mode LBP, wherein the radius is 2, the number of neighborhood pixel points is 8, and 59-dimensional characteristics are obtained;
e. all features are concatenated to form a vector x of 105-dimensional features of the multi-feature fusion.
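The LBP step above can be sketched in numpy. This is a minimal illustration, not the patent's implementation: the eight neighbors at radius 2 are taken at axis-aligned and diagonal offsets of ±2 pixels (a square approximation of the circular neighborhood), and uniform patterns (at most two 0/1 transitions around the circle, the "equivalent mode") are binned into 58 bins plus one catch-all bin, giving the 59-dimensional histogram:

```python
import numpy as np

def uniform_lbp_hist(gray, radius=2):
    # 8 sampling offsets at the given radius (square approximation of the circle)
    offs = [(-radius, 0), (-radius, radius), (0, radius), (radius, radius),
            (radius, 0), (radius, -radius), (0, -radius), (-radius, -radius)]
    c = gray[radius:-radius, radius:-radius]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = gray[radius + dy:gray.shape[0] - radius + dy,
                  radius + dx:gray.shape[1] - radius + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)

    # Uniform codes (<= 2 circular bit transitions) map to bins 0..57,
    # all non-uniform codes share bin 58, for 59 bins in total.
    def transitions(v):
        bits = [(v >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    uniform = [v for v in range(256) if transitions(v) <= 2]   # 58 patterns
    lut = np.full(256, 58, dtype=np.int64)
    for i, v in enumerate(uniform):
        lut[v] = i
    hist = np.bincount(lut[code].ravel(), minlength=59)
    return hist / max(hist.sum(), 1)
```

Concatenating this 59-dimensional histogram with the 31-dimensional FHOG feature and the 15-dimensional Lab histogram per cell yields the 105-dimensional fused feature of step e.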
In this step, when the classifier is trained by ridge regression, the ridge regression function of the nonlinear space is defined as f(z) = Σ_i α_i K(z, x_i), wherein α_i are the coefficients of the nonlinear mapping, z is the variable, x_i are the samples, and K(·,·) is the kernel correlation operation. Solving by least squares and the diagonalization property of the circulant matrix yields the classifier α̂ = ŷ*/(k̂^{xx} + λ), wherein ŷ* is the conjugate of the Fourier transform of the sample labels, k̂^{xx} is the Fourier transform of the sample correlation matrix, and λ is the regularization coefficient.
And secondly, for the subsequent frames, calculating a currently required scale factor by using the relative speed of the unmanned aerial vehicle and the target, scaling the target scale, extracting the characteristics of the target, solving the response value of the classifier and the target by using a correlation filtering method, calculating the peak side lobe ratio of the response value, determining the center position of the target according to the position of the maximum response value, and selectively updating the classifier according to the comparison result of the peak side lobe ratio and the set threshold.
In order to solve the problem of scale change during target tracking, the SAMF algorithm uses a scale-pool method: the detected target is scaled to several different scale values, the corresponding response value is computed after each passes through the filter, the response values at the different scales are compared, and the scale corresponding to the maximum is taken as the optimal target scale. SAMF uses 7 scales, step = {0.985, 0.99, 0.995, 1.0, 1.005, 1.01, 1.015}, but the computation becomes heavy on large image blocks and real-time performance cannot be guaranteed. For the target tracking and approaching task, the size of the target in the field of view grows at a rate proportional to the speed of the unmanned aerial vehicle, so this embodiment adopts an adaptive scale factor, which reduces the computation and increases detection speed. The calculation formula of the scale factor is: L = 1 + 0.01V, where L is the scale factor and V is the relative speed of the unmanned aerial vehicle and the target along the line-of-sight direction.
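A sketch contrasting the two approaches: the scale pool needs seven filter evaluations per frame, while the adaptive factor needs one. The function names are illustrative, and the units of V (assumed here to be metres per second) are not stated in the source:

```python
SAMF_SCALE_POOL = [0.985, 0.99, 0.995, 1.0, 1.005, 1.01, 1.015]  # 7 evaluations/frame

def adaptive_scale(v_rel):
    # L = 1 + 0.01 * V, with V the UAV-target relative speed along the line of sight
    return 1.0 + 0.01 * v_rel

def scaled_window(size, v_rel):
    # Resize the tracking window once per frame instead of searching a scale pool
    w, h = size
    L = adaptive_scale(v_rel)
    return int(round(w * L)), int(round(h * L))
```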
In this step, when the response value of the classifier and the target is solved by the correlation filtering method, the calculation formula adopted is f̂(z) = k̂^{x'z} ⊙ α̂, wherein α̂ is the trained classifier, x' is the target model learned in the previous frame, z is the sample obtained in the new frame, and k̂^{x'z} is the Fourier transform of the kernel cross-correlation of x' and z.
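Steps one and two can be sketched for a single-channel feature map as follows. This is a minimal Gaussian-kernel correlation filter in the spirit of KCF, not the patent's code: the kernel bandwidth sigma and the regularization lambda are illustrative values, and for a label map symmetric about the origin ŷ is real, so the conjugate ŷ* in the patent's classifier formula has no effect here:

```python
import numpy as np

def gaussian_kernel_corr(x1, x2, sigma=0.5):
    # Gaussian kernel correlation of x2 with all cyclic shifts of x1, via FFT
    c = np.fft.ifft2(np.fft.fft2(x1) * np.conj(np.fft.fft2(x2))).real
    d = (x1 ** 2).sum() + (x2 ** 2).sum() - 2.0 * c
    return np.exp(-np.maximum(d, 0.0) / (sigma ** 2 * x1.size))

def train_classifier(x, y, lam=1e-4):
    # Ridge regression in the Fourier domain: alpha_hat = y_hat / (k_hat^{xx} + lambda)
    k = gaussian_kernel_corr(x, x)
    return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

def detect(alpha_hat, x_model, z):
    # Response map f = ifft2(k_hat^{x'z} * alpha_hat); its argmax is the target centre
    k = gaussian_kernel_corr(x_model, z)
    return np.fft.ifft2(np.fft.fft2(k) * alpha_hat).real
```

Running `detect` on the training sample itself returns a response whose peak sits at the label position, which is a quick sanity check on the filter.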
Thirdly, if the target is positioned in the center of the visual field, a two-dimensional proportional guidance method is used to obtain the speed instruction required by the unmanned aerial vehicle and control the attitude of the unmanned aerial vehicle; if the target deviates from the horizontal center of the visual field, the yaw of the unmanned aerial vehicle is first controlled so that the target is centered, and then the proportional guidance law is used.
Tracking-failure detection and adaptive template updating can effectively address partial occlusion of the target. In this embodiment, tracking failure is detected using the PSR (peak-to-sidelobe ratio) value: a threshold is set, and whether to update the template (i.e., the classifier) is decided by comparing the PSR value with the threshold, thereby handling partial occlusion. When the PSR value is detected to have fallen below the threshold, the template is not updated, which prevents excessive background information from being learned.
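A sketch of the PSR computation and the conditional template update; the 11×11 peak-exclusion window, the threshold of 7.0 and the learning rate of 0.02 are illustrative assumptions, since the patent does not state these values:

```python
import numpy as np

def psr(response, exclude=5):
    # Peak-to-sidelobe ratio: (peak - mean(sidelobe)) / std(sidelobe),
    # where the sidelobe excludes an (2*exclude+1)^2 window around the peak.
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(py - exclude, 0):py + exclude + 1,
         max(px - exclude, 0):px + exclude + 1] = False
    side = response[mask]
    return (peak - side.mean()) / (side.std() + 1e-12)

def maybe_update(template, new_template, psr_value, threshold=7.0, lr=0.02):
    # Skip the linear-interpolation update when PSR drops below the threshold
    if psr_value < threshold:
        return template                    # likely occlusion: keep old classifier
    return (1 - lr) * template + lr * new_template
```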
The proportional guidance law generates an acceleration command for the missile whose magnitude is proportional to the rate of change of the missile-target line-of-sight angle and to the missile-target relative velocity. The calculation formula is: η_c = N·V_c·λ′, where η_c is the commanded acceleration, N is the effective guidance coefficient, usually 3 to 5, V_c is the missile-target relative velocity, and λ′ is the rate of change of the line-of-sight angle.
The yaw of the unmanned aerial vehicle is controlled according to the result returned by the target tracking algorithm, so that the target lies at the horizontal center of the field of view. The equations of motion are then confined to the vertical two-dimensional plane, and the commanded acceleration of the unmanned aerial vehicle can be obtained by solving for the rate of change of the line-of-sight angle and the relative speed of the unmanned aerial vehicle and the target.
As shown in fig. 2, a coordinate system is established in the vertical plane: T is the target; M is the focal point of the camera; by the principle of pinhole imaging, point A is the image of T; point B is the center point of the imaging plane in the horizontal direction. From the altitude H of the unmanned aerial vehicle, the focal length f of the camera and the distance AB from the target in the field of view to the center of the field of view, the similar-triangle theorem gives the line-of-sight angle between the unmanned aerial vehicle and the target, q = arctan(AB/f), and the distance between the unmanned aerial vehicle and the target, R = H/cos q. The rate of change of the line-of-sight angle and the relative speed can then be obtained from the frame rate of the image transmission.
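Under the assumption of a downward-pointing camera (optical axis vertical), the geometry above and the proportional guidance command can be sketched as follows, with λ′ approximated by a finite difference between consecutive frames; the function names and the assumption on camera orientation are illustrative, not from the patent:

```python
import math

def los_from_image(H, f, ab):
    """Line-of-sight angle (from the optical axis, assumed vertical) and
    UAV-target distance, by pinhole imaging and similar triangles.
    H: altitude; f and ab in the same image units (e.g. pixels)."""
    q = math.atan2(ab, f)      # angle between optical axis and line of sight
    r = H / math.cos(q)        # slant range to the target
    return q, r

def pn_acceleration(N, v_c, lam_prev, lam_now, dt):
    # eta_c = N * V_c * lambda', with lambda' from two successive frames
    lam_dot = (lam_now - lam_prev) / dt
    return N * v_c * lam_dot
```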
The beneficial effects of the application are further illustrated by simulation tests below.
The UAV123 dataset is a target-tracking dataset captured from unmanned aerial vehicles. To verify the effectiveness of the above embodiment, three sequences (rake1, person12, car18) were selected from the UAV123 dataset for comparison experiments against the KCF, DAT and STAPLE algorithms, with tracking precision and success rate used for evaluation.
The distance error is defined as the Euclidean distance between the estimated center point (x_1, y_1) and the true center point (x_2, y_2): d = sqrt((x_1 - x_2)^2 + (y_1 - y_2)^2), as expressed by formula (1).
Setting the threshold to 20 pixels, the tracking accuracy is the proportion of the number of frames with an error distance less than 20 pixels to all frames.
The overlap ratio is defined as the ratio of the intersection to the union of the target box estimated by the algorithm (denoted a) and the true target box (denoted b): S = area(a ∩ b)/area(a ∪ b), as expressed by formula (2).
Tracking in a frame is considered successful when the overlap ratio is greater than 0.5; the success rate is the proportion of successfully tracked frames among all frames.
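The two evaluation metrics can be computed as follows (a sketch; boxes are given as (x, y, w, h), and the function names are illustrative):

```python
import numpy as np

def precision(pred_centers, gt_centers, thresh=20.0):
    # Fraction of frames whose center error is below the pixel threshold
    d = np.linalg.norm(np.asarray(pred_centers) - np.asarray(gt_centers), axis=1)
    return float((d < thresh).mean())

def iou(a, b):
    # Overlap ratio area(a ∩ b) / area(a ∪ b) for boxes (x, y, w, h)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(x2 - x1, 0) * max(y2 - y1, 0)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def success_rate(pred_boxes, gt_boxes, thresh=0.5):
    # Fraction of frames whose overlap ratio exceeds the threshold
    return float(np.mean([iou(p, g) > thresh for p, g in zip(pred_boxes, gt_boxes)]))
```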
As shown in fig. 3, with the distance-error threshold set to 20 pixels, the tracking precision of the method of this embodiment stabilizes at 0.6 or more, and with the overlap-ratio threshold set to 0.5, the success rate stabilizes at 0.73 or more. Compared with the KCF algorithm, the method effectively addresses partial occlusion and scale change of the target and improves tracking robustness. Although the performance gain over the other algorithms is not large, the improvements preserve real-time performance.
FIG. 4 compares the algorithm before and after the improvement on frame 910 of the person12 sequence, where the target is partially occluded: the original algorithm keeps learning and updating the template matrix, so a large amount of background information is learned. In this embodiment the PSR value falls below the threshold, so the improved algorithm avoids this background interference. At frame 1063 the original algorithm remains stuck where the target was occluded, while the method of this embodiment continues to track accurately.
The embodiment was ported to a ground station for field experiments. A target is selected by drawing a bounding box on the screen; real-time tracking and approaching of the target is achieved, and the scale of the tracking box adapts accordingly, as shown in fig. 5.
It is easy to see that the method extracts and fuses the HOG, Lab and LBP features of the target, uses an adaptive scale factor to address target scale change, and uses an adaptive template-update strategy to address partial occlusion of the target, improving robustness to target scale change and partial occlusion during tracking; applying the proportional guidance method to the unmanned aerial vehicle platform and combining it with the target tracking algorithm completes tracking and approaching of the ground target.

Claims (6)

1. The unmanned aerial vehicle ground target tracking and approaching method based on visual detection is characterized by comprising the following steps of:
(1) Selecting a frame as an initial frame in an image returned by the unmanned aerial vehicle, densely sampling by using a cyclic matrix, extracting HOG, Lab and LBP characteristics of the target, and training a classifier by using ridge regression;
(2) For the subsequent frames, firstly calculating a currently required scale factor by using the relative speed of the unmanned aerial vehicle and the target, then scaling the target scale, extracting the characteristics of the target, solving the response value of the classifier and the target by using a correlation filtering method, calculating the peak side lobe ratio of the response value, determining the center position of the target according to the position of the maximum response value, and selectively updating the classifier according to the comparison result of the peak side lobe ratio and the set threshold; the calculation formula of the scale factor is: L = 1 + 0.01V, L being the scale factor, V being the relative speed of the unmanned aerial vehicle and the target along the line-of-sight direction;
(3) If the target is positioned in the center of the visual field, a two-dimensional proportional guidance method is used to obtain the speed instruction required by the unmanned aerial vehicle and control the attitude of the unmanned aerial vehicle; if the target deviates from the horizontal center of the visual field, the yaw of the unmanned aerial vehicle is first controlled so that the target is centered, and then the proportional guidance law is used.
2. The unmanned aerial vehicle ground target tracking and approaching method based on visual detection according to claim 1, wherein the step (1) of extracting HOG, lab and LBP features comprises the steps of:
(a) Dividing the sample image into cells of 5×5 pixels and blocks of 2×2 cells;
(b) Extracting the gray gradient direction histogram feature of the target by using an FHOG method to obtain a 31-dimensional feature;
(c) Quantizing the Lab color space to 15 centroids by using a K-means method, replacing the colors of all pixels in the area by using the centroids according to the nearest neighbor principle, and counting the color histogram in the unit to obtain 15-dimensional characteristics;
(d) Extracting LBP characteristics of a target by using an equivalent mode LBP, wherein the radius is 2, the number of neighborhood pixel points is 8, and 59-dimensional characteristics are obtained;
(e) All features are connected in series to form 105-dimensional features of the multi-feature fusion.
3. The unmanned aerial vehicle ground target tracking and approaching method based on visual detection according to claim 1, wherein when the classifier is trained by ridge regression in step (1), the ridge regression function of the nonlinear space is defined as f(z) = Σ_i α_i K(z, x_i), wherein α_i are the coefficients of the nonlinear mapping, z is the variable, x_i are the samples, and K(·,·) is the kernel correlation operation; solving by least squares and the diagonalization property of the circulant matrix yields the classifier α̂ = ŷ*/(k̂^{xx} + λ), wherein ŷ* is the conjugate of the Fourier transform of the sample labels, k̂^{xx} is the Fourier transform of the sample correlation matrix, and λ is the regularization coefficient.
4. The unmanned aerial vehicle ground target tracking and approaching method based on visual detection according to claim 1, wherein the calculation formula adopted when the response value of the classifier and the target is solved by the correlation filtering method in step (2) is f̂(z) = k̂^{x'z} ⊙ α̂, wherein α̂ is the trained classifier, x' is the target model learned in the previous frame, z is the sample obtained in the new frame, and k̂^{x'z} is the Fourier transform of the kernel cross-correlation of x' and z.
5. The unmanned aerial vehicle ground target tracking and approaching method based on visual detection according to claim 1, wherein in step (2), when the peak-to-sidelobe ratio falls below a threshold, the classifier is not updated.
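The update gate of claim 5 can be sketched as follows. The peak-to-sidelobe ratio (PSR) definition used here, (peak − mean of sidelobe) / std of sidelobe with an excluded window around the peak, is the common one from correlation-filter tracking; the window size and the threshold value of 7.0 are assumed examples, not values from the patent.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - mean(sidelobe)) / std(sidelobe), where the sidelobe
    is the response map with a window around the peak excluded."""
    r = np.asarray(response, dtype=float)
    py, px = np.unravel_index(r.argmax(), r.shape)
    mask = np.ones_like(r, dtype=bool)
    mask[max(0, py - exclude): py + exclude + 1,
         max(0, px - exclude): px + exclude + 1] = False
    side = r[mask]
    return (r[py, px] - side.mean()) / (side.std() + 1e-12)

def should_update(response, threshold=7.0):
    """Claim 5: skip the classifier update when the PSR drops below
    the threshold (a sharp, confident peak keeps updates enabled)."""
    return peak_to_sidelobe_ratio(response) >= threshold
```

A sharp isolated peak yields a large PSR and permits the update; a flat or noisy response (e.g. during occlusion) yields a small PSR and freezes the classifier, which is the behavior the claim specifies.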
6. The unmanned aerial vehicle ground target tracking and approaching method based on visual detection according to claim 1, wherein the proportional navigation guidance law in step (3) is calculated as η_c = N V_c λ̇, where η_c is the commanded acceleration, N is the effective navigation ratio, V_c is the closing velocity between the vehicle and the target, and λ̇ is the rate of change of the line-of-sight angle.
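The guidance law of claim 6 can be sketched directly; the finite-difference estimate of the line-of-sight rate and the numeric values (N = 4, closing speed 10 m/s, 0.1 s sample time) are illustrative assumptions, not values from the patent.

```python
import math

def los_angle(p_uav, p_tgt):
    """Line-of-sight (LOS) angle from the vehicle to the target
    in the ground plane."""
    return math.atan2(p_tgt[1] - p_uav[1], p_tgt[0] - p_uav[0])

def pn_command(N, v_closing, lam_prev, lam_now, dt):
    """Proportional navigation: eta_c = N * V_c * lambda_dot, with
    lambda_dot approximated by finite differences over one sample."""
    lam_dot = (lam_now - lam_prev) / dt
    return N * v_closing * lam_dot

# Example: LOS angle increases by 0.005 rad over 0.1 s, so
# lambda_dot = 0.05 rad/s and eta_c = 4 * 10 * 0.05 = 2.0 m/s^2.
eta_c = pn_command(4.0, 10.0, 0.10, 0.105, 0.1)
```

Commanding acceleration perpendicular to the flight path in proportion to the LOS rate drives that rate toward zero, putting the vehicle on a collision (interception) course with the ground target, which is the approach behavior step (3) relies on.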
CN202010337110.2A 2020-04-26 2020-04-26 Unmanned aerial vehicle ground target tracking and approaching method based on visual detection Active CN111680713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010337110.2A CN111680713B (en) 2020-04-26 2020-04-26 Unmanned aerial vehicle ground target tracking and approaching method based on visual detection

Publications (2)

Publication Number Publication Date
CN111680713A CN111680713A (en) 2020-09-18
CN111680713B true CN111680713B (en) 2023-11-03

Family

ID=72452178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010337110.2A Active CN111680713B (en) 2020-04-26 2020-04-26 Unmanned aerial vehicle ground target tracking and approaching method based on visual detection

Country Status (1)

Country Link
CN (1) CN111680713B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112233141B (en) * 2020-09-28 2022-10-14 国网浙江省电力有限公司杭州供电公司 Moving target tracking method and system based on unmanned aerial vehicle vision in electric power scene
CN112613565B (en) * 2020-12-25 2022-04-19 电子科技大学 Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN113538585B (en) * 2021-09-17 2022-01-11 深圳火眼智能有限公司 High-precision multi-target intelligent identification, positioning and tracking method and system based on unmanned aerial vehicle
CN114296479B (en) * 2021-12-30 2022-11-01 哈尔滨工业大学 Image-based ground vehicle tracking method and system by unmanned aerial vehicle
CN116310742B (en) * 2023-04-17 2023-11-28 中国人民解放军军事科学院军事医学研究院 A class brain intelligent processing system for unmanned aerial vehicle reaction

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106887011A (en) * 2017-01-20 2017-06-23 Beijing Institute of Technology A multi-template target tracking method based on CNN and CF
CN107205255A (en) * 2017-05-15 2017-09-26 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences A target tracking method for wireless sensor networks based on image sensors
CN107633226A (en) * 2017-09-19 2018-01-26 Beijing Normal University, Zhuhai A human action tracking and recognition method and system
CN108510521A (en) * 2018-02-27 2018-09-07 Nanjing University of Posts and Telecommunications A scale-adaptive target tracking method based on multi-feature fusion
CN108549839A (en) * 2018-03-13 2018-09-18 Huaqiao University A multi-scale correlation filtering visual tracking method with adaptive feature fusion
CN109375643A (en) * 2018-10-24 2019-02-22 North University of China Pursuit-tracking guidance rules for multiple quadrotors in a leader-follower triangular formation
CN109685073A (en) * 2018-12-28 2019-04-26 Nanjing Institute of Technology A scale-adaptive target tracking algorithm based on kernelized correlation filtering
CN109858415A (en) * 2019-01-21 2019-06-07 Southeast University A kernelized correlation filter target tracking algorithm suitable for pedestrian following by mobile robots
US10515458B1 (en) * 2017-09-06 2019-12-24 The United States Of America, As Represented By The Secretary Of The Navy Image-matching navigation method and apparatus for aerial vehicles

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
An autonomous vision-based target tracking system for rotorcraft unmanned aerial vehicles; Hui Cheng et al.; 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); 1732-1738 *
Coarse-to-Fine UAV Target Tracking With Deep Reinforcement Learning; Wei Zhang et al.; IEEE Transactions on Automation Science and Engineering; Vol. 16, No. 4; 1522-1530 *
Ground target tracking and approaching of a multi-rotor UAV based on kernelized correlation filter visual detection; Wang He et al.; Journal of University of Chinese Academy of Sciences; 217-223 *
Correlation filter UAV visual tracking with multi-feature re-detection; Dong Meibao; Yang Hanwen; Guo Wen; Ma Siyuan; Zheng Chuang; Journal of Graphics (06); 1079-1086 *
An adaptive correlation filtering algorithm for long-term tracking; Xiao Yiqing; Ge Hongwei; Journal of Computer-Aided Design & Computer Graphics (01); 121-129 *

Also Published As

Publication number Publication date
CN111680713A (en) 2020-09-18

Similar Documents

Publication Publication Date Title
CN111680713B (en) Unmanned aerial vehicle ground target tracking and approaching method based on visual detection
CN109102522B (en) Target tracking method and device
CN104200495B (en) A multi-object tracking method in video monitoring
CN104282020B (en) A vehicle speed detection method based on target trajectories
CN108109162B (en) Multi-scale target tracking method using self-adaptive feature fusion
CN111932580A (en) Road 3D vehicle tracking method and system based on Kalman filtering and Hungary algorithm
CN106709472A (en) Video target detecting and tracking method based on optical flow features
KR101455835B1 (en) Lane Recognition and Tracking System Using Images, And Method For Recognition And Tracking Lane Using The Same
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN112634325B (en) Unmanned aerial vehicle video multi-target tracking method
EP2887315B1 (en) Camera calibration device, method for implementing calibration, program and camera for movable body
CN113312973B (en) Gesture recognition key point feature extraction method and system
CN116188999B (en) Small target detection method based on visible light and infrared image data fusion
CN112052802A (en) Front vehicle behavior identification method based on machine vision
CN109902578B (en) Infrared target detection and tracking method
CN111797684A (en) Binocular vision distance measuring method for moving vehicle
Yevsieiev et al. The Canny Algorithm Implementation for Obtaining the Object Contour in a Mobile Robot’s Workspace in Real Time
CN109241981B (en) Feature detection method based on sparse coding
CN112613565B (en) Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
Lu et al. Hybrid deep learning based moving object detection via motion prediction
Khemmar et al. Real time pedestrian and object detection and tracking-based deep learning. application to drone visual tracking
CN116777956A (en) Moving target screening method based on multi-scale track management
CN115100565B (en) Multi-target tracking method based on spatial correlation and optical flow registration
CN116185049A (en) Unmanned helicopter autonomous landing method based on visual guidance
CN115018883A (en) Transmission line unmanned aerial vehicle infrared autonomous inspection method based on optical flow and Kalman filtering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant