CN114973033A - Unmanned aerial vehicle automatic target detection and tracking method - Google Patents

Unmanned aerial vehicle automatic target detection and tracking method

Info

Publication number
CN114973033A
Authority
CN
China
Prior art keywords
image
stage
target
feature
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210597472.4A
Other languages
Chinese (zh)
Other versions
CN114973033B (en)
Inventor
刘明华
邵洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University of Science and Technology
Original Assignee
Qingdao University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University of Science and Technology filed Critical Qingdao University of Science and Technology
Priority to CN202210597472.4A priority Critical patent/CN114973033B/en
Publication of CN114973033A publication Critical patent/CN114973033A/en
Application granted granted Critical
Publication of CN114973033B publication Critical patent/CN114973033B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/17 - Terrestrial scenes taken from planes or by drones
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 - Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 - Target detection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Remote Sensing (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic target detection and tracking method for an unmanned aerial vehicle, and relates to the technical field of automatic target detection and tracking for unmanned aerial vehicles.

Description

Unmanned aerial vehicle automatic target detection and tracking method
Technical Field
The invention relates to the technical field of automatic detection and tracking of unmanned aerial vehicles, in particular to an automatic target detection and tracking method for an unmanned aerial vehicle.
Background
Detection and tracking of targets is an important component of image processing technology and comprises two subtasks: target detection and target tracking. Target detection is the process of detecting and classifying target objects in an image. Target tracking is the process of continuously obtaining the motion state of a target in subsequent frames, starting from a tracking target that is either selected manually or given by a detector in a certain frame of a video sequence.
Although a detection method used alone can reliably obtain the positions of all targets and label their categories, its processing speed is slow. A tracking method used alone requires the initial position of the tracked target to be set manually and cannot handle newly appearing targets; although it is fast, it cannot cope with real scenes. A method combining detection and tracking is therefore needed, one that retains the advantages of both and can be applied to complex tasks.
Existing detection and tracking methods rely only on the size of a detection frame formed around specific basic geometric features, without considering that a polyhedral target can rotate while moving. The detection angles of consecutive frames captured by the unmanned aerial vehicle therefore differ, and even at the same detection position a change of detection angle changes the size of the detection frame, which very easily disturbs the tracking judgment of the unmanned aerial vehicle.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an automatic target detection and tracking method for an unmanned aerial vehicle, which comprises the following steps:
step 1: establishing a target three-dimensional model, acquiring a feature vector of an image feature information total set of the target three-dimensional model, and classifying and naming the feature vector;
step 2: when the unmanned aerial vehicle acquires the nth (n is more than or equal to 2) frame image of the target, determining the corresponding image feature information in the image where the target is located, and obtaining the rear-stage detection frame region where the target is located in the nth frame image by adopting a Two-Stage algorithm; obtaining the front-stage detection frame region where the target is located in the (n-1)th frame image by adopting a Two-Stage algorithm;
and step 3: and comparing the image characteristic information of the rear-stage detection frame area with the image characteristic information of the front-stage detection frame area, acquiring the moving speed of the target in the space, and adjusting the moving speed of the unmanned aerial vehicle to keep the moving speed consistent with the target.
Preferably, after the target stereo model is established, multi-angle image data of the target are acquired based on the stereo model, and an image characteristic area obtained by detecting each angle image data is recorded to form an image characteristic information aggregate.
Preferably, in step 1, the image feature information total set acquiring process specifically includes the following steps:
step 11: acquiring a target image data acquisition angle by adopting a loop subdivision algorithm, acquiring a plurality of acquired images according to the acquisition angle to form an image aggregate, and marking the image aggregate according to the acquisition angle;
step 12: carrying out data preprocessing: normalizing the image aggregate by taking the average pixel height as a unit to obtain a relative height; normalizing the image aggregate by taking the average pixel width as a unit to obtain a relative width; and normalizing by taking the mean value of the image pixel points within a set range as a unit to obtain a relative proportion;
step 13: inputting the processed image aggregate into a Transformer network, and performing feature extraction and information understanding to finally obtain a feature vector;
step 14: and adopting a full connection layer for the feature vectors to obtain the final classification dimensionality and classification result of the image feature information, and recording the final classification dimensionality and classification result as an image feature information total set.
Preferably, in the step 3, when comparing the image feature information of the rear-stage detection frame region with the image feature information of the front-stage detection frame region to acquire the moving speed of the target in the space, the method specifically includes the following steps:
step 31: will be first n The frame image is recorded as the post-stage image to be measured, the nth- 1 Recording the frame image as a preceding-stage image to be detected, inputting the subsequent-stage image to be detected into a feature point detection network to obtain a subsequent-stage feature result to be detected, inputting the preceding-stage image to be detected into the feature point detection network to obtain a preceding-stage feature result to be detected, and judging whether the target is lost;
step 32: and under the condition that the target is not lost, comparing the characteristic result to be detected at the later stage with the characteristic result to be detected at the former stage to obtain the target moving speed.
Preferably, in step 31, the feature point detection network is composed of a feature extraction module, a prior region generation module and an attention mechanism module. The feature extraction module extracts edge features, texture features and semantic features of the acquired image according to the image feature total set; the prior region generation module generates prior frames of fixed size on the acquired image, each corresponding to one of a plurality of regions of the single acquired image, which reduces the difficulty of extracting the image feature region; the attention mechanism module makes the feature point detection network pay more attention to the image feature region.
Preferably, in the step 32, when the target moving speed is obtained by comparing the feature result to be measured at the later stage with the feature result to be measured at the earlier stage, the method specifically includes the following steps:
step 321: obtaining a rear-stage acquisition angle of the nth frame image and a front-stage acquisition angle of the n-1 th frame image according to the rear-stage characteristic result to be detected and the front-stage characteristic result to be detected;
step 322: acquiring a preceding-stage acquisition image corresponding to a preceding-stage acquisition angle in an image total set, screening a target area by adopting a seed growth algorithm to obtain a preceding-stage standard detection frame, and calculating the relative preceding-stage occupation ratio of the preceding-stage detection frame and the preceding-stage standard detection frame;
step 323: acquiring a post-stage acquisition image corresponding to a post-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a post-stage standard detection frame, and calculating the post-stage relative proportion of the post-stage detection frame and the post-stage standard detection frame;
step 324: and obtaining the moving speed of the target according to the difference value of the relative occupation ratio of the front stage and the relative occupation ratio of the rear stage.
Preferably, in step 31, when determining whether the target is lost, the method specifically includes the following steps:
step 311: accumulating the confidence coefficients of the targets from the first frame image to the nth frame image to obtain a parameter confidence coefficient;
step 312: and judging whether the parameter confidence coefficient is smaller than a first preset value, if so, determining that the target is lost, and if not, determining that the target is not lost.
Preferably, in step 2, when the Two-Stage algorithm is adopted to obtain the detection frame region where the target is located in the image, the HrNet18 network is used as the backbone network for feature extraction, so as to screen out target images whose quality does not meet expectations.
Preferably, screening out target images whose quality does not meet expectations through the HrNet18 network specifically comprises the following steps:
S21: performing data preprocessing, including data size change and image data augmentation, wherein the augmentation includes image rotation and image flipping;
S22: inputting the augmented image data into the HrNet18 network, performing feature extraction to obtain feature vectors, and passing the feature vectors through a fully connected layer to obtain the final classification dimensions;
S23: labelling the photos, dividing them into normal-quality images and abnormal-quality images, and deleting the abnormal-quality images.
Preferably, in step 3, adjusting the moving speed of the unmanned aerial vehicle to keep it consistent with the target specifically comprises aligning the center of the photographing field of view of the photographing device in the unmanned aerial vehicle with the geometric center of the rear-stage detection frame, and adjusting the moving speed of the unmanned aerial vehicle.
The invention has the beneficial effects that:
according to the invention, images of a detected target during multi-angle acquisition can be acquired according to three-dimensional modeling and a loop algorithm, and then the acquired images are subjected to feature extraction processing to obtain image feature data under different acquisition angles, so that the real-time acquired data are compared and calculated to obtain the motion speed of the target in adjacent frame images, and the shooting angle and the moving speed of the unmanned aerial vehicle are adjusted according to the moving speed of the target, thereby greatly improving the target detection tracking effect of the unmanned aerial vehicle.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a flowchart of an automatic target detection and tracking method for an unmanned aerial vehicle according to the present invention;
fig. 2 is a flow of an image feature information collection acquiring process of an unmanned aerial vehicle automatic target detection and tracking method provided by the invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
As shown in fig. 1, an automatic target detection and tracking method for an unmanned aerial vehicle includes the following steps:
step 1: establishing a target three-dimensional model, acquiring a feature vector of an image feature information total set of the target three-dimensional model, and classifying and naming the feature vector;
step 2: when the unmanned aerial vehicle acquires the nth (n is more than or equal to 2) frame image of the target, determining corresponding image characteristic information in the image where the target is located, and obtaining a rear-Stage detection frame area where the target is located in the nth frame image by adopting a Two-Stage algorithm; obtaining a preceding Stage detection frame area where the target in the (n-1) th frame image is located by adopting a Two-Stage algorithm;
and step 3: and comparing the image characteristic information of the rear-stage detection frame area with the image characteristic information of the front-stage detection frame area, acquiring the moving speed of the target in the space, and adjusting the moving speed of the unmanned aerial vehicle to keep the moving speed consistent with the target.
More specifically, after a target stereo model is established, multi-angle image data of the target are obtained based on the stereo model, and image characteristic areas obtained by detecting the image data of each angle are recorded to form an image characteristic information aggregate.
The C4D modeling software may be used to create the stereo model here.
As shown in fig. 2, more specifically, the image feature information collection acquiring process specifically includes the following steps:
step 11: acquiring a target image data acquisition angle by adopting a loop subdivision algorithm, acquiring a plurality of acquired images according to the acquisition angle to form an image aggregate, and marking the image aggregate according to the acquisition angle;
step 12: carrying out data preprocessing: normalizing the image aggregate by taking the average pixel height as a unit to obtain a relative height; normalizing the image aggregate by taking the average pixel width as a unit to obtain a relative width; and normalizing by taking the mean value of the image pixel points within a set range as a unit to obtain a relative proportion (steps 12 to 14 are sketched in code after step 14);
step 13: inputting the processed image aggregate into a Transformer network, and performing feature extraction and information understanding to finally obtain a feature vector;
step 14: and adopting a full connection layer for the feature vectors to obtain the final classification dimensionality and classification result of the image feature information, and recording the final classification dimensionality and classification result as an image feature information total set.
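A minimal sketch of steps 12 to 14, assuming PyTorch and the timm library; the disclosure names neither a framework nor a specific Transformer, so `vit_small_patch16_224`, the 64-way classification dimensionality and the normalization scheme below are assumptions:

```python
# Sketch of steps 12-14: normalize the image aggregate, then extract
# feature vectors with a Transformer and classify them through a fully
# connected layer. Framework and model choice are assumptions.
import numpy as np
import torch
import timm

def normalize_collection(images):
    """Step 12: express each image relative to collection-wide averages."""
    heights = np.array([im.shape[0] for im in images], dtype=np.float32)
    widths = np.array([im.shape[1] for im in images], dtype=np.float32)
    means = np.array([im.mean() for im in images], dtype=np.float32)
    rel_h = heights / heights.mean()   # relative height
    rel_w = widths / widths.mean()     # relative width
    rel_p = means / means.mean()       # relative proportion (pixel mean)
    return rel_h, rel_w, rel_p

# Steps 13-14: a ViT backbone whose classification head is a single
# fully connected layer; num_classes stands for the final classification
# dimensionality of the image feature information (assumed value).
model = timm.create_model("vit_small_patch16_224", num_classes=64)
batch = torch.randn(8, 3, 224, 224)        # 8 preprocessed images
features = model.forward_features(batch)   # Transformer feature tokens
logits = model(batch)                      # FC layer -> class scores
```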
In the Loop subdivision method, one triangle is divided into four triangles, and new vertices and old vertices are repositioned separately, so that the model surface finally becomes smoother. When the Loop subdivision algorithm is adopted to obtain the acquisition angles of the target image data, the geometric center of the three-dimensional model is obtained from the three-dimensional model of the target, and a regular icosahedron is built by taking the geometric center as the origin. The regular icosahedron is subdivided several times by the Loop subdivision algorithm to obtain a large number of vertices; at this point, the figure formed by connecting these vertices approximates a sphere. Each single vertex is taken as one acquisition angle, so that a plurality of acquired images can be obtained to form the image aggregate.
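A minimal sketch of this viewpoint construction, using midpoint subdivision with re-projection to the unit sphere as a stand-in for full Loop subdivision (which would also reposition the old vertices); the two subdivision passes are an assumption:

```python
# Sketch: generate acquisition angles by subdividing a regular
# icosahedron centred on the model's geometric centre.
import numpy as np

def icosahedron():
    p = (1.0 + 5 ** 0.5) / 2.0  # golden ratio
    v = np.array([[-1, p, 0], [1, p, 0], [-1, -p, 0], [1, -p, 0],
                  [0, -1, p], [0, 1, p], [0, -1, -p], [0, 1, -p],
                  [p, 0, -1], [p, 0, 1], [-p, 0, -1], [-p, 0, 1]], float)
    f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return v / np.linalg.norm(v, axis=1, keepdims=True), f

def subdivide(verts, faces):
    """Split each triangle into four; project new vertices to the sphere."""
    verts, cache = list(verts), {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (verts[i] + verts[j]) / 2.0
            verts.append(m / np.linalg.norm(m))
            cache[key] = len(verts) - 1
        return cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return np.array(verts), new_faces

verts, faces = icosahedron()
for _ in range(2):                  # number of passes is an assumption
    verts, faces = subdivide(verts, faces)
# Each vertex is one acquisition angle (a unit view direction).
print(len(verts), "acquisition angles")   # 162 after two passes
```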
More specifically, in step 3, when comparing the image feature information of the rear-stage detection frame region with the image feature information of the front-stage detection frame region to obtain the moving speed of the target in the space, the method specifically includes the following steps:
step 31: recording the nth frame image as a rear-stage image to be detected and recording the n-1 th frame image as a front-stage image to be detected, inputting the rear-stage image to be detected into a feature point detection network to obtain a rear-stage feature result to be detected, inputting the front-stage image to be detected into the feature point detection network to obtain a front-stage feature result to be detected, and judging whether the target is lost;
step 32: and under the condition that the target is not lost, comparing the characteristic result to be detected at the later stage with the characteristic result to be detected at the former stage to obtain the target moving speed.
More specifically, the feature point detection network is composed of a feature extraction module, a prior region generation module and an attention mechanism module. The feature extraction module extracts edge features, texture features and semantic features of the acquired image according to the image feature total set; the prior region generation module generates prior frames of fixed size on the acquired image, each corresponding to one of a plurality of regions of the single acquired image, which reduces the difficulty of extracting the image feature region; the attention mechanism module makes the feature point detection network pay more attention to the image feature region.
The feature extraction module comprises convolution layers and pooling layers and is mainly used to extract features of the acquired image according to the image feature collection; the prior region generation module shifts the network from detecting feature points globally to detecting them locally, which reduces the difficulty of extracting feature points from the whole image; by introducing the attention mechanism module, the network can invest more resources in the regions near the feature points and filter out irrelevant information.
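A minimal sketch of this three-module structure, assuming PyTorch; all layer sizes, the SE-style channel attention and the fixed-size anchor tiling are assumptions, since only the modules themselves are named:

```python
# Sketch of the feature point detection network: convolution + pooling
# feature extraction, fixed-size prior-region generation, and a channel
# attention module that re-weights the extracted features.
import torch
import torch.nn as nn

class FeaturePointNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Feature extraction: convolution and pooling layers.
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Attention: squeeze-and-excitation style channel weighting,
        # focusing the network on image feature regions.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid(),
        )
        self.head = nn.Conv2d(channels, 1, 1)   # feature-point heatmap

    @staticmethod
    def prior_regions(h, w, size=32, stride=32):
        """Fixed-size prior frames tiling the image (local, not global)."""
        return [(x, y, x + size, y + size)
                for y in range(0, h - size + 1, stride)
                for x in range(0, w - size + 1, stride)]

    def forward(self, x):
        f = self.features(x)
        f = f * self.attention(f)     # re-weight channels
        return self.head(f)           # per-location feature-point score

net = FeaturePointNet()
heatmap = net(torch.randn(1, 3, 256, 256))   # -> (1, 1, 64, 64)
```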
More specifically, in step 32, obtaining the target moving speed by comparing the rear-stage feature result to be detected with the front-stage feature result to be detected specifically includes the following steps (a code sketch follows step 324):
step 321: obtaining a rear-stage acquisition angle of the nth frame image and a front-stage acquisition angle of the n-1 th frame image according to the rear-stage characteristic result to be detected and the front-stage characteristic result to be detected;
step 322: acquiring a preceding-stage acquisition image corresponding to a preceding-stage acquisition angle in an image total set, screening a target area by adopting a seed growth algorithm to obtain a preceding-stage standard detection frame, and calculating the relative preceding-stage occupation ratio of the preceding-stage detection frame and the preceding-stage standard detection frame;
step 323: acquiring a post-stage acquisition image corresponding to a post-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a post-stage standard detection frame, and calculating the post-stage relative proportion of the post-stage detection frame and the post-stage standard detection frame;
step 324: and obtaining the moving speed of the target according to the difference value of the relative occupation ratio of the front stage and the relative occupation ratio of the rear stage.
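A minimal sketch of steps 321 to 324; the disclosure does not define the relative ratio precisely or give the mapping from the ratio difference to a speed, so the box-area ratio and the linear scale factor below are assumptions:

```python
# Sketch of steps 321-324: compare each measured detection frame with
# the standard detection frame obtained from the matching collection
# image, then turn the change in relative ratio between frames into a
# speed. Boxes are (x1, y1, x2, y2) in pixels.

def box_area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def relative_ratio(detected_box, standard_box):
    """Ratio of the measured frame to the standard frame at one angle."""
    return box_area(detected_box) / box_area(standard_box)

def target_speed(front_det, front_std, rear_det, rear_std,
                 frame_dt, scale=1.0):
    """Speed from the front/rear-stage relative-ratio difference.

    frame_dt: time between frames n-1 and n, in seconds.
    scale: assumed calibration from ratio change to distance.
    """
    front = relative_ratio(front_det, front_std)   # step 322
    rear = relative_ratio(rear_det, rear_std)      # step 323
    return scale * (front - rear) / frame_dt       # step 324

v = target_speed((10, 10, 60, 50), (0, 0, 55, 45),
                 (12, 11, 66, 55), (0, 0, 55, 45), frame_dt=1 / 30)
```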
More specifically, in step 31, judging whether the target is lost specifically includes the following steps (a code sketch follows step 312):
step 311: accumulating the confidence degrees of the targets from the first frame image to the nth frame image to obtain a parameter confidence degree;
step 312: and judging whether the parameter confidence coefficient is smaller than a first preset value, if so, determining that the target is lost, and if not, determining that the target is not lost.
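A minimal sketch of steps 311 and 312; how the per-frame confidences are combined is not specified beyond "accumulating", so a running sum compared against the first preset value is assumed here:

```python
# Sketch of steps 311-312: accumulate per-frame detection confidences
# into a parameter confidence and compare it with a first preset value.
def target_lost(confidences, first_preset_value):
    """confidences: detector confidence of the target in frames 1..n."""
    parameter_confidence = sum(confidences)             # step 311
    return parameter_confidence < first_preset_value    # step 312

# e.g. a threshold tuned so a run of low-confidence frames trips it
lost = target_lost([0.9, 0.85, 0.4, 0.2], first_preset_value=3.0)
```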
More specifically, in step 2, when the Two-Stage algorithm is adopted to obtain the detection frame region where the target is located in the image, the HrNet18 network is used as the backbone network for feature extraction, so as to screen out target images whose quality does not meet expectations.
More specifically, screening out target images whose quality does not meet expectations through the HrNet18 network comprises the following steps (a code sketch follows S23):
S21: performing data preprocessing, including data size change and image data augmentation, wherein the augmentation includes image rotation and image flipping;
S22: inputting the augmented image data into the HrNet18 network, performing feature extraction to obtain feature vectors, and passing the feature vectors through a fully connected layer to obtain the final classification dimensions;
S23: labelling the photos, dividing them into normal-quality images and abnormal-quality images, and deleting the abnormal-quality images.
More specifically, in step 3, adjusting the moving speed of the unmanned aerial vehicle to keep it consistent with the target specifically comprises aligning the center of the photographing field of view of the photographing device in the unmanned aerial vehicle with the geometric center of the rear-stage detection frame, and adjusting the moving speed of the unmanned aerial vehicle.
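A minimal sketch of this adjustment as a proportional controller; `send_velocity` and both gains are hypothetical placeholders, since the disclosure does not describe a flight-control interface:

```python
# Sketch: keep the camera's view centre on the geometric centre of the
# rear-stage detection frame and match the UAV's speed to the target's.
def send_velocity(vx, vy, forward):
    """Hypothetical flight-control hook; replace with the real API."""
    print(f"cmd: vx={vx:.2f} vy={vy:.2f} forward={forward:.2f}")

def track_step(box, frame_w, frame_h, target_speed,
               k_centre=0.005, k_speed=1.0):
    x1, y1, x2, y2 = box
    box_cx, box_cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    err_x = box_cx - frame_w / 2.0      # pixels off-centre, horizontal
    err_y = box_cy - frame_h / 2.0      # pixels off-centre, vertical
    vx = k_centre * err_x               # steer view centre onto the box
    vy = k_centre * err_y
    forward = k_speed * target_speed    # match the target's speed
    send_velocity(vx, vy, forward)

track_step((600, 320, 700, 420), frame_w=1280, frame_h=720,
           target_speed=2.5)
```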
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention and should be construed as falling within the scope of the claims.

Claims (10)

1. An unmanned aerial vehicle automatic target detection and tracking method is characterized by comprising the following steps:
step 1: establishing a target three-dimensional model, acquiring a feature vector of an image feature information total set of the target three-dimensional model, and classifying and naming the feature vector;
step 2: when the unmanned aerial vehicle acquires the nth (n is more than or equal to 2) frame image of the target, determining corresponding image characteristic information in the image where the target is located, and obtaining a rear-Stage detection frame area where the target is located in the nth frame image by adopting a Two-Stage algorithm; obtaining a preceding Stage detection frame area where the target in the (n-1) th frame image is located by adopting a Two-Stage algorithm;
and step 3: and comparing the image characteristic information of the rear-stage detection frame area with the image characteristic information of the front-stage detection frame area, acquiring the moving speed of the target in the space, and adjusting the moving speed of the unmanned aerial vehicle to keep the moving speed consistent with the target.
2. The automatic target detection and tracking method for the unmanned aerial vehicle as claimed in claim 1, wherein in step 1, after the target stereo model is established, multi-angle image data of the target are obtained based on the stereo model, and an image feature area obtained by detecting each angle image data is recorded to form an image feature information aggregate.
3. The unmanned aerial vehicle automatic target detection and tracking method according to claim 2, wherein the image feature information collection obtaining process specifically comprises the following steps:
step 11: acquiring a target image data acquisition angle by adopting a loop subdivision algorithm, acquiring a plurality of acquired images according to the acquisition angle to form an image aggregate, and marking the image aggregate according to the acquisition angle;
step 12: carrying out data preprocessing: normalizing the image aggregate by taking the average pixel height as a unit to obtain a relative height; normalizing the image aggregate by taking the average pixel width as a unit to obtain a relative width; and normalizing by taking the mean value of the image pixel points within a set range as a unit to obtain a relative proportion;
step 13: inputting the processed image aggregate into a Transformer network, and performing feature extraction and information understanding to finally obtain a feature vector;
step 14: and adopting a full connection layer for the feature vectors to obtain the final classification dimensionality and classification result of the image feature information, and recording the final classification dimensionality and classification result as an image feature information total set.
4. The automatic target detection and tracking method for the unmanned aerial vehicle as claimed in claim 3, wherein in step 3, comparing the image feature information of the rear-stage detection frame region with that of the front-stage detection frame region to obtain the speed at which the target moves in space specifically comprises the following steps:
step 31: recording the nth frame image as a rear-stage image to be detected and recording the n-1 th frame image as a front-stage image to be detected, inputting the rear-stage image to be detected into a feature point detection network to obtain a rear-stage feature result to be detected, inputting the front-stage image to be detected into the feature point detection network to obtain a front-stage feature result to be detected, and judging whether the target is lost;
step 32: and under the condition that the target is not lost, comparing the characteristic result to be detected at the later stage with the characteristic result to be detected at the former stage to obtain the target moving speed.
5. The automatic target detection and tracking method for the unmanned aerial vehicle as claimed in claim 4, wherein in step 31, the feature point detection network is composed of a feature extraction module, a prior region generation module and an attention mechanism module; the feature extraction module extracts edge features, texture features and semantic features of the acquired image according to the image feature total set; the prior region generation module generates prior frames of fixed size on the acquired image, each corresponding to one of a plurality of regions of the single acquired image, which reduces the difficulty of extracting the image feature region; the attention mechanism module makes the feature point detection network pay more attention to the image feature region.
6. The automatic target detection and tracking method for the unmanned aerial vehicle as claimed in claim 4, wherein in the step 32, when the target moving speed is obtained by comparing the feature result to be detected at the later stage with the feature result to be detected at the earlier stage, the method specifically comprises the following steps:
step 321: obtaining a rear-stage acquisition angle of the nth frame image and a front-stage acquisition angle of the n-1 th frame image according to the rear-stage characteristic result to be detected and the front-stage characteristic result to be detected;
step 322: acquiring a preceding-stage acquisition image corresponding to a preceding-stage acquisition angle in an image total set, screening a target area by adopting a seed growth algorithm to obtain a preceding-stage standard detection frame, and calculating the relative preceding-stage occupation ratio of the preceding-stage detection frame and the preceding-stage standard detection frame;
step 323: acquiring a post-stage acquisition image corresponding to a post-stage acquisition angle in the image total set, screening a target area by adopting a seed growth algorithm to obtain a post-stage standard detection frame, and calculating the post-stage relative proportion of the post-stage detection frame and the post-stage standard detection frame;
step 324: and obtaining the moving speed of the target according to the difference value of the relative occupation ratio of the front stage and the relative occupation ratio of the rear stage.
7. The unmanned aerial vehicle automatic target detection and tracking method according to claim 4, wherein in step 31, when determining whether the target is lost, the method specifically comprises the following steps:
step 311: accumulating the confidence degrees of the targets from the first frame image to the nth frame image to obtain a parameter confidence degree;
step 312: and judging whether the parameter confidence coefficient is smaller than a first preset value, if so, determining that the target is lost, and if not, determining that the target is not lost.
8. The automatic target detection and tracking method for the unmanned aerial vehicle as claimed in claim 1, wherein in step 2, when a Two-Stage algorithm is adopted to obtain the detection frame region where the target in the image is located, the HrNet18 network is used as the backbone network for feature extraction, so as to screen out target images whose quality does not meet expectations.
9. The automatic target detection and tracking method for the unmanned aerial vehicle according to claim 8, wherein screening out target images whose quality does not meet expectations through the HrNet18 network specifically comprises the following steps:
S21: performing data preprocessing, including data size change and image data augmentation, wherein the augmentation includes image rotation and image flipping;
S22: inputting the augmented image data into the HrNet18 network, performing feature extraction to obtain feature vectors, and passing the feature vectors through a fully connected layer to obtain the final classification dimensions;
S23: labelling the photos, dividing them into normal-quality images and abnormal-quality images, and deleting the abnormal-quality images.
10. The automatic target detection and tracking method for the unmanned aerial vehicle as claimed in claim 1, wherein in step 3, adjusting the moving speed of the unmanned aerial vehicle to keep it consistent with the target specifically comprises aligning the center of the photographing field of view of the photographing device in the unmanned aerial vehicle with the geometric center of the rear-stage detection frame, and adjusting the moving speed of the unmanned aerial vehicle.
CN202210597472.4A 2022-05-30 2022-05-30 Unmanned aerial vehicle automatic detection target and tracking method Active CN114973033B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210597472.4A CN114973033B (en) 2022-05-30 2022-05-30 Unmanned aerial vehicle automatic detection target and tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210597472.4A CN114973033B (en) 2022-05-30 2022-05-30 Unmanned aerial vehicle automatic detection target and tracking method

Publications (2)

Publication Number Publication Date
CN114973033A true CN114973033A (en) 2022-08-30
CN114973033B CN114973033B (en) 2024-03-01

Family

ID=82957483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210597472.4A Active CN114973033B (en) 2022-05-30 2022-05-30 Unmanned aerial vehicle automatic detection target and tracking method

Country Status (1)

Country Link
CN (1) CN114973033B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9443320B1 (en) * 2015-05-18 2016-09-13 Xerox Corporation Multi-object tracking with generic object proposals
US20170200061A1 (en) * 2016-01-11 2017-07-13 Netradyne Inc. Driver behavior monitoring
US20180012065A1 (en) * 2016-07-08 2018-01-11 UBTECH Robotics Corp. Face detecting and tracking method, method for controlling rotation of robot head and robot
CN107292297A (en) * 2017-08-09 2017-10-24 电子科技大学 A kind of video car flow quantity measuring method tracked based on deep learning and Duplication
CN109001484A (en) * 2018-04-18 2018-12-14 广州视源电子科技股份有限公司 Method and device for detecting rotation speed
CN110401799A (en) * 2019-08-02 2019-11-01 睿魔智能科技(深圳)有限公司 A kind of auto-tracking shooting method and system
CN110458868A (en) * 2019-08-15 2019-11-15 湖北经济学院 Multiple target tracking based on SORT identifies display systems
WO2021189507A1 (en) * 2020-03-24 2021-09-30 南京新一代人工智能研究院有限公司 Rotor unmanned aerial vehicle system for vehicle detection and tracking, and detection and tracking method
CN111798482A (en) * 2020-06-16 2020-10-20 浙江大华技术股份有限公司 Target tracking method and device
CN112184760A (en) * 2020-10-13 2021-01-05 中国科学院自动化研究所 High-speed moving target detection tracking method based on dynamic vision sensor
CN112907634A (en) * 2021-03-18 2021-06-04 沈阳理工大学 Vehicle tracking method based on unmanned aerial vehicle

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BAO XIN CHEN et al.: "Fast Visual Object Tracking with Rotated Bounding Boxes", arXiv:1907.03892v5 *
杨廷召 (Yang Tingzhao) et al.: "Social force optimized multi-pedestrian tracking from a first-person perspective", Journal of Image and Graphics (中国图象图形学报) *
王晓 (Wang Xiao) et al.: "Research on algorithms of image processing in passenger flow detection", Periodical of Ocean University of China (中国海洋大学学报) *

Also Published As

Publication number Publication date
CN114973033B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN102903122B (en) Video object tracking method based on feature optical flow and online ensemble learning
CN111951212A (en) Method for identifying defects of contact network image of railway
CN113112519B (en) Key frame screening method based on interested target distribution
CN110415260B (en) Smoke image segmentation and identification method based on dictionary and BP neural network
CN109389609B (en) Interactive self-feedback infrared target detection method based on FART neural network
CN110189375A (en) A kind of images steganalysis method based on monocular vision measurement
CN112258470B (en) Intelligent industrial image critical compression rate analysis system and method based on defect detection
CN112819812B (en) Powder bed defect detection method based on image processing
CN111199245A (en) Rape pest identification method
CN109658429A (en) A kind of infrared image cirrus detection method based on boundary fractal dimension
CN113313107A (en) Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
CN111683221A (en) Real-time video monitoring method and system for natural resources embedded with vector red line data
CN117636268A (en) Unmanned aerial vehicle aerial natural driving data set construction method oriented to ice and snow environment
CN114973033B (en) Unmanned aerial vehicle automatic detection target and tracking method
CN108122231B (en) Image quality evaluation method based on ROI Laplacian algorithm under monitoring video
CN110765853A (en) Image processing method of multi-spectrum phase machine
CN107564029B (en) Moving target detection method based on Gaussian extreme value filtering and group sparse RPCA
CN116311218A (en) Noise plant point cloud semantic segmentation method and system based on self-attention feature fusion
CN113496159B (en) Multi-scale convolution and dynamic weight cost function smoke target segmentation method
Chuang et al. Moving object segmentation and tracking using active contour and color classification models
CN114926456A (en) Rail foreign matter detection method based on semi-automatic labeling and improved deep learning
CN108010051A (en) Multisource video subject fusion tracking based on AdaBoost algorithms
CN113688819A (en) Target object expected point tracking matching method based on mark points
CN117975175B (en) Plastic pipeline appearance defect detection method based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant