CN112488066A - Real-time target detection method under unmanned aerial vehicle multi-machine cooperative reconnaissance - Google Patents


Info

Publication number
CN112488066A
CN112488066A
Authority
CN
China
Prior art keywords: unmanned aerial vehicle, target, reconnaissance, detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011511072.4A
Other languages
Chinese (zh)
Inventor
姜梁
马祥森
吴国强
李午申
孙浩惠
黄坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Spaceflight Electronic Technology Research Institute
Aerospace Times Feihong Technology Co ltd
China Academy of Aerospace Electronics Technology Co Ltd
Original Assignee
China Spaceflight Electronic Technology Research Institute
Aerospace Times Feihong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Spaceflight Electronic Technology Research Institute and Aerospace Times Feihong Technology Co Ltd
Priority to CN202011511072.4A
Publication of CN112488066A
Legal status: Pending

Classifications

    • G06V 20/13 Satellite images (Scenes; Terrestrial scenes)
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06N 3/045 Combinations of networks (neural network architectures)
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06V 10/40 Extraction of image or video features
    • G06F 18/25 Fusion techniques
    • G06T 2207/10016 Video; Image sequence (image acquisition modality)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a real-time target detection method under multi-machine cooperative reconnaissance by unmanned aerial vehicles, and belongs to the fields of target detection and computer vision. The method comprises the following steps: inputting each of the multiple visible-light battlefield images into a trained unmanned aerial vehicle reconnaissance target detection network to obtain the reconnaissance target information under each unmanned aerial vehicle's single viewing angle; coarsely correcting the visible-light battlefield images and coarsely matching the reconnaissance targets to obtain approximate reconnaissance targets; precisely registering the reconnaissance targets to determine which targets are the same under multiple viewing angles; and introducing a comprehensive confidence algorithm to fuse the detection results of each target under multi-machine reconnaissance and determine its final detection result. The technical scheme of the invention improves the network structure's detection of small targets and improves the multi-machine cooperative detection effect.

Description

Real-time target detection method under unmanned aerial vehicle multi-machine cooperative reconnaissance
Technical Field
The invention belongs to the field of target detection and computer vision, and particularly relates to a real-time target detection method under multi-machine cooperative reconnaissance of an unmanned aerial vehicle.
Background
Under new forms of warfare, battlefield target information must be acquired rapidly, effectively, and around the clock. In a complex battlefield environment, reconnaissance images suffer from uneven illumination, smoke occlusion, and insufficient definition, so battlefield targets are detected and identified with low precision, and existing unmanned aerial vehicle sensing technology cannot fully meet the reconnaissance requirements of the modern battlefield. Therefore, to further improve the combat efficiency of unmanned aerial vehicle system equipment and meet the urgent requirements of real-time battlefield situation awareness, quick information acquisition, and accurate target reconnaissance and positioning, real-time target detection based on cooperative reconnaissance by unmanned aerial vehicle clusters is bound to become a research hotspot in the field of multi-machine cooperative reconnaissance combat.
Automatic target recognition technology directly converts data resources into usable information, effectively improves battlefield combat response capability, and is a precondition for the automation of weapon equipment. Targets photographed by unmanned aerial vehicles occupy small areas and have inconspicuous features, so stably detecting and identifying such weak targets remains an urgent problem.
Disclosure of Invention
The invention adopts a deep-learning-based multi-machine cooperative reconnaissance target detection technology to construct a deep detection network that determines target position information while classifying target types, and combines it with an unmanned aerial vehicle reconnaissance target extraction technique based on residual fusion compensation to improve the network structure's detection of small targets; it extracts detected-target features with a target matching technique and identifies the same target by comparing those features; and it designs a multi-machine detection decision model that effectively fuses multi-machine target detection information and improves the multi-machine cooperative detection effect.
According to the technical scheme of the invention, the invention provides a real-time target detection method under the cooperative reconnaissance of multiple unmanned aerial vehicles, which is characterized by comprising the following steps:
step 1: respectively inputting multi-path visible light battlefield images acquired by the unmanned aerial vehicle into the trained unmanned aerial vehicle reconnaissance target detection network to obtain the information of the reconnaissance target of the unmanned aerial vehicle under the single visual angle of each unmanned aerial vehicle;
step 2: extracting the unmanned aerial vehicle attitude information corresponding to the current visible-light battlefield image and using it to coarsely correct the image, then coarsely matching the unmanned aerial vehicle reconnaissance targets according to each target's coarse-positioning longitude and latitude, obtaining the approximate unmanned aerial vehicle reconnaissance targets after coarse matching;
step 3: extracting the salient features of each unmanned aerial vehicle reconnaissance target, precisely registering the reconnaissance targets, and determining which reconnaissance targets are the same under multiple viewing angles;
step 4: according to the matching results for the same reconnaissance targets under the shooting angles of the multiple unmanned aerial vehicles, introducing a comprehensive confidence algorithm, fusing the detection results of each target under multi-machine reconnaissance, and determining each target's final detection result. In this way, classification errors in a single unmanned aerial vehicle's detections are corrected and the target detection accuracy is improved.
Further, the step 1 specifically includes:
step 11: the unmanned aerial vehicle cluster system carries a photoelectric load, collects multi-path visible light battlefield images and returns the images through a link;
step 12: respectively inputting the collected multi-path visible light battlefield images into the trained unmanned aerial vehicle reconnaissance target detection network;
step 13: dividing the visible-light battlefield image into a uniform grid, predicting several bounding boxes for each grid cell, and predicting candidate target windows from the bounding-box information;
step 14: removing unlikely target windows whose confidence falls below a threshold (0.5), and removing redundant windows with a non-maximum suppression (NMS) algorithm;
step 15: obtaining the target positions and classes.
Further, in step 12, a training data set is established by adopting the real-time video of the unmanned aerial vehicle, the unmanned aerial vehicle reconnaissance target detection framework is trained and optimized, and a single-machine target detection network is established.
Further, in step 12, features are extracted with a multi-scale feature fusion algorithm: during fusion, the large-scale feature map is scaled and then fused with the small-scale feature map, and the four scale feature maps are merged at the last layer to obtain the final high-level features.
Further, in step 13, the bounding-box information includes, for each box, the confidence that it contains a target and the probability of each class for the box region.
Further, for the unmanned aerial vehicle reconnaissance target detection framework, the number of convolution channels of the last layer of the network is reduced, and the number of convolution kernels of the last module is 512.
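The post-processing of steps 13 and 14, confidence thresholding followed by non-maximum suppression, can be sketched as follows; the box format (x1, y1, x2, y2), the function names, and the IoU threshold are illustrative assumptions, not part of the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    """detections: list of (box, confidence) pairs.  Drop low-confidence
    windows (step 14's threshold), then greedily suppress windows that
    overlap an already-kept, higher-confidence window."""
    dets = [d for d in detections if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, conf))
    return kept
```

Greedy NMS keeps the highest-confidence window of each overlapping group, which matches the "remove redundant windows" role the patent assigns to the algorithm.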
Further, the step 2 specifically includes:
step 21: performing geometric correction of the visible-light battlefield image according to the unmanned aerial vehicle attitude information, using collected ground control points, to realize the coarse correction;
Further, in step 21, the unmanned aerial vehicle attitude information comprises quantities such as the payload azimuth and pitch angles.
step 22: parsing the longitude and latitude of each unmanned aerial vehicle reconnaissance target's center point from the image multiplexing data downlinked by the unmanned aerial vehicles, calculating the actual distances between the center points from the longitude and latitude, and, when the distance between two reconnaissance targets is smaller than a preset threshold (0.5-2 m), taking them as approximate reconnaissance targets after coarse matching.
Further, the step 3 specifically includes:
step 31: extracting target image areas of the roughly matched approximate unmanned aerial vehicle reconnaissance targets;
step 32: inputting the target image area into a target matching network for feature extraction;
step 33: computing the difference between the approximate targets from the extracted features to perform precise registration; when the difference is smaller than a threshold (0.1-0.3), the approximate targets are determined to be the same reconnaissance target under multiple viewing angles.
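The difference test of step 33 can be sketched as a distance between the two extracted feature vectors; the patent does not fix the metric, so the cosine distance and 0.2 default threshold used here are assumptions within the stated 0.1-0.3 range:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def same_target(feat_a, feat_b, thresh=0.2):
    """Declare two candidate detections the same target when the feature
    difference falls below the threshold (0.1-0.3 in the patent)."""
    return cosine_distance(feat_a, feat_b) < thresh
```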
Further, the step 4 specifically includes:
step 41: obtaining the detection type of each unmanned aerial vehicle reconnaissance target according to the unmanned aerial vehicle reconnaissance target information under the single visual angle of each unmanned aerial vehicle;
step 42: according to the matched targets of the same type of unmanned aerial vehicle reconnaissance, obtaining a detection result of each target under multi-machine reconnaissance;
step 43: and introducing comprehensive confidence, fusing the detection results of each target under multi-machine reconnaissance, and determining the final detection result of the target.
Further, the step 43 specifically includes:
assuming that n unmanned aerial vehicles shoot the same area, K types of targets are shared in the scene, and f is used in the detection process of the ith aircrafti(. the confidence of the detection result is represented by Pi(. h), then for the target O, if the detection target result of the ith unmanned aerial vehicle is the kth type, the result f of the ith unmanned aerial vehicle detecting the target OiK, with confidence Pi(O), n is more than or equal to 2 and is a positive integer, K is more than or equal to 1 and is a positive integer, K belongs to K,
for the object O, in the same shooting process, C is used for detecting the comprehensive confidence coefficient that the object O is k typeskIndicate that
Figure BDA0002846425540000031
Taking the class with the maximum comprehensive confidence coefficient of the target O as the final class K of detectionO
Compared with the prior art, the invention has the following advantages:
1) the invention adopts an improved YOLO-v3 network structure, reorganizes the multi-scale feature maps, and reduces the number of convolution channels of the network structure; target detection on the images returned by the unmanned aerial vehicle cluster is thereby accelerated, and the poor detection of weak targets under the unmanned aerial vehicle's view is remedied;
2) according to the invention, the real-time parameter information of the unmanned aerial vehicle is adopted to correct the image, and the rough matching of the target is carried out according to the longitude and latitude information of the target positioning, so that the target matching efficiency and accuracy are improved;
3) the invention adopts a comprehensive confidence algorithm to perform fusion decision on target detection results under multi-machine cooperative reconnaissance, thereby greatly improving the real-time target detection precision of the unmanned aerial vehicle cluster under a complex scene.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention.
In the drawings:
FIG. 1 is a general flow chart of a target real-time detection algorithm under cooperative reconnaissance of an unmanned aerial vehicle cluster according to the present invention;
FIG. 2 is a flow chart of target detection according to the present invention;
FIG. 3 is a schematic diagram of a deep network architecture according to the present invention;
FIG. 4 is a schematic diagram of a target feature extraction structure according to the present invention;
FIG. 5 is a schematic view of a segmentation visualization of an image component according to the present invention;
fig. 6 is a diagram of a multi-machine cooperative target matching network according to the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terms first, second and the like in the description and in the claims of the present invention are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
"A plurality" means two or more.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone.
The invention adopts a real-time target detection method under multi-machine cooperative reconnaissance to quickly acquire reconnaissance target information in a complex battlefield environment and provide high-precision information for command decisions. Taking visible-light images shot by the photoelectric loads of the unmanned aerial vehicles as an example: first, multiple classes of targets of interest are detected in the reconnaissance image returned by each single machine in the cluster with an improved YOLO-v3 target detection model, yielding target position and class information; next, each image is coarsely corrected from its unmanned aerial vehicle's flight parameters, the multi-machine detections are coarsely matched from the longitude and latitude of the coarse-positioning results, and matching of the multi-machine reconnaissance targets is completed by precise registration using learned, significantly discriminative object features; finally, a comprehensive confidence algorithm is introduced to fuse the information about the same target under multi-machine cooperative reconnaissance, improving the target detection precision. The overall algorithm flow is shown in fig. 1.
The algorithm can be summarized as the following steps:
1) the unmanned aerial vehicle cluster system carries a photoelectric load, collects multi-path visible light battlefield images and returns the images through a link;
2) respectively inputting each path of video image into a target detection framework to obtain scout target information under the single visual angle of each unmanned aerial vehicle;
3) extracting unmanned aerial vehicle parameter information corresponding to the currently processed video image to perform coarse correction on the image, and performing coarse matching on each unmanned aerial vehicle reconnaissance target according to the target coarse positioning longitude and latitude information;
4) extracting the significant features of the unmanned aerial vehicle reconnaissance targets, performing accurate registration of the targets, and determining reconnaissance information of the same target at multiple viewing angles;
5) according to the matching results of the same targets under the shooting visual angles of the multiple unmanned aerial vehicles, a comprehensive confidence algorithm is introduced, the single-machine target classification error condition is corrected, and the target detection precision of the whole unmanned aerial vehicle cluster system is improved.
Thus, the technical solution of the invention is: based on the characteristics of unmanned aerial vehicle cluster reconnaissance images and the shortcomings of the prior art in real-time target detection under cluster reconnaissance, and balancing performance against adaptability, a real-time target detection algorithm under multi-machine cooperative reconnaissance is provided to address the low battlefield target detection precision in unmanned aerial vehicle cluster reconnaissance systems. The main difficulties addressed are:
1) varied scales: aerial remote sensing images are shot from altitudes ranging from hundreds of meters to nearly ten thousand meters, so ground targets differ in size even when they are similar targets, placing high demands on the universality of the target detector;
2) viewing-angle specificity: aerial remote sensing images are basically high-altitude overhead views, while most conventional target detection scenes use a horizontal, ground-level view, so the same target exhibits different patterns, and a detector trained well on a conventional dataset performs poorly on aerial remote sensing images;
3) the images shot by a single unmanned aerial vehicle are limited by illumination, climatic conditions, and the imaging mechanism, and cannot fully reflect battlefield target information;
4) target size: by unmanned aerial vehicle flight experience, targets in reconnaissance images are usually below 50 × 50 pixels, so target features are inconspicuous and classification accuracy is low;
5) high background complexity: the field of view may contain varied backgrounds that strongly interfere with target detection.
The following describes the key technologies involved in the algorithm implementation in detail with reference to the flow chart.
Unmanned aerial vehicle reconnaissance target detection model construction
The invention realizes the target detection under the vision of the unmanned aerial vehicle based on the improved YOLO-v3 network, and integrates the target candidate region selection, the feature extraction, the target positioning and the target identification into a neural network.
Given an input image, the image is first divided into a uniform grid; for each grid cell, several bounding boxes are predicted (including the confidence that each box contains a target and the class probabilities of each box region); after the predicted target windows are obtained, unlikely windows are removed by thresholding, and finally redundant windows are removed with a non-maximum suppression (NMS) algorithm. The whole process is very simple: no intermediate candidate-region extraction is needed, and position and class are determined directly by network regression. The detection method herein comprises three parts: candidate-box generation, multi-scale fusion detection, and target classification; during detection the confidence of some prediction boxes is set to zero to reduce the difficulty of network learning. The detection flow is shown in fig. 2.
Targets under the unmanned aerial vehicle's view are generally weak and the real-time requirements are high; therefore, on the basis of the YOLO-v3 network, the multi-scale feature maps are recombined to improve recognition precision on small targets, and the number of convolution channels is reduced to narrow the network and improve detection speed. The deep network structure is shown schematically in fig. 3.
In feature extraction, a multi-scale feature fusion algorithm is adopted: during fusion, the large-scale feature map is not down-sampled by max pooling before being fused with the small-scale feature map, but is instead scaled and then fused. The four scale feature maps are merged at the last layer to obtain the final high-level features.
Meanwhile, to improve the network's detection speed, the number of convolution channels in the last layers is reduced. Deeper and wider networks generally perform better, but their computation and parameter counts grow, slowing the algorithm, so a trade-off is required. The network is therefore made narrower: in the YOLO-v3 algorithm the last few convolutional layers are wide (for example, 1024 convolution kernels), but once feature fusion has been introduced earlier in the network, so many kernels in these layers are unnecessary, and their number is reduced. The last module has only 512 convolution kernels; compared with the last convolutional layer of the original algorithm, the number of convolution channels is halved, network parameters are greatly reduced, and the detection speed is further improved.
Multi-machine cooperative detection target matching technology
Image target objects shot by multiple cameras are matched by learning the significantly different characteristics of the objects. The constructed deep network uniformly divides the image into several components and extracts features for each component of the image target; different components are trained with different loss functions, and the components of the image object are then re-divided according to the target type so that each component conforms to its actual distribution.
The target feature extraction network constructed by the invention uses common networks such as GoogLeNet, VGG-Net, ResNet, and DenseNet as the basis for feature extraction. The specific network structure is shown in fig. 4.
In the above network structure, the convolutional network takes ResNet-50 as the base network. The global-mean-pooling layer of ResNet-50 is discarded and the extracted features are divided into P horizontal stripes. Global mean pooling over each stripe yields P 2048-dimensional vectors. A 1 × 1 convolution kernel then performs dimensionality reduction, transforming each stripe feature into 256 dimensions. Finally, the feature vector of each stripe is trained with its own n-class softmax multi-classification loss, giving P n-class classifiers. In the testing stage, the P feature vectors are concatenated into a descriptor for matching the target object; the descriptor may be built either from the 2048-dimensional vectors or from the concatenated 256-dimensional vectors.
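The stripe pooling just described can be sketched in plain Python, splitting an H × W × C feature map into P horizontal stripes and global-mean-pooling each one; the 1 × 1 dimensionality reduction and the softmax classifiers are omitted, and the function name is an assumption:

```python
def stripe_pool(feature_map, p=6):
    """feature_map: H x W x C nested lists.  Split the rows into p
    horizontal stripes and global-mean-pool each stripe into a single
    C-dimensional vector."""
    h = len(feature_map)
    c = len(feature_map[0][0])
    stripes = []
    for s in range(p):
        rows = feature_map[s * h // p:(s + 1) * h // p]
        vec = [0.0] * c
        count = 0
        for row in rows:
            for cell in row:
                for j in range(c):
                    vec[j] += cell[j]
                count += 1
        stripes.append([x / count for x in vec])
    return stripes  # p vectors, concatenated at test time into a descriptor
```

In the network itself C would be 2048 (ResNet-50's final channel count) and each vector would then be reduced to 256 dimensions; here the pooling logic alone is shown.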
Because uniform division loses component edge information, feature extraction is not accurate enough. The invention therefore provides a new, soft division method that mainly readjusts the previously uniformly divided image components.
As can be seen from fig. 5, uniform division of the target region can divide the components inaccurately. Parts 1 to 6 in the figure denote the first through sixth components of the divided object; the components actually intersect at the edge contours, so a new method of component re-division is needed. With the method of the invention, contour points at the edge of a component can be reassigned to adjacent components, continuity within each component is maintained, and the performance of the model applying the components is enhanced. By relocating peripheral pixels to more suitable components, the originally uniform components are fine-tuned so that they become coherent.
The invention incorporates this component re-division method into the constructed neural network, as shown in fig. 6. With re-division each component is divided accurately, and with the global mean pooling operation the interior of each component stays continuous and its integrity is ensured, which benefits feature extraction and the target matching process.
During actual use, the images are first coarsely corrected (geometric image correction) according to the flight parameters of the unmanned aerial vehicles, the multi-machine detected targets are coarsely matched from the longitude and latitude of the coarse-positioning results, and the targets shot by the different unmanned aerial vehicles are then precisely registered through the above features to obtain the correspondence.
Multi-machine cooperative detection decision-making technology
During actual flight, detection accuracy can be improved through multi-aircraft cooperative detection. When multiple unmanned aerial vehicles reconnoiter an area from multiple angles, the shooting angles differ but the shooting times are approximately the same, so the detection processes are mutually independent. The following decision model is therefore designed to fuse the multi-aircraft cooperative detection results.
Assume that n aircraft shoot the same area and that there are K classes of targets in the scene. Let f_i(·) denote the detection process of the i-th aircraft and P_i(·) the confidence of its detection result. Then, for a target O, if the detection result of the i-th unmanned aerial vehicle is the k-th class, the result of the i-th unmanned aerial vehicle detecting the target O is f_i(O) = k, with confidence P_i(O).
For the target O, within the same shooting pass, the comprehensive confidence that O is of class k is denoted C_k, the sum of the confidences of the drones whose detected class is k:

C_k = Σ_{i: f_i(O)=k} P_i(O)    (1)
In actual use, the class with the maximum comprehensive confidence for the target O is taken as the final detected class K_O.
At this time, let M denote the set of unmanned aerial vehicles whose detected class of the target O is K_O. For a drone i in M, the probability that its individual detection is wrong is

1 − P_i(O)    (2)
Since each detection process is independent, when the drones in the set M all detect that the target O belongs to the class k, the probability that this joint detection is wrong can be expressed as

P_error = Π_{i∈M} (1 − P_i(O))    (3)
where Π denotes the successive-multiplication (product) symbol. The probability that the detection is correct is then

P = 1 − Π_{i∈M} (1 − P_i(O))    (4)
In practice, referring to the comprehensive confidence corrects single-aircraft classification errors caused by shooting angle or other conditions, improving classification accuracy. Meanwhile, multi-aircraft cooperation raises the final detection accuracy of the image. From single-aircraft target detection experience, the confidence of a single detection is usually about 0.6; if two unmanned aerial vehicles simultaneously detect that a target belongs to the same class, the probability of a correct detection computed by equation (4) is P = 1 − 0.4² = 0.84. If three aircraft simultaneously detect that a target belongs to the same class, the accuracy rises to P = 1 − 0.4³ = 0.936. In practice, the single-aircraft confidence is determined mainly by indexes such as shooting angle and picture definition.
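The decision model above can be sketched in a few lines. This sketch assumes the comprehensive confidence of a class is the sum of the per-drone confidences voting for that class, and computes the correctness probability as one minus the product of the per-drone error probabilities of the drones that voted for the winning class.

```python
from collections import defaultdict

def fuse_detections(detections):
    """Fuse per-drone results for one matched target.

    detections: list of (class_k, confidence) pairs, one per drone.
    Returns (final_class, correctness_probability)."""
    # Comprehensive confidence C_k: sum of confidences voting for class k.
    C = defaultdict(float)
    for k, p in detections:
        C[k] += p
    k_final = max(C, key=C.get)

    # Correctness probability over the drones that voted the winning class:
    # P = 1 - prod(1 - P_i), exploiting independence of the detections.
    p_error = 1.0
    for k, p in detections:
        if k == k_final:
            p_error *= (1.0 - p)
    return k_final, 1.0 - p_error
```

With two drones at confidence 0.6 the fused correctness probability is 1 − 0.4² = 0.84, and with three it is 1 − 0.4³ = 0.936, matching the worked figures in the description.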
In summary, the method for real-time target detection under unmanned aerial vehicle multi-machine cooperative reconnaissance mainly comprises three parts: a depth detection model construction technology, a multi-machine cooperative detection target matching technology, and a multi-machine cooperative detection decision technology.
The depth detection model construction part mainly performs target detection on images shot by a single unmanned aerial vehicle, obtaining the position and category information of battlefield targets in the single-view image. Based on the YOLO-v3 network, the network structure is improved: the multi-scale feature maps are reworked to improve detection of weak and small targets from the unmanned aerial vehicle's viewpoint, and the number of convolution channels is reduced to improve detection speed. A training data set is built from real-time video shot by the unmanned aerial vehicle to train and optimize the deep network model, constructing a single-aircraft target detection framework.
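The single-aircraft detector's post-processing, removing low-probability windows by threshold and then suppressing redundant overlapping windows by non-maximum suppression, can be sketched as follows. The score and IoU thresholds are assumed values, not figures from the patent.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, score_thr=0.25, iou_thr=0.45):
    """Drop windows below score_thr, then keep the highest-scoring box
    in each overlap group (greedy non-maximum suppression)."""
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

Two heavily overlapping windows collapse to the higher-scoring one, a distant window survives, and a window below the score threshold is discarded before suppression even runs.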
The multi-machine cooperative detection target matching part mainly matches the same target across multi-view images. Firstly, the image is coarsely corrected according to the flight parameters of the unmanned aerial vehicle, and the multi-aircraft detection targets are coarsely matched according to the longitude and latitude of the coarse target-positioning results; then accurate registration is performed through learned features in which objects differ significantly, completing target matching across the multiple shooting viewing angles.
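The fine registration step can be sketched as nearest-neighbour matching on appearance feature vectors extracted for each coarsely matched candidate. The cosine-distance measure and the 0.3 threshold are illustrative assumptions, not the patent's specified metric.

```python
import numpy as np

def cosine_distance(f1, f2):
    """1 - cosine similarity of two feature vectors."""
    f1 = f1 / (np.linalg.norm(f1) + 1e-9)
    f2 = f2 / (np.linalg.norm(f2) + 1e-9)
    return 1.0 - float(f1 @ f2)

def fine_register(feats_a, feats_b, dist_thr=0.3):
    """Match each target from drone A to its nearest target from
    drone B in feature space, accepting the pair only when the
    distance is below the (assumed) threshold."""
    matches = []
    for i, fa in enumerate(feats_a):
        j_best = min(range(len(feats_b)),
                     key=lambda j: cosine_distance(fa, feats_b[j]))
        if cosine_distance(fa, feats_b[j_best]) < dist_thr:
            matches.append((i, j_best))
    return matches
```

A target whose feature vector nearly coincides with one candidate and is orthogonal to the other is matched to the former, giving the cross-view correspondence the decision stage consumes.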
The multi-machine cooperative detection decision part mainly performs fusion decisions on the matched multi-view target information. By introducing the comprehensive confidence, single-aircraft classification errors caused by shooting angle or other conditions are corrected, improving classification accuracy.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A real-time target detection method under unmanned aerial vehicle multi-machine cooperative reconnaissance is characterized by comprising the following steps:
step 1: respectively inputting multi-path visible light battlefield images acquired by the unmanned aerial vehicle into the trained unmanned aerial vehicle reconnaissance target detection network to obtain the position and the type information of the unmanned aerial vehicle reconnaissance target under the single visual angle of each unmanned aerial vehicle;
step 2: extracting and processing unmanned aerial vehicle attitude information corresponding to the current visible light battlefield image to carry out coarse correction on the visible light battlefield image, and carrying out coarse matching on each unmanned aerial vehicle reconnaissance target according to the coarse positioning longitude and latitude information of the reconnaissance target of the unmanned aerial vehicle to obtain an approximate unmanned aerial vehicle reconnaissance target after the coarse matching;
and step 3: extracting the significant features of each approximate unmanned aerial vehicle reconnaissance target, performing accurate registration of the unmanned aerial vehicle reconnaissance targets, and determining the same type of unmanned aerial vehicle reconnaissance targets under multiple visual angles;
and 4, step 4: and according to the matching results of the same type of unmanned aerial vehicle reconnaissance targets under the shooting visual angles of the multiple unmanned aerial vehicles, introducing a comprehensive confidence algorithm, fusing the detection results of each unmanned aerial vehicle reconnaissance target under the multi-machine reconnaissance, and determining the final detection result of the unmanned aerial vehicle reconnaissance target.
2. The real-time target detection method according to claim 1, wherein the step 1 specifically comprises:
step 11: the unmanned aerial vehicle cluster system carries a photoelectric load, collects multi-path visible light battlefield images and returns the images through a link;
step 12: respectively inputting the collected multi-path visible light battlefield images into the trained unmanned aerial vehicle reconnaissance target detection network;
step 13: respectively dividing the visible light battlefield image into uniform grids, predicting a plurality of frame information aiming at each grid, and predicting a plurality of target windows according to the frame information;
step 14: removing the target window with low possibility according to the threshold value, and removing the redundant window based on a non-maximum value suppression algorithm;
step 15: and obtaining the position and the type of the unmanned aerial vehicle reconnaissance target.
3. The method for real-time object detection according to claim 2, wherein in step 12, a training data set is established by using the real-time video of the unmanned aerial vehicle, and the unmanned aerial vehicle reconnaissance object detection network is trained and optimized to establish a stand-alone unmanned aerial vehicle reconnaissance object detection network.
4. The real-time target detection method according to claim 2, wherein in step 12, a multi-scale feature fusion algorithm is adopted for feature extraction, a large-scale feature map is scaled and then fused with a small-scale feature map, and four scale feature maps are merged in the last layer to obtain final high-level features, so that feature fusion is realized.
5. The method according to claim 2, wherein in step 13, the frame information includes a confidence that each frame is the target and a probability that each frame region is in a plurality of categories.
6. The real-time target detection method of claim 2, wherein for the unmanned aerial vehicle reconnaissance target detection network, the number of convolution channels in the last layer of the network is reduced, and the number of convolution kernels is 512.
7. The method for detecting the target in real time according to claim 1, wherein the step 2 specifically comprises:
step 21: according to the unmanned aerial vehicle attitude information corresponding to the current visible light battlefield image, image geometric correction is carried out on the visible light battlefield image in a mode of collecting ground control points, and coarse correction is achieved;
step 22: and analyzing longitude and latitude information of the central point of the unmanned aerial vehicle reconnaissance target according to the image multiplexing data downloaded by the unmanned aerial vehicle, calculating the actual distance between the central points of the unmanned aerial vehicle reconnaissance targets according to the longitude and latitude information, and obtaining the approximate unmanned aerial vehicle reconnaissance target after coarse matching when the distance between the unmanned aerial vehicle reconnaissance targets is smaller than a preset threshold value.
8. The method for detecting the target in real time according to claim 1, wherein the step 3 specifically comprises:
step 31: extracting target image areas of the roughly matched approximate unmanned aerial vehicle reconnaissance targets;
step 32: inputting the target image area into a target matching network for feature extraction;
step 33: and calculating differences among the approximate unmanned aerial vehicle reconnaissance targets through the extracted features to perform accurate registration, and determining the approximate unmanned aerial vehicle reconnaissance targets as the same type of unmanned aerial vehicle reconnaissance targets under multiple visual angles when the differences are smaller than a threshold value.
9. The method for detecting the target in real time according to claim 1, wherein the step 4 specifically comprises:
step 41: extracting the detection category of each unmanned aerial vehicle reconnaissance target;
step 42: according to the matched same type of unmanned aerial vehicle reconnaissance targets, obtaining a detection result of each unmanned aerial vehicle reconnaissance target under multi-machine reconnaissance;
step 43: and introducing comprehensive confidence, fusing detection results of each unmanned aerial vehicle reconnaissance target under multi-machine reconnaissance, and determining a final detection result of the unmanned aerial vehicle reconnaissance target.
10. The method for real-time target detection according to claim 9, wherein the step 43 specifically comprises:
assuming that n unmanned aerial vehicles shoot the same area, K types of targets are shared in the scene, and f is used in the detection process of the ith aircrafti(. the confidence of the detection result is represented by Pi(. h), then for the target O, if the detection target result of the ith unmanned aerial vehicle is the kth type, the result f of the ith unmanned aerial vehicle detecting the target OiK, with confidence Pi(O), n is more than or equal to 2 and is a positive integer, K is more than or equal to 1 and is a positive integer, K belongs to K,
for the object O, in the same shooting process, C is used for detecting the comprehensive confidence coefficient that the object O is k typeskIndicate that
Figure FDA0002846425530000031
taking the class with the maximum comprehensive confidence of the target O as the final detected class K_O.
CN202011511072.4A 2020-12-18 2020-12-18 Real-time target detection method under unmanned aerial vehicle multi-machine cooperative reconnaissance Pending CN112488066A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011511072.4A CN112488066A (en) 2020-12-18 2020-12-18 Real-time target detection method under unmanned aerial vehicle multi-machine cooperative reconnaissance

Publications (1)

Publication Number Publication Date
CN112488066A true CN112488066A (en) 2021-03-12

Family

ID=74914947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011511072.4A Pending CN112488066A (en) 2020-12-18 2020-12-18 Real-time target detection method under unmanned aerial vehicle multi-machine cooperative reconnaissance

Country Status (1)

Country Link
CN (1) CN112488066A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633215A (en) * 2017-09-06 2018-01-26 南京小网科技有限责任公司 The discriminating method of small micro- fuzzy object in a kind of high-altitude video monitoring
CN108921875A (en) * 2018-07-09 2018-11-30 哈尔滨工业大学(深圳) A kind of real-time traffic flow detection and method for tracing based on data of taking photo by plane
WO2019033747A1 (en) * 2017-08-18 2019-02-21 深圳市道通智能航空技术有限公司 Method for determining target intelligently followed by unmanned aerial vehicle, unmanned aerial vehicle and remote controller
CN109949229A (en) * 2019-03-01 2019-06-28 北京航空航天大学 A kind of target cooperative detection method under multi-platform multi-angle of view
CN109945867A (en) * 2019-03-04 2019-06-28 中国科学院深圳先进技术研究院 Paths planning method, device and the computer equipment of unmanned plane
CN111079556A (en) * 2019-11-25 2020-04-28 航天时代飞鸿技术有限公司 Multi-temporal unmanned aerial vehicle video image change area detection and classification method
WO2020181685A1 (en) * 2019-03-12 2020-09-17 南京邮电大学 Vehicle-mounted video target detection method based on deep learning
CN112017225A (en) * 2020-08-04 2020-12-01 华东师范大学 Depth image matching method based on point cloud registration


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DONG, Hongyi: "Deep Learning with PyTorch: Object Detection in Practice" (深度学习之PyTorch物体检测实战), 31 January 2020, China Machine Press, pages 180-181 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884692A (en) * 2021-03-15 2021-06-01 中国电子科技集团公司第十一研究所 Distributed airborne cooperative reconnaissance photoelectric system and unmanned aerial vehicle system
CN112884692B (en) * 2021-03-15 2023-06-23 中国电子科技集团公司第十一研究所 Distributed airborne collaborative reconnaissance photoelectric system and unmanned aerial vehicle system
CN112906658A (en) * 2021-03-30 2021-06-04 航天时代飞鸿技术有限公司 Lightweight automatic detection method for ground target investigation by unmanned aerial vehicle
CN113344408A (en) * 2021-06-21 2021-09-03 成都民航空管科技发展有限公司 Processing method for multi-scale situation perception process of civil aviation traffic control operation
CN113673444A (en) * 2021-08-19 2021-11-19 清华大学 Intersection multi-view target detection method and system based on angular point pooling
CN113949826A (en) * 2021-09-28 2022-01-18 航天时代飞鸿技术有限公司 Unmanned aerial vehicle cluster cooperative reconnaissance method and system under limited communication bandwidth condition
CN115294484A (en) * 2022-09-29 2022-11-04 南京航空航天大学 Method for detecting cooperative target of multiple unmanned aerial vehicles


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination