CN114743116A - Barracks patrol scene-based unattended special load system and method - Google Patents

Barracks patrol scene-based unattended special load system and method

Info

Publication number
CN114743116A
CN114743116A
Authority
CN
China
Prior art keywords
processing module
unattended
intelligent processing
load system
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210401664.3A
Other languages
Chinese (zh)
Inventor
李威
刘铁军
陈诚
尤冰冰
岳向泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honeycomb Aerospace Tecnologies Beijing Co ltd
Original Assignee
Honeycomb Aerospace Tecnologies Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honeycomb Aerospace Tecnologies Beijing Co ltd filed Critical Honeycomb Aerospace Tecnologies Beijing Co ltd
Priority to CN202210401664.3A priority Critical patent/CN114743116A/en
Publication of CN114743116A publication Critical patent/CN114743116A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0073 Surveillance aids
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G5/00 Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/06 Traffic control systems for aircraft, e.g. air-traffic control [ATC] for control when on the ground

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an unattended special load system and method based on a barracks patrol scene. The load system comprises a video sensor, an intelligent processing module and a ground monitoring end. The video sensor acquires barracks video image information and sends it to the intelligent processing module; the intelligent processing module receives and processes the video image information and transmits the processing results to the flight control system; the intelligent processing module also pushes the processed video stream to the ground monitoring end; and the ground monitoring end receives the video stream and monitors and controls the unmanned aerial vehicle system. The system is convenient to use, fast in processing and good in real-time performance, and improves the accuracy of target prediction.

Description

Barracks patrol scene-based unattended special load system and method
Technical Field
The invention belongs to the field of unmanned aerial vehicles, and particularly relates to an unattended special load system and method based on a barracks patrol scene.
Background
Existing approaches to barracks patrol include:
1. Manual inspection: for barracks whose infrastructure is not fully built, the patrol can only be completed manually. This form of inspection accurately covers each inspection point, but the observation range is limited and timeliness is poor.
2. Fixed-view monitoring systems: in a typical barracks, cameras with fixed viewing angles are installed around the perimeter, all video is gathered over a network at a video processing center, and the center runs a recognition algorithm on the video content. Such systems have good real-time performance and can quickly identify suspicious targets, but they are easily interfered with and leave blind spots.
3. Conventional unmanned-aerial-vehicle monitoring: a conventional drone carries a conventional electro-optical payload equipped with an image-transmission link, relays what it observes to ground personnel, and surveillance is completed manually. This method is accurate but poorly timed, inherits the maintenance burden of conventional drones, and is complex to operate.
Disclosure of Invention
Technical problem to be solved
The embodiment of the invention provides an unattended special load system and method based on a barracks patrol scene that are convenient to use, fast in processing and good in real-time performance, and that improve the accuracy of target prediction.
(II) technical scheme
In a first aspect, an embodiment of the present invention provides an unattended dedicated load system based on a barracks tour scene, including:
the system comprises a video sensor, an intelligent processing module and a ground monitoring end;
the video sensor is used for acquiring barracks video image information and sending it to the intelligent processing module; the intelligent processing module receives and processes the video image information and transmits the processing results to the flight control system; the intelligent processing module is also used for pushing the processed video stream to the ground monitoring end; and the ground monitoring end is used for receiving the video stream and monitoring and controlling the unmanned aerial vehicle system.
The intelligent processing module is connected with the ground monitoring end through the image transmission equipment.
Wherein, the algorithm step of the intelligent processing module comprises:
(1) Image segmentation input
Divide the input image into an S × S grid of unit cells;
(2) Segmented frame prediction
Process the grid map along two paths;
The first path: the confidence-region grid division step, in which two prediction boxes are given for each unit cell; each given box is centred on the cell's centre point with a self-defined size. Each cell predicts B bounding boxes, and each predicted box has four coordinates and one confidence, so the final prediction is an S × S × (B·5 + C) tensor;
The second path: according to the network structure of the full convolutional neural network model, a confidence-region probability division map is produced;
(3) Result output
Combine the probability division map with the confidence-region grid map and output the final result, which contains the position and category information of the target.
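The size of the prediction output described above can be sanity-checked with a few lines (a sketch; the values S = 7, B = 2 and C = 20 are those given later in the detailed description):

```python
# Size of the prediction tensor S x S x (B*5 + C), using the values
# stated in the detailed description: S = 7 grid cells per side,
# B = 2 boxes per cell, C = 20 classes.
S, B, C = 7, 2, 20

per_cell = B * 5 + C      # 4 coordinates + 1 confidence per box, plus C class scores
total = S * S * per_cell  # elements in the full output tensor

print(per_cell)  # 30
print(total)     # 1470
```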
The intelligent processing module adopts a single full convolutional neural network model. Inside the network, 3×3 convolution kernels are used for feature extraction and abstraction, 1×1 convolution kernels are used for cascaded cross-channel parametric pooling, and the activation function is LeakyReLU: max(x, 0.1x).
The algorithm of the intelligent processing module replaces the 1×1 and 3×3 modules of the original CNN with simplified residual basic modules, and a deeper Darknet-53 network is constructed as the feature-extraction backbone.
In the algorithm of the intelligent processing module, residual modules are used in Darknet-53 to add the convolved output feature map back onto the input.
In the algorithm of the intelligent processing module, based on the current feature map, the output features of a given convolutional layer are spliced into a new feature map using an upsampling method.
In a second aspect, the invention provides an unattended patrol method based on a barracks patrol scene, which comprises the following steps:
inputting task information; the unmanned aerial vehicle takes off from the automatic hangar and ascends to a specified position;
after the unmanned aerial vehicle reaches the designated height, the task execution flow begins and automatic patrol starts;
the load system acquires image information; by processing it, the intelligent processing module determines whether a suspicious target is observed and feeds the result back to the control system and the ground control end in real time; if a suspicious target is found, the load system enters an automatic tracking or automatic orbiting state and simultaneously issues an early warning to the ground;
after the above processes finish, the system checks mission completion: if the mission is unfinished and can continue, the previous task resumes; if the mission is finished or cannot continue, a return instruction is issued;
after the unmanned aerial vehicle returns to a position above the automatic hangar, it descends into the hangar and is charged and stored.
In a third aspect, the invention provides an unattended system, which comprises any one of the above-mentioned unattended dedicated load systems based on the camp patrol scene.
In a fourth aspect, the invention provides an unmanned aerial vehicle, which comprises any one of the above-mentioned unattended special load systems based on the barracks patrol scene.
(III) advantageous effects
The unattended special load system and the method based on the barracks tour scene in the embodiment of the invention have the following beneficial effects:
the invention relates to an unattended special load system based on a barracks tour scene, which comprises: the system comprises a video sensor, an intelligent processing module and a ground monitoring end; the video sensor is used for acquiring barracks video image information and sending the acquired video image information to the intelligent processing module; the intelligent processing module receives and processes the video image information and transmits the video processing information to the flight control system; the intelligent processing module is also used for pushing the processed video stream to a ground monitoring end; and the ground monitoring end is used for receiving the video stream, monitoring and controlling the unmanned aerial vehicle system. The method is convenient to use, high in processing speed and good in real-time performance, and improves the accuracy of target prediction.
Drawings
Fig. 1 is a schematic structural diagram of an unattended special load system based on a barracks tour scene according to an embodiment of the present invention;
fig. 2 is an information transfer diagram of an unattended dedicated load system based on a barracks tour scene in the embodiment of the present invention;
FIG. 3 is a schematic diagram of an algorithm of an unattended dedicated load system based on a barracks patrol scenario according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a network structure of a full convolution neural network model according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the algorithm steps in an embodiment of the present invention;
fig. 6 is a schematic view of a work flow of the unattended dedicated load system based on a barracks patrol scene in the embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples.
In the following description, the terms "first" and "second" are used for descriptive purposes only and do not indicate or imply relative importance. The following description provides embodiments of the invention, which may be combined with or substituted for one another, and the invention is thus to be construed as embracing all possible combinations of the same and/or different embodiments described. Thus, if one embodiment includes features A, B, and C and another embodiment includes features B and D, the invention should also be construed as including embodiments containing all other possible combinations of A, B, C, and D, even though such embodiments may not be explicitly recited in the following text.
The following description provides examples, and does not limit the scope, applicability, or examples set forth in the claims. Changes may be made in the function and arrangement of elements described without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For example, the described methods may be performed in an order different than the order described, and various steps may be added, omitted, or combined. Furthermore, features described with respect to some examples may be combined into other examples.
The invention provides an unattended special load system based on a barracks patrol scene. Fig. 1 is a structural schematic diagram of the load system. As shown in fig. 1, the system comprises a camera and an intelligent processing module and is carried on an unmanned aerial vehicle, where: A is the intelligent processing module, which receives video image information and sends processing results to the flight control system; B is the video sensor, which acquires video information; C is the interface between the camera and the intelligent module; D is the flight control system of the aircraft, which performs subsequent flight operations based on the processing information received from the intelligent processing module; E is the interface between the intelligent processing module and the flight control system, implemented as a serial port; the dotted line F is the unmanned aerial vehicle body; G/H are the video-stream transceiving devices, i.e. the image transmission equipment, used to transmit image information; I is the ground monitoring end, used to monitor and control the unmanned aerial vehicle system; and J/K/L are general video interfaces.
As shown in fig. 2, the information transmission route of the load system of the present invention includes three parts:
(1) Between the intelligent processing module and the video sensor: the hardware medium for this part of the transmission is a common video interface, used to transmit the video sensor's image information to the intelligent processing module.
(2) Between the intelligent processing module and the flight control system: used to transmit video processing information to the flight control system and to guide it in operations such as target tracking and orbiting flight.
(3) Between the intelligent processing module and the ground monitoring end: the transmission medium for this part is the image transmission system, used to push the processed video stream to the ground monitoring end.
In some embodiments, the dedicated load algorithm of the invention is an end-to-end target recognition network using a single full convolutional neural network model. As shown in fig. 3, the workflow is: first, the resolution of the input image is unified to 448×448; then the image is fed into the full convolutional network; finally, the output stage processes the network result to obtain the position and category information of the target. Compared with two-stage algorithms such as R-CNN (Region-CNN), this structure is simpler and more uniform, faster to process, and easier to accelerate in hardware. Meanwhile, both training and inference are end-to-end, so a usable network can be obtained directly from an image training set.
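The resolution-unification step can be sketched as follows (a minimal stand-in using nearest-neighbour index selection; a real pipeline would use an interpolating resize from an image library):

```python
import numpy as np

def resize_to_448(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour resize of an H x W x 3 frame to the 448 x 448
    input resolution the network expects. This is a toy stand-in for
    the camera-side preprocessing, not the actual implementation."""
    h, w = img.shape[:2]
    rows = np.arange(448) * h // 448  # source row index for each output row
    cols = np.arange(448) * w // 448  # source column index for each output column
    return img[rows][:, cols]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # e.g. a 720p camera frame
print(resize_to_448(frame).shape)  # (448, 448, 3)
```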
In the aspect of network structure, as shown in fig. 4, the algorithm processes the image by using a uniform convolutional neural network sequence and obtains a feature sequence, and then performs position regression and category prediction by using a shallow convolutional neural network.
Inside the network, 3×3 convolution kernels are mainly used for feature extraction and abstraction, 1×1 convolution kernels are used for cascaded cross-channel parametric pooling, and the activation function is LeakyReLU: max(x, 0.1x). Note that the last layer of the network replaces this activation with a linear activation function. The network finally outputs a tensor of size S × S × (B·5 + C) = 7 × 7 × 30, where for each of the S × S grid cells the first 20 elements in the third dimension represent the confidences of the 20 classes, elements 21-22 are the confidences of the two bounding boxes, and the last 8 elements are the coordinates of the two bounding boxes.
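The activation function can be written directly from the definition in the text (a minimal numeric sketch):

```python
import numpy as np

def leaky_relu(x: np.ndarray) -> np.ndarray:
    """LeakyReLU exactly as defined in the text: max(x, 0.1*x).
    Negative inputs are scaled by 0.1 rather than clamped to zero,
    which keeps a small gradient on the negative side."""
    return np.maximum(x, 0.1 * x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
print(leaky_relu(x))  # values: -0.2, -0.05, 0.0, 3.0
```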
As shown in fig. 5, the algorithm steps are as follows:
(1) Image segmentation input
The input image is divided into an S × S grid of unit cells.
(2) Segmented frame prediction
The grid map is processed along two paths.
The first path: as shown in fig. 4, the confidence-region grid division step. The algorithm gives two prediction boxes for each unit cell; each given box is centred on the cell's centre point with a self-defined size. Each cell predicts B bounding boxes, and each predicted box has four coordinates and one confidence, so the final prediction is an S × S × (B·5 + C) tensor. Here B = 2, meaning two preselected boxes; C = 20, indicating 20 classes; and S = 7.
The second path: according to the network of fig. 4, a confidence-region probability division map is produced.
(3) Result output
The probability division map is combined with the confidence-region grid map, and the final result is output.
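The two-path decode above can be sketched as a loop over the S × S × (B·5 + C) tensor. This is an illustrative sketch only: the per-cell memory layout (C class scores, then B confidences, then B·4 coordinates) and the `conf_thresh` parameter are assumptions, not taken from the source.

```python
import numpy as np

S, B, C = 7, 2, 20  # values stated in the text

def decode(pred: np.ndarray, conf_thresh: float = 0.5):
    """Turn an S x S x (B*5 + C) prediction tensor into a list of
    (cell_row, cell_col, box_coords, class_id) tuples, keeping only
    boxes whose confidence clears the threshold."""
    detections = []
    for r in range(S):
        for c in range(S):
            cell = pred[r, c]
            cls_scores = cell[:C]               # class confidences
            confs = cell[C:C + B]               # one confidence per box
            coords = cell[C + B:].reshape(B, 4)  # four coordinates per box
            for b in range(B):
                if confs[b] >= conf_thresh:
                    detections.append((r, c, coords[b], int(cls_scores.argmax())))
    return detections

pred = np.zeros((S, S, B * 5 + C))
pred[3, 4, C] = 0.9   # one confident box in cell (3, 4)
pred[3, 4, 5] = 1.0   # strongest class score: class 5
dets = decode(pred)
print(len(dets), dets[0][0], dets[0][1], dets[0][3])  # 1 3 4 5
```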
The load is mainly characterized as follows:
1. algorithm part
(1) Feature extraction network
The YOLOv1 network borrows the GoogLeNet classification network structure, but in feature extraction it uses 1×1 and 3×3 convolutions to reduce the dimensionality of high-dimensional information, thereby integrating information across high and low channels within the network. YOLOv2 borrows the multi-scale feature-fusion method of the SSD network in its feature-extraction backbone and proposes the Darknet-19 network to improve extraction of fine-grained features from the image. Because YOLOv2 performs feature fusion only in the later layers and outputs a fixed-size feature map, the fused high- and low-level semantics easily lose most fine-grained information, so YOLOv2 detects dense small targets poorly.
While keeping the detection speed, the present algorithm replaces the 1×1 and 3×3 modules of the original CNN with simplified residual basic modules, and constructs a deeper Darknet-53 network as its feature-extraction backbone.
(2) Residual mechanism
The Darknet-19 feature extractor of YOLOv2 adopts a straight-through network structure in the style of GoogLeNet and VGG. If convolutions were simply stacked onto Darknet-19 to deepen the network in order to extract more useful feature information, vanishing or exploding gradients would easily arise during training. Borrowing the idea of ResNet, the present algorithm uses residual modules in Darknet-53 to add the convolved output feature map back onto the input, resolving the tension between network depth and vanishing gradients.
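The residual superposition can be sketched in a few lines. The single-channel 3×3 "same" convolution below is a toy stand-in for the convolution stack inside a Darknet-53 block, not the actual implementation:

```python
import numpy as np

def conv3x3_same(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Toy single-channel 3x3 convolution with zero padding, so the
    output has the same spatial size as the input."""
    padded = np.pad(x, 1)
    out = np.zeros(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (padded[i:i + 3, j:j + 3] * k).sum()
    return out

def residual_block(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """ResNet-style skip connection: the convolved feature map is added
    back onto the input, as the text describes for Darknet-53. The
    gradient can then flow through the identity path unchanged."""
    return x + conv3x3_same(x, k)

x = np.ones((4, 4))
k = np.zeros((3, 3))  # a zero kernel makes the block an identity mapping
print(np.array_equal(residual_block(x, k), x))  # True
```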
(3) Characteristic diagram
Before the feature map is output, the algorithm borrows the FPN method referenced from SSD and combines FPN with upsampling (UpSampling), mitigating the loss of fine-grained target feature information when multiple high-level and low-level feature maps are fused. The basic idea of the method is: based on the current feature map, the output features of a given convolutional layer are upsampled and spliced into a new feature map. This structure enriches the features of fine-grained targets and helps the algorithm improve the accuracy of target prediction.
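The upsample-and-splice fusion can be sketched as follows. The channel counts (256, 128) and spatial sizes (13, 26) are illustrative assumptions, not values from the source:

```python
import numpy as np

def upsample2x(fmap: np.ndarray) -> np.ndarray:
    """Nearest-neighbour 2x upsampling of a C x H x W feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse(deep: np.ndarray, shallow: np.ndarray) -> np.ndarray:
    """Upsample the deeper (coarser) map and concatenate it with the
    shallower map along the channel axis: the FPN-style splicing the
    text describes for preserving fine-grained target features."""
    return np.concatenate([upsample2x(deep), shallow], axis=0)

deep = np.zeros((256, 13, 13))     # coarse but semantically rich features
shallow = np.zeros((128, 26, 26))  # fine-grained features from an earlier layer
print(fuse(deep, shallow).shape)   # (384, 26, 26)
```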
As shown in fig. 6, the dedicated load system is part of the unattended system, with the hardware fixedly mounted on the unmanned aerial vehicle. The workflow of the whole system is as follows:
(1) inputting task information; the unmanned aerial vehicle takes off from the automatic hangar and ascends to a specified position;
(2) after reaching the designated height, the task execution flow begins and automatic patrol starts;
(3) the load system acquires image information; by processing it, the intelligent processing module determines whether a suspicious target is observed and feeds the result back to the control system and the ground control end in real time; if a suspicious target is found, the load system enters an automatic tracking or automatic orbiting state and simultaneously issues an early warning to the ground;
(4) after the above processes finish, the system checks mission completion: if the mission is unfinished and can continue, the previous task resumes; if the mission is finished or cannot continue, a return instruction is issued;
(5) after the unmanned aerial vehicle returns to a position above the automatic hangar, it descends into the hangar and performs actions such as charging and storage.
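The five-step workflow above amounts to a small state machine. The sketch below is illustrative; the phase names and transition function are assumptions introduced for clarity, not from the source:

```python
from enum import Enum, auto

class Phase(Enum):
    TAKEOFF = auto()
    PATROL = auto()
    TRACK = auto()
    RETURN = auto()
    DOCKED = auto()

def step(phase: Phase, suspicious: bool, mission_done: bool) -> Phase:
    """One transition of the patrol workflow: take off, patrol, switch to
    tracking when a suspicious target is seen, return when the mission
    ends, then dock in the hangar for charging and storage."""
    if phase is Phase.TAKEOFF:
        return Phase.PATROL
    if phase in (Phase.PATROL, Phase.TRACK):
        if mission_done:
            return Phase.RETURN
        return Phase.TRACK if suspicious else Phase.PATROL
    if phase is Phase.RETURN:
        return Phase.DOCKED  # descend into the hangar, charge, store
    return phase

p = Phase.TAKEOFF
for suspicious, done in [(False, False), (True, False), (False, True), (False, True)]:
    p = step(p, suspicious, done)
print(p.name)  # DOCKED
```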
The embodiment of the invention provides an unattended special load system and method based on a barracks tour scene, which are convenient to use, high in processing speed and good in real-time performance, and improve the accuracy of target prediction.
In the present invention, the embodiment of the unattended patrol method based on the barracks patrol scene is basically similar to the embodiment of the unattended special load system based on the barracks patrol scene; for related points, refer to the description of the system embodiment.
It is clear to a person skilled in the art that the solution according to the embodiments of the invention can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, an FPGA (Field-Programmable Gate Array), an IC (Integrated Circuit), or the like.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps performed by the unattended special load system based on the barracks patrol scene are realized. The computer-readable storage medium may include, but is not limited to, any type of disk, including floppy disks, optical disks, DVDs, CD-ROMs, microdrives and magneto-optical disks, ROM, RAM, EPROM, EEPROM, DRAM, VRAM, flash memory devices, magnetic or optical cards, nanosystems (including molecular memory ICs), or any type of media or device suitable for storing instructions and/or data.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
All functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An unattended special load system based on a barracks patrol scene, characterized by comprising: a video sensor, an intelligent processing module and a ground monitoring end;
the video sensor is used for acquiring barracks video image information and sending it to the intelligent processing module; the intelligent processing module receives and processes the video image information and transmits the processing results to the flight control system; the intelligent processing module is also used for pushing the processed video stream to the ground monitoring end; and the ground monitoring end is used for receiving the video stream and monitoring and controlling the unmanned aerial vehicle system.
2. The unattended special load system based on a barracks patrol scene according to claim 1, wherein the intelligent processing module is connected with the ground monitoring end through image transmission equipment.
3. The unattended special load system based on a barracks patrol scene according to claim 2, wherein the algorithm steps of the intelligent processing module comprise:
(1) Image segmentation input
Divide the input image into an S × S grid of unit cells;
(2) Segmented frame prediction
Process the grid map along two paths;
The first path: the confidence-region grid division step, in which two prediction boxes are given for each unit cell; each given box is centred on the cell's centre point with a self-defined size. Each cell predicts B bounding boxes, and each predicted box has four coordinates and one confidence, so the final prediction is an S × S × (B·5 + C) tensor;
The second path: according to the network structure of the full convolutional neural network model, a confidence-region probability division map is produced;
(3) Result output
Combine the probability division map with the confidence-region grid map and output the final result, which contains the position and category information of the target.
4. The unattended special load system based on a barracks patrol scene according to any one of claims 1 to 3, wherein the intelligent processing module adopts a single full convolutional neural network model; inside the network, 3×3 convolution kernels are used for feature extraction and abstraction, 1×1 convolution kernels are used for cascaded cross-channel parametric pooling, and the activation function is LeakyReLU: max(x, 0.1x).
5. The unattended special load system based on a barracks patrol scene according to any one of claims 1 to 3, wherein the algorithm of the intelligent processing module replaces the 1×1 and 3×3 modules of the original CNN with simplified residual basic modules, and a deeper Darknet-53 network is constructed as the feature-extraction backbone.
6. The unattended special load system based on a barracks patrol scene according to any one of claims 1 to 3, wherein the algorithm of the intelligent processing module uses a residual module in Darknet-53 to add the convolved output feature map onto the input.
7. The unattended special load system based on a barracks patrol scene according to any one of claims 1 to 3, wherein the algorithm of the intelligent processing module, based on the current feature map, splices the output features of a given convolutional layer into a new feature map using an upsampling method.
8. An unattended patrol method based on a barracks patrol scene, characterized by comprising the following steps:
inputting task information; the unmanned aerial vehicle takes off from the automatic hangar and ascends to a specified position;
after the unmanned aerial vehicle reaches the designated height, the task execution flow begins and automatic patrol starts;
the load system acquires image information; by processing it, the intelligent processing module determines whether a suspicious target is observed and feeds the result back to the control system and the ground control end in real time; if a suspicious target is found, the load system enters an automatic tracking or automatic orbiting state and simultaneously issues an early warning to the ground;
after the above processes finish, the system checks mission completion: if the mission is unfinished and can continue, the previous task resumes; if the mission is finished or cannot continue, a return instruction is issued;
after the unmanned aerial vehicle returns to a position above the automatic hangar, it descends into the hangar and is charged and stored.
9. An unattended system, characterized in that it comprises the unattended dedicated load system based on a barracks patrol scene according to any one of claims 1 to 7.
10. An unmanned aerial vehicle, characterized in that it comprises the unattended dedicated load system based on a barracks patrol scene according to any one of claims 1 to 7.
CN202210401664.3A 2022-04-18 2022-04-18 Barracks patrol scene-based unattended special load system and method Pending CN114743116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210401664.3A CN114743116A (en) 2022-04-18 2022-04-18 Barracks patrol scene-based unattended special load system and method

Publications (1)

Publication Number Publication Date
CN114743116A true CN114743116A (en) 2022-07-12

Family

ID=82281391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210401664.3A Pending CN114743116A (en) 2022-04-18 2022-04-18 Barracks patrol scene-based unattended special load system and method

Country Status (1)

Country Link
CN (1) CN114743116A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457411A (en) * 2022-08-23 2022-12-09 中国航空油料集团有限公司 Unmanned inspection method and device for aviation oil pipeline and aviation oil safe operation and maintenance system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108776491A (en) * 2018-05-23 2018-11-09 广东容祺智能科技有限公司 Unmanned plane multiple target monitoring system and monitoring method based on dynamic image identification
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN111126359A (en) * 2019-11-15 2020-05-08 西安电子科技大学 High-definition image small target detection method based on self-encoder and YOLO algorithm
CN111832379A (en) * 2019-10-15 2020-10-27 中国石油化工股份有限公司 Unmanned aerial vehicle real-time video detection system based on convolutional neural network
CN112215074A (en) * 2020-09-10 2021-01-12 鲁东大学 Real-time target identification and detection tracking system and method based on unmanned aerial vehicle vision
CN112418117A (en) * 2020-11-27 2021-02-26 北京工商大学 Small target detection method based on unmanned aerial vehicle image
CN112489032A (en) * 2020-12-14 2021-03-12 北京科技大学 Unmanned aerial vehicle-mounted small target detection and positioning method and system under complex background
CN112634369A (en) * 2020-12-26 2021-04-09 西安科锐盛创新科技有限公司 Space and or graph model generation method and device, electronic equipment and storage medium
CN112906485A (en) * 2021-01-25 2021-06-04 杭州易享优智能科技有限公司 Visual impairment person auxiliary obstacle perception method based on improved YOLO model
CN113823056A (en) * 2021-09-26 2021-12-21 中电科西北集团有限公司 Unmanned aerial vehicle forest fire prevention monitoring system based on remote monitoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Guanghong et al.: "无人***基础", China Machine Press, pages: 216 - 246 *

Similar Documents

Publication Publication Date Title
US10699125B2 (en) Systems and methods for object tracking and classification
CN109255286B (en) Unmanned aerial vehicle optical rapid detection and identification method based on deep learning network framework
WO2020215241A1 (en) Real-time object detection method deployed in platform having limited computing resource
CN112991487B (en) System for multithreading real-time construction of orthophoto semantic map
CN114255407B (en) High-resolution-based anti-unmanned aerial vehicle multi-target identification and tracking video detection method
CN111339893A (en) Pipeline detection system and method based on deep learning and unmanned aerial vehicle
CN106851229B (en) Security and protection intelligent decision method and system based on image recognition
Liu et al. Deep network for road damage detection
CN112180903A (en) Vehicle state real-time detection system based on edge calculation
US20230017578A1 (en) Image processing and model training methods, electronic device, and storage medium
CN114743116A (en) Barracks patrol scene-based unattended special load system and method
Kataev et al. Method to estimate pedestrian traffic using convolutional neural network
CN114723944A (en) Image analysis method, storage medium, and electronic device
CN116052026A (en) Unmanned aerial vehicle aerial image target detection method, system and storage medium
KR20210024862A (en) Object detecting apparatus detecting object using hierarchical pyramid and object detecting method of the same
CN114419444A (en) Lightweight high-resolution bird group identification method based on deep learning network
CN114218999A (en) Millimeter wave radar target detection method and system based on fusion image characteristics
CN117557957A (en) Airport scene aircraft target detection method based on improved YOLOv5
CN112926415A (en) Pedestrian avoiding system and pedestrian monitoring method
CN117333807A (en) Cloud edge cooperative target locking system for air-ground integrated unmanned aerial vehicle group
CN116563553A (en) Unmanned aerial vehicle image segmentation method and system based on deep learning
CN116580324A (en) Yolov 5-based unmanned aerial vehicle ground target detection method
Liu et al. Abnormal behavior analysis strategy of bus drivers based on deep learning
CN116242833A (en) Airport runway disease detection and early warning system
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination