CN111178743A - Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster


Info

Publication number
CN111178743A
CN111178743A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
cooperative
observation
autonomous
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911357480.6A
Other languages
Chinese (zh)
Inventor
闫野
查顺考
刘凯燕
姜志杰
邓宝松
谢良
印二威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center, National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority to CN201911357480.6A priority Critical patent/CN111178743A/en
Publication of CN111178743A publication Critical patent/CN111178743A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0631 Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/10 Office automation; Time management
    • G06Q10/103 Workflow collaboration or project management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Development Economics (AREA)
  • Educational Administration (AREA)
  • Game Theory and Decision Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for autonomous cooperative observation and cooperative operation of an unmanned aerial vehicle cluster, comprising the following steps: detecting and identifying multiple operation targets and establishing operation tasks; distributing the operation tasks based on the positions of the unmanned aerial vehicles in the cluster and the positions of the operation targets; and controlling each unmanned aerial vehicle to execute its assigned operation task. The method requires no manual intervention once the operation targets of the cluster have been set; it is highly autonomous, simple to operate, and saves both manpower and time. It can support rescue work in dangerous scenes such as fires and earthquakes, and can assist unmanned aerial vehicle formations in military operations and similar applications.

Description

Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster
Technical Field
The invention relates to the field of machine learning, and in particular to a cooperative observation and cooperative operation technology for unmanned aerial vehicle clusters.
Background
The invention arises from the problems of autonomous task allocation and navigation for unmanned aerial vehicle clusters. Autonomous cooperative observation and cooperative operation of an unmanned aerial vehicle cluster plays an important role in rescue work in dangerous scenes such as fires and earthquakes, in military battlefield environment analysis, and in formation operations of unmanned aerial vehicle clusters. Autonomy and high-precision navigation are the key factors: the cluster must identify operation targets, autonomously distribute tasks, and autonomously navigate to the operation targets to complete its tasks with high precision. However, current unmanned aerial vehicle cluster operation requires a degree of manual intervention; its target detection and identification precision is low, its overall efficiency poor, and its scene adaptability weak.
Detecting and identifying operation targets, and performing task allocation and path planning autonomously and with high precision, are the core problems of autonomous unmanned aerial vehicle cluster operation. At present, operation targets are identified manually or from single-camera images, and task allocation and path planning also require manual intervention. These methods face the following main difficulties: manual identification or single-camera image identification is inefficient and imprecise, and operation targets are easily missed; task allocation and path planning rely on too much manual intervention, so efficiency is low and scene adaptability is weak. The invention therefore starts from improving the autonomous cooperativity and precision of the unmanned aerial vehicle cluster: building on robust automatic camera calibration and reasonable shooting methods, it develops neural network mapping models based on deep learning and reinforcement learning, improves autonomy and scene adaptability through machine learning, and fuses images from multiple unmanned aerial vehicles to improve precision. When the technology is used, only the operation targets in a specific scene need to be set at the start; no further manual intervention is required. The process automatically coordinates the unmanned aerial vehicle cluster to detect and identify the operation targets, carries out task distribution automatically, and provides autonomous path planning navigation for each unmanned aerial vehicle, so that the cluster cooperatively completes the preset operation tasks.
Disclosure of Invention
In view of the problem of autonomous cooperative operation of unmanned aerial vehicle clusters, the invention aims to provide a technology for cooperative observation and cooperative operation of an unmanned aerial vehicle cluster based on deep neural networks. Operators set specific operation target tasks for the cluster; the technology fuses information from images shot by multiple unmanned aerial vehicles to detect and identify the operation targets, estimates costs from information such as the unmanned aerial vehicle and operation target positions to complete operation task allocation automatically, and autonomously performs path planning navigation toward the assigned targets, controlling the unmanned aerial vehicle cluster to complete the operation targets.
The invention provides a method for autonomous cooperative observation and cooperative operation of an unmanned aerial vehicle cluster, which comprises the following steps:
detecting and identifying multiple operation targets, and establishing operation tasks;
distributing the operation tasks based on the positions of all unmanned aerial vehicles in the unmanned aerial vehicle cluster and the operation target positions;
and controlling each unmanned aerial vehicle to execute its assigned operation task.
Wherein the multiple operation targets are detected and identified using a technique that fuses images shot by multiple unmanned aerial vehicles.
Wherein detecting and identifying the multiple operation targets using this multi-unmanned-aerial-vehicle image fusion technique specifically comprises:
collecting images from different perspectives;
extracting features from the image of each perspective using a deep neural network;
fusing the feature information of all the perspectives through a deep convolutional network;
and detecting and identifying the multiple operation targets.
Wherein the different images are collected from different perspectives of a plurality of drones.
Wherein allocating the operation tasks based on the position of each unmanned aerial vehicle in the cluster and the operation target positions specifically comprises:
estimating the cost for each unmanned aerial vehicle to reach the different operation targets from the position information of the plurality of unmanned aerial vehicles and the positions of the multiple targets, and distributing an operation task to each unmanned aerial vehicle using the Hungarian algorithm.
The operation tasks distributed to the unmanned aerial vehicles differ from one another.
After the operation tasks are distributed, path planning navigation is performed for each unmanned aerial vehicle toward its operation task target.
Wherein each unmanned aerial vehicle sets its operation target as the path planning navigation target.
After the operation tasks are distributed and path planning navigation is performed for each unmanned aerial vehicle toward its operation task target, the method further comprises:
each unmanned aerial vehicle extracts state information for its current position from observation data of the surrounding environment;
the state information of the unmanned aerial vehicle is input into a deep reinforcement learning model to obtain the action instruction mapped from the current position state;
and the unmanned aerial vehicle executes the mapped action instruction to reach the next position state and judges whether the operation target has been reached; if not, the above steps are repeated.
If the operation target has been reached, navigation stops and the operation is complete.
As described above, the invention discloses a method for autonomous cooperative observation and cooperative operation of an unmanned aerial vehicle cluster with the following beneficial effects: (1) strong autonomy, requiring no large amount of manual operation; (2) high accuracy in operation target identification; (3) high adaptability to different scenes.
Drawings
Fig. 1 shows a flowchart of the cooperative observation and cooperative operation technology of the whole unmanned aerial vehicle cluster disclosed in the embodiment of the present invention.
Fig. 2 shows a flowchart of fusing images from multiple unmanned aerial vehicles for operation target detection and identification disclosed in the embodiment of the present invention.
Fig. 3 shows a flowchart of autonomous cooperative task allocation for the unmanned aerial vehicle cluster disclosed in the embodiment of the present invention.
Fig. 4 shows a flowchart of autonomous path planning navigation for an unmanned aerial vehicle disclosed in the embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below with reference to specific embodiments, and other advantages and effects of the present invention will be easily understood by those skilled in the art from the disclosure of the present specification. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
The invention provides an autonomous cooperative observation and cooperative operation technology based on a mobile unmanned aerial vehicle cluster platform, the processing flow is shown as figure 1, and the key steps are as follows:
1. Fusing multi-image information from images shot by the unmanned aerial vehicle cluster, and detecting and identifying the operation targets;
2. autonomous cooperative task allocation, in which an operation task is allocated to each unmanned aerial vehicle;
3. after task allocation is completed, performing path planning navigation for each unmanned aerial vehicle according to its task target, and controlling the unmanned aerial vehicles to reach the operation targets and complete the operation.
In step 1, the invention fuses a plurality of images shot by the unmanned aerial vehicles, covering collection and transmission of the images shot by the cluster and fusion of the multi-image information. The processing flow is shown in fig. 2, and the key steps are as follows:
1.1 Each unmanned aerial vehicle flies to the operation target area, shoots images, and uploads them to a central computer.
1.2 The central computer extracts feature information from the collected images using a neural network.
1.3 The image feature information is fused using a neural network to obtain fused image feature information.
1.4 The fused feature information is then used by the neural network to detect and identify the multiple operation targets.
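The fusion pipeline of steps 1.1 to 1.4 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the deep neural networks used for feature extraction (step 1.2) and fusion (step 1.3) are replaced here by simple stand-ins (intensity normalisation and an element-wise maximum), and the image grids and detection threshold are hypothetical.

```python
def extract_features(image_grid):
    """Stand-in for the per-view deep feature extractor (step 1.2):
    normalise pixel intensities to [0, 1]."""
    peak = max(max(row) for row in image_grid) or 1
    return [[v / peak for v in row] for row in image_grid]

def fuse_features(feature_grids):
    """Stand-in for the deep fusion network (step 1.3): element-wise
    maximum across views, so a target seen clearly from any one view
    survives into the fused representation."""
    fused = [[0.0] * len(feature_grids[0][0]) for _ in feature_grids[0]]
    for grid in feature_grids:
        for r, row in enumerate(grid):
            for c, v in enumerate(row):
                fused[r][c] = max(fused[r][c], v)
    return fused

def detect_targets(fused, threshold=0.8):
    """Step 1.4: report grid cells whose fused response clears a threshold."""
    return [(r, c) for r, row in enumerate(fused)
            for c, v in enumerate(row) if v >= threshold]

# Two views of the same 3x3 area; each view sees one target clearly.
view_a = [[10, 0, 0], [0, 0, 0], [0, 0, 2]]
view_b = [[1, 0, 0], [0, 0, 0], [0, 0, 9]]
fused = fuse_features([extract_features(view_a), extract_features(view_b)])
print(detect_targets(fused))   # both targets found: [(0, 0), (2, 2)]
```

The element-wise maximum illustrates why fusion helps avoid missed detections: the target at (2, 2) is faint in view_a and the target at (0, 0) is faint in view_b, yet both clear the threshold after fusion.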
In step 2, the invention cooperatively allocates an operation target task to each unmanned aerial vehicle according to the detected multiple operation targets and the position information of the cluster. The processing flow is shown in fig. 3, and the key steps are as follows:
2.1 Estimate the cost for each unmanned aerial vehicle to reach each operation target from the detected multi-target positions and the position information of the cluster.
2.2 From these costs, quickly compute a task allocation scheme for each unmanned aerial vehicle using a task allocation algorithm such as the Hungarian algorithm.
2.3 The central computer returns the task assignment results to each unmanned aerial vehicle.
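Steps 2.1 and 2.2 can be sketched as a classic assignment problem. The positions below are hypothetical, and a brute-force search over permutations stands in for the Hungarian algorithm named in the patent; for realistic cluster sizes, a polynomial-time solver such as `scipy.optimize.linear_sum_assignment` would be used instead.

```python
from itertools import permutations
from math import hypot, inf

def cost_matrix(drones, targets):
    """Step 2.1: cost of sending each drone to each target,
    estimated here as straight-line distance."""
    return [[hypot(tx - dx, ty - dy) for (tx, ty) in targets]
            for (dx, dy) in drones]

def assign(costs):
    """Step 2.2: minimum-total-cost one-to-one assignment
    (drone i -> target best[i]); exhaustive search stands in
    for the Hungarian algorithm."""
    n = len(costs)
    best, best_cost = None, inf
    for perm in permutations(range(n)):
        total = sum(costs[i][perm[i]] for i in range(n))
        if total < best_cost:
            best, best_cost = list(perm), total
    return best, best_cost

# Hypothetical drone and target positions.
drones  = [(0, 0), (10, 0), (0, 10)]
targets = [(9, 1), (1, 9), (1, 1)]
plan, total = assign(cost_matrix(drones, targets))
print(plan)   # [2, 0, 1]: each drone takes its nearest target
```

Because the assignment is one-to-one, the scheme also satisfies the requirement that the operation tasks distributed to the unmanned aerial vehicles differ from one another.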
Through steps 1 and 2, each unmanned aerial vehicle obtains its own operation target information. Existing unmanned aerial vehicle navigation requires manual intervention, adapts poorly to new scenes, and struggles to reach an operation target quickly and accurately, consuming manpower and time; step 3 therefore provides autonomous path planning navigation. The processing flow is shown in fig. 4, and the key steps are as follows:
3.1 The unmanned aerial vehicle receives the task target information distributed by the central computer and sets the task target as its path planning navigation target;
3.2 the unmanned aerial vehicle extracts state information for its current position from its observation data of the surrounding environment;
3.3 the state information is input into a deep reinforcement learning model to obtain the action instruction mapped from the current position state;
3.4 the unmanned aerial vehicle executes the mapped action instruction to reach the next position state and judges whether the operation target has been reached; if not, it returns to 3.2; if so, it stops navigating and completes the operation.
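The navigation loop of steps 3.1 to 3.4 can be sketched as follows. The grid world, action set, and positions are hypothetical, and a greedy step toward the goal stands in for the trained deep reinforcement learning policy of step 3.3; only the observe-act-check loop structure mirrors the patent's flow.

```python
# Hypothetical discrete action set for a grid world.
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def observe(position, goal):
    """Step 3.2: reduce raw observations to a state
    (here simply the offset to the goal)."""
    return (goal[0] - position[0], goal[1] - position[1])

def policy(state):
    """Step 3.3: stand-in for the deep RL model's state-to-action
    mapping; steps greedily along the larger remaining offset."""
    dx, dy = state
    if abs(dx) >= abs(dy):
        return "E" if dx > 0 else "W"
    return "N" if dy > 0 else "S"

def navigate(start, goal, max_steps=100):
    """Steps 3.2-3.4: act, check whether the target is reached, repeat."""
    position, path = start, [start]
    for _ in range(max_steps):
        if position == goal:            # operation target reached: stop
            return path
        step = ACTIONS[policy(observe(position, goal))]
        position = (position[0] + step[0], position[1] + step[1])
        path.append(position)
    raise RuntimeError("goal not reached within step budget")

path = navigate(start=(0, 0), goal=(2, 1))
print(path[-1])   # (2, 1)
```

In the patent's setting, the state would be a learned representation of sensor observations and the policy a trained deep network; the termination check at the top of the loop corresponds to "if the operation target is reached, stop navigating and finish the operation."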
Fig. 1 shows the general flowchart of the present invention, which comprises three key steps: multi-image information fusion with operation target detection and identification, autonomous cooperative task allocation, and autonomous path planning navigation for the unmanned aerial vehicles. These three key steps are described in detail in this embodiment.
Multi-image information fusion and operation target detection and identification. Fig. 2 shows the processing flow. To improve detection and identification of the operation targets and avoid missed detections, this embodiment fuses the image information shot by multiple unmanned aerial vehicles. The central computer collects the images shot by each unmanned aerial vehicle; a deep neural network extracts features from each image to obtain feature matrices; the feature matrices are fused by a deep neural network into a single fused feature information matrix; and the fused matrix is used to detect and identify the multiple operation targets.
Autonomous cooperative task allocation. Fig. 3 shows the processing flow. For the detected operation targets, the cost for each unmanned aerial vehicle to reach each operation target is estimated from the position information of the unmanned aerial vehicles and the operation targets, and the optimal assignment of operation targets to unmanned aerial vehicles is then solved from this cost information using a task allocation algorithm such as the Hungarian algorithm.
Autonomous path planning navigation for the unmanned aerial vehicle. Fig. 4 shows the navigation flow. Conventional unmanned aerial vehicle path planning navigation often requires manual intervention, adapts poorly to new scenes, and consumes time and manpower. The invention designs a path planning navigation strategy based on deep reinforcement learning: the unmanned aerial vehicle extracts a state representation of its current position from its observation data of the surrounding environment, inputs this representation into a deep reinforcement learning model to obtain the mapped action instruction for the current position, and thereby reaches the next position; this repeats until the operation target is reached and the operation is complete. The approach gives the unmanned aerial vehicle strong autonomy and scene adaptability.
From the above description, those skilled in the art can easily understand other advantages and effects of the present invention from the disclosure of the present specification.
The foregoing embodiments merely illustrate the principles and utilities of the present invention and are not intended to limit it. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those skilled in the art without departing from the spirit and technical ideas of the present invention are intended to be covered by the claims of the present invention.

Claims (10)

1. A method for autonomous cooperative observation and cooperative operation of an unmanned aerial vehicle cluster comprises the following steps:
detecting and identifying multiple operation targets, and establishing operation tasks;
distributing the operation tasks based on the positions of all unmanned aerial vehicles in the unmanned aerial vehicle cluster and the operation target positions;
and controlling each unmanned aerial vehicle to execute its assigned operation task.
2. The method for autonomous cooperative observation and cooperative work of unmanned aerial vehicle cluster according to claim 1, wherein the multiple targets are detected and identified by using a fusion technique of images captured by multiple unmanned aerial vehicles.
3. The method for autonomous cooperative observation and cooperative operation of an unmanned aerial vehicle cluster according to claim 2, wherein detecting and identifying the multiple operation targets using the multi-unmanned-aerial-vehicle image fusion technique specifically comprises:
collecting different images;
extracting features of the image of each perspective using a deep neural network;
performing feature information fusion on the obtained features of all the visual angles through a deep convolution network;
and detecting and identifying multiple operation targets.
4. The method for autonomous collaborative observation and collaborative operation of a fleet of unmanned aerial vehicles according to claim 1, wherein the different images are collected from different perspectives of a plurality of unmanned aerial vehicles.
5. The method for autonomous cooperative observation and cooperative work of the unmanned aerial vehicle fleet as claimed in claim 1, wherein the allocating the work task based on the position and the work target position of each unmanned aerial vehicle in the unmanned aerial vehicle fleet specifically comprises:
and estimating the required cost of each unmanned aerial vehicle to different operation targets according to the position information and the multi-target position information of the plurality of unmanned aerial vehicles, and distributing operation tasks for each unmanned aerial vehicle by using a Hungarian algorithm.
6. The method for autonomous cooperative observation and cooperative operation of the unmanned aerial vehicle cluster according to claim 1, wherein the unmanned aerial vehicles are assigned different operation tasks.
7. The method for autonomous cooperative observation and cooperative operation of the unmanned aerial vehicle cluster according to claim 1, wherein, after the operation tasks are assigned, path planning navigation is performed for each unmanned aerial vehicle toward its operation task target.
8. The method for autonomous cooperative observation and cooperative work of the unmanned aerial vehicle fleet according to claim 7, wherein each unmanned aerial vehicle sets the work objective as a path planning navigation objective.
9. The method for autonomous cooperative observation and cooperative operation of the unmanned aerial vehicle cluster according to claim 7, wherein, after the operation tasks are assigned and path planning navigation is provided for each unmanned aerial vehicle toward its task target, the method further comprises:
each unmanned aerial vehicle extracts state information of the current position according to observation data of the surrounding environment;
inputting the state information of the current unmanned aerial vehicle into a deep reinforcement learning model to obtain a mapping action instruction of the current position state;
and the unmanned aerial vehicle executes the action according to the mapped action instruction to reach the next position state, judges whether the operation target is reached, and repeats the steps if the operation target is not reached.
10. The method for autonomous cooperative observation and cooperative work of the unmanned aerial vehicle cluster according to claim 9, wherein if the target of the work is reached, the navigation is stopped to complete the work.
CN201911357480.6A 2019-12-25 2019-12-25 Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster Pending CN111178743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911357480.6A CN111178743A (en) 2019-12-25 2019-12-25 Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911357480.6A CN111178743A (en) 2019-12-25 2019-12-25 Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster

Publications (1)

Publication Number Publication Date
CN111178743A true CN111178743A (en) 2020-05-19

Family

ID=70657455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911357480.6A Pending CN111178743A (en) 2019-12-25 2019-12-25 Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster

Country Status (1)

Country Link
CN (1) CN111178743A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107272742A (en) * 2017-08-07 2017-10-20 深圳市华琥技术有限公司 A kind of navigation control method of unmanned aerial vehicle group work compound
US20180129913A1 (en) * 2016-11-09 2018-05-10 Parrot Drones Drone comprising a device for determining a representation of a target via a neural network, related determination method and computer
CN108764293A (en) * 2018-04-28 2018-11-06 重庆交通大学 A kind of vehicle checking method and system based on image
CN109389056A (en) * 2018-09-21 2019-02-26 北京航空航天大学 A kind of track surrounding enviroment detection method of space base multi-angle of view collaboration
CN109409546A (en) * 2018-12-11 2019-03-01 四川睿盈源科技有限责任公司 Expressway Property manages and protects method
CN109443366A (en) * 2018-12-20 2019-03-08 北京航空航天大学 A kind of unmanned aerial vehicle group paths planning method based on improvement Q learning algorithm
CN109510656A (en) * 2018-11-26 2019-03-22 中国人民解放军军事科学院国防科技创新研究院 A kind of self-adapting data distribution method suitable for unmanned plane group
CN109992000A (en) * 2019-04-04 2019-07-09 北京航空航天大学 A kind of multiple no-manned plane path collaborative planning method and device based on Hierarchical reinforcement learning
CN110245641A (en) * 2019-06-21 2019-09-17 上海摩象网络科技有限公司 A kind of target tracking image pickup method, device, electronic equipment


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112506216A (en) * 2020-11-18 2021-03-16 天津(滨海)人工智能军民融合创新中心 Flight path planning method and device for unmanned aerial vehicle
CN112506216B (en) * 2020-11-18 2022-12-30 天津(滨海)人工智能军民融合创新中心 Flight path planning method and device for unmanned aerial vehicle
CN115097864A (en) * 2022-06-27 2022-09-23 中国人民解放军海军航空大学 Multi-machine formation task allocation method
CN116088585A (en) * 2023-04-07 2023-05-09 中国民用航空飞行学院 Multi-unmanned aerial vehicle take-off and landing sequence planning system and method based on Hungary algorithm
CN116088585B (en) * 2023-04-07 2023-06-13 中国民用航空飞行学院 Multi-unmanned aerial vehicle take-off and landing sequence planning system and method based on Hungary algorithm


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200519)