CN114047783A - Unmanned aerial vehicle system and unmanned aerial vehicle simulation system

Info

Publication number
CN114047783A
Authority
CN
China
Prior art keywords
task
unmanned aerial
aerial vehicle
virtual
server
Prior art date
Legal status
Pending
Application number
CN202111352126.1A
Other languages
Chinese (zh)
Inventor
崔晋
赵凡舒
袁梅
董韶鹏
吴泽炎
Current Assignee
Beihang University
Ningbo Institute of Innovation of Beihang University
Original Assignee
Beihang University
Ningbo Institute of Innovation of Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University, Ningbo Institute of Innovation of Beihang University filed Critical Beihang University
Priority to CN202111352126.1A
Publication of CN114047783A

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The present disclosure provides an unmanned aerial vehicle system and an unmanned aerial vehicle simulation system. The unmanned aerial vehicle system comprises an environment sensing device, an unmanned aerial vehicle device, an onboard computing device and a server. The environment sensing device senses real scene data and transmits it to the onboard computing device. The onboard computing device comprises a first task execution module, which receives the real scene data and executes a first task according to it, and a control instruction generation module. The server comprises a second task execution module, which receives the real scene data, executes a second task according to it, and sends the execution result of the second task to the onboard computing device. With this technical scheme, the unmanned aerial vehicle can be miniaturized while still meeting its operation requirements.

Description

Unmanned aerial vehicle system and unmanned aerial vehicle simulation system
Technical Field
The invention relates to the field of unmanned aerial vehicles, in particular to an unmanned aerial vehicle system and an unmanned aerial vehicle simulation system.
Background
Demand for micro unmanned aerial vehicles in both military and civil fields continues to grow, and such vehicles are developing toward being smaller, lighter and harder to detect while executing missions such as reconnaissance. An unmanned aerial vehicle relies on vision and other sensors for autonomous flight and situation awareness tasks, which generates large data processing loads; as the required computing capability keeps rising, a micro unmanned aerial vehicle alone is increasingly unable to meet the operation demand.
Disclosure of Invention
In order to solve at least one technical problem in the prior art, the present disclosure provides an unmanned aerial vehicle system comprising an environment sensing device, an unmanned aerial vehicle device, an onboard computing device and a server, wherein the onboard computing device and the environment sensing device are disposed on the unmanned aerial vehicle device, and the onboard computing device is communicatively connected to the server through a mobile network;
the environment sensing device is used for sensing real scene data and sending the real scene data to the onboard computing device;
the onboard computing device includes:
a first task execution module, configured to receive the real scene data, execute a first task according to the real scene data, and send the real scene data to the server so that the server executes a second task, wherein the operation intensity of the first task is smaller than that of the second task;
a control instruction generation module, configured to generate a control instruction according to the execution result of the first task and the execution result of the second task sent by the server, and send the control instruction to the unmanned aerial vehicle device so as to control it to fly;
the server includes:
a second task execution module, configured to receive the real scene data, execute the second task according to the real scene data, and send the execution result of the second task to the onboard computing device.
Optionally, the on-board computing device is communicatively connected to the server through a 5G network.
Optionally, the server is an edge computing server.
In another aspect, the present disclosure provides an unmanned aerial vehicle simulation system comprising a simulation device, an onboard computing device and a server that are communicatively connected in sequence;
the simulation device includes:
a virtual scene construction module, configured to construct a virtual scene model comprising a virtual unmanned aerial vehicle and its virtual flight environment;
a virtual environment perception module, configured to generate virtual scene data and send it to the onboard computing device, the virtual scene data being the flight-environment data perceived by the virtual unmanned aerial vehicle in the virtual scene model;
an unmanned aerial vehicle control module, configured to receive a control instruction sent by the onboard computing device and control the virtual unmanned aerial vehicle to fly according to the control instruction;
the onboard computing device includes:
a first task execution module, configured to execute a first task according to the virtual scene data and send the virtual scene data to the server so that the server executes a second task, wherein the operation intensity of the first task is smaller than that of the second task;
a control instruction generation module, configured to generate the control instruction according to the execution result of the first task and the execution result of the second task sent by the server, and send it to the simulation device;
the server includes:
a second task execution module, configured to receive the virtual scene data, execute the second task according to the virtual scene data, and send the execution result of the second task to the onboard computing device.
Optionally, the unmanned aerial vehicle control module includes:
a flight control data processing module, configured to perform attitude calculation according to the control instruction fed back by the onboard computing device to obtain attitude data;
an unmanned aerial vehicle driving module, configured to predict the behavior state of the virtual unmanned aerial vehicle according to the attitude data and control it to fly according to that behavior state.
Optionally, the virtual flight environment data perceived by the virtual unmanned aerial vehicle in the virtual scene model includes image data in the virtual flight environment that can be perceived by a visual sensor included in the virtual unmanned aerial vehicle.
Optionally, the first task includes a first obstacle avoidance task comprising: based on a sparse point cloud local matching algorithm, sparsifying the image data to obtain sparse points, extracting features of the image data based on the sparse points, and determining obstacles in the flight environment according to those features.
Optionally, the second task includes a second obstacle avoidance task;
the second obstacle avoidance task comprises:
performing scale space processing on the image data;
generating a multi-scale difference according to the image data processed by the scale space;
determining obstacles in the virtual flight environment according to the multi-scale difference based on a trained convolutional neural network model.
Optionally, the scale space processing of the image data includes performing Gaussian scale space processing on the image data.
Optionally, the first task includes a path planning task.
In one or more technical schemes provided in the embodiments of this application, the unmanned aerial vehicle system has the first task, whose operation intensity is relatively low, executed by the onboard computing device, so that the onboard computing device can be miniaturized, while the second task, whose operation intensity is relatively high, is executed by the server; the unmanned aerial vehicle device thus both satisfies the miniaturization requirement and adapts to large data processing demands.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 shows a schematic block diagram of a drone system according to an exemplary embodiment of the present disclosure;
fig. 2 shows a block diagram of a drone simulation system according to an exemplary embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description. It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Aspects of the present disclosure are described below with reference to the accompanying drawings.
Referring to fig. 1, a drone system includes an environment sensing device 101, a drone device 102, an onboard computing device 103 and a server 104, where the onboard computing device 103 and the environment sensing device 101 are disposed on the drone device 102, and the onboard computing device 103 is communicatively connected to the server 104 through a mobile network;
the environment sensing device 101 is used for sensing real scene data and sending it to the onboard computing device 103;
the onboard computing device 103 includes:
a first task execution module 105, configured to receive the real scene data, execute a first task according to it, and send it to the server 104 so that the server 104 executes a second task, where the operation intensity of the first task is smaller than that of the second task;
a control instruction generation module 106, configured to generate a control instruction according to the execution result of the first task and the execution result of the second task sent by the server 104, and send the control instruction to the drone device 102 to control it to fly;
the server 104 includes:
a second task execution module 107, configured to receive the real scene data, execute the second task according to it, and send the execution result of the second task to the onboard computing device 103.
In the unmanned aerial vehicle system of this embodiment, the first task, whose operation intensity is relatively low, can be executed by the onboard computing device 103, so that the onboard computing device can be light and small, while the second task, whose operation intensity is relatively high, is executed by the server 104, so that large data processing requirements can be met; the unmanned aerial vehicle device 102 therefore both satisfies the lightweight, miniaturized requirement and adapts to large data processing demands.
The environment sensing device may comprise a vision device and other auxiliary sensors; the auxiliary sensors may be distance sensors or the like, and the vision device may be a stereo vision camera or the like.
The real scene data may be image data and the like, such as image data detected by a stereo vision camera or distance data detected by a distance sensor. The real scene data may be the same as the scene data perceived by existing unmanned aerial vehicles and is not described in detail in this embodiment.
It should be appreciated that the first and second tasks may be tasks in existing drone devices, such as obstacle avoidance identification tasks, path planning tasks, and the like.
In one embodiment, the first task is a lightweight task and the second task is a high-intensity computing task. For example, judging by a task executed on the onboard computing device alone: if it loads the processor of the onboard computing device above 50%, the task is a high-intensity computing task; otherwise it is a lightweight task. The 50% here can be set according to actual requirements, for example to any value in the range of 10-90%.
In one embodiment, a task classification list is preset by a professional, and the onboard computing device determines whether a given task belongs to the first task or the second task by looking up the classification identifier in the list; the task classification list contains each task and an identifier indicating whether it belongs to the first task or the second task. For example, the professional may decide whether a task is a first task or a second task according to its operation amount and build the list from that decision; a task whose operation amount exceeds a preset threshold may be regarded as a second task.
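For illustration, a minimal sketch of the two classification approaches just described (preset list first, load threshold as fallback) could look as follows; the task names, the classify_task helper and the data structure are hypothetical, since the patent does not define them:

```python
from typing import Optional

LOAD_THRESHOLD = 0.50  # configurable, e.g. any value in 0.10-0.90

# Hypothetical preset task classification list:
# task name -> "first" (run onboard) or "second" (offload to server).
TASK_CLASS_LIST = {
    "path_planning": "first",
    "sparse_obstacle_avoidance": "first",
    "cnn_obstacle_avoidance": "second",
}

def classify_task(name: str, estimated_load: Optional[float] = None) -> str:
    """Decide whether a task is a first task or a second task."""
    if name in TASK_CLASS_LIST:            # preferred: look up the preset list
        return TASK_CLASS_LIST[name]
    if estimated_load is not None:         # fallback: processor-load threshold
        return "second" if estimated_load > LOAD_THRESHOLD else "first"
    return "second"                        # unknown task: offload to be safe

print(classify_task("cnn_obstacle_avoidance"))             # -> second
print(classify_task("unlisted_task", estimated_load=0.3))  # -> first
```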
In one embodiment, the drone device may be provided with a 5G communication module that communicatively connects the onboard computing device with the server through a 5G mobile network. Illustratively, the 5G communication module is disposed on the onboard computing device, the onboard computing device communicates with a 5G base station of the 5G mobile network through the 5G communication module, and the base station communicates with the server. A back-end control center may further be provided for remotely controlling the unmanned aerial vehicle device and the server. The server in this embodiment may be an edge computing server or another type of server.
In an optional embodiment, the onboard computing device is an embedded onboard computing board used to execute lightweight task operations on the aircraft; as a board-shaped computing device embedded into the unmanned aerial vehicle device, it reduces the space occupied on the unmanned aerial vehicle.
In an optional embodiment, the server is a mobile edge computing server configured to execute the second task: it processes in real time the high-intensity operation task data offloaded by the unmanned aerial vehicle device over the 5G link and returns the processing result to the unmanned aerial vehicle device in real time over the same link, assisting the vehicle in completing its mission. The 5G network's large bandwidth and low latency give the unmanned aerial vehicle device parallel processing capability for multiple classes of complex tasks, and can support micro unmanned aerial vehicle devices in complex tasks such as intelligent recognition, autonomous obstacle avoidance and autonomous environment mapping based on high-resolution, high-frame-rate image data.
Referring to fig. 2, an unmanned aerial vehicle simulation system includes a simulation device 201, an onboard computing device 202 and a server 203 that are communicatively connected in sequence;
the simulation device 201 includes:
a virtual scene construction module 204, configured to construct a virtual scene model comprising a virtual unmanned aerial vehicle and its virtual flight environment;
a virtual environment perception module 205, configured to generate virtual scene data and send it to the onboard computing device, the virtual scene data being the flight-environment data perceived by the virtual unmanned aerial vehicle in the virtual scene model;
an unmanned aerial vehicle control module 206, configured to receive a control instruction sent by the onboard computing device and control the virtual unmanned aerial vehicle to fly according to it;
the onboard computing device 202 includes:
a first task execution module 207, configured to execute a first task according to the virtual scene data and send the virtual scene data to the server so that the server executes a second task, where the operation intensity of the first task is smaller than that of the second task;
a control instruction generation module 208, configured to generate a control instruction according to the execution result of the first task and the execution result of the second task sent by the server, and send it to the simulation device;
the server 203 includes:
a second task execution module 209, configured to receive the virtual scene data, execute the second task according to it, and send the execution result of the second task to the onboard computing device.
In the unmanned aerial vehicle simulation system of this embodiment, the simulation device simulates the virtual scene model and generates virtual scene data, the onboard computing device and the server execute the first task and the second task respectively, and the virtual unmanned aerial vehicle is controlled to fly based on the control instruction generated from the two execution results. Because the simulation can run without actually flying an unmanned aerial vehicle, the onboard computing device and the server can execute their tasks on virtual scene data, their execution efficiency and accuracy can be verified through the flight behavior of the virtual unmanned aerial vehicle, and the cost of testing the onboard computing device and the server is reduced; when the tested onboard computing device and server are later used in the real unmanned aerial vehicle system, operation efficiency is higher.
It should be appreciated that the first and second tasks may be tasks in existing drone devices, such as obstacle avoidance identification tasks, path planning tasks, and the like. The first task and the second task in this embodiment are conceptually the same as those in the foregoing embodiment, and are not described in detail here.
The unmanned aerial vehicle simulation system in this embodiment is a semi-physical simulation system based on a three-dimensional visualization software model. For example, the simulation device may build the virtual scene model on three-dimensional visualization software as the test development environment. The server may be an edge computing server connected to the onboard computing device through a LAN, forming a complete simulated 5G edge-computing unmanned aerial vehicle networking framework; task-level simulation tests of the unmanned aerial vehicle are then carried out in the constructed simulation environment.
In one embodiment, in order that the virtual drone may simulate a flight mission, the simulation device further includes a drone flight mission simulation module for simulating a flight mission.
In one embodiment, when constructing the virtual unmanned aerial vehicle, the virtual scene construction module of the simulation device builds the corresponding texture model, material model, mesh model and collision settings; it does the same when constructing the virtual flight environment, and the virtual flight environment can additionally be generated randomly, giving a better simulation effect.
To build three-dimensional models and visualization of the complex flight environment of a complex unmanned aerial vehicle and obtain the corresponding virtual unmanned aerial vehicle and virtual flight environment, constructing the virtual scene model involves two processes: multi-level level-of-detail modeling and streaming dynamic rendering. Level-of-detail modeling is completed for terrain, buildings, crowds, traffic flows and other models in the complex scene, while streaming dynamic rendering dynamically schedules model data according to the camera position, realizing three-dimensional rendering of the complex environment.
In an optional embodiment, the simulation device is built around the virtual scene to realize environment-perception simulation and flight-control simulation of the unmanned aerial vehicle. In sensor modeling, synchronous processing of binocular sensing data is achieved through frame concurrency and frame synchronization, and flight control is implemented on top of the simulation engine so that the flight control algorithm is called internally by the engine, which guarantees its high real-time performance; resources, materials and scenes are rendered in the simulation device, with the scenes rendered for collision, illumination and appearance.
In an alternative embodiment, the drone control module includes:
a flight control data processing module, configured to perform attitude calculation according to the control instruction fed back by the onboard computing device to obtain attitude data;
an unmanned aerial vehicle driving module, configured to predict the behavior state of the virtual unmanned aerial vehicle according to the attitude data and control it to fly according to that behavior state.
The flight control data processing module integrates the flight control calculation program of the unmanned aerial vehicle, performs attitude calculation on the decision data contained in the control instruction, and provides attitude data for the unmanned aerial vehicle. The attitude data may be obtained with reference to an existing flight control calculation program, for example one based on a PID (Proportional-Integral-Derivative) control algorithm; this embodiment does not describe it in detail.
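For illustration, a minimal single-axis sketch of such a PID attitude loop is given below; the gains, rate and one-axis layout are assumptions, not the patent's flight control program, and a real controller would run one such loop per axis at a fixed rate on filtered sensor data:

```python
class PID:
    """Textbook Proportional-Integral-Derivative controller."""
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measured: float, dt: float) -> float:
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One control tick: the decision data in the control instruction commands a
# pitch angle of 5 degrees; the loop output drives the actuators.
pitch_pid = PID(kp=4.0, ki=0.5, kd=0.8)  # hypothetical gains
command = pitch_pid.step(setpoint=5.0, measured=3.2, dt=0.01)
print(command)
```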
The unmanned aerial vehicle driving module receives the attitude data from the flight control data processing module and the position data transmitted over network communication, performs behavior-state prediction calculation for the unmanned aerial vehicle, and finally displays the driving process visually by rendering the vehicle's behavior. The behavior state may be predicted from the attitude data and position data with reference to existing methods, for example by using an established correspondence between attitude/position data and behavior states; this embodiment does not describe it in detail.
Illustratively, the unmanned aerial vehicle driving module comprises three processes: componentized driving of the unmanned aerial vehicle, behavior-state prediction, and behavior-rendering visualization. The module designs a driving interface for componentized driving; behavior-state prediction receives the attitude data from the flight control data processing module and the position data transmitted over network communication and performs the prediction calculation; finally, the driving process of the virtual unmanned aerial vehicle is displayed through behavior-rendering visualization.
In an optional embodiment, the virtual flight environment data perceived by the virtual drone in the virtual scene model includes image data in the virtual flight environment that can be perceived by a vision sensor included in the virtual drone.
Illustratively, the purpose of the virtual environment perception module is to generate image data, which may specifically be stereoscopic image data, and depth data, both transmitted to the onboard computing device for unmanned aerial vehicle decision-making. The image data is calculated and output mainly by a programmable graphics rendering pipeline in combination with the pose of the unmanned aerial vehicle, the mounting pose of the vision sensor, and the scene data; the depth data is calculated from the depth environment of the programmable rendering pipeline, and depth image data is finally output.
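As a small worked example of the depth-data step, assuming an OpenGL-style non-linear depth buffer (the patent does not name a rendering API or depth convention), linear eye-space depth can be recovered from the stored depth values like this:

```python
import numpy as np

def linearize_depth(z_buffer: np.ndarray, near: float, far: float) -> np.ndarray:
    """Convert an OpenGL-style depth buffer in [0, 1] to linear depth in eye
    space (same units as near/far); assumes a standard perspective projection."""
    z_ndc = 2.0 * z_buffer - 1.0  # map [0, 1] to normalized device coords [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))

# Example: a tiny depth buffer read back from the simulated stereo camera.
zb = np.array([[0.90, 0.95],
               [0.99, 0.50]])
print(linearize_depth(zb, near=0.1, far=100.0))
```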
In one embodiment, the simulation device and the onboard computing device may be connected via a local area network, while the onboard computing device and the server are connected via a gigabit fiber network that emulates 5G network performance.
The uplink rate the unmanned aerial vehicle can reach is closely related to factors such as 5G base-station spacing, antenna state, the position of the unmanned aerial vehicle, the uplink/downlink time-slot ratio and network load.
For the low-altitude coverage link rate, the transmission rate of a single-base-station public 5G network is analyzed for a 64-channel 3D-MIMO base station (a spatial multiplexing technique that uses a multi-antenna matrix for beamforming, achieving higher efficiency when the antenna covers buildings) with vertical-dimension beam adjustment capability, where downtilt is realized electronically through baseband weights rather than mechanically. Meanwhile, 5G signal strength and transmission rate are measured by sparse-point sampling at different distances and different heights of the unmanned aerial vehicle.
The 5G rate field is then fitted from the strength- and rate-versus-position relation obtained from the sparse-point samples together with the attenuation characteristic of the 5G signal, yielding the actual transmission characteristic of the single-base-station public 5G network and completing the public-network 5G rate model.
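A sketch of such a rate-field fit is shown below, assuming a log-distance attenuation model rate ≈ a + b·log10(d) and made-up sample values; the patent specifies neither the model form nor the data:

```python
import numpy as np

# Sparse samples: (horizontal distance m, height m, measured uplink Mbps).
samples = np.array([
    [100.0,  30.0, 180.0],
    [300.0,  60.0, 140.0],
    [600.0,  60.0, 110.0],
    [900.0, 120.0,  80.0],
])

d = np.hypot(samples[:, 0], samples[:, 1])       # slant range to base station
A = np.column_stack([np.ones_like(d), np.log10(d)])
coef, *_ = np.linalg.lstsq(A, samples[:, 2], rcond=None)  # fit a, b

def rate_at(dist: float, height: float) -> float:
    """Predicted uplink rate (Mbps) at a drone position."""
    r = np.hypot(dist, height)
    return float(coef[0] + coef[1] * np.log10(r))

print(rate_at(500.0, 80.0))
```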
For the low-altitude coverage link delay, 5G improves greatly over the 4G air interface: control-plane delay drops from 100 ms to 10 ms, and user-plane delay from 10 ms to 4 ms, or even as low as 1 ms. The 5G core-network delay is about 10 ms to 20 ms, and it can be reduced further once 5G edge computing is taken into account.
In an optional embodiment, the onboard computing board runs lightweight intelligent algorithms, including a lightweight three-dimensional obstacle avoidance algorithm, a fast path planning algorithm and a decision algorithm. To keep onboard operation efficient, the onboard system not only processes the algorithms for light weight but also parallelizes them in hardware to raise the utilization of limited onboard resources. In addition, the onboard platform provides stable and efficient task-queue control, task scheduling, multi-task transmission and related functions to improve data real-time performance.
On the software side, the program carried by the onboard computing device uses a feature-based sparse point cloud matching algorithm: the image data is sparsified by locating key points, and feature extraction and matching are performed on the sparse points to realize a fast three-dimensional perception function. Once the features are obtained, matching can be realized with existing techniques and is not detailed here.
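A sketch of the keypoint-based sparsification and matching step follows; the patent does not name a specific feature algorithm, so OpenCV's ORB detector and a brute-force Hamming matcher are used here purely as stand-ins, and the file names are placeholders:

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder stereo
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # pair from the camera

orb = cv2.ORB_create(nfeatures=500)   # sparse: keep only ~500 keypoints
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_l, des_r), key=lambda m: m.distance)

# Matched sparse point pairs across the stereo images; their disparities give
# a coarse depth estimate for fast onboard obstacle detection.
pairs = [(kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt) for m in matches[:100]]
```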
On the hardware side, the onboard computing device can use Cortex-A57 cores together with a parallel processing unit as the computing core of the onboard computing board; power consumption is only 7.5 watts, and energy efficiency is more than 25 times that of a high-performance desktop CPU. Multi-core decomposition and task scheduling are implemented for the onboard algorithms, along with hardware acceleration and hardware implementation of the algorithms. Cortex-A57 is a processing-unit model.
In an optional embodiment, the server comprises a multi-task parallel distribution module, an intelligent perception module, an intelligent decision module, a GPU acceleration unit and the like to realize efficient processing of the computation-intensive tasks.
The intelligent perception module is implemented based on an intelligent perception algorithm and realizes point cloud data generation and processing, including scale space processing, multi-scale difference and convolutional neural network mapping. Scale space processing transfers to image data the concept and method of how the human eye observes objects. A Gaussian kernel function is used for filtering when the scale space is constructed, so that the original image data retains the most detail features, and feature representation at large scale is simulated by gradually reducing the detail features after Gaussian filtering. There are two main reasons for filtering with a Gaussian kernel function:
1) The Gaussian kernel is the only scale-invariant kernel.
2) The DoG (Difference of Gaussians) kernel function can approximate the LoG (Laplacian of Gaussian) function, which makes feature extraction simpler. Meanwhile, filtering the original image after up-sampling it by a factor of 2 preserves more information, which benefits subsequent feature extraction and matching. A scale-space image is generated by convolving the current image with kernel parameters σ of different scales. The scale-space model repeatedly downsamples the original image to obtain a series of images of different sizes, forming a pyramid from large to small, bottom to top. The original image is the first layer of the scale space, and each new downsampled image forms one further layer (one image per layer), with n layers in total. To make the scale vary continuously, the Gaussian scale space adds Gaussian filtering on top of simple downsampling.
The multi-scale difference is constructed on the basis of the scale space. In the multi-scale difference, layer 1 of group 1 is obtained by subtracting layer 1 of group 1 of the Gaussian scale space from layer 2 of group 1; by analogy, each difference image is generated group by group and layer by layer, and all the difference images together form the difference scale space. In summary, the image at group o, layer l of the DoG pyramid is obtained by subtracting layer l of group o of the Gaussian pyramid from layer l+1 of group o.
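The scale-space and multi-scale-difference steps can be sketched as follows; the octave/layer counts and σ schedule follow the common SIFT convention rather than values fixed by the patent, and the input file name is a placeholder:

```python
import cv2
import numpy as np

def gaussian_pyramid(img, octaves=4, layers=5, sigma0=1.6):
    """Gaussian scale space: each octave is a list of progressively blurred
    layers; the next octave starts from a downsampled copy."""
    pyr, base = [], img.astype(np.float32)
    k = 2.0 ** (1.0 / (layers - 2))
    for _ in range(octaves):
        octave = [cv2.GaussianBlur(base, (0, 0), sigma0 * k ** i)
                  for i in range(layers)]
        pyr.append(octave)
        base = cv2.resize(octave[-1],
                          (base.shape[1] // 2, base.shape[0] // 2))
    return pyr

def dog_pyramid(gauss_pyr):
    """Multi-scale difference: D(o, l) = L(o, l + 1) - L(o, l) per octave."""
    return [[octave[i + 1] - octave[i] for i in range(len(octave) - 1)]
            for octave in gauss_pyr]

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
dogs = dog_pyramid(gaussian_pyramid(img))
```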
A Convolutional Neural Network (CNN) is a feed-forward neural network that performs excellently in large-scale image processing and is currently widely used in fields such as image classification and localization. Compared with other neural network structures, a convolutional neural network needs relatively few parameters, which makes it widely applicable. Three basic concepts underlie convolutional neural networks: local receptive fields, shared weights, and pooling.
Local receptive field: in a general deep neural network, every pixel of an image is connected to every fully-connected neuron, whereas in a convolutional neural network each hidden node is connected only to a local region of the image, which reduces the number of parameters to train. For example, for a 1024 x 720 image, a 9 x 9 receptive field needs only 81 weight parameters. General vision works the same way: when viewing an image, attention is most often local.
Shared weights: in a convolutional layer of a convolutional neural network, the neurons share the same weights, which likewise reduces the number of parameters to train; the shared weights and biases are also called the convolution kernel or filter.
Pooling: because the images to be processed are often large, the original image need not be analyzed in full; what matters most are the features that can be extracted effectively. In a spirit similar to image compression, the image size can therefore be reduced through downsampling after convolution.
Through convolutional neural network mapping, the characteristics have strong generalization capability, and the accuracy of three-dimensional perception is improved.
Illustratively, the first task includes a first obstacle avoidance task comprising: based on a sparse point cloud local matching algorithm, sparsifying the image data to obtain sparse points, extracting features of the image data based on the sparse points, and determining obstacles in the flight environment according to those features. The first task may also comprise a path planning task or the like.
For example, in an optional embodiment, the second task includes a second obstacle avoidance task comprising: performing scale space processing on the image data;
generating a multi-scale difference from the scale-space-processed image data;
and determining obstacles in the virtual flight environment from the multi-scale difference based on a trained convolutional neural network model.
The convolutional neural network model may be trained with the multi-scale difference of the image as input, and with the length and width of the obstacle and the position of its center point as output.
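A sketch of such a regression network is given below in PyTorch with illustrative layer sizes; the patent fixes only the input (the multi-scale difference) and the output (obstacle width, height and center position):

```python
import torch
import torch.nn as nn

class ObstacleNet(nn.Module):
    """Maps a stack of DoG images to (w, h, cx, cy) of the obstacle."""
    def __init__(self, in_scales: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_scales, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),              # pooling: shrink the feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global pooling -> fixed-size vector
        )
        self.head = nn.Linear(32, 4)      # regression to (w, h, cx, cy)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

# One forward pass on a batch of two 4-scale DoG stacks of 128 x 128 images.
net = ObstacleNet(in_scales=4)
pred = net(torch.randn(2, 4, 128, 128))  # shape (2, 4)
```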
When the control instruction is generated from the execution result of the first task and the execution result of the second task sent by the server, a control instruction may first be generated from the result of the first task, and a next control instruction then generated from the result of the second task. For example, when the first obstacle avoidance task is executed, the obtained obstacle position and size are inaccurate, so the unmanned aerial vehicle is controlled to perform preliminary obstacle avoidance; when the result of the second obstacle avoidance task fed back by the server arrives, the obstacle position and size it recognizes are accurate, and the vehicle is then controlled to avoid the obstacle precisely, so that both timeliness and accuracy of obstacle avoidance are achieved.
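The two-stage flow just described can be sketched as follows; the queue, the Obstacle record and the command helpers are hypothetical names used only for illustration:

```python
import queue
from dataclasses import dataclass

@dataclass
class Obstacle:
    cx: float   # lateral offset of obstacle center, m
    w: float    # obstacle width, m
    precise: bool

server_results: "queue.Queue[Obstacle]" = queue.Queue()  # filled by 5G link

def avoidance_command(obs: Obstacle, margin: float) -> str:
    side = "left" if obs.cx > 0 else "right"
    return f"dodge_{side}(clearance={obs.w / 2 + margin:.1f})"

def issue_command(cmd: str) -> None:
    print("->", cmd)

def control_step(onboard_obstacle: Obstacle) -> None:
    # Stage 1: preliminary avoidance from the fast but coarse onboard result.
    issue_command(avoidance_command(onboard_obstacle, margin=2.0))
    # Stage 2: when the precise server result arrives, refine the maneuver.
    try:
        refined = server_results.get_nowait()
        issue_command(avoidance_command(refined, margin=0.5))
    except queue.Empty:
        pass  # keep flying on the preliminary command for now

control_step(Obstacle(cx=1.5, w=3.0, precise=False))
```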
The steps of performing scale space processing on the image data and generating the multi-scale difference may also be incorporated into the convolutional neural network model itself, so that the model obtains the final processing result directly from the input image data.
In one embodiment, in order to improve communication efficiency and reduce system load, each communication adopts a remote procedure call; through service method registration, network communication can be realized at the server in the manner of a local call, which reduces system load and shortens communication processing delay.
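A sketch of this pattern using Python's standard xmlrpc module as a stand-in RPC framework (the patent names none): the service method is registered on the server and then invoked from the onboard side exactly like a local call:

```python
from xmlrpc.server import SimpleXMLRPCServer

def execute_second_task(scene_data: list) -> dict:
    """Server-side handler: run the compute-intensive second task."""
    # ... heavy perception / obstacle detection would run here ...
    return {"obstacle": {"cx": 1.5, "cy": 0.0, "w": 3.0, "h": 2.0}}

server = SimpleXMLRPCServer(("0.0.0.0", 8000), allow_none=True)
server.register_function(execute_second_task)  # service method registration
server.serve_forever()

# Onboard side (client), run in another process -- the remote method is
# called as if it were local:
#   from xmlrpc.client import ServerProxy
#   proxy = ServerProxy("http://edge-server:8000")
#   result = proxy.execute_second_task([0.1, 0.2, 0.3])
```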

Claims (10)

1. An unmanned aerial vehicle system is characterized by comprising environment sensing equipment, unmanned aerial vehicle equipment, airborne computing equipment and a server, wherein the airborne computing equipment and the environment sensing equipment are arranged on the unmanned aerial vehicle equipment, and the airborne computing equipment is in communication connection with the server through a mobile network;
the environment sensing equipment is used for sensing real scene data and sending the real scene data to the airborne computing equipment;
the on-board computing device includes:
a first task execution module, configured to receive the real scene data, execute a first task according to the real scene data, and send the real scene data to the server, so that the server executes a second task, wherein an operation intensity of the first task is smaller than an operation intensity of the second task;
the control instruction generating module is used for generating a control instruction according to an execution result of the first task and an execution result of the second task sent by the server, and sending the control instruction to the unmanned aerial vehicle device so as to control the unmanned aerial vehicle device to fly;
the server, comprising:
and the second task execution module is used for receiving the real scene data, executing the second task according to the real scene data, and sending an execution result of the second task to the airborne computing equipment.
2. The drone system of claim 1, wherein the onboard computing device is communicatively connected with the server through a 5G network.
3. The drone system of claim 1, wherein the server is an edge computing server.
4. An unmanned aerial vehicle simulation system is characterized by comprising simulation equipment, airborne computing equipment and a server which are sequentially in communication connection;
the simulation apparatus includes:
the virtual scene building module is used for building a virtual scene model, and the virtual scene model comprises a virtual unmanned aerial vehicle and a virtual flight environment of the virtual unmanned aerial vehicle;
the virtual environment perception module is used for generating virtual scene data and sending the virtual scene data to airborne computing equipment, wherein the virtual scene data is the data of the flight environment perceived by the virtual unmanned aerial vehicle in the virtual scene model;
the unmanned aerial vehicle control module is used for receiving a control instruction sent by airborne computing equipment and controlling the virtual unmanned aerial vehicle to fly according to the control instruction;
the on-board computing device includes:
a first task execution module, configured to execute a first task according to the virtual scene data, and send the virtual scene data to the server, so that the server executes a second task, where an operation intensity of the first task is smaller than an operation intensity of the second task;
the control instruction generating module is used for generating the control instruction according to the execution result of the first task and the execution result of the second task sent by the server and sending the control instruction to the simulation equipment;
the server includes:
and the second task execution module is used for receiving the virtual scene data, executing the second task according to the virtual scene data, and sending an execution result of the second task to the airborne computing equipment.
5. The drone simulation system of claim 4, wherein the drone control module includes:
the flight control data processing module is used for carrying out attitude calculation according to the control instruction fed back by the airborne computing equipment to obtain attitude data;
and the unmanned aerial vehicle driving module is used for predicting the behavior state of the virtual unmanned aerial vehicle according to the attitude data and controlling the virtual unmanned aerial vehicle to fly according to the behavior state of the virtual unmanned aerial vehicle.
6. The drone simulation system of claim 4, wherein the virtual flight environment data perceived by the virtual drone in the virtual scene model includes image data in the virtual flight environment that is perceivable by a vision sensor included with the virtual drone.
7. The drone simulation system of claim 6, wherein the first task comprises a first obstacle avoidance task, the first obstacle avoidance task comprising: based on a sparse point cloud local matching algorithm, carrying out sparse processing on the image data to obtain sparse points, extracting the characteristics of the image data based on the sparse points, and determining the obstacle of the flight environment according to the characteristics of the image data.
8. The drone simulation system of claim 7, wherein the second task includes a second obstacle avoidance task;
the second obstacle avoidance task comprises:
performing scale space processing on the image data;
generating a multi-scale difference according to the image data processed by the scale space;
determining obstacles in the virtual flight environment according to the multi-scale difference based on a trained convolutional neural network model.
9. The drone simulation system of claim 8, wherein the performing scale space processing on the image data comprises: performing Gaussian scale space processing on the image data.
10. The drone simulation system of claim 6, wherein the first task comprises a path planning task.
CN202111352126.1A 2021-11-16 2021-11-16 Unmanned aerial vehicle system and unmanned aerial vehicle simulation system Pending CN114047783A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111352126.1A CN114047783A (en) 2021-11-16 2021-11-16 Unmanned aerial vehicle system and unmanned aerial vehicle simulation system


Publications (1)

Publication Number Publication Date
CN114047783A 2022-02-15

Family

ID=80209555

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111352126.1A Pending CN114047783A (en) 2021-11-16 2021-11-16 Unmanned aerial vehicle system and unmanned aerial vehicle simulation system

Country Status (1)

Country Link
CN (1) CN114047783A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115544673A * 2022-11-28 2022-12-30 Sichuan Tengden Technology Co., Ltd. Method for assisting in taking off and landing of large unmanned aerial vehicle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107390545A * 2017-07-31 2017-11-24 Rainbow UAV Technology Co., Ltd. Simulation training system for an unmanned aerial vehicle and its payload
US20190294172A1 (en) * 2016-12-30 2019-09-26 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Navigation method and apparatus, and terminal device
CN112356027A (en) * 2020-10-29 2021-02-12 久瓴(上海)智能科技有限公司 Obstacle avoidance method and device for agriculture and forestry robot, computer equipment and storage medium
CN112783199A (en) * 2020-12-25 2021-05-11 北京航空航天大学 Unmanned aerial vehicle autonomous navigation method based on transfer learning


Similar Documents

Publication Publication Date Title
CN106094569B (en) Multi-sensor Fusion unmanned plane perceives and evades analogue system and its emulation mode
CN103699106B (en) Based on the multiple no-manned plane cotasking planning simulation system of VR-Forces emulation platform
CN109716160A (en) For detecting the method and system of vehicle environmental information
US12037027B2 (en) Systems and methods for generating synthetic motion predictions
CN111797983A (en) Neural network construction method and device
CN114912532B (en) Multi-source heterogeneous perception data fusion method for automatic driving automobile
Tang et al. A joint global and local path planning optimization for UAV task scheduling towards crowd air monitoring
CN114092920B (en) Model training method, image classification method, device and storage medium
CN112965507B (en) Cluster unmanned aerial vehicle cooperative work system and method based on intelligent optimization
CN110794713A (en) Reconnaissance type unmanned aerial vehicle photoelectric load simulation training system
CN114047783A (en) Unmanned aerial vehicle system and unmanned aerial vehicle simulation system
Chen et al. Adadrone: Quality of navigation based neural adaptive scheduling for edge-assisted drones
Polosky et al. Machine learning subsystem for autonomous collision avoidance on a small uas with embedded gpu
CN116362109B (en) Intelligent unmanned system and method based on digital twinning
CN116469142A (en) Target positioning and identifying method, device and readable storage medium
Xiong et al. Fire detection system based on unmanned aerial vehicle
CN116339321A (en) Global information driven distributed multi-robot reinforcement learning formation surrounding method based on 5G communication
CN216647401U (en) Safety helmet recognition device
Wang et al. Aprus: An airborne altitude-adaptive purpose-related uav system for object detection
CN114326821A (en) Unmanned aerial vehicle autonomous obstacle avoidance system and method based on deep reinforcement learning
Šul'aj et al. UAV management system for the smart city
CN112867023A (en) Method for minimizing perception data acquisition delay through dynamic scheduling of unmanned terminal
CN112313597A (en) Aircraft control method, device, system and storage medium
Zhai et al. Computational Resource Constrained Deep Learning Based Target Recognition from Visible Optical Images.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination