CN110908399A - Unmanned aerial vehicle autonomous obstacle avoidance method and system based on a lightweight neural network - Google Patents


Info

Publication number
CN110908399A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
neural network
convolutional neural
obstacle avoidance
Prior art date
Legal status
Granted
Application number
CN201911214854.9A
Other languages
Chinese (zh)
Other versions
CN110908399B (en)
Inventor
廖建文
蔡倩倩
孟伟
鲁仁全
付敏跃
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201911214854.9A
Publication of CN110908399A
Application granted
Publication of CN110908399B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric
    • G05B13/0265 Adaptive control systems, electric, the criterion being a learning criterion
    • G05B13/027 Adaptive control systems, electric, the criterion being a learning criterion, using neural networks only
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02 Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion, electric
    • G05B13/04 Adaptive control systems, electric, involving the use of models or simulators
    • G05B13/042 Adaptive control systems, electric, involving the use of models or simulators, in which a parameter or coefficient is automatically adjusted to optimise the performance
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides an unmanned aerial vehicle (UAV) autonomous obstacle avoidance method based on a lightweight neural network, comprising the following steps: collecting video data that simulates the flight of a camera-carrying UAV as training data; preprocessing the training data; constructing a convolutional neural network with a lightweight architecture and training it on the preprocessed data; and deploying the trained convolutional neural network on the UAV's onboard processor. The UAV's monocular camera transmits video frames collected in real time to the processor, where the convolutional neural network outputs a collision probability for each frame. The processor modulates the UAV's current flight speed according to this collision probability, and when the flight speed drops to a preset minimum speed, the UAV translates along the y-axis of its fuselage, achieving autonomous obstacle avoidance. The invention further provides a UAV autonomous obstacle avoidance system based on the lightweight neural network.

Description

Unmanned aerial vehicle autonomous obstacle avoidance method and system based on a lightweight neural network
Technical Field
The invention relates to the technical field of autonomous obstacle avoidance for unmanned aerial vehicles, and in particular to an unmanned aerial vehicle autonomous obstacle avoidance method and system based on a lightweight neural network.
Background
With the progress of the times and the development of science and technology, unmanned aerial vehicles have been applied to tasks such as inspection, transportation, monitoring, security, and reconnaissance, and can complete these tasks even in complex, confined environments such as forests, tunnels, and indoor spaces.
Current methods for unmanned aerial vehicle obstacle identification and avoidance mainly combine GPS and vision sensors to estimate the vehicle's system state, infer whether obstacles exist, and plan a path. However, this approach is difficult to apply in urban environments with high-rise buildings, and its system-state estimation is error-prone when dynamic obstacles are encountered. It is therefore of great significance to improve the unmanned aerial vehicle's obstacle-recognition accuracy and reduce the computational load while rapidly issuing safe and reliable flight control commands.
Disclosure of Invention
To overcome the defect of the prior art that system-state estimation is error-prone when the unmanned aerial vehicle encounters a dynamic obstacle, the invention provides an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network, and a corresponding unmanned aerial vehicle autonomous obstacle avoidance system.
In order to solve the technical problems, the technical scheme of the invention is as follows:
an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network comprises the following steps:
s1: collecting video data simulating the flight of a camera carried by an unmanned aerial vehicle as training data;
s2: preprocessing the training data;
s3: constructing a convolutional neural network by adopting a lightweight convolutional neural network architecture (MFnet), and inputting the preprocessed training data into the convolutional neural network for training;
S4: deploying the trained convolutional neural network on the unmanned aerial vehicle's processor; the vehicle's monocular camera transmits video frame data collected in real time to the processor, where the convolutional neural network outputs a collision probability; the processor modulates the vehicle's current flight speed according to the output collision probability, and when the flight speed drops to a preset minimum speed, the vehicle translates along the y-axis of its fuselage, achieving autonomous obstacle avoidance.
In this technical scheme, collecting video sequences as training data and preprocessing them improves the learning capacity of the convolutional neural network and avoids overfitting. Combined with the lightweight architecture, each video sequence fed into the convolutional neural network yields a corresponding collision probability; a control command for the forward flight speed is computed from the collision probability at the network's output and fed back to the unmanned aerial vehicle's flight control platform to control the forward speed, thereby achieving autonomous obstacle avoidance.
Preferably, in step S1, a monocular camera is fixed on a bicycle to collect video data, thereby simulating the flight of a camera-carrying unmanned aerial vehicle. Because flying an unmanned aerial vehicle close to obstacles is dangerous, video sequences approaching obstacles cannot be collected with a vehicle-mounted camera; a monocular camera fixed on a bicycle therefore simulates such flight, enabling training data collection in variable scenes across different regions and obstacle types.
Preferably, in the step S2, the step of preprocessing the training data includes:
S21: manually labeling the video data frame by frame, where a video frame more than 1 m from the obstacle is labeled 0, and a video frame 1 m or less from the obstacle is labeled 1;
S22: adding random noise to the images of the labeled video frames, and flipping or cropping them, to obtain the preprocessed training data.
Preferably, in step S3, the convolutional neural network uses the lightweight architecture MFnet, based on MobileNetV2. The first convolution layer is a dilated (atrous) convolution layer; its output is connected to the inputs of 6 depth-separable convolution components, each comprising a channel-by-channel (depthwise) convolution layer, a point-by-point (pointwise) convolution layer, a BN (batch normalization) layer, and a ReLU activation layer connected in sequence. The output of the depth-separable convolution components is connected to a convolution layer using dropout, whose output is connected to the input of a fully connected layer; the fully connected layer uses a sigmoid activation function and outputs the collision probability corresponding to the input video frame image.
Preferably, the dropout value in the convolution layer using the dropout method is preset to 0.5.
Preferably, the channel-by-channel convolution layer is a 3 × 3 convolution kernel, and the point-by-point convolution layer is a 1 × 1 convolution kernel.
Preferably, step S3 further comprises: optimizing the parameters of each layer of the convolutional neural network using a binary cross-entropy loss function, calculated as follows:
L(y, ŷ) = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]

where ŷ denotes the collision probability output by the convolutional neural network, and y denotes the label of the video frame input to the network.
Preferably, in step S4, modulating the current flight speed of the unmanned aerial vehicle according to the output collision probability specifically comprises modulating the forward speed of the vehicle according to the output collision probability to achieve autonomous obstacle avoidance; the forward speed modulation formula of the unmanned aerial vehicle is:
v_k = (1 - α)·v_{k-1} + α·(1 - p_t)·V_max

where v_k denotes the modulated speed, p_t denotes the collision probability, V_max denotes the maximum forward speed of the unmanned aerial vehicle, and α denotes the modulation coefficient, with 0 ≤ α ≤ 1.
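As a sketch, the modulation rule above can be written as a short function. The parameter names below are illustrative, and the default values (α = 0.7, V_max = 2 m/s) are simply the ones used in the embodiments:

```python
def modulate_speed(v_prev, p_t, alpha=0.7, v_max=2.0):
    """v_k = (1 - alpha) * v_{k-1} + alpha * (1 - p_t) * V_max.

    Behaves like a low-pass filter on the commanded speed:
    alpha = 0 ignores the collision probability entirely,
    alpha = 1 reacts to it instantly.
    """
    assert 0.0 <= alpha <= 1.0 and 0.0 <= p_t <= 1.0
    return (1.0 - alpha) * v_prev + alpha * (1.0 - p_t) * v_max
```

With p_t = 0 the speed converges to V_max, and with p_t = 1 it decays geometrically toward 0, which is what lets the speed eventually fall below the preset minimum and trigger the sideways translation.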
The invention also provides an unmanned aerial vehicle autonomous obstacle avoidance system based on the lightweight neural network, applying the above obstacle avoidance method, and comprising an unmanned aerial vehicle carrying a monocular camera, a graphics card, a processor, and a flight control platform, wherein:
the unmanned aerial vehicle acquires a current video sequence through a monocular camera carried by the unmanned aerial vehicle and transmits the current video sequence to the processor;
the graphics card is used to train the convolutional neural network, after which the trained network is ported to the processor for deployment;
the processor obtains the collision probability for the current video sequence from the output of the convolutional neural network ported from the graphics card, computes the vehicle's modulated flight speed according to a preset modulation formula, and sends a modulation command to the flight control platform;
and the flight control platform adjusts the flight speed of the unmanned aerial vehicle according to the modulation command sent by the processor, achieving autonomous obstacle avoidance.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects: constructing the convolutional neural network with the lightweight architecture MFnet reduces the computational load while still accurately identifying obstacles, shortening the video frame processing time; this speeds up the unmanned aerial vehicle's flight speed modulation and effectively achieves autonomous obstacle avoidance. Moreover, because the vehicle dynamically modulates its flight speed according to the collision probability output by the convolutional neural network, the method can be applied in environments with dynamic obstacles.
Drawings
Fig. 1 is a flowchart of an autonomous obstacle avoidance method for an unmanned aerial vehicle based on a lightweight neural network in embodiment 1.
Fig. 2 is a partial training data image of example 1.
Fig. 3 is a schematic structural diagram of the convolutional neural network of embodiment 1.
Fig. 4 is a schematic structural diagram of an autonomous obstacle avoidance system of an unmanned aerial vehicle based on a lightweight neural network in embodiment 2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
This embodiment provides an unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network; fig. 1 is a flowchart of the method. The method comprises the following steps:
s1: video data simulating the flight of the unmanned aerial vehicle carrying camera is collected and used as training data.
In this embodiment, a monocular camera is mounted on a bicycle to collect video data, simulating the flight of a camera-carrying unmanned aerial vehicle and yielding training data for variable scenes across different regions and obstacle types.
Fig. 2 shows sample training data images of this embodiment.
S2: and preprocessing the training data.
In this step, the step of preprocessing the training data includes:
S21: manually labeling the video data frame by frame, where a video frame more than 1 m from an obstacle is labeled 0, indicating no obstacle ahead, and a video frame 1 m or less from the obstacle is labeled 1, indicating an obstacle ahead;
S22: adding random noise to the images of the labeled video frames, and flipping or cropping them, to obtain the preprocessed training data.
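A minimal sketch of steps S21 and S22 in plain NumPy follows. The noise level and crop fraction are illustrative assumptions; the patent does not specify their magnitudes:

```python
import numpy as np

def label_frame(distance_m):
    # S21: frames more than 1 m from the obstacle -> 0, else -> 1
    return 0 if distance_m > 1.0 else 1

def augment_frame(frame, crop_frac=0.9, noise_std=5.0, rng=None):
    """S22 on one labeled video frame: random noise, a random
    horizontal flip, and a random crop (sizes are assumptions)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = frame.astype(np.float32)

    # Add random noise, keeping pixel values in the valid range.
    out = np.clip(out + rng.normal(0.0, noise_std, out.shape), 0.0, 255.0)

    # Random horizontal flip (a flip does not change the collision label).
    if rng.random() < 0.5:
        out = out[:, ::-1]

    # Random crop to crop_frac of the original height and width.
    h, w = out.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    return out[y0:y0 + ch, x0:x0 + cw]
```

In practice a training pipeline would resize the crops back to the network's input resolution; that step is omitted here to keep the sketch dependency-free.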
S3: and constructing a convolutional neural network by adopting a lightweight convolutional neural network architecture (MFnet), and inputting the preprocessed training data into the convolutional neural network for training.
In this step, the constructed convolutional neural network uses the lightweight architecture MFnet, based on MobileNetV2. The first convolution layer is a dilated (atrous) convolution layer, avoiding 5 × 5 convolution kernels; its output is connected to the inputs of 6 depth-separable convolution components, each comprising a channel-by-channel (depthwise) convolution layer, a point-by-point (pointwise) convolution layer, a BN (batch normalization) layer, and a ReLU activation layer connected in sequence. The output of the depth-separable convolution components is connected to a convolution layer using dropout, whose output is connected to the input of a fully connected layer; the fully connected layer uses a sigmoid activation function and outputs the collision probability corresponding to the input video frame image.
Fig. 3 is a schematic structural diagram of the convolutional neural network of the present embodiment.
In this embodiment, the dropout value in the convolution layer using dropout is preset to 0.5; the channel-by-channel (depthwise) convolution uses 3 × 3 kernels, and the point-by-point (pointwise) convolution uses 1 × 1 kernels.
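The parameter savings that make this architecture lightweight can be sketched by counting weights: a depthwise 3 × 3 convolution followed by a pointwise 1 × 1 convolution replaces a standard 3 × 3 convolution (bias terms omitted for simplicity; the channel counts below are illustrative, not taken from the patent):

```python
def standard_conv_params(c_in, c_out, k=3):
    # A standard k x k convolution mixes all input channels
    # for every output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k=3):
    # Channel-by-channel (depthwise) k x k convolution:
    # one k x k filter per input channel.
    depthwise = k * k * c_in
    # Point-by-point (1 x 1) convolution: mixes channels.
    pointwise = c_in * c_out
    return depthwise + pointwise

# e.g. 32 -> 64 channels with 3 x 3 kernels:
# standard conv needs 18432 weights, the separable pair only 2336,
# roughly an 8x reduction, which is why such blocks suit onboard inference.
```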
The method further comprises a convolutional neural network optimization step: the parameters of each layer of the convolutional neural network are optimized using a binary cross-entropy loss function, calculated as follows:
L(y, ŷ) = -[y·log(ŷ) + (1 - y)·log(1 - ŷ)]

where ŷ denotes the collision probability output by the convolutional neural network, and y denotes the label of the video frame input to the network.
The convolutional neural network in this embodiment is trained with a stochastic gradient descent (SGD) optimizer, with the learning rate set to 0.001, batch_size 16, and 50 epochs.
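To make the loss and optimizer settings concrete, the following stand-in trains a single sigmoid output with plain SGD at the stated learning rate of 0.001 on the binary cross-entropy loss. A toy linear model replaces the CNN here, and the data and dimensions are invented for illustration:

```python
import numpy as np

def bce_loss(y, y_hat, eps=1e-7):
    # Binary cross-entropy, as used to optimize the network's layer parameters.
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return float(-(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat)).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))        # one batch of 16 samples (batch_size = 16)
y = (X[:, 0] > 0).astype(float)     # toy 0/1 "collision" labels
w = np.zeros(8)                     # weights of the toy linear head

losses = []
for epoch in range(50):             # 50 epochs, as in the embodiment
    y_hat = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid output
    losses.append(bce_loss(y, y_hat))
    grad = X.T @ (y_hat - y) / len(y)        # gradient of BCE w.r.t. w
    w -= 0.001 * grad                        # SGD step, learning rate 0.001
```

The loss decreases over the epochs; in the patent this update is applied to every layer of the CNN rather than to a single linear head.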
S4: deploying the trained convolutional neural network on the unmanned aerial vehicle's processor; the vehicle's monocular camera transmits video frame data collected in real time to the processor, where the convolutional neural network outputs a collision probability; the processor modulates the vehicle's current flight speed according to the output collision probability, and when the flight speed drops to a preset minimum speed, the vehicle translates along the y-axis of its fuselage, achieving autonomous obstacle avoidance.
In this step, modulating the current flight speed of the unmanned aerial vehicle according to the output collision probability specifically comprises modulating the forward speed of the vehicle according to the output collision probability to achieve autonomous obstacle avoidance; the forward speed modulation formula of the unmanned aerial vehicle is:
v_k = (1 - α)·v_{k-1} + α·(1 - p_t)·V_max

where v_k denotes the modulated speed, p_t denotes the collision probability, V_max denotes the maximum forward speed of the unmanned aerial vehicle, and α denotes the modulation coefficient, with 0 ≤ α ≤ 1.
In this embodiment, the maximum forward speed V_max of the unmanned aerial vehicle is set to 2 m/s, the flight altitude is controlled at about 2 m, the modulation coefficient α is set to 0.7, and the minimum speed V_min is set to 0.01 m/s.
In a specific implementation, when the unmanned aerial vehicle encounters an obstacle, the video frames currently collected by the onboard monocular camera are processed by the trained convolutional neural network, which outputs a collision probability p_t; the corresponding modulated speed v_k is then obtained from p_t and the speed modulation formula. The closer the vehicle gets to the obstacle, the smaller the modulated speed v_k becomes. When v_k drops to the preset V_min of 0.01 m/s, the vehicle translates along the y-axis of its fuselage; once it has translated to a position where no obstacle lies in front of the monocular camera, the collision probability p_t output by the network decreases, the modulated speed v_k increases, and the vehicle continues to fly forward.
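The per-frame decision logic described above can be sketched as follows. The function and command names are illustrative; in the real system the command would go to the flight control platform:

```python
def avoidance_step(v_prev, p_t, alpha=0.7, v_max=2.0, v_min=0.01):
    """One control step: modulate the forward speed from the collision
    probability p_t; if the speed has fallen to the preset minimum,
    command a sideways translation along the fuselage y-axis instead."""
    # v_k = (1 - alpha) * v_{k-1} + alpha * (1 - p_t) * V_max
    v_k = (1.0 - alpha) * v_prev + alpha * (1.0 - p_t) * v_max
    if v_k <= v_min:
        return v_k, "translate_y"   # sidestep until the view ahead clears
    return v_k, "forward"
```

An open path (p_t near 0) restores forward flight; a persistent obstacle (p_t near 1) drives the speed down over successive frames until the translate command fires.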
Example 2
This embodiment provides an unmanned aerial vehicle autonomous obstacle avoidance system based on a lightweight neural network; fig. 4 is a schematic structural diagram of the system.
The unmanned aerial vehicle autonomous obstacle avoidance system based on the lightweight neural network provided by this embodiment comprises an unmanned aerial vehicle 1 carrying a monocular camera 2, a graphics card 3, a processor 4, and a flight control platform 5, wherein:
the unmanned aerial vehicle 1 acquires a current video sequence through a monocular camera 2 carried by the unmanned aerial vehicle and transmits the current video sequence to a processor 4;
the graphics card 3 is used to train the convolutional neural network, after which the trained network is ported to processor 4 for deployment;
the processor 4 obtains the collision probability for the current video sequence from the output of the convolutional neural network ported from graphics card 3, computes the modulated flight speed of unmanned aerial vehicle 1 according to a preset modulation formula, and sends a modulation command to flight control platform 5;
the flight control platform 5 adjusts the flight speed of the unmanned aerial vehicle 1 according to the modulation command sent from the processor 4 to realize autonomous obstacle avoidance.
In this embodiment, an RTX 2080 Ti graphics card 3 is used for training, and the evaluation metrics are accuracy and F1-score, where:
F1 = (2 × precision × recall) / (precision + recall)

where precision and recall denote the classification precision and recall, respectively.
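For reference, the F1 metric above computed from confusion-matrix counts (a minimal sketch; the count-based signature is an illustrative choice):

```python
def f1_score(tp, fp, fn):
    # precision: fraction of predicted collisions that were real
    precision = tp / (tp + fp)
    # recall: fraction of real collisions that were predicted
    recall = tp / (tp + fn)
    # F1 = (2 * precision * recall) / (precision + recall)
    return 2.0 * precision * recall / (precision + recall)
```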
The trained convolutional neural network is ported to the Nvidia Jetson TX2 mobile development platform on unmanned aerial vehicle 1 for inference; during inference, the output obtained through MFnet is passed to the flight-speed control component, which controls the forward speed of unmanned aerial vehicle 1.
To reduce the volume and payload of unmanned aerial vehicle 1, processor 4 in this embodiment is an Nvidia Jetson TX2; the TX2 core module plus carrier board weighs less than 300 g, effectively reducing the vehicle's payload. The TX2 contains ARM Cortex-A57 and Nvidia Denver2 processing cores, and its 256-CUDA-core Pascal architecture design meets the hardware requirements of a mobile development platform.
In this embodiment, the maximum forward speed V_max of unmanned aerial vehicle 1 is set to 2 m/s, the flight altitude of unmanned aerial vehicle 1 is controlled at about 2 m, the modulation coefficient α is set to 0.7, and V_min is set to 0.01 m/s.
In a specific implementation, when unmanned aerial vehicle 1 encounters an obstacle, the monocular camera 2 mounted on it transmits the currently collected video frames to processor 4 for processing. The convolutional neural network trained on graphics card 3 is preset in processor 4 and outputs the current collision probability p_t of unmanned aerial vehicle 1; processor 4 obtains the corresponding modulated speed v_k from p_t and the speed modulation formula, and sends it to flight control platform 5 to control the flight speed of unmanned aerial vehicle 1.
The closer unmanned aerial vehicle 1 gets to the obstacle, the smaller the modulated speed v_k becomes; when v_k drops to the preset minimum speed V_min of unmanned aerial vehicle 1, the vehicle translates along the y-axis of its fuselage. Once unmanned aerial vehicle 1 has translated to a position where no obstacle lies in front of the monocular camera, the collision probability p_t output by the convolutional neural network decreases, the modulated speed v_k increases, and unmanned aerial vehicle 1 continues to fly forward, thereby achieving the autonomous obstacle avoidance function.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (9)

1. An unmanned aerial vehicle autonomous obstacle avoidance method based on a lightweight neural network is characterized by comprising the following steps:
s1: collecting video data simulating the flight of a camera carried by an unmanned aerial vehicle as training data;
s2: preprocessing the training data;
s3: constructing a convolutional neural network by adopting a lightweight convolutional neural network architecture (MFnet), and inputting the preprocessed training data into the convolutional neural network for training;
S4: deploying the trained convolutional neural network on the unmanned aerial vehicle's processor; the vehicle's monocular camera transmits video frame data collected in real time to the processor, where the convolutional neural network outputs a collision probability; the processor modulates the vehicle's current flight speed according to the output collision probability, and when the flight speed drops to a preset minimum speed, the vehicle translates along the y-axis of its fuselage, achieving autonomous obstacle avoidance.
2. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 1, characterized in that in step S1, a monocular camera is fixed on a bicycle to collect video data, thereby collecting video data that simulates the flight of a camera-carrying unmanned aerial vehicle.
3. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 1, characterized in that: in the step S2, the step of preprocessing the training data includes:
S21: manually labeling the video data frame by frame, where a video frame more than 1 m from the obstacle is labeled 0, and a video frame 1 m or less from the obstacle is labeled 1;
S22: adding random noise to the images of the labeled video frames, and flipping or cropping them, to obtain the preprocessed training data.
4. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 1, characterized in that in step S3, the convolutional neural network uses the lightweight architecture MFnet, based on MobileNetV2, wherein the first convolution layer is a dilated (atrous) convolution layer; its output is connected to the inputs of 6 depth-separable convolution components, each comprising a channel-by-channel (depthwise) convolution layer, a point-by-point (pointwise) convolution layer, a BN (batch normalization) layer, and a ReLU activation layer connected in sequence; the output of the depth-separable convolution components is connected to a convolution layer using dropout, whose output is connected to the input of a fully connected layer; the fully connected layer uses a sigmoid activation function and outputs the collision probability corresponding to the input video frame image.
5. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 4, wherein: the dropout rate in the convolutional layer applying dropout is preset to 0.5.
6. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 4, wherein: the channel-by-channel (depthwise) convolutional layer uses a 3 × 3 convolution kernel, and the point-by-point (pointwise) convolutional layer uses a 1 × 1 convolution kernel.
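The depthwise separable structure of claims 4 and 6 (a 3 × 3 depthwise convolution followed by a 1 × 1 pointwise convolution) is what keeps the network lightweight. A quick arithmetic check, with channel counts chosen only for illustration, shows the parameter saving over a standard convolution:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weight count of a k x k depthwise convolution (one filter per input
    channel) followed by a 1 x 1 pointwise convolution, as in claims 4 and 6."""
    return k * k * c_in + c_in * c_out

# Example channel counts (32 -> 64); these are illustrative, not from the patent.
std = conv_params(3, 32, 64)                  # 3*3*32*64 = 18432
sep = depthwise_separable_params(3, 32, 64)   # 3*3*32 + 32*64 = 2336
print(std, sep, round(std / sep, 1))  # roughly an 8x reduction here
```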
7. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 3, characterized in that: the step S3 further includes optimizing the parameters of each layer of the convolutional neural network using a binary cross-entropy loss function, calculated as:

L(y, ŷ) = −[y·log(ŷ) + (1 − y)·log(1 − ŷ)]

where ŷ denotes the collision probability output by the convolutional neural network, and y denotes the label corresponding to the video frame input to the convolutional neural network.
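The binary cross-entropy loss named in claim 7 can be computed per frame as below; the epsilon clamp is a common numerical safeguard added here for the sketch, not part of the claim.

```python
import math

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """Binary cross-entropy for one frame: y is the 0/1 label from claim 3,
    y_hat the collision probability output by the network."""
    y_hat = min(max(y_hat, eps), 1.0 - eps)  # guard against log(0)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

# A confident correct prediction incurs low loss; a confident wrong one, high loss.
print(round(binary_cross_entropy(1, 0.9), 4))  # 0.1054
print(round(binary_cross_entropy(0, 0.9), 4))  # 2.3026
```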
8. The unmanned aerial vehicle autonomous obstacle avoidance method according to claim 1, characterized in that: in the step S4, the processor modulates the current flight speed of the unmanned aerial vehicle according to the output collision probability; specifically, the forward speed of the unmanned aerial vehicle is modulated according to the output collision probability to realize autonomous obstacle avoidance, using the forward speed modulation formula:

v_k = (1 − α)·v_{k−1} + α·(1 − p_t)·V_max

where v_k denotes the modulated speed, p_t denotes the collision probability, V_max denotes the maximum forward speed of the unmanned aerial vehicle, and α denotes the modulation coefficient, with 0 ≤ α ≤ 1.
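The modulation formula of claim 8 is a one-liner; the sketch below shows its two limiting cases (α = 1 tracks the collision probability instantly, α = 0 ignores it), with example speeds chosen for illustration only.

```python
def modulate_speed(v_prev, p_t, v_max, alpha):
    """Forward-speed modulation from claim 8:
    v_k = (1 - alpha) * v_{k-1} + alpha * (1 - p_t) * V_max."""
    assert 0.0 <= alpha <= 1.0 and 0.0 <= p_t <= 1.0
    return (1.0 - alpha) * v_prev + alpha * (1.0 - p_t) * v_max

# alpha = 1: speed depends only on the collision probability (full stop at p_t = 1).
print(modulate_speed(3.0, 1.0, 5.0, 1.0))  # 0.0
# alpha = 0: the previous speed is kept regardless of p_t.
print(modulate_speed(3.0, 1.0, 5.0, 0.0))  # 3.0
# Intermediate alpha blends the previous speed with the probability-scaled maximum.
print(modulate_speed(3.0, 0.5, 5.0, 0.5))  # 0.5*3 + 0.5*0.5*5 = 2.75
```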
9. An unmanned aerial vehicle autonomous obstacle avoidance system based on a lightweight neural network, characterized by comprising an unmanned aerial vehicle carrying a monocular camera, a graphics card, a processor, and a flight control platform, wherein:
the unmanned aerial vehicle acquires the current video sequence through its onboard monocular camera and transmits it to the processor;
the graphics card is used to train the convolutional neural network, after which the trained network is transplanted to the processor for deployment;
the processor obtains the collision probability corresponding to the current video sequence from the output of the transplanted convolutional neural network, computes the modulated flight speed of the unmanned aerial vehicle according to a preset modulation formula, and sends a modulation command to the flight control platform;
and the flight control platform adjusts the flight speed of the unmanned aerial vehicle according to the modulation command sent by the processor, thereby realizing autonomous obstacle avoidance.
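The claimed system can be pictured as a simple per-frame loop on the processor: infer a collision probability, apply the claim 8 modulation, and forward the command to flight control. In the sketch below, `predict` and `send_speed` are hypothetical stand-ins for the transplanted network and the flight-control interface; nothing about their real signatures is taken from the patent.

```python
def obstacle_avoidance_loop(frames, predict, send_speed, v_max=5.0, alpha=0.7):
    """Illustrative processor loop: each frame goes through the (transplanted)
    convolutional network, the collision probability modulates the forward
    speed (claim 8), and the command goes to the flight control platform."""
    v = v_max
    speeds = []
    for frame in frames:
        p_t = predict(frame)  # collision probability in [0, 1]
        v = (1.0 - alpha) * v + alpha * (1.0 - p_t) * v_max
        send_speed(v)         # modulation command to flight control
        speeds.append(v)
    return speeds

# Stub usage: as an obstacle approaches, probabilities rise and speed decays.
probs = iter([0.0, 0.3, 0.8, 1.0])
speeds = obstacle_avoidance_loop(
    frames=[None] * 4,
    predict=lambda frame: next(probs),
    send_speed=lambda v: None,
)
print([round(s, 2) for s in speeds])
```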
CN201911214854.9A 2019-12-02 2019-12-02 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network Active CN110908399B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911214854.9A CN110908399B (en) 2019-12-02 2019-12-02 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911214854.9A CN110908399B (en) 2019-12-02 2019-12-02 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network

Publications (2)

Publication Number Publication Date
CN110908399A true CN110908399A (en) 2020-03-24
CN110908399B CN110908399B (en) 2023-05-12

Family

ID=69821638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911214854.9A Active CN110908399B (en) 2019-12-02 2019-12-02 Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network

Country Status (1)

Country Link
CN (1) CN110908399B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN111880558A (en) * 2020-07-06 2020-11-03 广东技术师范大学 Plant protection unmanned aerial vehicle obstacle avoidance spraying method and device, computer equipment and storage medium
CN112364774A (en) * 2020-11-12 2021-02-12 天津大学 Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network
CN112666975A (en) * 2020-12-18 2021-04-16 中山大学 Unmanned aerial vehicle safety trajectory tracking method based on predictive control and barrier function
CN113419555A (en) * 2021-05-20 2021-09-21 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113485392A (en) * 2021-06-17 2021-10-08 广东工业大学 Virtual reality interaction method based on digital twins
CN114661061A (en) * 2022-02-14 2022-06-24 天津大学 GPS-free micro unmanned aerial vehicle flight control method based on visual indoor environment
CN117475358A (en) * 2023-12-27 2024-01-30 广东南方电信规划咨询设计院有限公司 Collision prediction method and device based on unmanned aerial vehicle vision

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106155082A (en) * 2016-07-05 2016-11-23 北京航空航天大学 A kind of unmanned plane bionic intelligence barrier-avoiding method based on light stream
GB201703174D0 (en) * 2017-02-28 2017-04-12 Russell Iain Matthew Unmanned aerial vehicles
US20180253980A1 (en) * 2017-03-03 2018-09-06 Farrokh Mohamadi Drone Terrain Surveillance with Camera and Radar Sensor Fusion for Collision Avoidance
CN109784298A (en) * 2019-01-28 2019-05-21 南京航空航天大学 A kind of outdoor on-fixed scene weather recognition methods based on deep learning
CN109960278A (en) * 2019-04-09 2019-07-02 岭南师范学院 A kind of bionical obstruction-avoiding control system of unmanned plane based on LGMD and method
CN110298397A (en) * 2019-06-25 2019-10-01 东北大学 The multi-tag classification method of heating metal image based on compression convolutional neural networks
RU2703797C1 (en) * 2019-02-05 2019-10-22 Общество с ограниченной ответственностью "Гарант" (ООО "Гарант") Method and system for transmitting media information from unmanned aerial vehicles to a data collection point on a low-directivity optical channel with quantum reception of a media stream
CN110456805A (en) * 2019-06-24 2019-11-15 深圳慈航无人智能***技术有限公司 A kind of UAV Intelligent tracking flight system and method


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111880558A (en) * 2020-07-06 2020-11-03 广东技术师范大学 Plant protection unmanned aerial vehicle obstacle avoidance spraying method and device, computer equipment and storage medium
CN111880558B (en) * 2020-07-06 2021-05-11 广东技术师范大学 Plant protection unmanned aerial vehicle obstacle avoidance spraying method and device, computer equipment and storage medium
CN111831010A (en) * 2020-07-15 2020-10-27 武汉大学 Unmanned aerial vehicle obstacle avoidance flight method based on digital space slice
CN112364774A (en) * 2020-11-12 2021-02-12 天津大学 Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network
CN112666975B (en) * 2020-12-18 2022-03-29 中山大学 Unmanned aerial vehicle safety trajectory tracking method based on predictive control and barrier function
CN112666975A (en) * 2020-12-18 2021-04-16 中山大学 Unmanned aerial vehicle safety trajectory tracking method based on predictive control and barrier function
CN113419555B (en) * 2021-05-20 2022-07-19 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113419555A (en) * 2021-05-20 2021-09-21 北京航空航天大学 Monocular vision-based low-power-consumption autonomous obstacle avoidance method and system for unmanned aerial vehicle
CN113485392A (en) * 2021-06-17 2021-10-08 广东工业大学 Virtual reality interaction method based on digital twins
CN114661061A (en) * 2022-02-14 2022-06-24 天津大学 GPS-free micro unmanned aerial vehicle flight control method based on visual indoor environment
CN114661061B (en) * 2022-02-14 2024-05-17 天津大学 GPS-free visual indoor environment-based miniature unmanned aerial vehicle flight control method
CN117475358A (en) * 2023-12-27 2024-01-30 广东南方电信规划咨询设计院有限公司 Collision prediction method and device based on unmanned aerial vehicle vision
CN117475358B (en) * 2023-12-27 2024-04-23 广东南方电信规划咨询设计院有限公司 Collision prediction method and device based on unmanned aerial vehicle vision

Also Published As

Publication number Publication date
CN110908399B (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN110908399B (en) Unmanned aerial vehicle autonomous obstacle avoidance method and system based on lightweight neural network
CN109063532B (en) Unmanned aerial vehicle-based method for searching field offline personnel
CN108563236B (en) Target tracking method of nano unmanned aerial vehicle based on concentric circle characteristics
CN113552867B (en) Planning method for motion trail and wheeled mobile device
CN113192381B (en) Hybrid scene-based driving simulation method, system, equipment and storage medium
CN112364774A (en) Unmanned vehicle brain autonomous obstacle avoidance method and system based on impulse neural network
CN106989728A (en) A kind of building ground mapping system based on unmanned plane
CN113674355A (en) Target identification and positioning method based on camera and laser radar
CN116261649A (en) Vehicle driving intention prediction method, device, terminal and storage medium
Dong et al. Real-time survivor detection in UAV thermal imagery based on deep learning
Liu et al. Understanding artificial intelligence: Fundamentals and applications
Xu et al. Development of power transmission line detection technology based on unmanned aerial vehicle image vision
Xiang et al. Autonomous eVTOL: A summary of researches and challenges
CN116486290B (en) Unmanned aerial vehicle monitoring and tracking method and device, electronic equipment and storage medium
CN112731921A (en) Military path planning support system based on parallel simulation
EP4180767A1 (en) Route planning for a ground vehicle through unfamiliar terrain
Doornbos et al. Drone Technologies: A Tertiary Systematic Literature Review on a Decade of Improvements
CN116243725A (en) Substation unmanned aerial vehicle inspection method and system based on visual navigation
Qi et al. Detection and tracking of a moving target for UAV based on machine vision
CN112241180B (en) Visual processing method for landing guidance of unmanned aerial vehicle mobile platform
CN114663818A (en) Airport operation core area monitoring and early warning system and method based on vision self-supervision learning
Zaier et al. Vision-based UAV tracking using deep reinforcement learning with simulated data
Li et al. UAV System integration of real-time sensing and flight task control for autonomous building inspection task
Naso et al. Autonomous flight insurance method of unmanned aerial vehicles Parot Mambo using semantic segmentation data
Beeharry et al. Drone-Based Weed Detection Architectures Using Deep Learning Algorithms and Real-Time Analytics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Liao Jianwen; Cai Qianqian; Meng Wei; Lu Renquan
Inventor before: Liao Jianwen; Cai Qianqian; Meng Wei; Lu Renquan; Fu Minyue
GR01 Patent grant