CN114756037A - Unmanned aerial vehicle system based on neural network image recognition and control method - Google Patents

Unmanned aerial vehicle system based on neural network image recognition and control method

Info

Publication number
CN114756037A
CN114756037A (application CN202210274826.1A; granted publication CN114756037B)
Authority
CN
China
Prior art keywords
flight
training
module
mask
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210274826.1A
Other languages
Chinese (zh)
Other versions
CN114756037B (en)
Inventor
赵杰夫 (Zhao Jiefu)
张朋艺 (Zhang Pengyi)
王汀 (Wang Ting)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Huixing Photoelectric Technology Co ltd
Original Assignee
Guangdong Huixing Photoelectric Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Huixing Photoelectric Technology Co ltd filed Critical Guangdong Huixing Photoelectric Technology Co ltd
Priority to CN202210274826.1A priority Critical patent/CN114756037B/en
Publication of CN114756037A publication Critical patent/CN114756037A/en
Application granted granted Critical
Publication of CN114756037B publication Critical patent/CN114756037B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/08Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an unmanned aerial vehicle system based on neural network image recognition and a control method. The system comprises a camera module, an image processing module, a flight module and a processor. The camera module is used for shooting a target picture; the image processing module carries out image recognition processing on the target picture shot by the camera module, based on a mask-assisted R-CNN training protocol obtained by improving a convolutional neural network (CNN), and picture recognition is performed through the mask-assisted R-CNN training protocol; the processor receives the image processed by the image processing module, builds and analyzes a three-dimensional model, and obtains the flight state of the flight module so as to adjust the flight mode of the flight module.

Description

Unmanned aerial vehicle system based on neural network image recognition and control method
Technical Field
The invention belongs to the technical field of neural networks, and particularly relates to an unmanned aerial vehicle system based on neural network image recognition and a control method.
Background
In the prior art, many technical schemes perform picture recognition through convolutional neural networks; however, achieving better picture recognition and discrimination has always been a difficult point of image processing. Meanwhile, unmanned aerial vehicle photography is increasingly used by the public. Improving an unmanned aerial vehicle's shooting of complex environments, analyzing the captured images, and landing the unmanned aerial vehicle in such complex environments is a difficult problem of unmanned aerial vehicle control.
Disclosure of Invention
The invention discloses an unmanned aerial vehicle system based on neural network image recognition, comprising: a camera module, an image processing module, a flight module and a processor. The camera module is used for shooting a target picture; the image processing module carries out image recognition processing on the target picture shot by the camera module, based on a mask-assisted R-CNN training protocol obtained by improving a convolutional neural network (CNN), and picture recognition is performed through the mask-assisted R-CNN training protocol; the processor receives the image processed by the image processing module, builds and analyzes a three-dimensional model, and obtains the flight state of the flight module so as to adjust the flight mode of the flight module.
In the unmanned aerial vehicle system based on neural network image recognition, the image processing module comprises an image receiving unit and a model training unit. The image receiving unit receives first original image information shot by the camera module and transmits it to the model training unit, which segments it to determine a first target object and sends the first target object to the processor. In the same way, the image receiving unit receives second original image information and third original image information shot by the camera module, and the model training unit segments them to determine a second target object and a third target object, which are sent to the processor. The processor establishes a three-dimensional model according to the first target object, the second target object and the third target object, wherein the first, second and third original image information are pictures taken from three different angles.
In the unmanned aerial vehicle system based on neural network image recognition, the model training unit obtains a mask-assisted R-CNN training protocol through training. The protocol is realized by adding a mask head to a one-stage or two-stage target detector to achieve target segmentation and improve the accuracy of detection and classification, and specifically comprises the following steps:
step S1), training a basic mask R-CNN model for the segmentation task;
step S2), enhancing the training set of the segmentation task by using soft pixel-level labels;
step S3), training a mask-assisted R-CNN model for the detection and segmentation tasks, wherein step S2) is itself repeated multiple times.
In the unmanned aerial vehicle system based on neural network image recognition, the step S1) specifically comprises the following steps:
step S11), training the segmentation task set;
step S12), obtaining a first mask R-CNN model;
step S13), obtaining a first detection task training set;
step S14), obtaining a first label training sample through soft pixels.
The step S2) specifically comprises the following steps:
step S21), obtaining an enhanced training set of the first segmentation task;
step S22), obtaining a second mask R-CNN model;
step S23), obtaining a second detection task training set;
step S24), obtaining a second label training sample through soft pixels.
The step S3) specifically comprises the following steps:
step S31), obtaining an enhanced training set of the second segmentation task;
step S32), obtaining the mask-assisted R-CNN model;
step S33), obtaining an overall training result;
step S34), obtaining a detection task and a segmentation task respectively from the overall training result.
The step S21) receives the output of the step S14), and the step S31) receives the output of the step S24); the second mask R-CNN model of step S22) is adjusted using the first parameters of the first mask R-CNN model of step S12), and the mask-assisted R-CNN model of step S32) is adjusted using the second parameters of the second mask R-CNN model of step S22).
In the unmanned aerial vehicle system based on neural network image recognition, the flight module comprises: four flight rotors, four horizontal flight rotor support rods, a vertical flight rotor support rod and an instruction receiving unit. The four horizontal flight rotor support rods can extend and retract horizontally, and a groove for a flight rotor to slide in is arranged in the upper side of each. The vertical flight rotor support rod is arranged at the intersection point of the four horizontal flight rotor support rods and is provided with four buckles in the vertical direction, so that the four flight rotors can be clamped while rotating in the vertical direction. The lower side of each flight rotor is provided with an I-shaped groove matched with the support rod groove; the protrusion at the bottom of the I-shaped groove is embedded into the support rod groove, so that the flight rotor can slide along the groove onto the vertical flight rotor support rod. The inner side of the I-shaped groove has a through hollow, so that the outside forms an I-shaped structure while the inside lets the vertical flight rotor support rod pass through. When a flight rotor slides onto the vertical flight rotor support rod, the vertical flight rotor support rod extends out partly and clamps the flight rotor to the inner side of the I-shaped structure through a buckle; after the vertical flight rotor support rod extends out partly, it synchronously drives the horizontal flight rotor support rods to rise partly, so that the other flight rotors can conveniently slide, via the horizontal flight rotor support rods, to above the preceding flight rotor.
In the unmanned aerial vehicle system based on neural network image recognition, the flight module adjusts the flight mode in real time according to the image processing module's requirement for shooting the object. After the processor sends a shooting instruction, the flight module adjusts to a first flight height and a first angle and, after the adjustment is completed, sends a first determination instruction to the image processing module. After receiving the first determination instruction, the image processing module shoots first original image information, obtains a first target object through image recognition processing, and sends it to the processor. After obtaining the first target object, the processor analyzes it to obtain a second flight height and a second angle for shooting a second target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the second flight height and the second angle, a second original image is shot, and the second target object is obtained after image recognition processing and sent to the processor. After obtaining the second target object, the processor analyzes it to obtain a third flight height and a third angle for shooting a third target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the third flight height and the third angle, shoots a third original image, obtains the third target object after image recognition processing, and sends it to the processor.
The invention also discloses a control method: a mask-assisted R-CNN training protocol is obtained by improving a convolutional neural network (CNN), and picture recognition is carried out through the mask-assisted R-CNN training protocol; the flight rotors of the unmanned aerial vehicle are controlled through angle-recognition images to adjust the flight posture, obtain multi-angle images, and recognize the target object.
In the control method, the mask-assisted R-CNN training protocol comprises the following steps:
step S1), training a basic mask R-CNN model for the segmentation task;
step S2), enhancing the training set of the segmentation task by using soft pixel-level labels;
step S3), training a mask-assisted R-CNN model for the detection and segmentation tasks, wherein step S2) is repeated multiple times.
The step S1) specifically comprises the following steps:
step S11), training the segmentation task set;
step S12), obtaining a first mask R-CNN model;
step S13), obtaining a first detection task training set;
step S14), obtaining a first label training sample through soft pixels.
The step S2) specifically comprises the following steps:
step S21), obtaining an enhanced training set of the first segmentation task;
step S22), obtaining a second mask R-CNN model;
step S23), obtaining a second detection task training set;
step S24), obtaining a second label training sample through soft pixels.
The step S3) specifically comprises the following steps:
step S31), obtaining an enhanced training set of the second segmentation task;
step S32), obtaining the mask-assisted R-CNN model;
step S33), obtaining an overall training result;
step S34), obtaining a detection task and a segmentation task respectively from the overall training result.
The step S21) receives the output of the step S14), and the step S31) receives the output of the step S24); the second mask R-CNN model of step S22) is adjusted using the first parameters of the first mask R-CNN model of step S12), and the mask-assisted R-CNN model of step S32) is adjusted using the second parameters of the second mask R-CNN model of step S22).
In the control method, the flight module comprises: four flight rotors, four horizontal flight rotor support rods, a vertical flight rotor support rod and an instruction receiving unit. The four horizontal flight rotor support rods can extend and retract horizontally, and a groove for a flight rotor to slide in is arranged in the upper side of each. The vertical flight rotor support rod is arranged at the intersection point of the four horizontal flight rotor support rods and is provided with four buckles in the vertical direction, so that the four flight rotors can be clamped while rotating in the vertical direction. The lower side of each flight rotor is provided with an I-shaped groove matched with the support rod groove; the protrusion at the bottom of the I-shaped groove is embedded into the support rod groove, so that the flight rotor can slide along the groove onto the vertical flight rotor support rod. The inner side of the I-shaped groove has a through hollow, so that the outside forms an I-shaped structure while the inside lets the vertical flight rotor support rod pass through. When a flight rotor slides onto the vertical flight rotor support rod, the vertical flight rotor support rod extends out partly and clamps the flight rotor to the inner side of the I-shaped structure through a buckle; after the vertical flight rotor support rod extends out partly, it synchronously drives the horizontal flight rotor support rods to rise partly, so that the other flight rotors can conveniently slide, via the horizontal flight rotor support rods, to above the preceding flight rotor.
According to the control method, the flight module adjusts the flight mode in real time according to the image processing module's requirement for shooting the object. After the processor sends a shooting instruction, the flight module adjusts to a first flight height and a first angle and, after the adjustment is finished, sends a first determination instruction to the image processing module. After receiving the first determination instruction, the image processing module shoots first original image information, obtains a first target object through image recognition processing, and sends it to the processor. After obtaining the first target object, the processor analyzes it to obtain a second flight height and a second angle for shooting a second target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the second flight height and the second angle, a second original image is shot, and the second target object is obtained after image recognition processing and sent to the processor. After obtaining the second target object, the processor analyzes it to obtain a third flight height and a third angle for shooting a third target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the third flight height and the third angle, shoots a third original image, obtains the third target object after image recognition processing, and sends it to the processor.
The invention provides an unmanned aerial vehicle system based on neural network image recognition and a control method thereof. The key improvements of the invention are as follows: picture recognition is carried out through the improved R-CNN, three target objects are determined after picture recognition, a three-dimensional image is established, and recognition of objects shot by the unmanned aerial vehicle is performed better through the processing control of the processor. The flight rotors are slidably arranged on the support rods; through the sliding horizontal support rods and the telescopic vertical support rod, the flight posture is adjusted in cooperation with the image identified by the image recognition module, and the flight rotors can conveniently be retracted during landing when the space at the unmanned aerial vehicle's position is insufficient, so that different flight postures of the unmanned aerial vehicle can be adjusted according to different objects.
Drawings
Fig. 1 is a schematic diagram of an unmanned aerial vehicle system based on neural network image recognition according to the present invention.
FIG. 2 is a schematic diagram of an improved training of the image recognition module of the present invention.
FIG. 3 is a functional schematic diagram of a flight module according to the present invention.
FIG. 4 is a schematic diagram of an image obtained by the image recognition module according to the present invention.
Detailed Description
The present application will now be described in further detail with reference to the drawings. It should be noted that the following detailed description is given for illustrative purposes only and is not to be construed as limiting the scope of the present application, as those skilled in the art will be able to make numerous insubstantial modifications and adaptations to the present application based on the above disclosure.
Fig. 1 is a schematic diagram of the unmanned aerial vehicle system based on neural network image recognition according to the present invention. The invention discloses an unmanned aerial vehicle system based on neural network image recognition, comprising: a camera module, an image processing module, a flight module and a processor. The camera module is used for shooting a target picture; the image processing module carries out image recognition processing on the target picture shot by the camera module, based on a mask-assisted R-CNN training protocol obtained by improving a convolutional neural network (CNN), and image recognition is performed through the mask-assisted R-CNN training protocol; the processor receives the image processed by the image processing module, establishes and analyzes a three-dimensional model, and obtains the flight state of the flight module so as to adjust the flight mode of the flight module.
In the unmanned aerial vehicle system based on neural network image recognition, the image processing module comprises an image receiving unit and a model training unit. The image receiving unit receives first original image information shot by the camera module and transmits it to the model training unit, which segments it to determine a first target object and sends the first target object to the processor. In the same way, the image receiving unit receives second original image information and third original image information shot by the camera module, and the model training unit segments them to determine a second target object and a third target object, which are sent to the processor. The processor establishes a three-dimensional model according to the first target object, the second target object and the third target object, and the first, second and third original image information are pictures taken from three different angles.
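As a rough illustration of how three detections taken from three different angles might be fused into one model, the sketch below back-projects each view's detection into a common horizontal frame and averages the results. This is a hypothetical stand-in: a real system would use camera calibration and proper triangulation, and every name here is invented.

```python
import math

def target_from_view(angle_deg, distance):
    """A detected target object expressed relative to the shooting angle."""
    return {"angle_deg": angle_deg, "distance": distance}

def fuse_views(views):
    """Average the back-projected horizontal positions of the detections.
    A simplified stand-in for the processor's three-dimensional model building."""
    xs = [v["distance"] * math.cos(math.radians(v["angle_deg"])) for v in views]
    ys = [v["distance"] * math.sin(math.radians(v["angle_deg"])) for v in views]
    n = len(views)
    return sum(xs) / n, sum(ys) / n

# Three pictures taken at three different angles around the target
views = [target_from_view(a, 10.0) for a in (0.0, 120.0, 240.0)]
x, y = fuse_views(views)  # symmetric views average out to the target's center
```

With views spaced evenly around the target, the averaged position collapses to the target's center, which is why multi-angle shooting constrains the model better than any single view.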
Fig. 2 is a schematic diagram illustrating the improved training of the image recognition module according to the present invention. In the unmanned aerial vehicle system based on neural network image recognition, the model training unit obtains a mask-assisted R-CNN training protocol through training. The protocol is realized by adding a mask head to a one-stage or two-stage target detector to achieve target segmentation and improve the accuracy of detection and classification, and specifically comprises the following steps:
step S1), training a basic mask R-CNN model for the segmentation task;
step S2), enhancing the training set of the segmentation task by using soft pixel-level labels;
step S3), training a mask-assisted R-CNN model for the detection and segmentation tasks, wherein step S2) is itself repeated multiple times.
In the unmanned aerial vehicle system based on neural network image recognition, the step S1) specifically comprises the following steps:
step S11), training the segmentation task set;
step S12), obtaining a first mask R-CNN model;
step S13), obtaining a first detection task training set;
step S14), obtaining a first label training sample through soft pixels.
The step S2) specifically comprises the following steps:
step S21), obtaining an enhanced training set of the first segmentation task;
step S22), obtaining a second mask R-CNN model;
step S23), obtaining a second detection task training set;
step S24), obtaining a second label training sample through soft pixels.
The step S3) specifically comprises the following steps:
step S31), obtaining an enhanced training set of the second segmentation task;
step S32), obtaining the mask-assisted R-CNN model;
step S33), obtaining an overall training result;
step S34), obtaining a detection task and a segmentation task respectively from the overall training result.
The step S21) receives the output of the step S14), and the step S31) receives the output of the step S24); the second mask R-CNN model of step S22) is adjusted using the first parameters of the first mask R-CNN model of step S12), and the mask-assisted R-CNN model of step S32) is adjusted using the second parameters of the second mask R-CNN model of step S22).
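The staged protocol above (steps S11) to S34)) can be sketched as a plain-Python skeleton. The functions below are illustrative stubs, not a real R-CNN implementation: they only show the data flow the steps describe, i.e. the parameter hand-off from each mask R-CNN model to the next and the repeated soft-label enhancement of the segmentation training set.

```python
# Illustrative sketch of the staged training protocol (steps S11-S34).
# All models and data structures are simplified stand-ins.

def train_mask_rcnn(seg_train_set, init_params=None):
    """Train a (stand-in) mask R-CNN on a segmentation task set.
    Returns (model_params, detection_task_train_set)."""
    params = dict(init_params or {})        # adjust from previous stage's parameters
    params["trained_on"] = len(seg_train_set)
    det_train_set = list(seg_train_set)     # S13/S23: derived detection task set
    return params, det_train_set

def soft_pixel_labels(model_params, det_train_set):
    """S14/S24: derive soft pixel-level label samples from the model."""
    return [(x, "soft_label") for x in det_train_set]

def enhance_seg_set(seg_train_set, label_samples):
    """S21/S31: enhance the segmentation training set with soft labels."""
    return seg_train_set + [sample for sample, _ in label_samples]

seg_set = ["img1", "img2", "img3"]          # S11: base segmentation task set

# Stage S1: basic mask R-CNN model
p1, det1 = train_mask_rcnn(seg_set)                   # S12, S13
labels1 = soft_pixel_labels(p1, det1)                 # S14

# Stage S2: enhanced set; second model adjusted from the first's parameters
seg_set2 = enhance_seg_set(seg_set, labels1)          # S21 (receives S14 output)
p2, det2 = train_mask_rcnn(seg_set2, init_params=p1)  # S22, S23
labels2 = soft_pixel_labels(p2, det2)                 # S24

# Stage S3: mask-assisted model adjusted from the second's parameters
seg_set3 = enhance_seg_set(seg_set2, labels2)         # S31 (receives S24 output)
p3, det3 = train_mask_rcnn(seg_set3, init_params=p2)  # S32, S33
detection_task, segmentation_task = det3, seg_set3    # S34
```

The point of the sketch is the wiring: each enhancement step doubles the segmentation set with soft-labeled samples, and each new model starts from the previous model's parameters rather than from scratch.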
Sparse training can be performed on each training model; for example, before the overall training result is obtained, sparse training is performed on the mask-assisted R-CNN model: the convolution features are normalized and trained through the following formula (1):

y = γ · (x − x̄) / √(σ² + ε) + β    (1)

where σ² is the mean square error of the input features within the mini-batch, γ is the trained scale factor, β is the trained bias, ε is a reference value for the mini-batch, x is the current actual value of the training, x̄ is the average value of the training, and y is the result value of the training. The target of sparse training is as shown in formula (2):

L = Σ_(x,y) loss(f(x, W), y) + α · Σ_γ f(γ)    (2)

where α is a penalty factor that balances the two loss terms, f(γ) = |γ| denotes the norm of γ, and loss denotes the loss term.
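As a numeric illustration of formulas (1) and (2), the sketch below applies the normalization to a small feature batch and evaluates the sparse-training objective. The batch values, losses and scale factors are invented for illustration only.

```python
import math

def bn_transform(x, mean, var, gamma, beta, eps=1e-5):
    """Formula (1): y = gamma * (x - mean) / sqrt(var + eps) + beta."""
    return gamma * (x - mean) / math.sqrt(var + eps) + beta

def sparse_objective(losses, gammas, alpha):
    """Formula (2): task loss plus an alpha-weighted L1 penalty on the
    scale factors gamma, with f(gamma) = |gamma|."""
    return sum(losses) + alpha * sum(abs(g) for g in gammas)

# A small mini-batch of feature values and its statistics
xs = [1.0, 2.0, 3.0, 4.0]
mean = sum(xs) / len(xs)                          # 2.5
var = sum((x - mean) ** 2 for x in xs) / len(xs)  # 1.25

ys = [bn_transform(x, mean, var, gamma=1.0, beta=0.0) for x in xs]
# The normalized outputs have (approximately) zero mean.

obj = sparse_objective(losses=[0.8, 0.6], gammas=[1.0, -0.2, 0.0], alpha=0.01)
```

Because the penalty grows with |γ|, minimizing the objective pushes many scale factors toward zero, which is what makes the trained model sparse.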
Fig. 3 is a functional schematic diagram of the flight module according to the present invention. In the unmanned aerial vehicle system based on neural network image recognition, the flight module comprises: four flight rotors, four horizontal flight rotor support rods, a vertical flight rotor support rod and an instruction receiving unit. The four horizontal flight rotor support rods can extend and retract horizontally, and a groove for a flight rotor to slide in is arranged in the upper side of each. The vertical flight rotor support rod is arranged at the intersection point of the four horizontal flight rotor support rods and is provided with four buckles in the vertical direction, so that the four flight rotors can be clamped while rotating in the vertical direction. The lower side of each flight rotor is provided with an I-shaped groove matched with the support rod groove; the protrusion at the bottom of the I-shaped groove is embedded into the support rod groove, so that the flight rotor can slide along the groove onto the vertical flight rotor support rod. The inner side of the I-shaped groove has a through hollow, so that the outside forms an I-shaped structure while the inside lets the vertical flight rotor support rod pass through. When a flight rotor slides onto the vertical flight rotor support rod, the vertical flight rotor support rod extends out partly and clamps the flight rotor to the inner side of the I-shaped structure through a buckle; after the vertical flight rotor support rod extends out partly, it synchronously drives the horizontal flight rotor support rods to rise partly, so that the other flight rotors can conveniently slide, via the horizontal flight rotor support rods, to above the preceding flight rotor.
Preferably, when the image recognition module recognizes that the unmanned aerial vehicle's landing site is complex and the space is insufficient to support landing with the flight rotors fully deployed, the flight rotors slide horizontally onto the vertical flight rotor support rod; the four flight rotors slide onto the vertical flight rotor support rod in sequence, so that the unmanned aerial vehicle lands safely.
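The landing sequence described above can be sketched as a simple state model: each rotor in turn slides from its horizontal support rod onto the vertical support rod, which extends one step each time to lift the stack clear for the next rotor. This is a hypothetical illustration of the sequence, not the patent's mechanism.

```python
# Hypothetical sketch of the rotor-folding sequence for a confined landing site.

def fold_rotors_for_landing(n_rotors=4):
    """Slide each rotor onto the vertical support rod in sequence.
    Returns (order in which rotors were stacked, final rod extension steps)."""
    vertical_rod_extension = 0
    stacked = []                      # rotors clamped on the vertical rod
    horizontal = list(range(n_rotors))
    while horizontal:
        rotor = horizontal.pop(0)     # next rotor slides along its groove
        stacked.append(rotor)         # buckle clamps it inside the I-shaped structure
        vertical_rod_extension += 1   # rod extends a step, lifting the stack so
                                      # the following rotor can slide in above it
    return stacked, vertical_rod_extension

stacked, extension = fold_rotors_for_landing()
```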
In the unmanned aerial vehicle system based on neural network image recognition, the flight module adjusts the flight mode in real time according to the image processing module's requirement for shooting the object. After the processor sends a shooting instruction, the flight module adjusts to a first flight height and a first angle and, after the adjustment is completed, sends a first determination instruction to the image processing module. After receiving the first determination instruction, the image processing module shoots first original image information, obtains a first target object through image recognition processing, and sends it to the processor. After obtaining the first target object, the processor analyzes it to obtain a second flight height and a second angle for shooting a second target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the second flight height and the second angle, shoots a second original image, obtains the second target object after image recognition processing, and sends it to the processor. After obtaining the second target object, the processor analyzes it to obtain a third flight height and a third angle for shooting a third target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the third flight height and the third angle, shoots a third original image, obtains the third target object after image recognition processing, and sends it to the processor.
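The shoot-analyze-adjust cycle above can be sketched as a control loop. Every function here is a hypothetical stub standing in for hardware and recognition calls, and the pose increments are invented; only the loop structure (adjust pose, confirm, shoot, recognize, re-plan) reflects the text.

```python
# Illustrative control loop for the three-angle shooting sequence.

def capture_and_recognize(height, angle):
    """Stub: shoot original image information at (height, angle) and
    return the recognized target object."""
    return {"height": height, "angle": angle}

def analyze_next_pose(target):
    """Stub for the processor's analysis: derive the next flight height
    and shooting angle from the current target object (invented increments)."""
    return target["height"] + 5.0, target["angle"] + 120.0

def shooting_sequence(first_height, first_angle, n_views=3):
    targets = []
    height, angle = first_height, first_angle
    for _ in range(n_views):
        # The flight module adjusts rotor speed and support-rod positions to
        # (height, angle), then sends the determination instruction so the
        # image processing module shoots and recognizes.
        target = capture_and_recognize(height, angle)
        targets.append(target)
        height, angle = analyze_next_pose(target)
    return targets  # the processor builds the three-dimensional model from these

views = shooting_sequence(first_height=10.0, first_angle=0.0)
```

Each recognized target feeds the planning of the next pose, which is why the flight mode is described as being adjusted "in real time" rather than from a fixed flight plan.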
A mask-assisted R-CNN training protocol is obtained by improving a convolutional neural network (CNN). Picture recognition is performed through the mask-assisted R-CNN training protocol; based on the images recognized at each angle, the flight rotors of the unmanned aerial vehicle are controlled to adjust the flight attitude, so that multi-angle images are obtained and the target object is recognized.
The control method of the mask-assisted R-CNN training protocol comprises the following steps:
step S1), training a basic mask R-CNN model for the segmentation task;
step S2), enhancing the training set of the segmentation task by using soft pixel-level labels;
step S3), training a mask-assisted R-CNN model for the detection and segmentation tasks, wherein step S2) is repeated multiple times;
the step S1) specifically includes the following steps:
step S11), training on the segmentation task set;
step S12), obtaining a first mask R-CNN model;
step S13), obtaining a first detection task training set;
step S14), obtaining first label training samples through soft pixel-level labels;
the step S2) specifically includes the following steps:
step S21), obtaining an enhanced training set of the first segmentation task;
step S22), obtaining a second mask R-CNN model;
step S23), obtaining a second detection task training set;
step S24), obtaining second label training samples through soft pixel-level labels;
the step S3) specifically includes the following steps:
step S31), obtaining an enhanced training set of the second segmentation task;
step S32), obtaining a mask-assisted R-CNN model;
step S33), obtaining an overall training result;
step S34), obtaining a detection task result and a segmentation task result respectively from the overall training result;
step S21) receives the output of step S14); step S31) receives the output of step S24); the second mask R-CNN model of step S22) is adjusted using the first parameters of the first mask R-CNN model of step S12); and the mask-assisted R-CNN model of step S32) is adjusted using the second parameters of the second mask R-CNN model of step S22).
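The staging of steps S1)–S3) can be sketched as a minimal orchestration skeleton. All functions below are hypothetical stand-ins (the publication discloses no concrete model code); only the hand-off of soft labels and model parameters between stages follows the steps above.

```python
# Sketch of the mask-assisted R-CNN training protocol (steps S1-S3).
# train_mask_rcnn and soft_pixel_labels are hypothetical stubs; only
# the staging and parameter hand-off follows the described protocol.

def train_mask_rcnn(train_set, init_params=None):
    """Stand-in for training a mask R-CNN; returns its 'parameters'."""
    params = dict(init_params or {})
    params["trained_on"] = len(train_set)
    return params

def soft_pixel_labels(model_params, detection_set):
    """Stand-in for generating soft pixel-level labels (steps S13/S23)."""
    return [{"image": img, "soft_mask": "pseudo-label"} for img in detection_set]

def run_protocol(seg_set, det_set):
    # Step S1: basic mask R-CNN model for the segmentation task.
    m1 = train_mask_rcnn(seg_set)                       # S11-S12
    soft1 = soft_pixel_labels(m1, det_set)              # S13-S14
    # Step S2: enhance the segmentation set with soft labels;
    # the second model is adjusted from the first model's parameters.
    enhanced1 = list(seg_set) + soft1                   # S21
    m2 = train_mask_rcnn(enhanced1, init_params=m1)     # S22
    soft2 = soft_pixel_labels(m2, det_set)              # S23-S24
    # Step S3: mask-assisted model for detection + segmentation,
    # adjusted from the second model's parameters.
    enhanced2 = list(seg_set) + soft2                   # S31
    m3 = train_mask_rcnn(enhanced2, init_params=m2)     # S32
    return {"detection": m3, "segmentation": m3}        # S33-S34

result = run_protocol(seg_set=["img1", "img2", "img3"], det_set=["d1", "d2"])
```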
In the control method, the flight module comprises four flight rotors, four horizontal flight rotor support rods, a vertical flight rotor support rod, and an instruction receiving unit. The four horizontal flight rotor support rods can extend and retract horizontally, and the upper side of each is provided with a groove along which the corresponding flight rotor can slide. The vertical flight rotor support rod is arranged at the intersection point of the four horizontal flight rotor support rods and is provided with four buckles in the vertical direction, which can clamp the four flight rotors while still allowing them to rotate. The lower side of each flight rotor is provided with an I-shaped slider matching the groove; the protrusion at the bottom of the I-shaped slider is embedded in the groove, so that the flight rotor can slide along the groove onto the vertical flight rotor support rod. The inner side of the I-shaped slider has a through hollow, so that the slider presents an I-shaped structure on the outside while the vertical flight rotor support rod can pass through the inside. When a flight rotor slides onto the vertical flight rotor support rod, the rod extends partly and clamps the flight rotor on the inner side of the I-shaped structure by means of a buckle. After the vertical flight rotor support rod extends partly, it drives the horizontal flight rotor support rods to rise synchronously, so that the next flight rotor can conveniently slide, via its horizontal flight rotor support rod, into position above the previous flight rotor.
The vertical flight rotor support rod comprises a lifting module for driving the vertical flight rotor support rod to rise after it receives a flight rotor, so that the rod passes through the inner structure of the I-shaped slider of the flight rotor and the buckle clamps the flight rotor.
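The sequential retraction described above can be illustrated as a small simulation: each rotor slides onto the vertical support rod and is clamped, after which the rod extends one step so the next rotor can stack above it. The numeric step size and tuple layout are invented for illustration.

```python
# Illustrative simulation of the sequential rotor retraction: each
# flight rotor slides from its horizontal support rod onto the vertical
# support rod and is clamped by a buckle; the vertical rod then extends
# one step, lifting the horizontal rods so the next rotor can slide in
# above the previous one.

def retract_rotors(num_rotors=4):
    vertical_extension = 0   # current extension of the vertical rod
    stacked = []             # (rotor id, level at which it is clamped)
    for rotor in range(1, num_rotors + 1):
        stacked.append((rotor, vertical_extension))  # slide in and clamp
        vertical_extension += 1                      # rod rises one step
    return stacked

stack = retract_rotors()  # rotors end up clamped one level apart
```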
Fig. 4 is a schematic diagram of an image recognition module according to the present invention. In the control method, the flight module adjusts the flight mode in real time according to the requirements of the image processing module for shooting the object;
after the processor sends a shooting instruction, the flight module adjusts to a first flight height and a first angle and, once the adjustment is completed, sends a first determination instruction to the image processing module; on receiving the first determination instruction, the image processing module shoots first original image information, obtains a first target object after image recognition processing, and sends the first target object to the processor;
after obtaining the first target object, the processor analyzes it to derive a second flight height and a second angle for shooting a second target object, and sends the second flight height and the second angle to the flight module;
the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the second flight height and the second angle, shoots a second original image, obtains a second target object after image recognition processing, and sends it to the processor;
after obtaining the second target object, the processor analyzes it to derive a third flight height and a third angle for shooting a third target object, and sends the third flight height and the third angle to the flight module;
the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the third flight height and the third angle, shoots a third original image, obtains the third target object after image recognition processing, and sends it to the processor.
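The three-shot workflow above repeats one pattern: adjust flight state, shoot, recognize, plan the next view. A condensed sketch, with the recognition and analysis functions as hypothetical stand-ins:

```python
# Condensed sketch of the three-shot capture loop. recognize() and
# plan_next_view() are hypothetical stand-ins; only the control flow
# (adjust -> shoot -> recognize -> plan next view) follows the text.

def recognize(raw_image):
    """Stand-in for mask-assisted R-CNN recognition of one image."""
    return {"target": raw_image["view"]}

def plan_next_view(target, shot_index):
    """Stand-in for the processor's analysis of the next height/angle."""
    return {"height": 10.0 + shot_index, "angle": 30.0 * shot_index}

def capture_sequence(first_view=None):
    view = dict(first_view or {"height": 10.0, "angle": 0.0})
    targets = []
    for shot in (1, 2, 3):
        # The flight module adjusts rotor speed and support-rod
        # positions to reach the commanded view (not modelled here).
        raw = {"view": (view["height"], view["angle"])}  # shoot
        target = recognize(raw)                          # recognize
        targets.append(target)                           # send to processor
        view = plan_next_view(target, shot)              # plan next shot
    return targets  # first, second, and third target objects

targets = capture_sequence()
```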
The invention provides an unmanned aerial vehicle system based on neural network image recognition and a control method thereof. The key improvements of the invention are as follows: picture recognition is performed through the improved R-CNN; three target objects are determined after picture recognition and a three-dimensional model is established, so that, under the processing and control of the processor, the object shot by the unmanned aerial vehicle can be better recognized. The flight rotors are slidably arranged on the support rods; through the sliding horizontal support rods and the telescopic vertical support rod, the flight attitude is adjusted to match the image recognized by the image recognition module, and the flight rotors can conveniently be retracted during descent when the landing area of the unmanned aerial vehicle is insufficient, so that different flight attitudes of the unmanned aerial vehicle can be adjusted for different objects.
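The publication does not disclose how the processor builds the three-dimensional model from the three target objects shot at different heights and angles; one conventional approach is least-squares intersection of the viewing rays from the three shooting positions. A 2-D sketch with invented coordinates:

```python
# 2-D least-squares ray intersection: each shot i contributes a camera
# position p_i and a bearing; the target x minimizes the summed squared
# distance to the rays, i.e. solves
#   sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i.
# Coordinates below are invented for illustration.
import math

def triangulate(cameras, bearings_deg):
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (px, py), ang in zip(cameras, bearings_deg):
        dx = math.cos(math.radians(ang))
        dy = math.sin(math.radians(ang))
        # Projector orthogonal to the ray direction (symmetric 2x2).
        m11, m12, m22 = 1.0 - dx * dx, -dx * dy, 1.0 - dy * dy
        a11 += m11; a12 += m12; a22 += m22
        b1 += m11 * px + m12 * py
        b2 += m12 * px + m22 * py
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three shots of a target at (5, 5) from different positions/bearings.
cams = [(0.0, 0.0), (10.0, 0.0), (5.0, -5.0)]
angs = [45.0, 135.0, 90.0]
est = triangulate(cams, angs)  # least-squares estimate, close to (5, 5)
```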

Claims (10)

1. An unmanned aerial vehicle system based on neural network image recognition, characterized by comprising: a shooting module; an image processing module for performing image recognition processing on the target picture shot by the shooting module, the image recognition being based on a mask-assisted R-CNN training protocol obtained by improving a convolutional neural network (CNN) and being performed through the mask-assisted R-CNN training protocol; and a processor which receives the image processed by the image processing module, establishes and analyzes a three-dimensional model, and obtains the flight state of the flight module so as to adjust the flight mode of the flight module.
2. The unmanned aerial vehicle system based on neural network image recognition of claim 1, wherein the image processing module comprises: an image receiving unit for receiving first original image information shot by the shooting module and transmitting it to a model training unit, which determines a first target object by segmentation and sends it to the processor; the image receiving unit likewise receives second original image information shot by the shooting module and transmits it to the model training unit, which determines a second target object by segmentation and sends it to the processor; and the image receiving unit receives third original image information shot by the shooting module and transmits it to the model training unit, which determines a third target object by segmentation and sends it to the processor; the processor establishes a three-dimensional model from the first target object, the second target object, and the third target object, the first original image information, the second original image information, and the third original image information being pictures taken at three different angles.
3. The unmanned aerial vehicle system based on neural network image recognition as claimed in claim 2, wherein the model training unit obtains a mask-assisted R-CNN training protocol through training, the mask-assisted R-CNN training protocol adding a mask head to a one-stage or two-stage object detector to achieve target segmentation and improve the accuracy of detection and classification, the protocol specifically comprising the following steps:
step S1), training a basic mask R-CNN model for the segmentation task;
step S2), enhancing the training set of the segmentation task by using soft pixel-level labels;
step S3), training a mask-assisted R-CNN model for the detection and segmentation tasks, wherein step S2) is repeated multiple times.
4. The unmanned aerial vehicle system based on neural network image recognition as claimed in claim 3, wherein the step S1) includes the following steps:
step S11), training on the segmentation task set;
step S12), obtaining a first mask R-CNN model;
step S13), obtaining a first detection task training set;
step S14), obtaining first label training samples through soft pixel-level labels.
The step S2) specifically includes the following steps:
step S21), obtaining an enhanced training set of the first segmentation task;
step S22), obtaining a second mask R-CNN model;
step S23), obtaining a second detection task training set;
step S24), obtaining second label training samples through soft pixel-level labels;
the step S3) specifically includes the following steps:
step S31), obtaining an enhanced training set of the second segmentation task;
step S32), obtaining a mask-assisted R-CNN model;
step S33), obtaining an overall training result;
step S34), obtaining a detection task result and a segmentation task result respectively from the overall training result;
step S21) receives the output of step S14); step S31) receives the output of step S24); the second mask R-CNN model of step S22) is adjusted using the first parameters of the first mask R-CNN model of step S12); and the mask-assisted R-CNN model of step S32) is adjusted using the second parameters of the second mask R-CNN model of step S22).
5. The unmanned aerial vehicle system based on neural network image recognition of claim 4, wherein the flight module comprises four flight rotors, four horizontal flight rotor support rods, a vertical flight rotor support rod, and an instruction receiving unit; the four horizontal flight rotor support rods can extend and retract horizontally, and the upper side of each is provided with a groove along which the corresponding flight rotor can slide; the vertical flight rotor support rod is arranged at the intersection point of the four horizontal flight rotor support rods and is provided with four buckles in the vertical direction, which can clamp the four flight rotors while still allowing them to rotate; the lower side of each flight rotor is provided with an I-shaped slider matching the groove, and the protrusion at the bottom of the I-shaped slider is embedded in the groove, so that the flight rotor can slide along the groove onto the vertical flight rotor support rod; the inner side of the I-shaped slider has a through hollow, so that the slider presents an I-shaped structure on the outside while the vertical flight rotor support rod can pass through the inside; when a flight rotor slides onto the vertical flight rotor support rod, the rod extends partly and clamps the flight rotor on the inner side of the I-shaped structure by means of a buckle; after the vertical flight rotor support rod extends partly, it drives the horizontal flight rotor support rods to rise synchronously, so that the next flight rotor can conveniently slide, via its horizontal flight rotor support rod, into position above the previous flight rotor.
6. The unmanned aerial vehicle system based on neural network image recognition as claimed in claim 5, wherein the flight module adjusts the flight mode in real time according to the requirements of the image processing module for shooting the object; after the processor sends a shooting instruction, the flight module adjusts to a first flight height and a first angle and, once the adjustment is completed, sends a first determination instruction to the image processing module; on receiving the first determination instruction, the image processing module shoots first original image information, obtains a first target object after image recognition processing, and sends it to the processor; after obtaining the first target object, the processor analyzes it to derive a second flight height and a second angle for shooting a second target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the second flight height and the second angle, shoots a second original image, obtains a second target object after image recognition processing, and sends it to the processor; after obtaining the second target object, the processor analyzes it to derive a third flight height and a third angle for shooting a third target object and sends them to the flight module; and the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the third flight height and the third angle, shoots a third original image, obtains the third target object after image recognition processing, and sends it to the processor.
7. A control method of an unmanned aerial vehicle system based on neural network image recognition, characterized in that a mask-assisted R-CNN training protocol is obtained based on an improved convolutional neural network (CNN); image recognition is performed through the mask-assisted R-CNN training protocol; the flight rotors of the unmanned aerial vehicle are controlled, based on the images recognized at each angle, to adjust the flight attitude; and multi-angle images are obtained to recognize a target object.
8. The control method of claim 7, wherein the mask-assisted R-CNN training protocol comprises the following steps:
step S1), training a basic mask R-CNN model for the segmentation task;
step S2), enhancing the training set of the segmentation task by using soft pixel-level labels;
step S3), training a mask-assisted R-CNN model for the detection and segmentation tasks, wherein step S2) is repeated multiple times;
the step S1) specifically includes the following steps:
step S11), training on the segmentation task set;
step S12), obtaining a first mask R-CNN model;
step S13), obtaining a first detection task training set;
step S14), obtaining first label training samples through soft pixel-level labels;
the step S2) specifically includes the following steps:
step S21), obtaining an enhanced training set of the first segmentation task;
step S22), obtaining a second mask R-CNN model;
step S23), obtaining a second detection task training set;
step S24), obtaining second label training samples through soft pixel-level labels;
the step S3) specifically includes the following steps:
step S31), obtaining an enhanced training set of the second segmentation task;
step S32), obtaining a mask-assisted R-CNN model;
step S33), obtaining an overall training result;
step S34), obtaining a detection task result and a segmentation task result respectively from the overall training result;
step S21) receives the output of step S14); step S31) receives the output of step S24); the second mask R-CNN model of step S22) is adjusted using the first parameters of the first mask R-CNN model of step S12); and the mask-assisted R-CNN model of step S32) is adjusted using the second parameters of the second mask R-CNN model of step S22).
9. The control method of claim 8, wherein the flight module comprises four flight rotors, four horizontal flight rotor support rods, a vertical flight rotor support rod, and an instruction receiving unit; the four horizontal flight rotor support rods can extend and retract horizontally, and the upper side of each is provided with a groove along which the corresponding flight rotor can slide; the vertical flight rotor support rod is arranged at the intersection point of the four horizontal flight rotor support rods and is provided with four buckles in the vertical direction, which can clamp the four flight rotors while still allowing them to rotate; the lower side of each flight rotor is provided with an I-shaped slider matching the groove, and the protrusion at the bottom of the I-shaped slider is embedded in the groove, so that the flight rotor can slide along the groove onto the vertical flight rotor support rod; the inner side of the I-shaped slider has a through hollow, so that the slider presents an I-shaped structure on the outside while the vertical flight rotor support rod can pass through the inside; when a flight rotor slides onto the vertical flight rotor support rod, the rod extends partly and clamps the flight rotor on the inner side of the I-shaped structure by means of a buckle; after the vertical flight rotor support rod extends partly, it drives the horizontal flight rotor support rods to rise synchronously, so that the next flight rotor can conveniently slide, via its horizontal flight rotor support rod, into position above the previous flight rotor.
10. The control method according to claim 9, wherein the flight module adjusts the flight mode in real time according to the requirements of the image processing module for shooting the object; after the processor sends a shooting instruction, the flight module adjusts to a first flight height and a first angle and, once the adjustment is completed, sends a first determination instruction to the image processing module; on receiving the first determination instruction, the image processing module shoots first original image information, obtains a first target object after image recognition processing, and sends it to the processor; after obtaining the first target object, the processor analyzes it to derive a second flight height and a second angle for shooting a second target object and sends them to the flight module; the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the second flight height and the second angle, shoots a second original image, obtains a second target object after image recognition processing, and sends it to the processor; after obtaining the second target object, the processor analyzes it to derive a third flight height and a third angle for shooting a third target object and sends them to the flight module; and the flight module adjusts the rotating speed of the flight rotors and the positions of the flight rotor support rods according to the third flight height and the third angle, shoots a third original image, obtains the third target object after image recognition processing, and sends it to the processor.
CN202210274826.1A 2022-03-18 2022-03-18 Unmanned aerial vehicle system based on neural network image recognition and control method Active CN114756037B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210274826.1A CN114756037B (en) 2022-03-18 2022-03-18 Unmanned aerial vehicle system based on neural network image recognition and control method

Publications (2)

Publication Number Publication Date
CN114756037A true CN114756037A (en) 2022-07-15
CN114756037B CN114756037B (en) 2023-04-07

Family

ID=82326958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210274826.1A Active CN114756037B (en) 2022-03-18 2022-03-18 Unmanned aerial vehicle system based on neural network image recognition and control method

Country Status (1)

Country Link
CN (1) CN114756037B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107444665A (en) * 2017-07-24 2017-12-08 长春草莓科技有限公司 A kind of unmanned plane Autonomous landing method
CN107909600A (en) * 2017-11-04 2018-04-13 南京奇蛙智能科技有限公司 The unmanned plane real time kinematics target classification and detection method of a kind of view-based access control model
CN108055003A (en) * 2017-10-24 2018-05-18 北京利泽菲尔文化科技有限公司 A kind of autonomous inspection device of unmanned plane based on double light intelligent loads
CN108182808A (en) * 2017-12-28 2018-06-19 广东傲智创新科技有限公司 A kind of movable type car plate data collecting system
CN110163177A (en) * 2019-05-28 2019-08-23 李峥嵘 A kind of wind power generation unit blade unmanned plane automatic sensing recognition methods
CN110187720A (en) * 2019-06-03 2019-08-30 深圳铂石空间科技有限公司 Unmanned plane guidance method, device, system, medium and electronic equipment
CN111204452A (en) * 2020-02-10 2020-05-29 北京建筑大学 Target detection system based on miniature aircraft
US20200202548A1 (en) * 2018-12-19 2020-06-25 Zijian Wang Techniques for precisely locating landmarks in monocular camera images with deep learning
CN111652067A (en) * 2020-04-30 2020-09-11 南京理工大学 Unmanned aerial vehicle identification method based on image detection
CN112329559A (en) * 2020-10-22 2021-02-05 空间信息产业发展股份有限公司 Method for detecting homestead target based on deep convolutional neural network
CN112819094A (en) * 2021-02-25 2021-05-18 北京时代民芯科技有限公司 Target detection and identification method based on structural similarity measurement
US20210158157A1 (en) * 2019-11-07 2021-05-27 Thales Artificial neural network learning method and device for aircraft landing assistance
KR20210072504A (en) * 2019-12-09 2021-06-17 삼성전자주식회사 Neural network system and operating method of the same
CN113870278A (en) * 2021-08-24 2021-12-31 广州华农大智慧农业科技有限公司 Improved Mask R-CNN model-based satellite remote sensing image farmland block segmentation method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
JUN CHEN 等: "Building Area Estimation in Drone Aerial Images Based on Mask R-CNN", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》 *
PENGYI ZHANG 等: "SlimYOLOv3: Narrower, Faster and Better for Real-Time UAV Applications", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOP》 *
SRIKANTH VEMULA 等: "Mask R-CNN Powerline Detector: A Deep Learning approach with applications to a UAV", 《2020 AIAA/IEEE 39TH DIGITAL AVIONICS SYSTEMS CONFERENCE》 *
YUN SU 等: "GR-SLAM: Vision-Based Sensor Fusion SLAM for Ground Robots on Complex Terrain", 《2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS》 *
LIU Xuanlin: "Research on intelligent recognition of plant species based on UAV aerial images", China Master's Theses Full-text Database, Agricultural Science and Technology Series *
WANG Jingyu et al.: "Research on low-altitude small and weak UAV target detection based on deep neural networks", Journal of Northwestern Polytechnical University *

Also Published As

Publication number Publication date
CN114756037B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN107729808B (en) Intelligent image acquisition system and method for unmanned aerial vehicle inspection of power transmission line
CN114281093B (en) Defect detection system and method based on unmanned aerial vehicle power inspection
CN105302151B (en) A kind of system and method for aircraft docking guiding and plane type recognition
CN107830846B (en) Method for measuring angle of communication tower antenna by using unmanned aerial vehicle and convolutional neural network
CN109923583A (en) A kind of recognition methods of posture, equipment and moveable platform
CN109857144B (en) Unmanned aerial vehicle, unmanned aerial vehicle control system and control method
US11288824B2 (en) Processing images to obtain environmental information
CN106973221B (en) Unmanned aerial vehicle camera shooting method and system based on aesthetic evaluation
CN108037543B (en) A kind of multispectral infrared imaging detecting and tracking method monitoring low-altitude unmanned vehicle
US20210009270A1 (en) Methods and system for composing and capturing images
CN109858437B (en) Automatic luggage volume classification method based on generation query network
WO2019126930A1 (en) Method and apparatus for measuring distance, and unmanned aerial vehicle
CN112711267B (en) Unmanned aerial vehicle autonomous inspection method based on RTK high-precision positioning and machine vision fusion
CN108053382A (en) A kind of visual characteristic defogging is surely as detection system
CN110044212A (en) The rotor wing unmanned aerial vehicle of view-based access control model metrical information arrests recovery method
CN112947526B (en) Unmanned aerial vehicle autonomous landing method and system
Fu et al. Vision-based obstacle avoidance for flapping-wing aerial vehicles
CN114581831A (en) Unmanned aerial vehicle obstacle detection and obstacle avoidance method and system based on image and point cloud
CN110866472A (en) Unmanned aerial vehicle ground moving target identification and image enhancement system and method
US11869236B1 (en) Generating data for training vision-based algorithms to detect airborne objects
CN114756037B (en) Unmanned aerial vehicle system based on neural network image recognition and control method
Zhou et al. Real-time object detection and pose estimation using stereo vision. An application for a Quadrotor MAV
WO2021014752A1 (en) Information processing device, information processing method, and information processing program
CN114982739A (en) Intelligent laser bird repelling device and method based on deep learning
CN114187663A (en) Method for controlling unmanned aerial vehicle by posture based on radar detection gray level graph and neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant