CN114564042A - Unmanned aerial vehicle landing method based on multi-sensor fusion - Google Patents

Unmanned aerial vehicle landing method based on multi-sensor fusion

Info

Publication number
CN114564042A
Authority
CN
China
Prior art keywords
area
information
landed
ground
aerial vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210196642.8A
Other languages
Chinese (zh)
Inventor
牛欢
王浩
王晨
张炯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Commercial Aircraft Corp of China Ltd
Beijing Aeronautic Science and Technology Research Institute of COMAC
Original Assignee
Commercial Aircraft Corp of China Ltd
Beijing Aeronautic Science and Technology Research Institute of COMAC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Commercial Aircraft Corp of China Ltd, Beijing Aeronautic Science and Technology Research Institute of COMAC filed Critical Commercial Aircraft Corp of China Ltd
Priority to CN202210196642.8A priority Critical patent/CN114564042A/en
Publication of CN114564042A publication Critical patent/CN114564042A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to an unmanned aerial vehicle landing method based on multi-sensor fusion, which comprises the steps of: determining information on the ground to be landed on according to image information acquired by an airborne camera; determining a landable area according to ground three-dimensional point cloud information synchronously constructed by an airborne laser radar; calculating landing position information of the unmanned aerial vehicle according to the positional relationship of the multiple sensors and their measurement data; and providing the landing position information to a guidance system so that the unmanned aerial vehicle lands autonomously in the landable area by adjusting its flight attitude. By fusing visual image information and laser radar point cloud information in an unknown environment, the method extracts the landable area through image semantic segmentation and supplies ground unevenness information within the landable area from the three-dimensional point cloud information, so the selectable landable area is judged more accurately, the analysis is fast, and safety and robustness are better, which effectively reduces the safety risk of landing in an unknown environment and improves the control accuracy of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle landing method based on multi-sensor fusion
Technical Field
The invention relates to the technical field of unmanned aerial vehicle autonomous landing, in particular to an unmanned aerial vehicle landing method based on multi-sensor fusion.
Background
The existing image segmentation technology comprises the following process: an image is divided into a number of specific regions with distinct properties, and the target region of interest is extracted from the segmented regions.
Conventional image segmentation algorithms include: threshold-based segmentation methods, region-based segmentation methods, edge-based segmentation methods, and segmentation methods based on specific theories.
With the continuous development of artificial intelligence and related machine vision fields, image segmentation technology based on convolutional neural networks and various classification methods has emerged. It has advanced greatly in fields such as automatic driving, three-dimensional image reconstruction and medical image segmentation, and can accurately segment and extract categories from a single picture or a real-time video stream.
The existing multi-sensor fusion technology processes information collected by different types of sensors over the same time period. After the different types of sensors are matched on the basis of specific characteristic information, the shortcomings of any single sensor in the sensing process can be overcome, which increases the reliability of environment perception and enriches what the controlled platform perceives about its environment.
Generally, multi-sensor fusion preprocessing comprises the following steps:
unifying the time bases and the spatial coordinate systems of the multiple sensors;
matching the characteristic information;
enhancing and fusing the information, or realizing multi-modal information fusion through a neural network.
Multi-sensor fusion and coordination technology is widely applied in the field of automatic driving. Common combinations include the fusion of multi-camera vision with laser radar, the fusion of multi-camera vision with millimeter-wave radar, and the fusion of vehicle positioning information with external sensor information; these combinations enhance the perception capability of the controlled platform and improve its safety and robustness while it executes tasks.
In the field of unmanned aerial vehicle research, development and design, autonomous landing (including autonomous landing in a non-emergency state and autonomous landing in an emergency state) is one of the indispensable basic functions, and considerable research has been carried out on it at home and abroad, specifically including:
research on autonomous landing mechanisms, mainly aimed at the buffer devices required for autonomous landing;
research on unmanned aerial vehicle autonomous landing methods based on radio and laser guidance, mainly aimed at recognizing a preset target area (recognized with ground markers and target recognition methods) and guiding the autonomous landing of the unmanned aerial vehicle;
research on autonomous landing methods that identify a target area with an image segmentation method and then guide the autonomous landing of the unmanned aerial vehicle;
and so on.
In order to identify and retrieve the target area, the above research requires a ground marker to be preset or a target point to be set in advance. Its defects are that:
a target area cannot be identified in an unknown environment, so guidance of the autonomous landing of the unmanned aerial vehicle cannot be completed there;
in the process of identifying the target area, image segmentation is performed on image information alone, three-dimensional information about the ground is missing or ignored, and the safety and robustness of the search for the target area are therefore insufficient.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
Disclosure of Invention
In view of the defects in the prior art, the object of the invention is to provide an unmanned aerial vehicle landing method based on multi-sensor fusion. Visual image information and laser radar point cloud information are fused in an unknown environment, so that a landable area can be extracted by image semantic segmentation while ground unevenness information within the landable area is supplied from the three-dimensional point cloud information. The selectable landable area can therefore be judged more accurately and analyzed quickly, with better safety and robustness, which effectively reduces the safety risk of landing in an unknown environment and improves the control accuracy of the unmanned aerial vehicle.
In order to achieve this purpose, the invention adopts the following technical scheme:
an unmanned aerial vehicle landing method based on multi-sensor fusion is characterized in that airborne sensor data of an unmanned aerial vehicle are used for calculating a landable area in real time and completing autonomous landing according to the following steps:
image segmentation processing is carried out according to image information obtained by an airborne camera, pre-classification identification is carried out on ground scenes, and information of the ground to be landed is determined;
calculating ground unevenness according to the ground three-dimensional point cloud information synchronously constructed by the airborne laser radar, completing the ground unevenness information of each piece of ground information to be landed, and determining a landable area suitable for landing;
calculating landing position information of the unmanned aerial vehicle through the multi-sensor position relation and the measurement data, wherein the landing position information comprises relative position information and attitude information of a landing area;
and providing landing position information for a guidance system, and enabling the unmanned aerial vehicle to land autonomously in a landable area by adjusting the flight attitude of the unmanned aerial vehicle.
On the basis of the technical scheme, when the unknown environment alarm is generated, the airborne sensor of the unmanned aerial vehicle is automatically triggered, the landable area is calculated in real time, and autonomous landing is completed.
On the basis of the technical scheme, after image information is acquired according to the airborne camera and image segmentation processing is carried out, pre-screening is carried out according to the distance and the physical size of the unmanned aerial vehicle, and the landing area identification speed is improved.
On the basis of the technical scheme, image segmentation processing is carried out according to image information obtained by an onboard camera, pre-classification recognition is carried out on ground scenes, the ground information to be landed is determined, and the landable area recognition method based on image semantic segmentation is adopted as follows:
collecting a ground image and labeling a sample; the method specifically comprises the following steps:
collecting ground images with different heights, carrying out category marking on targets with different categories in the ground images, and carrying out sampling marking on objects with different danger levels observable in the flight process;
performing semantic segmentation on the image; the method specifically comprises the following steps:
segmenting the sampled and labeled ground image by using a semantic segmentation algorithm, training and estimating acquired ground image data by using the image semantic segmentation algorithm to generate a trained image semantic segmentation model, deploying a semantic segmentation model inference program to an onboard computer platform, inputting an onboard vision sensor image of an unmanned aerial vehicle, performing inference by using a pre-generated model, and outputting a labeled segmentation image;
merging landing areas with the same risk level; the method specifically comprises the following steps:
combining the labeled segmentation images according to different risk levels to generate a better landable area, a general landable area and an area which cannot be landed, and setting the better landable area and the general landable area as an area to be landed;
extracting the outline of the image area; the method specifically comprises the following steps:
accessing the current height-above-ground information of the unmanned aerial vehicle, calculating the actual distance represented by each pixel of the area to be landed, setting a rectangular area according to the physical size of the unmanned aerial vehicle, sequentially extracting the image area outlines within the area to be landed, comparing each image area outline with the rectangular area, and screening and ranking, by area, length and width, the areas to be landed that can accommodate the rectangular area;
outputting the information of the ground to be landed; the method specifically comprises the following steps:
and performing preliminary screening according to the current height of the unmanned aerial vehicle, the distance between the center point of the area to be landed and the current position of the unmanned aerial vehicle, and giving image-trusted ground information to be landed, wherein the ground information to be landed comprises the range and the position of the area to be landed.
On the basis of the above technical scheme, when the area, length and width of the areas to be landed that can contain the rectangular area are screened, an image-area and size deviation coefficient is set according to the error in the visually calculated distance, and the rectangular area is enlarged by the deviation coefficient before the screening.
On the basis of the above technical scheme, ground unevenness is calculated according to the ground three-dimensional point cloud information synchronously constructed by the airborne laser radar, the ground unevenness information of each piece of ground information to be landed is completed, and the landable area suitable for landing is determined, using the following landable area identification method based on fused laser point cloud information:
accessing ground information to be landed, and acquiring a result set of an area to be landed, wherein the result set comprises the ground information;
after the laser radar and the camera are calibrated jointly, accessing ground three-dimensional point cloud information synchronously constructed by the laser radar, and projecting the point cloud to a to-be-landed area to enable each point in the point cloud to correspond to a pixel point in an image of the to-be-landed area;
effectively cutting the point cloud information according to the image area outline of the area to be landed, and only reserving the point cloud information in the image area outline;
when the area to be landed meets the requirements of the physical size and the landing range of the airplane, point cloud unevenness calculation is further carried out, and ground unevenness information completion is carried out on the ground information to be landed corresponding to the area to be landed;
screening the regions to be landed according to the unevenness threshold value, and sequencing the screened regions to be landed through target point distance calculation;
and sequentially determining the landable areas suitable for landing based on the sorting and outputting.
On the basis of the technical scheme, when the area to be landed is far larger than the physical size and landing range requirements of the airplane, the area to be landed is divided into a plurality of divided areas in a multi-window mode according to the landing range requirements, and then the ground unevenness information corresponding to the divided areas is completed in a sliding window mode.
On the basis of the technical scheme, when landing position information is provided for a guide system, if the landing position information provides space point coordinates relative to the aircraft body coordinates, the aircraft attitude is determined through an inertial measurement unit IMU, and then the space point coordinates relative to the aircraft body coordinates are converted into actual GPS space point coordinates;
and the guidance system calculates the expected flight attitude and the flight speed of the unmanned aerial vehicle at the next moment according to the actual GPS space point coordinates.
The unmanned aerial vehicle landing method based on multi-sensor fusion has the following beneficial effects:
in an unknown environment, visual image information and laser radar point cloud information are fused, a landable area can be extracted in an image semantic segmentation mode, ground unevenness information in the landable area can be given according to three-dimensional point cloud information, the selectable landable area can be judged more accurately, the analysis speed is high, better safety and robustness are achieved, safety risks of landing in the unknown environment are effectively reduced, and the control accuracy of the unmanned aerial vehicle is improved.
Drawings
The invention has the following drawings:
the drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a flowchart of a landable area identification method based on image semantic segmentation according to the present invention.
FIG. 2 is a flowchart of a method for identifying a landing area based on fused laser point cloud information according to the present invention.
FIG. 3 is a flowchart illustrating the landable area matching of a drone based on multi-sensor fusion according to the present invention.
FIG. 4 is a schematic diagram illustrating that the area to be landed is much larger than the physical size and landing range requirements of the aircraft.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings. The detailed description, while indicating exemplary embodiments of the invention, is given by way of illustration only, in which various details of embodiments of the invention are included to assist understanding. Accordingly, it will be appreciated by those skilled in the art that various changes and modifications may be made to the embodiments described herein without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The invention provides an unmanned aerial vehicle landing method based on multi-sensor fusion, which is suitable for autonomous landing of an unmanned aerial vehicle (including autonomous landing in a non-emergency state and autonomous landing in an emergency state), and can calculate a landable area in real time and complete autonomous landing by using airborne sensor data of the unmanned aerial vehicle according to the following steps:
image segmentation processing is carried out according to image information obtained by an airborne camera, pre-classification identification is carried out on ground scenes, and information of the ground to be landed is determined; the airborne camera is used for carrying out primary identification and retrieval on the ground scene;
calculating the ground unevenness according to ground three-dimensional point cloud information synchronously constructed by an airborne laser radar, completing the ground unevenness information of each piece of ground information to be landed, and determining a landable area suitable for landing;
calculating landing position information of the unmanned aerial vehicle through the multi-sensor position relation and the measurement data, wherein the landing position information comprises relative position information and attitude information of a landing area, and the attitude information is used for adjusting the flight attitude of the unmanned aerial vehicle;
and providing landing position information for a guidance system, and enabling the unmanned aerial vehicle to land autonomously in a landable area by adjusting the flight attitude of the unmanned aerial vehicle.
The fusion of vision and laser radar utilized by the invention is of great significance for the autonomous landing of the unmanned aerial vehicle: fast response processing increases the operational safety of the unmanned aerial vehicle, and the system obtains more accurate relative position information and attitude information of the landable area in a timely manner.
On the basis of the technical scheme, when the unknown environment alarm is generated, the airborne sensor of the unmanned aerial vehicle is automatically triggered, the landable area is calculated in real time, and autonomous landing is completed.
The method is particularly suitable for autonomous landing control of the unmanned aerial vehicle in an unknown environment.
On the basis of the technical scheme, after image information is acquired according to the airborne camera and image segmentation processing is carried out, pre-screening is carried out according to the distance and the physical size of the unmanned aerial vehicle, and the landing area identification speed is improved.
Because the physical size of the unmanned aerial vehicle is known, pre-screening in advance according to that physical size reduces calculation time, prevents the time-consuming subsequent point cloud calculation from slowing down processing, and reduces the calculation cost.
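As a rough, non-limiting sketch of this pre-screening idea (not the claimed implementation), candidate regions can be discarded before any point cloud processing if their metric footprint, estimated from the current height above ground, cannot contain the aircraft. The pinhole scale approximation, the region dictionary format and the parameter values below are assumptions introduced only for illustration.

```python
def pixel_scale_m(height_m: float, focal_px: float) -> float:
    """Approximate ground size of one pixel for a nadir-pointing pinhole camera (assumption)."""
    return height_m / focal_px

def prescreen(regions, height_m, focal_px, uav_length_m, uav_width_m, max_dist_px):
    """Keep only regions large enough for the airframe and close enough to the image center.

    `regions` is assumed to be a list of dicts with pixel 'width', 'height' and
    'dist_to_center' entries, a hypothetical intermediate format.
    """
    scale = pixel_scale_m(height_m, focal_px)
    keep = []
    for r in regions:
        if (r["width"] * scale >= uav_width_m and
                r["height"] * scale >= uav_length_m and
                r["dist_to_center"] <= max_dist_px):
            keep.append(r)
    # closer regions first, so the more expensive point cloud checks start with them
    return sorted(keep, key=lambda r: r["dist_to_center"])
```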
On the basis of the above technical solution, as shown in fig. 1, the image information obtained by the onboard camera is subjected to image segmentation processing, the ground scene is pre-classified and identified, and the ground information to be landed is determined, using the following landable area identification method based on image semantic segmentation:
collecting a ground image and labeling a sample; the method specifically comprises the following steps:
collecting ground images with different heights, carrying out category marking on targets with different categories in the ground images, and carrying out sampling marking on objects with different danger levels observable in the flight process to obtain sampled and marked ground images;
for example, sampling and labeling high-risk areas such as pedestrians, vehicles, trees, buildings and railings; sampling and labeling low-risk areas such as cement- or asphalt-paved roads, lawns and wasteland; sampling and labeling medium-risk areas such as crop fields; sampling and labeling special landing areas such as water surfaces and ponds; and so on; accurate image segmentation and labeling give the recognition model more accurate inference;
performing semantic segmentation on the image; the method specifically comprises the following steps:
segmenting the sampled and labeled ground image by using a semantic segmentation algorithm, training and estimating acquired ground image data by using the image semantic segmentation algorithm to generate a trained image semantic segmentation model, deploying a semantic segmentation model inference program to an onboard computer platform, inputting an onboard vision sensor image of an unmanned aerial vehicle, performing inference by using a pre-generated model, and outputting a labeled segmentation image;
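As a minimal sketch of such an onboard inference step (not the claimed implementation), a segmentation model trained offline can be loaded on the onboard computer and applied to each camera frame. The TorchScript file name, the class list and the preprocessing below are illustrative assumptions.

```python
import cv2
import numpy as np
import torch

# Hypothetical class order used only for this sketch.
CLASS_NAMES = ["pavement", "lawn", "wasteland", "crop_field", "water",
               "pedestrian", "vehicle", "tree", "building", "railing"]

# A TorchScript model trained offline on the labeled ground images (assumed file name).
model = torch.jit.load("ground_segmentation.ts").eval()

def segment_frame(bgr_frame: np.ndarray) -> np.ndarray:
    """Return a per-pixel class-index map for one onboard camera frame."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)  # shape 1x3xHxW
    with torch.no_grad():
        logits = model(tensor)          # assumed output shape 1xCxHxW
    return logits.argmax(dim=1).squeeze(0).cpu().numpy().astype(np.uint8)
```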
merging landing areas with the same risk level; the method specifically comprises the following steps:
combining the labeled segmentation images according to different risk levels to generate a better landable area, a general landable area and an area which cannot be landed, and setting the better landable area and the general landable area as an area to be landed;
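Merging the labeled classes into risk levels is essentially a per-pixel lookup over the label map. The sketch below assumes the hypothetical class order from the previous snippet and an arbitrary three-level coding (0 = better landable, 1 = generally landable, 2 = cannot land); the actual grouping is whatever the labeling scheme defines.

```python
import numpy as np

# Hypothetical mapping from class index (order of CLASS_NAMES above) to risk level.
RISK_OF_CLASS = np.array([0, 0, 1, 1, 2, 2, 2, 2, 2, 2], dtype=np.uint8)

def merge_by_risk(label_map: np.ndarray) -> np.ndarray:
    """Collapse the per-class label map into a per-pixel risk-level map."""
    return RISK_OF_CLASS[label_map]

def area_to_be_landed_mask(risk_map: np.ndarray) -> np.ndarray:
    """'Better' and 'general' landable pixels together form the area to be landed."""
    return (risk_map <= 1).astype(np.uint8)
```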
extracting the outline of the image area; the method specifically comprises the following steps:
accessing the current height-above-ground information of the unmanned aerial vehicle, calculating the actual distance represented by each pixel of the area to be landed, setting a rectangular area according to the physical size of the unmanned aerial vehicle, sequentially extracting the image area outlines within the area to be landed, comparing each image area outline with the rectangular area, and screening and ranking, by area, length and width, the areas to be landed that can accommodate the rectangular area; any area to be landed that cannot accommodate the rectangular area is set as an area on which landing is not possible;
as an alternative embodiment, when the area, length and width of the areas to be landed that can accommodate the rectangular area are screened, an image-area and size deviation coefficient is set according to the error in the visually calculated distance, and the rectangular area is enlarged by the deviation coefficient before the screening;
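One possible OpenCV realization of the outline extraction and rectangle-fit screening is sketched below, assuming a nadir pinhole camera for the meters-per-pixel scale and using the minimum-area bounding rectangle of each outline as a coarse containment test; the deviation coefficient and other parameter values are placeholders, not values given in the disclosure.

```python
import cv2

def find_candidate_regions(mask, height_m, focal_px,
                           uav_len_m, uav_wid_m, deviation=1.2):
    """Return outlines of the to-be-landed mask able to hold the (inflated) aircraft rectangle.

    mask is a binary uint8 image where non-zero pixels mark the area to be landed.
    The meters-per-pixel scale assumes a nadir-pointing pinhole camera.
    """
    scale = height_m / focal_px                                 # meters per pixel (assumption)
    need_long = max(uav_len_m, uav_wid_m) * deviation / scale   # required sides, in pixels
    need_short = min(uav_len_m, uav_wid_m) * deviation / scale

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        (cx, cy), (w, h), _angle = cv2.minAreaRect(c)
        long_side, short_side = max(w, h), min(w, h)
        # Coarse necessary-condition test: the region's bounding rectangle must at least
        # be large enough; a finer in-region fit check could follow.
        if long_side >= need_long and short_side >= need_short:
            candidates.append({"contour": c,
                               "center": (cx, cy),
                               "area_px": float(cv2.contourArea(c)),
                               "size_px": (w, h)})
    # larger regions first; the caller may re-rank by distance to the aircraft
    return sorted(candidates, key=lambda r: r["area_px"], reverse=True)
```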
outputting the information of the ground to be landed; the method specifically comprises the following steps:
and performing preliminary screening according to the current height of the unmanned aerial vehicle, the distance between the center point of the area to be landed and the current position of the unmanned aerial vehicle, and giving image-trusted ground information to be landed, wherein the ground information to be landed comprises the range and the position of the area to be landed.
On the basis of the above technical solution, as shown in fig. 2, ground unevenness calculation is performed according to the ground three-dimensional point cloud information synchronously constructed by the airborne laser radar, the ground unevenness information of each piece of ground information to be landed is completed, and the landable area suitable for landing is determined, using the following landable area identification method based on fused laser point cloud information, comprising the steps of:
accessing ground information to be landed, and acquiring a result set of an area to be landed, wherein the result set comprises the ground information;
after the laser radar and the camera are calibrated jointly, accessing ground three-dimensional point cloud information synchronously constructed by the laser radar, and projecting the point cloud to a to-be-landed area to enable each point in the point cloud to correspond to a pixel point in an image of the to-be-landed area;
effectively cutting the point cloud information according to the image area outline of the area to be landed, and keeping only the point cloud information inside the image area outline (a sketch of this projection and cropping is given after this procedure);
when the area to be landed meets the requirements of the physical size and the landing range of the airplane, point cloud unevenness calculation is further carried out, and ground unevenness information completion is carried out on the ground information to be landed corresponding to the area to be landed;
as an alternative embodiment, as shown in fig. 4, when the area to be landed is much larger than the physical size and landing range requirement of the aircraft, firstly performing multi-window segmentation on the area to be landed according to the landing range requirement to generate a plurality of segmented areas, and then performing ground unevenness information completion on the ground information to be landed corresponding to the plurality of segmented areas in a sliding window manner;
screening the regions to be landed according to the unevenness threshold value, and sequencing the screened regions to be landed through target point distance calculation;
and sequentially determining the landable areas suitable for landing based on the sorting and outputting.
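The per-point to per-pixel correspondence and the cropping by the image area outline can be realized, for example, by projecting each laser radar point through the jointly calibrated extrinsic matrix and the camera intrinsics and keeping only the points whose projections fall inside the region outline. The matrix conventions and threshold below are assumptions; the sketch is illustrative only.

```python
import cv2
import numpy as np

def crop_cloud_to_contour(points_lidar, T_cam_lidar, K, contour, image_shape):
    """Project laser radar points into the image and keep those inside the region outline.

    points_lidar : (N, 3) xyz coordinates in the laser radar frame
    T_cam_lidar  : (4, 4) extrinsic matrix from joint calibration (lidar -> camera)
    K            : (3, 3) camera intrinsic matrix
    contour      : (M, 1, 2) OpenCV contour of the area to be landed
    image_shape  : (height, width) of the camera image
    """
    h, w = image_shape[:2]
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]           # discard points behind the camera

    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division to pixel coordinates

    kept = []
    for (u, v), p in zip(uv, pts_cam):
        inside_image = 0 <= u < w and 0 <= v < h
        if inside_image and cv2.pointPolygonTest(contour, (float(u), float(v)), False) >= 0:
            kept.append(p)
    return np.asarray(kept)                          # points lying inside the outline
```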
On the basis of the technical scheme, when landing position information is provided for a guide system, if the landing position information provides space point coordinates relative to the aircraft body coordinates, the aircraft attitude is determined through an inertial measurement unit IMU, and then the space point coordinates relative to the aircraft body coordinates are converted into actual GPS space point coordinates;
and the guidance system calculates the expected flight attitude and the flight speed of the unmanned aerial vehicle at the next moment according to the actual GPS space point coordinates.
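A possible realization of this conversion, assuming the IMU provides a body-to-local-ENU rotation and the GPS receiver provides the current geodetic fix, is sketched below; the small-offset latitude/longitude update is a standard local flat-Earth approximation rather than a formula taken from the disclosure.

```python
import math
import numpy as np

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius, used for a local approximation

def body_point_to_gps(p_body, R_enu_body, lat_deg, lon_deg, alt_m):
    """Convert a landing point given in body coordinates to approximate GPS coordinates.

    p_body     : (3,) landing point relative to the aircraft body frame, in meters
    R_enu_body : (3, 3) rotation from the body frame to local ENU, e.g. from the IMU attitude
    lat_deg, lon_deg, alt_m : current GPS fix of the aircraft
    """
    east, north, up = R_enu_body @ np.asarray(p_body, dtype=float)
    lat_rad = math.radians(lat_deg)
    new_lat = lat_deg + math.degrees(north / EARTH_RADIUS_M)
    new_lon = lon_deg + math.degrees(east / (EARTH_RADIUS_M * math.cos(lat_rad)))
    return new_lat, new_lon, alt_m + up
```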
One embodiment is as follows.
As shown in fig. 3, a preferred landable area matching method for the unmanned aerial vehicle based on multi-sensor fusion is as follows:
calibrating a plurality of sensors in a combined manner; the method specifically comprises the following steps:
the camera and the laser radar are calibrated jointly, and the camera and the inertial measurement unit IMU are calibrated jointly; this can be implemented with existing techniques, for example calibration with an automatic calibration toolbox or with the Kalibr tool;
joint calibration of the multiple sensors completes the coordinate conversion among the different modalities of information and ensures that the unified coordinate system and the body coordinate system remain synchronized at any given moment, which facilitates rotation, translation and similar operations from the camera coordinate system to the image coordinate system, from the image coordinate system to the body coordinate system, and from the image coordinate system to the laser radar point cloud coordinate system, thereby achieving spatial unification of the multi-modal information;
the multi-sensor time is uniform; the method specifically comprises the following steps:
selecting any one of the following time unification methods to constrain the information of the sensors in different modes within proper frequency: a method for unifying information matching time of the nearest sensor and a method for unifying tracking interpolation matching time;
generally, the scanning frequency of the laser radar is lower than the frame rate of camera image acquisition and processing; different time unification methods can be selected according to different real-time requirements, and the three-dimensional point cloud information should correspond in time as closely as possible to the data acquired by the camera;
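The "nearest sensor information matching" variant of time unification can be illustrated as follows: for every laser radar timestamp, the camera frame with the closest timestamp is chosen, subject to a tolerance. The tolerance value and data layout are assumptions made for this sketch only.

```python
import bisect

def match_nearest(lidar_stamps, camera_stamps, max_dt=0.05):
    """For each laser radar timestamp, find the index of the closest camera timestamp.

    Both lists are assumed sorted, in seconds; pairs further apart than max_dt
    (an assumed tolerance) are reported as unmatched (None).
    """
    pairs = []
    for t in lidar_stamps:
        i = bisect.bisect_left(camera_stamps, t)
        best = None
        for j in (i - 1, i):                 # the closest stamp is one of the two neighbours
            if 0 <= j < len(camera_stamps):
                if best is None or abs(camera_stamps[j] - t) < abs(camera_stamps[best] - t):
                    best = j
        if best is not None and abs(camera_stamps[best] - t) <= max_dt:
            pairs.append((t, best))
        else:
            pairs.append((t, None))
    return pairs
```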
matching three-dimensional point cloud information; the method specifically comprises the following steps:
projecting the point cloud into the image containing the area to be landed, effectively cutting the point cloud information, keeping only the point cloud information inside the image area outline, and ensuring that the image area and the laser radar area coincide consistently;
calculating the unevenness of the ground; the method specifically comprises the following steps:
according to the two indices required by the physical size and the landing range of the unmanned aerial vehicle, when an area to be landed satisfies the indices, point cloud unevenness calculation is further performed and the ground unevenness information of the ground information to be landed corresponding to that area is completed; extreme value, mean value, variance and similar calculations are performed on the ground unevenness results, and, taking smooth point cloud information whose height varies little as the screening condition, the areas to be landed are filtered and the landable areas suitable for landing are determined and output;
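The unevenness indices can be expressed, for example, as the spread, mean and variance of the point heights inside the cropped region, evaluated window by window when the region is much larger than the landing footprint (compare fig. 4). The window size, step and minimum point count below are placeholders, not values specified in the disclosure.

```python
import numpy as np

def unevenness_stats(points):
    """Extreme spread, mean and variance of the point heights (z) inside one window or region."""
    z = np.asarray(points)[:, 2]
    return {"range": float(z.max() - z.min()),
            "mean": float(z.mean()),
            "var": float(z.var())}

def sliding_window_unevenness(points, window_m=5.0, step_m=2.5, min_points=20):
    """Evaluate unevenness in overlapping square windows over the x-y footprint of a large region."""
    pts = np.asarray(points)
    x_min, y_min = pts[:, 0].min(), pts[:, 1].min()
    x_max, y_max = pts[:, 0].max(), pts[:, 1].max()
    results = []
    x = x_min
    while x + window_m <= x_max:
        y = y_min
        while y + window_m <= y_max:
            inside = pts[(pts[:, 0] >= x) & (pts[:, 0] < x + window_m) &
                         (pts[:, 1] >= y) & (pts[:, 1] < y + window_m)]
            if len(inside) >= min_points:
                results.append(((x, y), unevenness_stats(inside)))
            y += step_m
        x += step_m
    return results
```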
as an alternative embodiment, at least two landable areas suitable for landing are obtained according to a preset threshold value, and the optimal landing place screening process is further performed as follows:
when at least two landable areas exist, the current position of the unmanned aerial vehicle is taken as the reference and the closest landable area is selected as the preferred landable area by default, which saves the energy of the unmanned aerial vehicle and at the same time allows an emergency to be responded to and resolved quickly;
taking the closest landable area as a reference, and considering the remaining battery charge of the unmanned aerial vehicle (for example, for an electrically powered unmanned aerial vehicle) or the remaining range (for example, for a fuel-powered unmanned aerial vehicle), the next-closest landable area is selected as a standby landable area;
when the preferred landable area changes, the screening process described above is repeated to update the landable area on which to land.
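Expressed procedurally, the default choice is the closest landable area, and the standby area is picked from the remaining candidates nearest to the preferred one, subject to the remaining range converted from battery charge or fuel. The data layout and the reachability check below are simplifying assumptions, not the claimed procedure.

```python
def choose_landing_areas(areas, uav_xy, remaining_range_m):
    """Pick a preferred and a standby landable area.

    areas            : list of dicts with 'center' = (x, y) in the same frame as uav_xy
    remaining_range_m: distance the aircraft can still cover, derived from the
                       remaining battery charge or fuel (assumed to be known)
    """
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    ranked = sorted(areas, key=lambda a: dist(a["center"], uav_xy))
    if not ranked:
        return None, None
    preferred = ranked[0]                    # closest area: saves energy, fastest response
    # standby: nearest to the preferred area among the rest, and still reachable
    reachable = [a for a in ranked[1:] if dist(a["center"], uav_xy) <= remaining_range_m]
    standby = min(reachable, key=lambda a: dist(a["center"], preferred["center"]), default=None)
    return preferred, standby
```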
During installation and deployment, the airborne sensors of the unmanned aerial vehicle are mounted as close as possible to the center of the bottom of the fuselage, which facilitates extrinsic parameter calculation and joint calibration.
Those not described in detail in this specification are within the skill of the art.
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited to the above embodiment, but equivalent modifications or changes made by those skilled in the art according to the present disclosure should be included in the scope of the present invention as set forth in the appended claims.

Claims (8)

1. An unmanned aerial vehicle landing method based on multi-sensor fusion is characterized in that airborne sensor data of an unmanned aerial vehicle are used for calculating a landable area in real time and completing autonomous landing according to the following steps:
image segmentation processing is carried out according to image information obtained by an airborne camera, pre-classification identification is carried out on ground scenes, and information of the ground to be landed is determined;
calculating the ground unevenness according to ground three-dimensional point cloud information synchronously constructed by an airborne laser radar, completing the ground unevenness information of each piece of ground information to be landed, and determining a landable area suitable for landing;
calculating landing position information of the unmanned aerial vehicle through the multi-sensor position relation and the measurement data, wherein the landing position information comprises the relative position information and attitude information of a landable area;
and providing landing position information for a guidance system, and enabling the unmanned aerial vehicle to land autonomously in a landable area by adjusting the flight attitude of the unmanned aerial vehicle.
2. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 1, wherein when an unknown environment alarm is generated, an airborne sensor of the unmanned aerial vehicle is automatically triggered, and a landable area is calculated in real time and autonomous landing is completed.
3. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 1, wherein after image segmentation processing is performed according to image information obtained by an onboard camera, pre-screening is performed according to distance and physical size of the unmanned aerial vehicle to improve the identification speed of the landing area.
4. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 1, wherein the image segmentation is performed according to image information obtained by an onboard camera, the ground scene is pre-classified and identified, the ground information to be landed is determined, and the image semantic segmentation-based landable area identification method is adopted as follows:
collecting a ground image and labeling a sample; the method specifically comprises the following steps:
collecting ground images with different heights, carrying out category marking on targets with different categories in the ground images, and carrying out sampling marking on objects with different danger levels observable in the flight process;
performing semantic segmentation on the image; the method specifically comprises the following steps:
segmenting the sampled and labeled ground image by using a semantic segmentation algorithm, training and estimating acquired ground image data by using the image semantic segmentation algorithm to generate a trained image semantic segmentation model, deploying a semantic segmentation model inference program to an onboard computer platform, inputting an onboard vision sensor image of an unmanned aerial vehicle, performing inference by using a pre-generated model, and outputting a labeled segmentation image;
merging landing areas with the same risk level; the method specifically comprises the following steps:
combining the labeled segmentation images according to different risk levels to generate a better landable area, a general landable area and an area which cannot be landed, and setting the better landable area and the general landable area as an area to be landed;
extracting the outline of the image area; the method specifically comprises the following steps:
accessing the current height-above-ground information of the unmanned aerial vehicle, calculating the actual distance represented by each pixel of the area to be landed, setting a rectangular area according to the physical size of the unmanned aerial vehicle, sequentially extracting the image area outlines within the area to be landed, comparing each image area outline with the rectangular area, and screening and ranking, by area, length and width, the areas to be landed that can accommodate the rectangular area;
outputting the information of the ground to be landed; the method specifically comprises the following steps:
and performing preliminary screening according to the current height of the unmanned aerial vehicle, the distance between the center point of the area to be landed and the current position of the unmanned aerial vehicle, and giving image-trusted ground information to be landed, wherein the ground information to be landed comprises the range and the position of the area to be landed.
5. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 4, wherein when the area, length and width of the areas to be landed that can accommodate the rectangular area are screened, an image-area and size deviation coefficient is set according to the error in the visually calculated distance, and the screening is performed after the rectangular area is enlarged by the deviation coefficient.
6. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 1, wherein ground unevenness is calculated according to the ground three-dimensional point cloud information synchronously constructed by the airborne laser radar, the ground unevenness information of each piece of ground information to be landed is completed, and the landable area suitable for landing is determined, by a method comprising the following steps:
accessing ground information to be landed, and acquiring a result set of an area to be landed, wherein the result set comprises the ground information;
after the laser radar and the camera are calibrated jointly, accessing ground three-dimensional point cloud information synchronously constructed by the laser radar, and projecting the point cloud to a to-be-landed area to enable each point in the point cloud to correspond to a pixel point in an image of the to-be-landed area;
effectively cutting the point cloud information according to the image area outline of the area to be landed, and only reserving the point cloud information in the image area outline;
when the area to be landed meets the requirements of the physical size and the landing range of the airplane, point cloud unevenness calculation is further carried out, and ground unevenness information completion is carried out on the ground information to be landed corresponding to the area to be landed;
screening the regions to be landed according to the unevenness threshold value, and sequencing the screened regions to be landed through target point distance calculation;
and sequentially determining the landable areas suitable for landing based on the sorting and outputting.
7. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 6, wherein when the area to be landed is far larger than the physical size and landing range requirements of the aircraft, the area to be landed is first subjected to multi-window segmentation according to the landing range requirement to generate a plurality of segmented areas, and then the ground information to be landed corresponding to the plurality of segmented areas is subjected to ground unevenness information completion in a sliding window manner.
8. The unmanned aerial vehicle landing method based on multi-sensor fusion of claim 1, wherein when landing position information is provided to a guidance system, if the landing position information gives out a space point coordinate relative to an aircraft body coordinate, an aircraft attitude is determined by an Inertial Measurement Unit (IMU), and then the space point coordinate relative to the aircraft body coordinate is converted into an actual GPS space point coordinate;
and the guidance system calculates the expected flight attitude and the flight speed of the unmanned aerial vehicle at the next moment according to the actual GPS space point coordinates.
CN202210196642.8A 2022-03-01 2022-03-01 Unmanned aerial vehicle landing method based on multi-sensor fusion Pending CN114564042A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210196642.8A CN114564042A (en) 2022-03-01 2022-03-01 Unmanned aerial vehicle landing method based on multi-sensor fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210196642.8A CN114564042A (en) 2022-03-01 2022-03-01 Unmanned aerial vehicle landing method based on multi-sensor fusion

Publications (1)

Publication Number Publication Date
CN114564042A true CN114564042A (en) 2022-05-31

Family

ID=81715297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210196642.8A Pending CN114564042A (en) 2022-03-01 2022-03-01 Unmanned aerial vehicle landing method based on multi-sensor fusion

Country Status (1)

Country Link
CN (1) CN114564042A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115016545A (en) * 2022-08-05 2022-09-06 四川腾盾科技有限公司 Landing point autonomous selection method, device and medium for unmanned aerial vehicle landing
CN115761516A (en) * 2022-12-26 2023-03-07 中国电子科技集团公司第十五研究所 Aerial emergency delivery landing region analysis method, server and storage medium
CN115761516B (en) * 2022-12-26 2024-03-05 中国电子科技集团公司第十五研究所 Method, server and storage medium for analyzing landing zone of air emergency delivery
CN116482711A (en) * 2023-06-21 2023-07-25 之江实验室 Local static environment sensing method and device for autonomous selection of landing zone

Similar Documents

Publication Publication Date Title
CN114564042A (en) Unmanned aerial vehicle landing method based on multi-sensor fusion
Li et al. Automatic bridge crack detection using Unmanned aerial vehicle and Faster R-CNN
CN105957342B (en) Track grade road plotting method and system based on crowdsourcing space-time big data
McGee et al. Obstacle detection for small autonomous aircraft using sky segmentation
US20190042865A1 (en) Image-Based Pedestrian Detection
CN110988912A (en) Road target and distance detection method, system and device for automatic driving vehicle
CN104049641B (en) A kind of automatic landing method, device and aircraft
US11092444B2 (en) Method and system for recording landmarks in a traffic environment of a mobile unit
CN109885086B (en) Unmanned aerial vehicle vertical landing method based on composite polygonal mark guidance
CN109583415A (en) A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
CN104808685A (en) Vision auxiliary device and method for automatic landing of unmanned aerial vehicle
CN112923904B (en) Geological disaster hidden danger detection method for multi-unmanned aerial vehicle collaborative photogrammetry
CN110196454B (en) Geological survey integrated system based on unmanned aerial vehicle
Li et al. Toward automated power line corridor monitoring using advanced aircraft control and multisource feature fusion
US11371851B2 (en) Method and system for determining landmarks in an environment of a vehicle
CN111796602A (en) Plant protection unmanned aerial vehicle barrier is surveyed and early warning system
CN112949366B (en) Obstacle identification method and device
CN112596071A (en) Unmanned aerial vehicle autonomous positioning method and device and unmanned aerial vehicle
CN112379681A (en) Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle
Savva et al. ICARUS: Automatic autonomous power infrastructure inspection with UAVs
CN112378397A (en) Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle
WO2023109589A1 (en) Smart car-unmanned aerial vehicle cooperative sensing system and method
CN113284144A (en) Tunnel detection method and device based on unmanned aerial vehicle
CN117036989A (en) Miniature unmanned aerial vehicle target recognition and tracking control method based on computer vision
CN112380933A (en) Method and device for identifying target by unmanned aerial vehicle and unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination