WO2023030062A1 - Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program - Google Patents


Info

Publication number
WO2023030062A1
WO2023030062A1 (PCT/CN2022/113856)
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
flight
feature point
image feature
Prior art date
Application number
PCT/CN2022/113856
Other languages
French (fr)
Chinese (zh)
Inventor
黄佳伟
任一珂
刘长杰
Original Assignee
中移(成都)信息通信科技有限公司
***通信集团有限公司
Application filed by 中移(成都)信息通信科技有限公司 and ***通信集团有限公司
Publication of WO2023030062A1 publication Critical patent/WO2023030062A1/en


Classifications

    • G — PHYSICS
    • G05 — CONTROLLING; REGULATING
    • G05D — SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 — Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/08 — Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D 1/10 — Simultaneous control of position or course in three dimensions

Definitions

  • The present application relates to the field of information technology, and in particular to a flight control method, apparatus, device, medium and program for an unmanned aerial vehicle.
  • The embodiments of the present application provide a flight control method, apparatus, device, medium and program for an unmanned aerial vehicle. A three-dimensional map is built from three-dimensional coordinate information, which can efficiently and accurately restore the actual flight environment and yields a three-dimensional topographic map with height information; at the same time, the flight trajectory is determined based on the three-dimensional map to achieve obstacle-avoidance flight, which reduces the influence of the actual flight environment on the UAV during flight.
  • An embodiment of the present application provides a flight control method of an unmanned aerial vehicle, the method comprising:
  • the picture content of the image to be processed includes information about the flight environment ahead;
  • An embodiment of the present application provides a flight control device for an unmanned aerial vehicle, the device comprising:
  • an acquisition part, configured to acquire an image to be processed, where the picture content of the image to be processed includes information about the flight environment ahead;
  • a first determination part, configured to determine image feature point pairs satisfying preset conditions based on two temporally adjacent frames of images to be processed;
  • a second determination part, configured to determine, in the forward flight environment, the three-dimensional coordinate information associated with the image feature point pairs;
  • an adjustment part, configured to adjust the map to be adjusted corresponding to the image to be processed based on the three-dimensional coordinate information to obtain a three-dimensional map;
  • a third determination part, configured to determine the flight trajectory of the UAV based on the three-dimensional map.
  • An embodiment of the present application also provides an electronic device, the electronic device including: a processor, a memory, and a communication bus; wherein the communication bus is used to implement a communication connection between the processor and the memory;
  • the processor is configured to execute the program in the memory, so as to implement the flight control method of the unmanned aerial vehicle described above.
  • An embodiment of the present application also provides a computer-readable storage medium that stores one or more programs, and the one or more programs can be executed by one or more processors to implement the flight control method of the unmanned aerial vehicle described above.
  • An embodiment of the present application also provides a computer program comprising computer-readable code; when the computer-readable code runs in an electronic device, the processor of the electronic device executes the flight control method of the unmanned aerial vehicle described above.
  • According to the flight control method, apparatus, device, medium and program of the unmanned aerial vehicle provided by the embodiments of the present application, firstly, an image to be processed is acquired, where the picture content of the image to be processed includes information about the flight environment ahead; secondly, based on two temporally adjacent frames of images to be processed, image feature point pairs satisfying preset conditions are determined, and the three-dimensional coordinate information associated with the image feature point pairs in the forward flight environment is determined; finally, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the image to be processed is adjusted to obtain a three-dimensional map, and the flight trajectory of the UAV is determined based on the three-dimensional map.
  • The three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent frames of images to be processed, which can efficiently and accurately restore the actual flight environment and yields a three-dimensional topographic map with height information;
  • the flight trajectory is determined based on the three-dimensional map to achieve obstacle-avoidance flight, which reduces the influence of the actual flight environment on the UAV during flight.
  • FIG. 1 is a schematic flowchart of a flight control method for an unmanned aerial vehicle provided by an embodiment of the present application;
  • FIG. 2 is a schematic flowchart of another flight control method for an unmanned aerial vehicle provided by an embodiment of the present application;
  • FIG. 3 is a schematic flowchart of yet another flight control method for an unmanned aerial vehicle provided by an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of building a three-dimensional map during flight provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the correspondence between image feature point pairs provided by an embodiment of the present application;
  • FIG. 6 is a schematic structural diagram of a flight control apparatus for an unmanned aerial vehicle provided by an embodiment of the present application;
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • References to "an embodiment of the present application" or "the foregoing embodiment" throughout the specification mean that a specific feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, appearances of "in the embodiment of the present application" or "in the foregoing embodiment" throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
  • The serial numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and the serial numbers do not constitute any limitation on the implementation of the embodiments of the present application. The serial numbers of the above embodiments of the present application are for description only and do not represent the relative merits of the embodiments.
  • When the information processing device executes any step in the embodiments of the present application, it may be the processor of the information processing device that executes the step. It is also worth noting that the embodiments of the present application do not limit the order in which the information processing device executes the following steps. In addition, the methods used to process data in different embodiments may be the same or different. It should also be noted that any step in the embodiments of the present application can be executed independently by the information processing device; that is, when executing any step in the following embodiments, the information processing device may not depend on the execution of other steps.
  • UAV obstacle avoidance: UAVs are gradually replacing humans in completing various special tasks, such as search, rescue, firefighting and data collection. When completing such tasks, UAVs are often in environments with complex terrain, such as buildings, narrow indoor spaces, and rugged mountains and forests; obstacles in the environment pose a collision threat to the UAV at any time and make it difficult to complete the task. A UAV must collect as much environmental information as possible through its limited sensors and storage space in order to detect obstacles in space in time and avoid obstacles on the original route by updating the flight path.
  • CV (Computer Vision): CV is a simulation of biological vision using computers and related equipment. Its main task is to obtain various kinds of information about the corresponding scene by processing collected pictures or videos.
  • the main goal of traditional computer vision systems is to extract features from images, including edge detection, corner detection, and image segmentation. Depending on the type and quality of the input image, different algorithms perform differently.
  • VSLAM (Visual Simultaneous Localization and Mapping):
  • Drones or vehicles use onboard equipment to perceive obstacles in the environment ahead, obtain information such as relative distance and their own angle and attitude, and process this information to update dynamic environment information in real time.
  • the UAV autonomously formulates obstacle avoidance routes based on dynamic environmental information and its own flight status.
  • the flight control system adjusts the flight speed and direction of the UAV to achieve obstacle avoidance.
  • Existing UAV obstacle recognition is usually implemented in one of the following two ways:
  • The first relies mainly on manual identification by the drone operator: through the picture transmitted in real time by the drone, the operator manually steers the drone around the obstacles in space. This places high demands on the operator's skill, accidents may occur due to mis-operation, and there are problems such as a shortage of manpower when multiple drones are required to work at the same time.
  • The second uses algorithms to enable the UAV to autonomously identify and avoid obstacles. Usually, obstacles in real space are mapped onto a two-dimensional virtual map to realize the UAV's perception of obstacles; this compresses the point clouds of the obstacles, including their height information, into two-dimensional space, so the original shape of an obstacle cannot be perceived, and irregularly shaped obstacles lead to modeling errors. Although this approach is feasible for avoiding relatively small obstacles in space, for large obstacles such as forests, mountains and buildings, controlling the UAV to avoid them only in the two-dimensional horizontal plane inevitably increases the distance traveled for obstacle avoidance, which is clearly disadvantageous for the many small UAVs with limited battery capacity.
  • the embodiment of the present application provides a flight control method of a drone, which is applied to electronic equipment.
  • the method includes the following steps:
  • Step 101: Acquire an image to be processed.
  • the picture content of the image to be processed includes the information of the flight environment ahead.
  • The electronic device may be any device with data processing capabilities; it may be a data processing device installed inside the drone, an electronic device capable of exchanging information with the drone, or a cloud processor for managing drones, etc.
  • The electronic device may receive images to be processed sent by at least one of an image acquisition part and a video acquisition part arranged on the UAV; correspondingly, in the embodiment of the present application, the UAV is provided with at least one of an image acquisition part and a video acquisition part, and the image acquisition part or video acquisition part may be a monocular camera or a binocular camera.
  • the forward flight environment information refers to the forward environment information where the UAV is located during flight.
  • the unmanned aerial vehicle may perform flight missions in environments such as mountains, forests, buildings, or indoors.
  • The number of images to be processed may be one, two or more; the picture format of the image to be processed may be bitmap (BMP), Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), etc.
  • Step 102: Based on two temporally adjacent frames of images to be processed, determine image feature point pairs satisfying preset conditions.
  • The electronic device determines image feature point pairs satisfying preset conditions based on two temporally adjacent frames of images to be processed; here, temporally adjacent means that the times at which the two frames were respectively captured are adjacent, and may also mean that the times at which the two frames were respectively acquired by the electronic device are adjacent.
  • The preset condition may be set in advance; for example, it may require that the related attribute information of the two image feature points in an image feature point pair be similar, that the distance between the two image feature points be less than or equal to a preset threshold, or that the two image feature points have the same position information in their respective images to be processed.
  • The electronic device first extracts at least one image feature point from each of the two temporally adjacent frames of images to be processed; secondly, it selects any image feature point as a first image feature point and computes the Hamming distance between it and the image feature points that are not in the same image to be processed; if the Hamming distance is less than or equal to a preset distance, the first image feature point and the corresponding other image feature point are determined to be an image feature point pair.
  • The electronic device may describe an image feature point pair based on the position information of the image feature points in the corresponding images to be processed, or based on their feature information.
  • The two image feature points in an image feature point pair are obtained when the image acquisition device installed on the UAV captures the same spatial point in the forward flight environment at adjacent time points; that is to say, both image feature points in the pair map to the same spatial point in the forward flight environment.
  • In the embodiment of the present application, the number of feature point pairs may be one, two or more.
  • Step 103: In the forward flight environment, determine the three-dimensional coordinate information associated with the image feature point pairs.
  • The electronic device determines, in the forward flight environment, the three-dimensional coordinate information associated with the image feature point pair; here, the three-dimensional coordinate information is the coordinate information of the spatial point in the forward flight environment that has a mapping relationship with the image feature point pair.
  • the three-dimensional coordinate information may refer to a three-dimensional coordinate parameter of a spatial point having a mapping relationship with the image feature point pair in the world coordinate system.
  • The electronic device obtains the coordinate position of each image feature point of the image feature point pair in the corresponding image to be processed; the coordinate position may be coordinate position information obtained in a camera coordinate system established by the electronic device with the image acquisition device of the drone as the reference. Then, based on the two coordinate positions, the three-dimensional coordinate information associated with the image feature point pair is determined by a geometric algorithm.
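  • The embodiment refers only to "a geometric algorithm" for recovering the three-dimensional coordinates from the two coordinate positions. One common concrete choice is linear (DLT) triangulation; the Python sketch below illustrates it under assumed synthetic cameras (`P1`, `P2`, `K` and the point `X_true` are illustrative values, not taken from this application):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3-D point observed at
    pixel x1 in view 1 and pixel x2 in view 2, given the 3x4
    projection matrices P1 and P2 of the two views."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the right singular vector of A
    # associated with its smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic views: an identity camera and a camera translated one
# unit along x (the stereo baseline), focal length 1, no distortion.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 6))
```

In a real system the projection matrices would come from the camera intrinsics and the estimated relative pose between the two adjacent frames.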
  • Step 104: Based on the three-dimensional coordinate information, adjust the map to be adjusted corresponding to the image to be processed to obtain a three-dimensional map.
  • Based on the determined three-dimensional coordinate information, the electronic device adjusts or corrects the map to be adjusted corresponding to the image to be processed to obtain a three-dimensional map with height information; the three-dimensional map is a virtual three-dimensional image corresponding to the forward flight environment.
  • The map to be adjusted corresponding to the image to be processed may be a two-dimensional planar map or a three-dimensional topographic map.
  • When the map to be adjusted corresponding to the image to be processed is a two-dimensional planar map, the electronic device may fuse the determined three-dimensional coordinate information with the two-dimensional coordinate information in the planar map to construct a three-dimensional map with height information.
  • When the map to be adjusted corresponding to the image to be processed is a three-dimensional map, the electronic device may adjust or correct it based on the determined three-dimensional coordinate information to obtain an updated three-dimensional map.
  • The map to be adjusted corresponding to the image to be processed may be a two-dimensional planar map that the electronic device generates for each frame after acquiring it; it may also be a preset three-dimensional stereoscopic image that the electronic device determines from multiple acquired frames of images to be processed based on a correlation algorithm.
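  • The application does not specify the fusion procedure; as one illustrative sketch, the hypothetical Python function below writes triangulated 3-D points into a 2-D grid whose cells store the maximum observed height, turning a flat map into a terrain map with height information:

```python
import numpy as np

def fuse_points_into_height_map(height_map, points, cell_size=1.0):
    """Fuse triangulated 3-D points (x, y, z) into a 2-D grid whose
    cells keep the maximum observed height at that ground location."""
    for x, y, z in points:
        i, j = int(x // cell_size), int(y // cell_size)
        if 0 <= i < height_map.shape[0] and 0 <= j < height_map.shape[1]:
            height_map[i, j] = max(height_map[i, j], z)
    return height_map

grid = np.zeros((4, 4))          # the 2-D planar "map to be adjusted"
obstacle_points = [(1.2, 2.7, 5.0), (1.4, 2.1, 3.0), (3.0, 0.5, 8.0)]
grid = fuse_points_into_height_map(grid, obstacle_points)
print(grid)
```

A planner can then treat any cell whose stored height exceeds the intended flight altitude as an obstacle.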
  • Step 105: Based on the three-dimensional map, determine the flight trajectory of the drone.
  • The electronic device determines the flight trajectory of the drone based on the obtained three-dimensional map; here, the flight trajectory may refer to an actual obstacle-avoidance path in the forward flight environment.
  • From the three-dimensional map, the electronic device obtains information about the obstacles present in the forward flight environment, and then determines the trajectory the UAV needs to follow to avoid those obstacles during flight, i.e. the UAV flight trajectory.
  • The image feature point pairs, the three-dimensional coordinate information associated with them in the forward flight environment, and the three-dimensional map are determined in sequence from the collected images to be processed; in this way, the state of obstacles in the forward flight environment can be restored more accurately, a more accurate three-dimensional map with height information can be given, and the electronic device can then give a more accurate obstacle-avoidance path for the drone based on the determined three-dimensional map, that is, it can ensure as far as possible that the UAV's flight path is not affected by obstacles in the forward flight environment.
  • According to the flight control method of the unmanned aerial vehicle provided by the embodiment of the present application, firstly, an image to be processed is acquired, where the picture content of the image to be processed includes information about the flight environment ahead; secondly, based on two temporally adjacent frames of images to be processed, image feature point pairs satisfying preset conditions are determined, and the three-dimensional coordinate information associated with the image feature point pairs in the forward flight environment is determined; finally, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the image to be processed is adjusted to obtain a three-dimensional map, and the flight trajectory of the drone is determined based on the three-dimensional map.
  • The three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent frames of images to be processed, which can efficiently and accurately restore the actual flight environment and yields a three-dimensional topographic map with height information;
  • the flight trajectory is determined based on the three-dimensional map to achieve obstacle-avoidance flight, which reduces the influence of the actual flight environment on the UAV during flight.
  • An embodiment of the present application also provides a flight control method of a drone, which is applied to an electronic device; as shown in Figure 1 and Figure 2, the method includes the following steps:
  • Step 201: Collect information about the flight environment ahead to obtain a preset image.
  • The electronic device collects the forward flight environment information to obtain a preset image; that is, the picture content collected by the electronic device includes the forward flight environment information. The preset image may be obtained by an electronic device that is arranged on the drone and includes at least an image acquisition part, by capturing the flight environment in front of the UAV during flight.
  • The preset image may be an image directly collected by the electronic device during the flight of the drone, without any data processing; correspondingly, this embodiment places no restriction on the number of preset images or the frequency of collection.
  • Step 202: Adjust the image contrast of the preset image to obtain an image to be processed.
  • The electronic device adjusts the image contrast of the preset image to obtain the image to be processed; the contrast adjustment may correct or optimize the pixel values of the preset image, or enhance its image contrast. In the embodiment of the present application, image contrast adjustment refers to image contrast enhancement.
  • The electronic device may enhance the image contrast of the preset image directly, or through an indirect method; the contrast may be enhanced based on at least one of histogram stretching and histogram equalization, whose specific implementation is not described in detail in the embodiments of this application.
  • The image to be processed is obtained by enhancing the image contrast of the acquired preset image; in this way, the feature information in the image to be processed becomes more prominent and the intensity gradients around key points increase, so that more distinctive image feature points can be extracted in the subsequent feature extraction.
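  • Histogram equalization, one of the two enhancement options named above, can be sketched in Python as follows (a minimal illustration; a production system would more likely call an existing library routine):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grayscale image: remap pixel
    values through the normalized cumulative histogram so the output
    intensities spread over the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Classic equalization formula, scaled to 0..255.
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A low-contrast image whose values sit in a narrow band (100..120).
low_contrast = np.random.default_rng(0).integers(100, 121, (64, 64)).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
print(low_contrast.max() - low_contrast.min(), enhanced.max() - enhanced.min())
```

The narrow 20-level band is stretched across the full intensity range, which is exactly the "more prominent feature information, larger intensity gradients" effect the embodiment describes.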
  • The electronic device determines image feature point pairs that meet the preset conditions based on the two temporally adjacent frames of images to be processed, that is, the electronic device executes step 102 provided in the above embodiment, which may be implemented through the following steps 203 to 205:
  • Step 203: Determine at least one image feature point of each frame of the two temporally adjacent frames of images to be processed.
  • The electronic device determines at least one image feature point for each of the two temporally adjacent frames of images to be processed; the number of image feature points and their corresponding parameter information may be the same or different for different images to be processed.
  • The electronic device determines at least one image feature point of each of the two temporally adjacent frames of images to be processed, that is, the electronic device executes the above step 203, which may be implemented through the following steps 203a and 203b:
  • Step 203a: Perform image down-sampling on each frame of the image to be processed according to an image resolution gradient, and generate an image pyramid corresponding to each frame of the image to be processed.
  • the electronic device performs image down-sampling on each frame of the image to be processed among the two adjacent frames of the image to be processed in time sequence according to the image resolution gradient, and obtains an image pyramid corresponding to each frame of the image to be processed.
  • The image pyramid is a multi-scale representation of an image: an effective but conceptually simple structure for interpreting an image at multiple resolutions. The image pyramid of an image is a series of progressively smaller images, arranged in a pyramid shape (bottom up) and all derived from the same original image, obtained by down-sampling step by step until a termination condition is reached. The stack of images is compared to a pyramid: the higher the level, the smaller the image and the lower the resolution.
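  • The step-by-step down-sampling described above can be sketched in Python; here a simple 2x2 block average stands in for whatever down-sampling filter an implementation would actually use (the application does not name one):

```python
import numpy as np

def build_image_pyramid(img, levels=4):
    """Build an image pyramid by repeated 2x down-sampling: each level
    averages non-overlapping 2x2 blocks of the level below it."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        prev = pyramid[-1][: h - h % 2, : w - w % 2]  # trim odd edge
        # Average each 2x2 block to halve the resolution.
        smaller = prev.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(smaller)
    return pyramid

frame = np.arange(64 * 64, dtype=float).reshape(64, 64)
pyramid = build_image_pyramid(frame, levels=4)
print([lvl.shape for lvl in pyramid])
```

Feature extraction is then run on every level, so features remain detectable whether an obstacle appears large (near) or small (far) in the frame.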
  • Step 203b: Perform feature extraction on the images at each level of the image pyramid corresponding to each frame of the image to be processed, to obtain at least one image feature point of each frame.
  • The electronic device performs the feature extraction operation, that is, it performs feature extraction on the images at each level of the image pyramid corresponding to each of the two temporally adjacent frames of images to be processed, and obtains at least one image feature point for each frame.
  • the electronic device may use a relevant neural network to perform feature extraction on images at each level; it may also perform feature extraction on images at each level based on a target detection algorithm.
  • The electronic device may perform feature extraction on the images at each level of the image pyramid corresponding to each image to be processed based on the Oriented FAST and Rotated BRIEF (ORB) feature point extraction and description algorithm.
  • The electronic device fuses the image feature points corresponding to the images at each level of the image pyramid of an image to be processed, and combines them to obtain the set of image feature points of that image to be processed.
  • The electronic device performs feature extraction on the images at each level of the image pyramid of each frame of the image to be processed, and obtains the image feature points of each frame; thus, the collected real-time image is preprocessed with image enhancement and down-sampled into multiple layers, and feature points are then extracted from each layer of the sampled image. This improves the quantity and quality of the extracted feature points, is highly robust in complex flight environments, and achieves the technical effect of improving the UAV's ability to identify obstacles.
  • Step 204: Determine the binary parameter corresponding to the at least one image feature point.
  • The electronic device may use ORB to describe the at least one image feature point of each obtained image to be processed, that is, describe each image feature point with a binary value.
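  • ORB's binary "parameter" is a BRIEF-style descriptor built from pairwise intensity comparisons. The simplified Python sketch below shows only the idea; the random sampling pattern and patch here are illustrative values, whereas real ORB uses a fixed learned pattern steered by the keypoint orientation:

```python
import numpy as np

def brief_like_descriptor(patch, pairs):
    """BRIEF-style binary descriptor: compare the intensities of
    pre-chosen pixel pairs inside a patch around the keypoint; each
    comparison contributes one bit."""
    bits = [1 if patch[r1, c1] < patch[r2, c2] else 0
            for (r1, c1), (r2, c2) in pairs]
    return np.array(bits, dtype=np.uint8)

rng = np.random.default_rng(42)
# The sampling pattern is fixed once and reused for every keypoint.
pattern = [((int(a), int(b)), (int(c), int(d)))
           for a, b, c, d in rng.integers(0, 16, (32, 4))]
patch = rng.integers(0, 256, (16, 16))
descriptor = brief_like_descriptor(patch, pattern)
print(descriptor)
```

Because the descriptor is a bit string, two feature points can be compared with a cheap XOR-based Hamming distance, which is what step 205 exploits.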
  • Step 205: Among the image feature points of each of the two temporally adjacent frames of images to be processed, determine the image feature point pairs based on the binary parameter corresponding to the at least one image feature point.
  • Among the image feature points of each of the two temporally adjacent frames of images to be processed, the electronic device performs feature matching on the image feature points based on the binary parameter corresponding to each image feature point, to obtain the image feature point pairs.
  • Feature extraction is performed on each frame of the image to be processed to obtain the corresponding image feature points, and the image feature points are then matched to obtain image feature point pairs; in this way, image feature point pairs can be determined efficiently and accurately, which in turn improves the accuracy of the drone's real-time perception of the forward flight environment during flight.
  • The electronic device determines the image feature point pairs among the image feature points of each of the two temporally adjacent frames of images to be processed based on the binary parameters corresponding to the image feature points, that is, the electronic device executes step 205, which may be implemented through the following steps 205a and 205b:
  • Step 205a: Based on the binary parameters corresponding to the image feature points, determine the Hamming distance between two image feature points located in the two temporally adjacent frames of images to be processed.
  • The electronic device calculates the Hamming distance between the binary parameters corresponding to two image feature points located in different ones of the two temporally adjacent frames of images to be processed.
  • The Hamming distance is used in error-control coding for data transmission. The Hamming distance between two words x and y of the same length, denoted d(x, y), is the number of bit positions in which they differ: XOR the two strings and count the number of 1s, and that count is the Hamming distance.
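  • The XOR-and-count definition above, together with the threshold test of step 205b, can be sketched in Python (the nearest-neighbour matching strategy shown is an illustrative assumption; the embodiment itself only requires the distance threshold):

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two binary descriptors: XOR the bit
    arrays and count the ones."""
    return int(np.count_nonzero(d1 ^ d2))

def match_feature_points(desc_a, desc_b, threshold):
    """Pair each descriptor from frame A with the closest descriptor
    from frame B, keeping only pairs under the distance threshold."""
    pairs = []
    for i, da in enumerate(desc_a):
        dists = [hamming_distance(da, db) for db in desc_b]
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            pairs.append((i, j))
    return pairs

x = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
y = np.array([1, 1, 1, 0, 0, 0, 1, 1], dtype=np.uint8)
print(hamming_distance(x, y))  # bits differ at positions 1, 3, 7 -> 3
matches = match_feature_points([x], [y, x.copy()], threshold=2)
print(matches)
```

Each surviving `(i, j)` pair corresponds to one image feature point pair, i.e. two observations of the same spatial point in the forward flight environment.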
  • Step 205b: If the Hamming distance is less than a preset threshold, determine the two image feature points as an image feature point pair.
  • When the Hamming distance is less than the preset threshold, the corresponding two image feature points are considered approximately similar, i.e. a matching feature point pair, and the two image feature points then form an image feature point pair.
  • the number of image feature point pairs existing in two consecutive frames of images to be processed in time sequence may be one, two or more, which is not limited in this embodiment of the present application.
• the matching relationship between two image feature points located in different, temporally consecutive frames of images to be processed is determined by calculating the Hamming distance between them; in this way, image feature point pairs can be determined efficiently and accurately.
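The Hamming-distance matching of steps 205a and 205b can be sketched in pure Python; this is an illustrative sketch (descriptors modeled as integers, and the threshold value is an assumption), not the patent's implementation:

```python
def hamming(d1: int, d2: int) -> int:
    """Hamming distance: XOR the two binary descriptors, count the 1 bits."""
    return bin(d1 ^ d2).count("1")

def match_features(desc_prev, desc_curr, threshold=40):
    """For each descriptor in the previous frame, take the closest descriptor
    in the current frame and keep the pair only if the distance is below the
    preset threshold (step 205b)."""
    pairs = []
    for i, d1 in enumerate(desc_prev):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc_curr)),
                      key=lambda t: t[1])
        if dist < threshold:
            pairs.append((i, j, dist))
    return pairs
```

A pair whose best Hamming distance still exceeds the threshold is discarded, so a point without a true correspondence in the next frame is simply left unmatched.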
• the flight control method of the UAV provided by the embodiment of the present application can efficiently and accurately obtain image feature point pairs by preprocessing the image, extracting image feature points based on the image pyramid, and determining image feature point pairs based on feature matching;
• a three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent images to be processed, which can efficiently and accurately restore the actual flight environment information and yield a three-dimensional topographic map with height information;
  • Determining the flight trajectory based on the three-dimensional map to achieve obstacle avoidance flight can reduce the influence of the actual flight environment on the flight process of the UAV.
  • the embodiment of the present application also provides a flight control method of a drone, which is applied to electronic equipment, as shown in Figure 1 and Figure 3, the method includes the following steps:
  • Step 301 Obtain the two-dimensional coordinate information of each image feature point in the image feature point pair in the corresponding image to be processed.
• the electronic device acquires the two-dimensional coordinate information of each image feature point in the image feature point pair in the corresponding image to be processed; wherein the two-dimensional coordinate information may be the coordinate parameters in a camera coordinate system established with the image acquisition component on the drone as the reference.
• the two-dimensional coordinate information of the two image feature points in an image feature point pair may be identical or different in their respective images to be processed. Correspondingly, in this embodiment of the present application, within an image feature point pair, the electronic device marks the first coordinate of image feature point A in the first image to be processed as (x1, y1), and the second coordinate of image feature point B in the second image to be processed as (x2, y2); here, the first image to be processed and the second image to be processed are temporally adjacent, and their temporal order is not limited in this embodiment of the present application.
  • Step 302 based on the two-dimensional coordinate information, determine the spatial position relationship between two image feature points in the image feature point pair.
• based on the first coordinates of the feature point in the first image and the second coordinates of the feature point in the second image, and according to the epipolar geometric relationship between the two temporally adjacent frames, that is, between the first image to be processed and the second image to be processed, the electronic device calculates an essential matrix or a fundamental matrix that characterizes the spatial position relationship between the two image feature points.
  • the electronic device can jointly determine the spatial position relationship between two image feature points in the image feature point pair based on the acquisition parameters of the image acquisition part set on the drone and the two-dimensional coordinate information.
  • Step 303 based on the spatial position relationship and the two-dimensional coordinate information, determine the three-dimensional coordinate information in the forward flight environment.
  • the electronic device determines relevant three-dimensional coordinate information based on the spatial position relationship, that is, based on the essential matrix or fundamental matrix representing the spatial position relationship between two image feature points, and the two-dimensional coordinate information.
• the corresponding spatial position relationship is determined, and then the corresponding three-dimensional coordinate information is determined; in this way, the three-dimensional coordinate points in the actual flight environment can be determined efficiently and accurately, which further enables the electronic device to construct a more accurate topographic map with height information based on the three-dimensional coordinate information at a later stage.
  • the electronic device determines the three-dimensional coordinate information in the forward flight environment based on the spatial position relationship and the two-dimensional coordinate information, that is, the electronic device executes step 303, which can be achieved by the following steps 303a and 303b:
  • Step 303a analyzing the spatial position relationship to obtain the rotation matrix parameters and translation matrix parameters representing the flight change parameters.
• the electronic device analyzes the spatial position relationship, which may mean decomposing the essential matrix parameters or fundamental matrix parameters representing the spatial position relationship to obtain the rotation matrix parameters and translation matrix parameters representing the flight change parameters.
  • Step 303b based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information, determine the three-dimensional coordinate information in the forward flight environment.
• the electronic device determines, through geometric operations, the three-dimensional coordinate information of the space point associated with the image feature point pair in the forward flight environment, based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information.
• the geometric operation may use the coincidence relationship, in the camera coordinate system, between the ray cast through at least one of the first-image and second-image feature points toward the corresponding point in the forward flight environment and the ray through that space point, to determine the three-dimensional coordinate parameters of the space point associated with the image feature point pair in the forward flight environment.
• based on the relevant geometric calculation and the two-dimensional coordinate information of the image feature point pair, the electronic device determines the three-dimensional coordinate information of the space point associated with the image feature point pair in the forward flight environment; thus, the coordinate parameters of space points with height information can be determined efficiently and accurately, which then enables the electronic device to construct a three-dimensional topographic map based on these coordinate parameters at a later stage.
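The relation between the rotation matrix, the translation matrix and the matched feature points can be illustrated with a small NumPy check; the camera motion (a small yaw plus a sideways shift) and the space point are invented for illustration, and E = [t]×R is the standard definition of the essential matrix:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Invented camera motion between the two temporally adjacent frames.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R                 # essential matrix E = [t]x R

# Project one space point into both cameras (normalized coordinates, i.e.
# the intrinsic matrix K already removed) and evaluate the epipolar constraint.
P = np.array([0.5, -0.3, 4.0])
x1 = P / P[2]                   # frame 1: camera at the origin
P2 = R @ P + t                  # the same point in frame-2 coordinates
x2 = P2 / P2[2]
residual = x2 @ E @ x1          # vanishes for a correctly matched pair
```

For a true correspondence the residual is zero up to floating-point error; a large residual flags a mismatched feature point pair.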
• the electronic device adjusts the map to be adjusted corresponding to the image to be processed based on the three-dimensional coordinate information, that is, executes step 104, which can be realized through the following steps 304 to 307:
  • Step 304 acquiring the initial position information and initial flight attitude parameters of the UAV flight.
• the electronic device acquires and determines the initial position information and initial flight attitude parameters of the UAV during flight; wherein the initial position information can be represented by three-dimensional coordinate parameters in world coordinates, and the initial flight attitude parameter can be the angular offset relative to the UAV's flight origin, etc.
  • Step 305 Determine the distance between the initial position information and the three-dimensional coordinate information.
  • the electronic device calculates and determines the distance difference between the three-dimensional coordinate information and the initial position information.
  • each image feature point pair corresponds to a space point and the three-dimensional coordinate information of the space point in the forward flight environment, wherein the distance between different three-dimensional coordinate information and the initial position information is different.
  • the distance may be a distance difference in any direction of the x-axis, y-axis and z-axis.
  • Step 306 based on the distance, initial position information, and initial flight attitude parameters, construct coordinate vector parameters with preset dimensions that match the three-dimensional coordinate information.
• the electronic device can calculate the reciprocal of the distance and fuse this reciprocal with the initial position information and the initial flight attitude parameters to generate a coordinate vector parameter of a preset dimension that matches the three-dimensional coordinate information; wherein the preset dimension may be six.
  • the electronic device may use inverse depth parameterization to perform rapid depth convergence on the extracted three-dimensional coordinate information, so as to improve calculation efficiency.
  • Step 307 based on the coordinate vector parameters, adjust the coordinates to be adjusted in the map to be adjusted to obtain a three-dimensional map.
• the electronic device adjusts the coordinate information to be adjusted in the map to be adjusted based on the extracted coordinate vector parameters, that is, the coordinate information of the actual space points in the flight environment ahead, to obtain a three-dimensional map; wherein the coordinates to be adjusted may be two-dimensional or three-dimensional coordinates.
• in this way, the convergence speed and calculation efficiency of the extended Kalman filter for feature-point depth calculation can be improved, so that a small drone can quickly update obstacle depth information even when moving at high speed; at the same time, inverse depth parameterization enables the algorithm to handle long-distance features, including feature points so far away that their parallax is very small during the drone's movement, thereby enhancing obstacle perception efficiency.
  • the electronic device adjusts the coordinates to be adjusted in the map to be adjusted based on the coordinate vector parameters to obtain a three-dimensional map, that is, the electronic device executes step 307, which can be achieved through the following steps 307a to 307c:
  • Step 307a Based on the coordinate vector parameters, an updated covariance matrix is constructed.
• the electronic device constructs an updated covariance matrix based on the coordinate vector parameters; wherein an extended Kalman filter may be used to update the covariance matrix.
  • Step 307b Based on the updated covariance matrix, adjust the coordinates of the map to be adjusted to obtain corrected three-dimensional coordinate information.
• based on the updated covariance matrix, the electronic device corrects the coordinate parameters associated with the three-dimensional coordinate parameters in the map to be adjusted to obtain corrected three-dimensional coordinate information; for example, the height information of the coordinates can be raised or lowered, or missing height information can be filled in.
  • Step 307c constructing a three-dimensional map based on the corrected three-dimensional coordinate information.
  • the electronic device constructs and generates a three-dimensional map that matches the forward flight environment information based on the corrected three-dimensional coordinate information.
• the electronic device obtains a three-dimensional map with height information by optimizing the map to be adjusted associated with the image to be processed; in this way, the state of obstacles can be restored more accurately, ensuring as far as possible that the flight path of the UAV is not affected.
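Steps 307a and 307b correspond to a standard Kalman filter measurement update; the sketch below shows the covariance update in pure NumPy, with a fixed linear measurement matrix standing in for the Jacobian that the real extended Kalman filter would linearize from the camera model:

```python
import numpy as np

def ekf_update(x, P, z, H, R_noise):
    """One Kalman filter measurement update, the operation behind steps 307a
    and 307b: x holds the coordinate state, P is the covariance being
    updated, z a new observation, H the measurement matrix (a stand-in for
    the linearized camera-model Jacobian), R_noise the measurement noise."""
    S = H @ P @ H.T + R_noise               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (z - H @ x)             # corrected coordinates (307b)
    P_new = (np.eye(len(x)) - K @ H) @ P    # updated covariance (307a)
    return x_new, P_new
```

Each update pulls the stored coordinates toward the new observation and shrinks the covariance of the observed components, which is the depth-convergence behaviour the text describes.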
• the flight control method of the UAV provided in the embodiment of the present application determines, through geometric operations based on the image feature point pairs in the two temporally adjacent images to be processed, the three-dimensional coordinate information of the space points associated with the image feature point pairs in the forward flight environment, and then optimizes the initial map based on this three-dimensional coordinate information; in this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map with height information can be constructed; at the same time, determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoidance flight can reduce the impact of the actual flight environment on the UAV during flight.
  • the electronic device determines the flight trajectory of the drone based on the three-dimensional map, which can be achieved by the following steps A1 and A2:
  • Step A1 Determine the avoidance route based on the three-dimensional map.
  • the electronic device senses information of obstacles in the flying environment ahead in advance, and then determines an avoidance route to circumvent the obstacles.
  • Step A2 based on the avoidance route, determine the flight trajectory of the UAV.
  • the electronic device determines the flight track of the drone based on the avoidance route.
• based on the three-dimensional map, the electronic device learns of the obstacles in the flight environment ahead and then determines the corresponding avoidance route to perform the relevant flight tasks, thereby reducing the impact of obstacles on the flight process.
  • Step 1 The UAV executes the flight mission, that is, starts to operate, which corresponds to 401 in FIG. 4 .
  • Step 2 collecting real-time images, which corresponds to 402 in FIG. 4 ; wherein, a monocular camera may be used to collect real-time images.
• Step 3 enhance the collected real-time image to highlight the feature information in the image, which corresponds to 403 in FIG. 4; UAV obstacle-avoidance algorithms in the related art collect real-time images in an ideal environment and perform obstacle recognition and spatial perception, but in actual UAV application scenarios the environment often contains many visual disturbances, such as weak light, natural shadows and haze; such disturbances can strongly affect machine vision and directly cause errors or insufficiency in feature extraction of the spatial environment, so the problem of blurred image feature information in some special environments needs to be addressed.
• the collected real-time image is non-linearly transformed and its pixel values are redistributed to ensure that the number of pixels of the real-time image within each grey-level range is roughly equal.
• this increases the contrast of the peak region in the middle of the image histogram, reduces the contrast of the valley regions on both sides, and outputs the flattened histogram corresponding to the image.
• in this way, the feature information in the real-time image is highlighted, the intensity gradient of key pixels increases, and more prominent feature points can be extracted when performing feature extraction on the real-time image.
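The redistribution described above is classic histogram equalization; a minimal NumPy sketch (assuming a non-constant 8-bit grayscale image), not the patent's exact procedure:

```python
import numpy as np

def equalize_histogram(img):
    """Redistribute the pixel values of an 8-bit grayscale image so that each
    grey-level range holds roughly the same number of pixels (the non-linear
    shift described in the text). Assumes the image is not constant."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero cumulative count
    # Classic equalization mapping, clipped to the valid 8-bit range.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

A low-contrast image whose values span only a few grey levels is stretched toward the full 0-255 range, raising the intensity gradients that the later feature extraction relies on.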
• Step 4 down-sample the image according to the resolution gradient, construct an image pyramid, extract ORB feature points from each layer of the image pyramid, and perform feature point matching between image frames, which corresponds to 404 in FIG. 4, feature point extraction and feature point matching.
• the embodiment of the present application can down-sample the image to be processed based on the image resolution to form an 8-layer image pyramid, and extract ORB feature points at each level of the image pyramid; each level is divided into grids and the number of feature points per grid cell is checked, and if it is insufficient, the corner detection threshold is relaxed until at least 5 feature points can be extracted from the cell; extracting 5 feature points per grid cell gives a better feature description effect.
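The per-grid threshold relaxation can be sketched as below; `detect` is a placeholder for the actual FAST/ORB detector, and the starting threshold, floor, and halving schedule are assumptions not taken from the text:

```python
def extract_grid_features(detect, cell, t_init=40, t_min=5, min_points=5):
    """Relax the corner threshold for one grid cell of a pyramid level until
    at least `min_points` feature points are found (5 per the text).
    `detect(cell, t)` stands in for the real corner detector."""
    t = t_init
    points = detect(cell, t)
    while len(points) < min_points and t > t_min:
        t //= 2                        # assumed relaxation schedule
        points = detect(cell, t)
    return points
```

Cells in low-texture regions thus still contribute feature points, keeping the distribution of features across the image roughly uniform.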
  • the ORB feature algorithm may be used as the feature extraction and description algorithm of the image frame.
• FAST corner detection is used to find feature points with intensity differences in the image, and the BRIEF (Binary Robust Independent Elementary Features) descriptor algorithm is then used to compute the descriptors of the feature points.
• for a candidate pixel p, the 16 pixels on a circle of radius 3 around it are examined; if there are n consecutive pixels on the circle whose absolute grey-level difference from p is greater than a threshold t, then p is selected as a candidate corner for screening; if the final calculation shows that 10 or more pixels on the circle satisfy the condition, the point is considered a FAST corner.
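The segment test described above can be sketched as follows; the circle sampling itself is omitted (the 16 intensities on the radius-3 circle are passed in directly), and the default threshold value is an assumption while n = 10 follows the text:

```python
import numpy as np

def is_fast_corner(center, circle16, t=20, n=10):
    """Segment test: `circle16` holds the 16 pixel intensities on the
    radius-3 circle around the candidate pixel; the candidate passes if at
    least n contiguous circle pixels are all brighter than center + t or
    all darker than center - t."""
    circle16 = np.asarray(circle16, dtype=int)
    for sign in (+1, -1):              # check a brighter run, then a darker run
        mask = sign * (circle16 - center) > t
        # Longest contiguous run on the ring: duplicate it to handle wrap-around.
        run = best = 0
        for m in np.concatenate([mask, mask]):
            run = run + 1 if m else 0
            best = max(best, run)
        if min(best, 16) >= n:
            return True
    return False
```

The run-length check is what makes FAST respond to corner-like intensity patterns rather than isolated noisy pixels.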
  • the ORB algorithm describes the feature points by using the improved BRIEF algorithm.
• Gaussian filtering is used to remove noise from the image and an integral image is used for smoothing; then a window of preset size S×S is taken centred on the image feature point, two pixels x and y are randomly selected from the window as a point pair, their pixel values are compared, and a binary value is assigned accordingly.
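The random binary tests of the BRIEF descriptor can be sketched as below; the pair-sampling scheme and the 32-bit width are illustrative choices, not the trained sampling pattern that ORB actually uses:

```python
import numpy as np

rng = np.random.default_rng(0)        # fixed seed so the pair layout is stable

def brief_descriptor(patch, n_bits=32):
    """BRIEF sketch: compare `n_bits` random pixel pairs (x, y) inside the
    S x S window `patch`; bit i is set to 1 when patch[x_i] < patch[y_i],
    producing a compact binary-string descriptor."""
    h, w = patch.shape
    pairs = rng.integers(0, h * w, size=(n_bits, 2))
    bits = 0
    for i, (a, b) in enumerate(pairs):
        if patch.flat[a] < patch.flat[b]:
            bits |= 1 << i
    return bits
```

Two such integers can then be compared with the Hamming distance during matching, which is why the binary representation both saves storage and shortens matching time.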
• the most obvious advantages of the ORB algorithm are its fast computation and good scale and rotation invariance, which are mainly due to the extremely high speed of the FAST corner detector, while the compact binary-string representation of the BRIEF descriptor both saves storage space and greatly shortens matching time.
  • the use of the ORB feature algorithm saves a lot of computing space for the entire obstacle avoidance algorithm.
  • the ORB algorithm is more robust than other feature point algorithms, and can continuously extract stable features. All feature points in the image will be used for feature matching in subsequent frames.
• feature matching between images ensures that the UAV can continuously perceive its surroundings in real time during flight, and if an unknown obstacle appears in the flight path, it can be detected in time and its position accurately located. That is, after feature point extraction is completed, the feature points in the image are described as binary strings, and feature matching between image frames can then be completed from this description. The main idea of this part is to traverse all the map points in the previous image frame, project them all into the current frame, and then find, for each, the feature point in the current frame with the closest descriptor distance as its matching point.
  • Step 5 Restoring the depth information of the relevant feature points in the actual space through geometric calculation, that is, corresponding to 405 in FIG. 4 , calculating the depth of the feature points.
• FIG. 5 is a schematic diagram of the correspondence between image feature point pairs provided by the embodiment of the present application.
• p 1 and p 2 are located in image frame I 1 and image frame I 2 respectively and form a pair of feature points; they are the projections of the space point P onto image frames I 1 and I 2 .
• the plane formed by point P and the camera optical centres O 1 and O 2 is called the epipolar plane.
• the intersections e 1 and e 2 of the line O 1 O 2 with the image planes I 1 and I 2 respectively are called the epipoles.
• seen from frame I 1 alone, the space point P may lie at any position on the ray O 1 p 1 , so its correspondence in I 2 may lie anywhere on the corresponding epipolar line in I 2 ; the coordinates of the space point P can therefore be determined by finding the exact position of p 2 in image frame I 2 through feature matching.
• if p 1 and p 2 correspond correctly, the epipolar constraint is satisfied, as shown in formula (1):

p2^T · K^(-T) · [t]× · R · K^(-1) · p1 = 0    (1)

• where K is the intrinsic parameter matrix of the camera, R and t are the rotation and translation between the two frames, and [t]× is the skew-symmetric matrix of t; formula (1) can also be converted into formula (2) and formula (3):

E = [t]× · R    (2)

F = K^(-T) · E · K^(-1)    (3)

• where E is the essential matrix and F is the fundamental matrix;

• with the normalized coordinates x = K^(-1) · p, the epipolar constraint can be simplified as formula (4):

x2^T · E · x1 = 0    (4)
• the problem of camera movement and pose change can thus be transformed into: calculating the matrix E or F from the pixel coordinates of the paired feature points, and then recovering the rotation matrix R and translation matrix t from the computed E or F.
• the three-dimensional coordinates of the space point P are determined by using the coincidence, in the camera coordinate system, between the ray through the two-dimensional coordinate point in the image and the ray through the space point; the calculation is shown in formula (5):

x × (M · X) = 0    (5)

• where x represents the homogeneous image coordinates of p 1 or p 2 , M is the projection matrix of the corresponding camera, and X represents the three-dimensional coordinates of the space point P in the world coordinate system.
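The ray-coincidence idea corresponds to standard linear triangulation; a NumPy sketch under the assumption of normalized coordinates (intrinsics already removed) with frame 1 as the reference pose:

```python
import numpy as np

def triangulate(x1, x2, R, t):
    """Linear triangulation: stack the cross-product constraints
    x_i x (M_i X) = 0 for both views and take the null vector of the
    stacked system via SVD.  x1, x2 are normalized image coordinates;
    frame 1 sits at the origin, frame 2 has pose (R, t)."""
    M1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    M2 = np.hstack([R, t.reshape(3, 1)])
    A = np.vstack([x1[0] * M1[2] - M1[0],
                   x1[1] * M1[2] - M1[1],
                   x2[0] * M2[2] - M2[0],
                   x2[1] * M2[2] - M2[1]])
    X = np.linalg.svd(A)[2][-1]       # right singular vector of smallest sigma
    return X[:3] / X[3]               # homogeneous -> Euclidean
```

With noisy observations the SVD solution is the least-squares compromise between the two rays, which is why this form is robust enough for real matched feature points.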
  • Step 6 Perform inverse depth parameterization on the depth information of the feature points, and optimize the spatial point cloud by using the extended Kalman filter, which corresponds to 406 in FIG. 4 , optimizing the depth information.
• inverse depth parameterization combined with the extended Kalman filter can be used to optimize the camera pose data, that is, to optimize the three-dimensional coordinate information of the space point P. Because small UAVs fly fast in narrow spaces, the calculation efficiency of the obstacle perception algorithm matters greatly; in the embodiment of this application, the pose data stored in the database can be used to continuously optimize and correct the existing UAV pose and the three-dimensional space point coordinates.
  • the vision-based perception method uses the extended Kalman filter to optimize the coordinates of the feature points in the image in the environment space to minimize the accumulated errors during the flight.
• the embodiment of this application uses the inverse depth parameterization method to perform rapid depth convergence on the extracted feature points; inverse depth parameterization converges faster than Cartesian parameterization because the uncertainty of the inverse depth is closer to a Gaussian distribution than that of the depth itself.
• the feature points stored in the database are represented by a six-dimensional vector, jointly defined by the Cartesian coordinates [x a , y a , z a ] T of the anchor point from which feature point P is observed, the azimuth angle θ, the elevation angle φ, and the reciprocal ρ of the distance from the feature point P to the anchor point; the anchor point is the spatial position of the drone when the database is initialized.
  • R is the rotation matrix from the spatial coordinate system to the camera coordinates.
  • the system executes the mapping algorithm according to the image sequence. Each feature point is regarded as an independent measurement data, and the correlation between the measured value and the real value is ignored.
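The six-vector inverse-depth representation described above can be sketched as an encode/decode pair; the angle conventions follow the common inverse-depth formulation and are an assumption, not the patent's exact definition:

```python
import numpy as np

def to_inverse_depth(anchor, P):
    """Encode space point P as the six-vector [x_a, y_a, z_a, theta, phi, rho]
    described in the text: anchor position, azimuth theta, elevation phi, and
    the reciprocal rho of the anchor-to-point distance."""
    d = P - anchor
    r = np.linalg.norm(d)
    theta = np.arctan2(d[0], d[2])                    # azimuth
    phi = np.arctan2(-d[1], np.hypot(d[0], d[2]))     # elevation
    return np.array([*anchor, theta, phi, 1.0 / r])

def from_inverse_depth(v):
    """Decode back to Cartesian: P = anchor + (1 / rho) * m(theta, phi)."""
    anchor, theta, phi, rho = v[:3], v[3], v[4], v[5]
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])       # unit ray toward the point
    return anchor + m / rho
```

For very distant features rho simply approaches zero and stays well-behaved in the filter, whereas a plain depth value would grow without bound; this is the property behind the long-distance feature handling mentioned above.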
• Step 7 establish a topographic map with height information according to the depth information of the feature points in space, which corresponds to 408 in Figure 4; wherein, using the three-dimensional coordinates of the converged points in space, a terrain grid represented by height can be generated; the height information at a specific position is updated from the point coordinates in the database, and the height of that location in the grid is raised or lowered when a new converged point is received.
  • Step 8 the UAV performs obstacle avoidance flight according to the constructed three-dimensional terrain map.
  • the grid terrain map generated by the filter can be used for UAV obstacle avoidance.
• the obstacle avoidance algorithm decides the next action by considering the terrain-map grid heights in the direction of the UAV's horizontal velocity vector: it first compares the UAV's altitude over the specified grid cell with the minimum safe height for that cell; if this minimum altitude would obstruct the UAV's original trajectory, the UAV performs a smooth pull-up manoeuvre by itself. In a similar way, the algorithm also lets the drone quickly return to the desired altitude after passing an obstacle.
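The pull-up decision reduces to a height comparison along the velocity direction; a minimal sketch in which the safety clearance margin is an invented parameter, not specified in the text:

```python
def needs_pull_up(uav_altitude, grid_heights_ahead, clearance=2.0):
    """Decide whether a pull-up manoeuvre is needed: `grid_heights_ahead`
    are the terrain-grid heights of the cells that the UAV's horizontal
    velocity vector points at; trigger a pull-up if any cell (plus the
    assumed safety clearance) reaches the current flight altitude."""
    return any(h + clearance >= uav_altitude for h in grid_heights_ahead)
```

The symmetric descent case (returning to the desired altitude once the cells ahead are clear) would use the same comparison with the inequality reversed.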
• the embodiment of the present application proposes a monocular-camera-based method for sensing obstacles during UAV flight, which can perceive obstacle height information and ensure that the UAV avoids obstacles by passing above them; this can shorten the obstacle avoidance distance of the drone and improve its perception of obstacles.
  • this application designs a multi-scale feature extraction method to extract ORB features from layers with different resolutions to ensure the uniform distribution of feature points in the image, thereby obtaining better obstacle perception effects.
  • common visual environment perception methods may also have problems such as misrecognition, high error or interruption.
  • the processing step improves the robustness of obstacle-aware methods as much as possible.
  • the method of calculating the depth information of the feature points in the image based on the inverse depth parameterization combined with the extended Kalman filter can improve the rapid convergence of the depth information, and can better restore the depth of the distant points in the space.
• this embodiment of the present application also provides a flight control device 6 for a drone, which can be applied to the flight control method of a drone provided in the embodiments corresponding to FIG. 1 to FIG. 3.
  • the flight control device 6 of the UAV includes: an acquisition part 61, a first determination part 62, a second determination part 63, an adjustment part 64 and a third determination part 65, wherein :
  • the acquisition part 61 is configured to acquire the image to be processed; wherein, the screen content of the image to be processed includes the information of the flight environment ahead;
  • the first determining part 62 is configured to determine image feature point pairs satisfying preset conditions based on two temporally adjacent frames of images to be processed;
  • the second determining part 63 is configured to determine the three-dimensional coordinate information associated with the image feature point pair in the forward flight environment
  • the adjusting part 64 is configured to adjust the map to be adjusted corresponding to the image to be processed based on the three-dimensional coordinate information to obtain a three-dimensional map;
  • the third determining part 65 is configured to determine the flight track of the UAV based on the three-dimensional map.
  • the acquiring part 61 is further configured to acquire the forward flight environment information to obtain a preset image; adjust the image contrast of the preset image to obtain the image to be processed.
  • the first determining part 62 is further configured to determine at least one image feature point of each frame of the image to be processed in the two adjacent frames of the image to be processed in time sequence; determine the at least one image feature point The binary parameter corresponding to the feature point; in at least one image feature point of each frame of the image to be processed in the two adjacent frames to be processed in the time sequence, based on the binary parameter corresponding to the at least one image feature point, determine the Image feature point pairs.
  • the first determining part 62 is further configured to perform image downsampling on each frame of the image to be processed according to the image resolution gradient, and generate an image pyramid corresponding to each frame of the image to be processed; Feature extraction is performed on images at each level in the image pyramid corresponding to each frame of the image to be processed to obtain at least one image feature point of each frame of the image to be processed.
  • the first determining part 62 is further configured to determine two image feature points in the two temporally adjacent frames of images to be processed based on the binary parameters corresponding to the image feature points Hamming distance between them; if the Hamming distance is less than the preset threshold, the two image feature points are determined as the image feature point pair.
  • the second determining part 63 is further configured to obtain the two-dimensional coordinate information of each image feature point in the image feature point pair in the corresponding image to be processed; based on the two-dimensional coordinate information, determining the spatial position relationship between two image feature points in the image feature point pair; based on the spatial position relationship and the two-dimensional coordinate information, determining the three-dimensional coordinate information in the forward flight environment.
  • the second determining part 63 is further configured to analyze the spatial position relationship to obtain rotation matrix parameters and translation matrix parameters that characterize the flight change parameters; based on the rotation matrix parameters, The translation matrix parameters and the two-dimensional coordinate information determine the three-dimensional coordinate information in the forward flight environment.
  • the adjustment part 64 is further configured to obtain the initial position information and initial flight attitude parameters of the UAV flight; determine the distance between the initial position information and the three-dimensional coordinate information; Based on the distance, the initial position information, and the initial flight attitude parameters, construct a coordinate vector parameter with a preset dimension that matches the three-dimensional coordinate information; based on the coordinate vector parameter, map the map to be adjusted The coordinates to be adjusted are adjusted to obtain the three-dimensional map.
  • the adjustment part 64 is further configured to construct an updated covariance matrix based on the coordinate vector parameters; based on the updated covariance matrix, the map to be adjusted The coordinates to be adjusted are adjusted to obtain corrected three-dimensional coordinate information; and the three-dimensional map is constructed based on the corrected three-dimensional coordinate information.
  • the third determining part 65 is further configured to determine an avoidance route based on the three-dimensional map; and determine the flight trajectory of the UAV based on the avoidance route.
• the flight control device of the UAV determines, through geometric calculation based on the image feature point pairs in the two temporally adjacent frames of images to be processed, the three-dimensional coordinate information of the space points associated with the image feature point pairs in the forward flight environment, and then optimizes the initial map based on this three-dimensional coordinate information; in this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map with height information can be constructed; at the same time, determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoidance flight can reduce the impact of the actual flight environment on the UAV during flight.
  • this embodiment of the present application also provides an electronic device 7, to which the flight control method for a UAV provided in the embodiments corresponding to FIGS. 1 to 3 can be applied. As shown in FIG. 7, the electronic device 7 includes: a processor 71, a memory 72 and a communication bus 73, wherein:
  • the communication bus 73 is used to implement the communication connection between the processor 71 and the memory 72.
  • the processor 71 is used to execute the program of the UAV flight control method stored in the memory 72, so as to implement the UAV flight control method provided in the embodiments corresponding to FIGS. 1 to 3.
  • the electronic device determines, through geometric calculation based on the image feature point pair in two temporally adjacent images to be processed, the three-dimensional coordinate information of the spatial point associated with the image feature point pair in the forward flight environment, and then optimizes the initial map based on the three-dimensional coordinate information; in this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map with height information can be constructed; meanwhile, the flight trajectory can be determined based on the three-dimensional map to achieve obstacle-avoidance flight, which reduces the influence of the actual flight environment on the UAV during flight.
  • the embodiments of the present application further provide a computer-readable storage medium in which one or more programs are stored, the one or more programs being executable by one or more processors to implement the flight control method of the unmanned aerial vehicle provided by the embodiments corresponding to FIGS. 1 to 3.
  • the embodiment of the present application also provides a computer program comprising computer-readable codes; when the computer-readable codes run in an electronic device, a processor of the electronic device executes them to implement the flight control method of the UAV provided in the embodiments corresponding to FIGS. 1 to 3.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; they may be located in one place or distributed across multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may serve as a single unit, or two or more units may be integrated into one unit; the above integrated unit can be realized in the form of hardware, or in the form of hardware plus software functional units.
  • when the above-mentioned integrated units of the present application are implemented in the form of software functional parts and sold or used as independent products, they can also be stored in a computer-readable storage medium.
  • the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media capable of storing program codes such as removable storage devices, ROMs, magnetic disks or optical disks.
  • the embodiment of the present application discloses a flight control method, device, equipment, medium and program of a UAV, wherein the method includes: acquiring an image to be processed, wherein the picture content of the image to be processed includes forward flight environment information; determining, based on two temporally adjacent frames of images to be processed, image feature point pairs that meet a preset condition; determining, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pairs; adjusting, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map; and determining the flight trajectory of the UAV based on the three-dimensional map.
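The covariance-based map correction attributed to the adjustment part 64 above can be read as a Kalman-style measurement update. The sketch below is a hedged illustration only: the patent does not give concrete update equations, and the function name `correct_coordinates`, the gain form and the noise values are all assumptions.

```python
import numpy as np

def correct_coordinates(x_map, P, z_obs, R_noise):
    """Fuse a newly triangulated observation into a map coordinate estimate.

    x_map   -- (3,) coordinate to be adjusted in the map
    P       -- (3, 3) covariance of the map coordinate
    z_obs   -- (3,) freshly triangulated 3-D coordinate (measurement)
    R_noise -- (3, 3) measurement noise covariance (assumed value)
    """
    S = P + R_noise                       # innovation covariance
    K = P @ np.linalg.inv(S)              # Kalman-style gain
    x_corr = x_map + K @ (z_obs - x_map)  # corrected 3-D coordinate
    P_corr = (np.eye(3) - K) @ P          # updated covariance matrix
    return x_corr, P_corr

x = np.array([10.0, 5.0, 2.0])            # coordinate to be adjusted
P = np.eye(3)                             # map uncertainty
z = np.array([10.4, 5.2, 2.6])            # new triangulation result
Rn = np.eye(3)                            # equally uncertain measurement
x_new, P_new = correct_coordinates(x, P, z, Rn)
```

With equal map and measurement covariances, the corrected coordinate lands halfway between the two estimates and the updated covariance matrix shrinks accordingly, which is the qualitative behaviour the adjustment part describes.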

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A flight control method and apparatus for an unmanned aerial vehicle, and a device, a medium and a program. The method comprises: acquiring images to be processed (101), wherein picture content of said images comprises front flight environment information; determining, on the basis of two adjacent frames of said images in a time sequence, an image feature point pair that meets a preset condition (102); determining, from a front flight environment, three-dimensional coordinate information, which is associated with the image feature point pair (103); adjusting, on the basis of the three-dimensional coordinate information, a map to be adjusted, which corresponds to said images, so as to obtain a three-dimensional map (104); and determining a flight trajectory of an unmanned aerial vehicle on the basis of the three-dimensional map (105). Therefore, actual flight environment information can be efficiently and precisely restored, and a three-dimensional topographic map having height information is constructed, such that the influence of an actual flight environment on an unmanned aerial vehicle during a flight process can be reduced.

Description

A flight control method, apparatus, device, medium and program for an unmanned aerial vehicle
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202111019049.8, filed on September 1, 2021 by China Mobile (Chengdu) Information and Communication Technology Co., Ltd. and China Mobile Communications Group Co., Ltd., entitled "Flight control method, apparatus, device and storage medium for an unmanned aerial vehicle", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of information technology, and in particular to a flight control method, apparatus, device, medium and program for an unmanned aerial vehicle.
Background
When a UAV performs rescue or data collection in an unknown environment, it needs to recognize that environment; in the related art, detection and recognition algorithms identify unknown environments with limited accuracy, which easily affects the UAV's flight mission.
Summary
To solve the above technical problem, embodiments of the present application provide a flight control method, apparatus, device, medium and program for a UAV. A three-dimensional map is constructed from the three-dimensional coordinate information associated with image feature point pairs in two temporally adjacent frames of images to be processed, so that the actual flight environment information can be restored efficiently and accurately and a three-dimensional topographic map with height information can be built; meanwhile, determining the flight trajectory based on the three-dimensional map enables obstacle-avoidance flight and reduces the influence of the actual flight environment on the UAV during flight.
To achieve the above object, the technical solution of the present application is implemented as follows:
An embodiment of the present application provides a flight control method for an unmanned aerial vehicle, the method comprising:
acquiring an image to be processed, wherein the picture content of the image to be processed includes forward flight environment information;
determining, based on two temporally adjacent frames of images to be processed, an image feature point pair that satisfies a preset condition;
determining, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair;
adjusting, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map;
determining the flight trajectory of the UAV based on the three-dimensional map.
An embodiment of the present application provides a flight control apparatus for an unmanned aerial vehicle, the apparatus comprising:
an acquisition part, configured to acquire an image to be processed, wherein the picture content of the image to be processed includes forward flight environment information;
a first determining part, configured to determine, based on two temporally adjacent frames of images to be processed, an image feature point pair that satisfies a preset condition;
a second determining part, configured to determine, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair;
an adjustment part, configured to adjust, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map;
a third determining part, configured to determine the flight trajectory of the UAV based on the three-dimensional map.
An embodiment of the present application further provides an electronic device, comprising a processor, a memory and a communication bus, wherein the communication bus is used to implement the communication connection between the processor and the memory;
the processor is configured to execute the program in the memory, so as to implement the flight control method of the UAV described above.
Correspondingly, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the flight control method of the UAV described above.
An embodiment of the present application further provides a computer program comprising computer-readable codes; when the computer-readable codes run in an electronic device, a processor of the electronic device executes them to implement the flight control method of the UAV described above.
In the flight control method, apparatus, device, medium and program provided by the embodiments of the present application, an image to be processed whose picture content includes forward flight environment information is first acquired; next, an image feature point pair satisfying a preset condition is determined based on two temporally adjacent frames of images to be processed, and three-dimensional coordinate information associated with the image feature point pair is determined in the forward flight environment; finally, the map to be adjusted corresponding to the image to be processed is adjusted based on the three-dimensional coordinate information to obtain a three-dimensional map, and the flight trajectory of the UAV is determined based on the three-dimensional map. In this way, a three-dimensional map is constructed from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent frames, so that the actual flight environment information can be restored efficiently and accurately and a three-dimensional topographic map with height information can be built; meanwhile, determining the flight trajectory based on the three-dimensional map enables obstacle-avoidance flight and reduces the influence of the actual flight environment on the UAV during flight.
To make the above objects, features and advantages of the present application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of this specification; they show embodiments consistent with the present application and, together with the description, serve to explain its technical solution. It should be understood that the following drawings show only some embodiments of the present application and should therefore not be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a flight control method for a UAV provided by an embodiment of the present application;
FIG. 2 is a schematic flowchart of another flight control method for a UAV provided by an embodiment of the present application;
FIG. 3 is a schematic flowchart of yet another flight control method for a UAV provided by an embodiment of the present application;
FIG. 4 is a schematic flowchart of building a three-dimensional map during flight provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of the correspondence between image feature point pairs provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a flight control apparatus for a UAV provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments of the present application.
It should be understood that references throughout this specification to "an embodiment of the present application" or "the foregoing embodiment" mean that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, appearances of "in an embodiment of the present application" or "in the foregoing embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. In the various embodiments of the present application, the serial numbers of the above processes do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and shall not constitute any limitation on the implementation of the embodiments of the present application. The serial numbers of the embodiments above are for description only and do not represent the relative merits of the embodiments.
Unless otherwise specified, any step performed by the information processing device in the embodiments of the present application may be performed by a processor of the information processing device. It is also worth noting that the embodiments of the present application do not limit the order in which the information processing device performs the following steps. In addition, the methods used to process data in different embodiments may be the same or different. It should also be noted that any step in the embodiments of the present application can be performed by the information processing device independently; that is, when performing any step in the following embodiments, the information processing device need not depend on the execution of other steps.
Before the embodiments of the present application are described in further detail, the nouns and terms involved are explained; the following interpretations apply to them.
1) UAV obstacle avoidance: UAVs are gradually replacing manual work in various special tasks such as search, rescue, firefighting and data collection. When completing such tasks, a UAV is often in an environment with complex terrain, such as building clusters, narrow indoor spaces, rugged mountains and forests; obstacles in the environment may pose a collision threat to the UAV at any time, making it difficult to complete the task. The UAV must collect environmental information as fully as possible through its limited sensors and storage space, so as to detect obstacles in the space in time and avoid obstacles on the original route by updating the flight path.
2) Computer Vision (CV): CV is a simulation of biological vision using computers and related equipment. Its main task is to obtain various kinds of information about a scene by processing collected pictures or videos. The main goal of a traditional computer vision system is to extract features from images, including edge detection, corner detection and image segmentation. Depending on the type and quality of the input image, different algorithms perform differently.
3) Visual Simultaneous Localization And Mapping (VSLAM): the UAV collects images of the surrounding environment through a visual sensor, and performs filtering and geometric calculation on the images to determine its own position and identify a path, which in turn helps the UAV control system make navigation decisions. The most important features of VSLAM are autonomy and real-time operation: without relying on external pre-computation, it can track the UAV in an unfamiliar environment in real time just by computing on the information in the storage system and the environment.
In the related art, in the field of visual obstacle perception, a UAV or a vehicle uses onboard equipment to perceive obstacles in the environment ahead, obtains information such as relative distance and its own angular pose, and processes this information to update dynamic environment information in real time. The UAV autonomously formulates an obstacle avoidance route based on the dynamic environment information and its own flight state. Finally, the flight control system adjusts the UAV's flight speed and direction to achieve obstacle avoidance.
Meanwhile, existing UAV obstacle recognition is usually implemented in one of the following two ways:
In the first way, obstacle recognition relies mainly on the UAV operator: using the picture transmitted back by the UAV in real time, the operator manually flies the UAV to avoid obstacles in the space. This places high demands on the operator's skill, accidents may occur due to misoperation, and there may be a shortage of personnel when multiple UAVs need to work at the same time.
In the second way, algorithms enable the UAV to recognize and avoid obstacles autonomously; obstacles in real space are usually mapped into a two-dimensional virtual map so that the UAV can perceive them. As a result, the point clouds of obstacles that originally carried height information are all compressed into a two-dimensional space, and the original shapes of the obstacles can no longer be perceived; irregularly shaped obstacles then cause modeling errors. Although this method is feasible for avoiding some relatively small obstacles in the space, for large obstacles such as forests, mountains and buildings, restricting the UAV to obstacle avoidance in the two-dimensional horizontal plane inevitably increases the distance traveled for avoidance, which is clearly disadvantageous for the many small UAVs with limited battery capacity.
Based on the above problems, an embodiment of the present application provides a flight control method for a UAV, applied to an electronic device. Referring to FIG. 1, the method includes the following steps:
Step 101: acquire an image to be processed.
Here, the picture content of the image to be processed includes forward flight environment information.
In this embodiment of the application, the electronic device may be any device with data processing capability; it may be a data processing device arranged inside the UAV, an electronic device capable of exchanging information with the UAV, or a cloud processor managing the UAV, among others.
The electronic device may receive the image to be processed from at least one of an image acquisition part and a video acquisition part arranged on the UAV; correspondingly, in this embodiment, the UAV is provided with at least one of an image acquisition part and a video acquisition part, which may be a monocular camera or a binocular camera.
In this embodiment, the forward flight environment information refers to the environment ahead of the UAV during flight. The UAV may perform flight missions in environments such as mountains, forests, building clusters or indoors.
In this embodiment, the number of images to be processed may be one, or two or more; the picture format of the image to be processed may be the Bitmap (BMP) format, the Joint Photographic Experts Group (JPEG) format, the Portable Network Graphics (PNG) format, and so on.
Step 102: determine, based on two temporally adjacent frames of images to be processed, an image feature point pair that satisfies a preset condition.
In this embodiment, the electronic device determines, based on two temporally adjacent frames of images to be processed, an image feature point pair satisfying a preset condition; here, "temporally adjacent" means that the two frames were captured at adjacent times, or that they were acquired by the electronic device at adjacent times.
The preset condition may be set in advance. For example, it may mean that the relevant attribute information of the two image feature points in the pair is similar, that the preset distance between the two image feature points is less than or equal to a preset threshold, or that the two image feature points occupy the same position in their respective images to be processed, and so on.
In this embodiment, the electronic device first extracts at least one image feature point from each of the two temporally adjacent frames of images to be processed; it then selects any image feature point as a first image feature point and computes the Hamming distance to each feature point in the other image to be processed. When the Hamming distance is less than or equal to a preset distance, the first image feature point and the corresponding other image feature point are determined as an image feature point pair.
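The Hamming-distance pairing just described can be sketched as a brute-force matcher over binary descriptors. The toy 8-bit descriptors and the threshold below are illustrative assumptions (real systems typically use e.g. 256-bit ORB descriptors; the patent does not fix a descriptor type):

```python
def hamming(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match_feature_points(desc_prev, desc_curr, max_dist=40):
    """Pair each frame t-1 descriptor with its nearest frame t descriptor,
    keeping the pair only if the Hamming distance is within max_dist."""
    pairs = []
    for i, d1 in enumerate(desc_prev):
        j, dist = min(((j, hamming(d1, d2)) for j, d2 in enumerate(desc_curr)),
                      key=lambda t: t[1])
        if dist <= max_dist:              # the preset condition of step 102
            pairs.append((i, j))
    return pairs

# Toy 8-bit descriptors standing in for full-length binary descriptors.
prev = [0b10110010, 0b01011100]              # features in frame t-1
curr = [0b01011101, 0b10110011, 0b11111111]  # features in frame t
print(match_feature_points(prev, curr, max_dist=2))  # → [(0, 1), (1, 0)]
```

A production matcher would usually add a cross-check or ratio test to reject ambiguous pairs; the sketch keeps only the distance threshold named in the text.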
In this embodiment, the electronic device may describe the image feature point pair based on the position information of each feature point in its corresponding image to be processed, or based on the feature information that the pair describes.
In this embodiment, the two image feature points in a pair are obtained by the image acquisition device arranged inside the UAV capturing the same spatial point in the forward flight environment at adjacent time points. That is, each image feature point in the pair maps to the same spatial point in the forward flight environment.
The number of feature point pairs may be one, two or more in this embodiment.
Step 103: determine, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair.
In this embodiment, the electronic device determines, in the forward flight environment, the three-dimensional coordinate information associated with the image feature point pair; this three-dimensional coordinate information is the coordinate information of the spatial point in the forward flight environment that has a mapping relationship with the image feature point pair.
The three-dimensional coordinate information may refer to the three-dimensional coordinate parameters, in the world coordinate system, of the spatial point having a mapping relationship with the image feature point pair.
In this embodiment, the electronic device obtains the coordinate position of each image feature point in the pair in its corresponding image to be processed; this coordinate position may be expressed in a camera coordinate system established by the electronic device with the image acquisition device of the UAV as the reference. Based on the two coordinate positions, the electronic device then determines, using a geometric algorithm, the three-dimensional coordinate information associated with the image feature point pair.
Step 104: adjust, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the images to be processed, to obtain a three-dimensional map.
In the embodiment of the present application, the electronic device adjusts or corrects, based on the determined three-dimensional coordinate information, the map to be adjusted corresponding to the images to be processed, to obtain a three-dimensional map carrying height information; the three-dimensional map may be a virtual three-dimensional stereoscopic image corresponding to the forward flight environment.
In the embodiment of the present application, the map to be adjusted corresponding to the images to be processed may be a two-dimensional topographic map or a three-dimensional topographic map.
In the embodiment of the present application, where the map to be adjusted corresponding to the images to be processed is a two-dimensional planar map, the electronic device may fuse the determined three-dimensional coordinate information with the two-dimensional coordinate information in the two-dimensional planar map, so as to construct a three-dimensional map carrying height information.
In the embodiment of the present application, where the map to be adjusted corresponding to the images to be processed is a three-dimensional stereoscopic map, the electronic device may adjust or correct its three-dimensional coordinate information based on the determined three-dimensional coordinate information, to obtain an updated three-dimensional map.
In the embodiment of the present application, the map to be adjusted corresponding to the images to be processed may be a two-dimensional planar map generated by the electronic device for each acquired frame of the image to be processed; it may also be a preset three-dimensional stereoscopic image determined by the electronic device, based on a relevant algorithm, from multiple acquired frames of images to be processed.
Step 105: determine the flight trajectory of the unmanned aerial vehicle based on the three-dimensional map.
In the embodiment of the present application, the electronic device determines the flight trajectory of the unmanned aerial vehicle based on the obtained three-dimensional map; the flight trajectory may refer to an actual obstacle-avoidance path in the forward flight environment.
In the embodiment of the present application, the electronic device obtains, based on the three-dimensional map, information about obstacles present in the forward flight environment, and then determines the flight trajectory to be executed by the unmanned aerial vehicle to avoid those obstacles during flight, i.e. the flight trajectory of the unmanned aerial vehicle.
In the embodiment of the present application, the image feature point pairs, the three-dimensional coordinate information associated with the image feature point pairs in the forward flight environment, and the three-dimensional map are determined in turn from the acquired images to be processed. In this way, the state of the obstacles in the forward flight environment can be restored more accurately, a more precise three-dimensional map carrying height information can be produced, and the electronic device can therefore derive a more precise obstacle-avoidance path for the unmanned aerial vehicle from the determined three-dimensional map, i.e. ensure as far as possible that the flight trajectory of the unmanned aerial vehicle is not affected by obstacles in the forward flight environment.
According to the flight control method for an unmanned aerial vehicle provided by the embodiments of the present application, first, images to be processed are acquired, the picture content of which includes forward flight environment information; second, image feature point pairs satisfying a preset condition are determined based on two temporally adjacent frames of images to be processed, and the three-dimensional coordinate information associated with the image feature point pairs is determined in the forward flight environment; finally, the map to be adjusted corresponding to the images to be processed is adjusted based on the three-dimensional coordinate information to obtain a three-dimensional map, and the flight trajectory of the unmanned aerial vehicle is determined based on the three-dimensional map. In this way, constructing a three-dimensional map from the three-dimensional coordinate information associated with the image feature point pairs in two temporally adjacent frames of images to be processed can efficiently and accurately restore the actual flight environment information and construct a three-dimensional topographic map carrying height information; meanwhile, determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoidance flight can reduce the influence of the actual flight environment on the unmanned aerial vehicle during flight.
Based on the foregoing embodiments, an embodiment of the present application further provides a flight control method for an unmanned aerial vehicle, applied to an electronic device. Referring to Fig. 1 and Fig. 2, the method includes the following steps:
Step 201: acquire forward flight environment information to obtain a preset image.
In the embodiment of the present application, the electronic device acquires forward flight environment information to obtain a preset image, that is, the electronic device captures picture content including the forward flight environment information to obtain the preset image; the preset image may be obtained by an electronic device that is disposed on the unmanned aerial vehicle and includes at least an image acquisition part, by capturing the forward flight environment information of the unmanned aerial vehicle during flight.
The preset image may be an image directly captured by the electronic device while the unmanned aerial vehicle is in flight, without having undergone any data processing; accordingly, the number of preset images and the acquisition frequency are not limited in any way in the embodiments of the present application.
Step 202: adjust the image contrast of the preset image to obtain an image to be processed.
In the embodiment of the present application, the electronic device adjusts the image contrast of the preset image to obtain the image to be processed; the contrast adjustment may be correcting or optimizing the pixel values of the preset image, or may be enhancing the image contrast of the preset image. In the following embodiments of the present application, adjusting the image contrast always refers to enhancing the image contrast.
The electronic device may enhance the image contrast of the preset image directly, or enhance it by an indirect method; the image contrast may be enhanced based on at least one of histogram stretching and histogram equalization, the specific implementation of which is not described in detail in the embodiments of the present application.
In the embodiment of the present application, the image to be processed is obtained by enhancing the image contrast of the captured preset image; in this way, the feature information in the image to be processed becomes more prominent and the intensity gradients of the pixel values at key points increase, so that more salient image feature points can be extracted when feature extraction is subsequently performed on the image to be processed.
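As an illustration of the histogram-stretching variant mentioned above, the following sketch (a hypothetical helper, not the patent's implementation) linearly maps the darkest observed intensity to 0 and the brightest to 255, which widens the intensity gradients that the later feature extraction relies on:

```python
def stretch_contrast(pixels):
    """Linear histogram stretching: map the observed intensity range
    [lo, hi] onto the full [0, 255] range to enhance contrast."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return list(pixels)  # flat image: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]
```

Every pixel difference grows by the factor 255 / (hi - lo), so weak edges become easier for a corner detector to pick up.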
Correspondingly, the electronic device determines, based on two temporally adjacent frames of images to be processed, the image feature point pairs satisfying the preset condition; that is, step 102 provided in the foregoing embodiment may be implemented by the electronic device through the following steps 203 to 205:
Step 203: determine at least one image feature point of each of the two temporally adjacent frames of images to be processed.
In the embodiment of the present application, the electronic device determines at least one image feature point of each of the two temporally adjacent frames of images to be processed; the numbers and parameter information of the image feature points of different images to be processed may be the same or different.
In the embodiment of the present application, the electronic device determines at least one image feature point of each of the two temporally adjacent frames of images to be processed; that is, the above step 203 may be implemented by the electronic device through the following steps 203a and 203b:
Step 203a: perform image down-sampling on each frame of the image to be processed according to an image resolution gradient, to generate an image pyramid corresponding to each frame of the image to be processed.
In the embodiment of the present application, the electronic device performs image down-sampling, according to the image resolution gradient, on each of the two temporally adjacent frames of images to be processed, to obtain the image pyramid corresponding to each frame of the image to be processed.
An image pyramid is a form of multi-scale image representation: an effective but conceptually simple structure for interpreting an image at multiple resolutions. The image pyramid of an image is a set of image resolutions derived from the same original image and arranged in a pyramid shape, decreasing step by step from bottom to top. It is obtained by stepwise down-sampling, which stops only when a certain termination condition is reached. If the layered images are likened to a pyramid, the higher the level, the smaller the image and the lower its resolution.
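Step 203a can be sketched as repeated 2x down-sampling until a termination condition (here: the image becoming too small) is reached. The function below is an illustrative sketch, not the patent's implementation; it averages 2x2 blocks and represents grayscale images as nested lists of intensities:

```python
def build_pyramid(image, levels=4):
    """Build an image pyramid by repeated 2x down-sampling.
    image: 2-D list of intensities (the base, i.e. highest-resolution level).
    Each higher level averages 2x2 blocks of the level below it."""
    pyramid = [image]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = len(prev) // 2, len(prev[0]) // 2
        if h == 0 or w == 0:
            break  # termination condition: image too small to halve again
        down = [[(prev[2 * r][2 * c] + prev[2 * r][2 * c + 1] +
                  prev[2 * r + 1][2 * c] + prev[2 * r + 1][2 * c + 1]) / 4.0
                 for c in range(w)] for r in range(h)]
        pyramid.append(down)
    return pyramid
```

An 8x8 base image thus yields levels of size 8, 4, 2 and 1, and feature extraction is then run on every level so that structures of different scales are all detected.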
Step 203b: perform feature extraction on the image at each level of the image pyramid corresponding to each frame of the image to be processed, to obtain at least one image feature point of each frame of the image to be processed.
In the embodiment of the present application, the electronic device performs the feature extraction operation, that is, performs feature extraction on the image at each level of the image pyramid corresponding to each of the two temporally consecutive frames of images to be processed, to obtain at least one image feature point of each frame of the image to be processed. The electronic device may use a relevant neural network to perform feature extraction on the image at each level, or may perform feature extraction on the image at each level based on a target detection algorithm.
In a feasible implementation, the electronic device may perform feature extraction on the image at each level of the image pyramid corresponding to each image to be processed based on the Oriented FAST and Rotated BRIEF (ORB) feature extraction and description algorithm.
The electronic device fuses the image feature points corresponding to the images at all levels of the image pyramid of an image to be processed, combining them into a set of image feature points for that image to be processed.
In the embodiment of the present application, the electronic device performs feature extraction on the image at each level of the image pyramid of each frame of the image to be processed, to obtain the image feature points of each frame of the image to be processed. In this way, the acquired real-time images are preprocessed by image enhancement and down-sampled over multiple levels, and feature points are then extracted from each sampled level, which increases the quantity and quality of the extracted feature points, is highly robust to complex flight environments, and achieves the technical effect of improving the ability of the unmanned aerial vehicle to recognize obstacles in special environments.
Step 204: determine a binary parameter corresponding to the at least one image feature point.
In the embodiment of the present application, the electronic device may use ORB to describe the at least one image feature point obtained for each image to be processed, that is, describe each image feature point with a binary value.
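The binary parameter of step 204 can be illustrated with a BRIEF-style descriptor: random pixel pairs inside a patch around the feature point are compared, each comparison yielding one bit. This is a simplified sketch of the idea behind ORB's descriptor (real ORB additionally steers the sampling pattern by the keypoint orientation); all names here are hypothetical:

```python
import random

def brief_descriptor(patch, n_bits=32, seed=0):
    """BRIEF-style binary descriptor sketch: compare intensities at random
    pixel-pair locations inside the patch; each comparison contributes one
    bit, producing an n_bits-long binary parameter as an integer."""
    rng = random.Random(seed)  # fixed seed: same sampling pattern every call
    h, w = len(patch), len(patch[0])
    bits = 0
    for _ in range(n_bits):
        r1, c1 = rng.randrange(h), rng.randrange(w)
        r2, c2 = rng.randrange(h), rng.randrange(w)
        bits = (bits << 1) | (1 if patch[r1][c1] < patch[r2][c2] else 0)
    return bits
```

Because the sampling pattern is fixed, the same patch always yields the same bit string, which is what makes the descriptors of two frames comparable by Hamming distance.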
Step 205: among the image feature points of each of the two temporally adjacent frames of images to be processed, determine the image feature point pairs based on the binary parameters corresponding to the at least one image feature point.
In the embodiment of the present application, among the image feature points of each of the two temporally adjacent frames of images to be processed, the electronic device performs feature matching on the image feature points based on the binary parameter corresponding to each image feature point, to obtain the image feature point pairs.
In the embodiment of the present application, corresponding image feature points are obtained by performing feature extraction on each frame of the image to be processed, and feature matching is performed, based on the binary parameters corresponding to the image feature points, on the image feature points located in the different, temporally consecutive images to be processed, to obtain the image feature point pairs. In this way, the image feature point pairs can be determined efficiently and accurately, which in turn improves the accuracy of the unmanned aerial vehicle's real-time perception of the forward flight environment during flight.
In the embodiment of the present application, among the at least one image feature point of each of the two temporally adjacent frames of images to be processed, the electronic device determines the image feature point pairs based on the binary parameters corresponding to the at least one image feature point; that is, step 205 may be implemented by the electronic device through the following steps 205a and 205b:
Step 205a: determine, based on the binary parameters corresponding to the image feature points, the Hamming distance between two image feature points located in the two temporally adjacent frames of images to be processed.
In the embodiment of the present application, the electronic device calculates the Hamming distance between the binary parameters corresponding to two image feature points located respectively in the two temporally consecutive frames of images to be processed.
The Hamming distance, a concept used in error control coding for data transmission, denotes the number of positions at which the corresponding bits of two words (of the same length) differ; the Hamming distance between two words x and y is written d(x, y). XORing the two bit strings and counting the number of 1s in the result yields the Hamming distance.
Step 205b: when the Hamming distance is less than a preset threshold, determine the two image feature points as an image feature point pair.
In the embodiment of the present application, when the Hamming distance is less than the preset threshold, the corresponding two image feature points are considered approximately similar, i.e. a matching pair of feature points, and the two image feature points then form an image feature point pair.
In the two temporally consecutive frames of images to be processed, the number of existing image feature point pairs may be one, two or more, which is not limited in any way by the embodiments of the present application.
In the embodiment of the present application, the matching relationship between two image feature points is determined by calculating the Hamming distance between two image feature points located in the two different, temporally consecutive frames of images to be processed; in this way, the image feature point pairs can be determined efficiently and accurately.
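Steps 205a and 205b amount to brute-force descriptor matching under a Hamming-distance threshold. A minimal sketch (helper names are hypothetical; the XOR-and-popcount follows the definition of d(x, y) given above):

```python
def hamming_distance(a: int, b: int) -> int:
    # XOR the two binary parameters and count the 1 bits: d(x, y)
    return bin(a ^ b).count("1")

def match_features(desc1, desc2, threshold=10):
    """For each descriptor of the first frame, take the nearest descriptor
    of the second frame; keep the pair only if the Hamming distance is
    below the preset threshold (step 205b)."""
    pairs = []
    for i, d1 in enumerate(desc1):
        j, dist = min(((j, hamming_distance(d1, d2))
                       for j, d2 in enumerate(desc2)),
                      key=lambda t: t[1])
        if dist < threshold:
            pairs.append((i, j))
    return pairs
```

Taking the nearest neighbour before thresholding keeps at most one candidate per feature point, which matches the one-space-point-per-pair assumption of the method.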
According to the flight control method for an unmanned aerial vehicle provided by the embodiments of the present application, by preprocessing the images, extracting image feature points based on the image pyramid, and determining image feature point pairs based on feature matching, the image feature point pairs can be produced efficiently and accurately. Meanwhile, constructing a three-dimensional map from the three-dimensional coordinate information associated with the image feature point pairs in the two temporally adjacent frames of images to be processed can efficiently and accurately restore the actual flight environment information and construct a three-dimensional topographic map carrying height information; and determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoidance flight can reduce the influence of the actual flight environment on the unmanned aerial vehicle during flight.
Based on the foregoing embodiments, an embodiment of the present application further provides a flight control method for an unmanned aerial vehicle, applied to an electronic device. Referring to Fig. 1 and Fig. 3, the method includes the following steps:
Step 301: obtain the two-dimensional coordinate information, in the corresponding image to be processed, of each image feature point in the image feature point pair.
In the embodiment of the present application, the electronic device obtains the two-dimensional coordinate information, in the corresponding image to be processed, of each image feature point in the image feature point pair; the two-dimensional coordinate information may be the corresponding coordinate parameters in a camera coordinate system established with the image acquisition device on the unmanned aerial vehicle as the reference.
The two-dimensional coordinate information of the image feature points of a pair in their corresponding images to be processed may be completely identical, or may differ. Correspondingly, in the embodiment of the present application, for one image feature point pair, the electronic device records the first coordinates of image feature point A in the first image to be processed as (x1, y1), and the second coordinates of image feature point B in the second image to be processed as (x2, y2); the first image to be processed and the second image to be processed are temporally adjacent, and which of them comes first is not limited in any way in the embodiments of the present application.
Step 302: determine, based on the two-dimensional coordinate information, the spatial position relationship between the two image feature points in the image feature point pair.
In the embodiment of the present application, based on the first coordinates of the first image feature point and the second coordinates of the second image feature point, and according to the epipolar geometric relationship between the two temporally adjacent frames of images to be processed, i.e. between the first image to be processed and the second image to be processed, the electronic device calculates an essential matrix or a fundamental matrix characterizing the spatial position relationship between the two image feature points.
In the embodiment of the present application, the electronic device may jointly determine the spatial position relationship between the two image feature points in the image feature point pair based on the acquisition parameters of the image acquisition part disposed on the unmanned aerial vehicle together with the two-dimensional coordinate information.
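The essential matrix of step 302 encodes the relative camera pose through the epipolar constraint x2ᵀ E x1 = 0, with E = [t]ₓ R. The sketch below is illustrative only (pure-Python 3x3 arithmetic, normalized image coordinates assumed, names hypothetical): it builds E from a known rotation and translation and evaluates the residual that a correctly matched pair should drive to zero:

```python
def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x of a 3-vector t."""
    tx, ty, tz = t
    return [[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]]

def essential_from_rt(R, t):
    """Essential matrix E = [t]_x R for relative pose X2 = R X1 + t."""
    S = skew(t)
    return [[sum(S[i][k] * R[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def epipolar_residual(E, x1, x2):
    """x2^T E x1: zero exactly when the two normalized observations
    x1, x2 view the same space point under the pose encoded in E."""
    Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))
```

In practice the flow is the reverse of this sketch: E is estimated from several matched pairs via this constraint, and R and t are then recovered from E (step 303a).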
Step 303: determine the three-dimensional coordinate information in the forward flight environment based on the spatial position relationship and the two-dimensional coordinate information.
In the embodiment of the present application, the electronic device determines the relevant three-dimensional coordinate information based on the spatial position relationship, i.e. based on the essential matrix or fundamental matrix characterizing the spatial position relationship between the two image feature points, together with the two-dimensional coordinate information.
In the embodiment of the present application, the corresponding spatial position relationship is determined from the two-dimensional coordinate information corresponding to each image feature point in the image feature point pair, and the corresponding three-dimensional coordinate information is then determined. In this way, the three-dimensional coordinate points in the actual flight environment can be determined efficiently and accurately, which subsequently enables the electronic device to construct a more precise topographic map carrying height information based on the three-dimensional coordinate information.
In the embodiment of the present application, the electronic device determines the three-dimensional coordinate information in the forward flight environment based on the spatial position relationship and the two-dimensional coordinate information; that is, step 303 may be implemented by the electronic device through the following steps 303a and 303b:
Step 303a: parse the spatial position relationship to obtain rotation matrix parameters and translation matrix parameters characterizing the flight change parameters.
In the embodiment of the present application, the electronic device parsing the spatial position relationship may be decomposing the essential matrix parameters or fundamental matrix parameters characterizing the spatial position relationship, to obtain the rotation matrix parameters and translation matrix parameters characterizing the flight change parameters.
Step 303b: determine the three-dimensional coordinate information in the forward flight environment based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information.
In the embodiment of the present application, the electronic device determines, by geometric operations based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information, the three-dimensional coordinate information of the space point associated with the image feature point pair in the forward flight environment.
The geometric operation may use the fact that the ray formed between at least one of the first image feature point and the second image feature point and the point it maps to in the forward flight environment coincides with the corresponding spatial ray in the camera coordinate system, to determine the three-dimensional coordinate parameters of the space point in the forward flight environment associated with the image feature point pair.
In the embodiment of the present application, the electronic device determines, based on the relevant geometric operations and the two-dimensional coordinate information of the image feature point pair, the three-dimensional coordinate information of the space point in the forward flight environment associated with the image feature point pair. In this way, the coordinate parameters of space points carrying height information can be determined efficiently and accurately, which subsequently enables the electronic device to construct a three-dimensional topographic map based on these coordinate parameters.
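For the special case of a pure sideways translation between the two frames (rotation = identity), the ray-intersection of step 303b reduces to depth from parallax. The following is a hypothetical sketch under that assumption only, with x1 and x2 the normalized image coordinates (u, v) of the same space point in the two frames:

```python
def triangulate_pure_translation(x1, x2, baseline):
    """Triangulate a space point when the camera translates by `baseline`
    along its x-axis between the frames: the horizontal disparity between
    the two observations determines the depth z = baseline / disparity."""
    disparity = x1[0] - x2[0]
    if disparity <= 0:
        raise ValueError("no usable parallax between the two observations")
    z = baseline / disparity            # depth of the space point
    return (x1[0] * z, x1[1] * z, z)    # 3-D point in the first camera frame
```

The general case (arbitrary R and t recovered in step 303a) intersects the two rays by linear triangulation instead, but the inverse relation between parallax and depth is the same, which is why distant points with tiny disparity are the hard case the later inverse-depth step addresses.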
In the embodiment of the present application, the process by which the electronic device adjusts, based on the three-dimensional coordinate information, the map to be adjusted corresponding to the images to be processed to obtain the three-dimensional map, i.e. step 104, may be implemented through the following steps 304 to 307:
Step 304: obtain initial position information and initial flight attitude parameters of the flight of the unmanned aerial vehicle.
In the embodiment of the present application, the electronic device obtains and determines the initial position information and the initial flight attitude parameters of the unmanned aerial vehicle during flight; the initial position parameters may be expressed as three-dimensional coordinate parameters in the world coordinate system, and the initial flight attitude parameters may be, for example, the angular differences with respect to the flight origin of the unmanned aerial vehicle.
Step 305: determine the distance between the initial position information and the three-dimensional coordinate information.
In the embodiment of the present application, the electronic device calculates and determines the distance difference between the three-dimensional coordinate information and the initial position information.
In the embodiment of the present application, each image feature point pair corresponds to one space point in the forward flight environment and the three-dimensional coordinate information of that space point, and the distances between different three-dimensional coordinate information and the initial position information differ. The distance may be the distance difference in any one of the x-axis, y-axis and z-axis directions.
Step 306: construct, based on the distance, the initial position information and the initial flight attitude parameters, coordinate vector parameters with a preset dimension matching the three-dimensional coordinate information.
在本申请实施例中,电子设备可以将距离求倒数,并将距离的倒数、初始位置信息以及初始飞行姿态参数进行融合,以生成与三维坐标信息匹配的具有预设维度的坐标向量参数;其中,该预设维度可以是六维。In the embodiment of the present application, the electronic device can calculate the reciprocal of the distance, and fuse the reciprocal of the distance, initial position information, and initial flight attitude parameters to generate a coordinate vector parameter with a preset dimension that matches the three-dimensional coordinate information; where , the preset dimension can be six dimensions.
In the embodiment of the present application, the electronic device may apply inverse depth parameterization to achieve rapid depth convergence of the extracted three-dimensional coordinate information, improving computational efficiency.

Step 307: Based on the coordinate vector parameter, adjust the coordinates to be adjusted in the map to be adjusted to obtain a three-dimensional map.

In the embodiment of the present application, the electronic device adjusts the coordinate information to be adjusted in the map to be adjusted based on the extracted coordinate vector parameter, that is, the coordinate information of actual spatial points in the forward flight environment, to obtain the three-dimensional map; the coordinates to be adjusted may be two-dimensional or three-dimensional coordinates.
In the embodiment of the present application, processing the feature points and the UAV attitude data with the inverse depth parameterization scheme increases the convergence speed and computational efficiency of the extended Kalman filter when computing feature point depth, so that even a small UAV moving at high speed can rapidly update obstacle depth information. Inverse depth parameterization also enables the algorithm to handle distant features, including feature points so far away that their parallax during UAV motion is very small, thereby improving obstacle perception efficiency.

In the embodiment of the present application, the electronic device adjusts the coordinates to be adjusted in the map to be adjusted based on the coordinate vector parameter to obtain the three-dimensional map; that is, step 307 may be implemented by the electronic device through the following steps 307a to 307c:
Step 307a: Construct an updated covariance matrix based on the coordinate vector parameter.

In the embodiment of the present application, the electronic device determines correction matrix parameters based on the coordinate vector parameter, that is, constructs the updated covariance matrix; an extended Kalman filter may be used to update the covariance matrix.

Step 307b: Based on the updated covariance matrix, adjust the coordinates to be adjusted of the map to be adjusted to obtain corrected three-dimensional coordinate information.

In the embodiment of the present application, based on the updated covariance matrix, the electronic device corrects the coordinate parameters in the map to be adjusted that are associated with the three-dimensional coordinate parameters, to obtain the corrected three-dimensional coordinate information; the adjustment may increase or decrease the height information of coordinates in the map to be adjusted, or fill in that height information.

Step 307c: Construct the three-dimensional map based on the corrected three-dimensional coordinate information.

In the embodiment of the present application, the electronic device constructs a three-dimensional map matching the forward flight environment information based on the corrected three-dimensional coordinate information.

In the embodiment of the present application, the electronic device optimizes the map to be adjusted that is associated with the image to be processed, and obtains a three-dimensional map with height information; in this way, the state of obstacles can be restored more accurately, and the flight trajectory of the UAV can be kept unaffected as far as possible.
The flight control method for a UAV provided in the embodiment of the present application determines, through geometric computation, the three-dimensional coordinate information of the spatial points in the forward flight environment that are associated with the image feature point pairs in two temporally adjacent frames of images to be processed, and then optimizes the initial map based on that three-dimensional coordinate information. In this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional terrain map with height information can be constructed; meanwhile, determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoiding flight reduces the influence of the actual flight environment on the UAV during flight.
Based on the foregoing embodiments, in the UAV flight control method provided in the embodiment of the present application, the electronic device determines the flight trajectory of the UAV based on the three-dimensional map, which may be implemented through the following steps A1 and A2:

Step A1: Determine an avoidance route based on the three-dimensional map.

In the embodiment of the present application, the electronic device perceives obstacle information in the forward flight environment in advance based on the determined three-dimensional map, and then determines an avoidance route that detours around the obstacles.

Step A2: Determine the flight trajectory of the UAV based on the avoidance route.

In the embodiment of the present application, the electronic device determines the flight trajectory of the UAV based on the avoidance route.

In the embodiment of the present application, the electronic device learns of obstacles in the forward flight environment from the three-dimensional map and then determines a corresponding avoidance route to perform the relevant flight task; in this way, the influence of the actual environment on the UAV during flight can be reduced.
Based on this, Fig. 4 is a schematic flowchart of constructing a three-dimensional map during flight provided by the embodiment of the present application; it may be implemented through the following steps:

Step 1: The UAV executes a flight task, that is, starts operating, corresponding to 401 in Fig. 4.

Step 2: Acquire real-time images, corresponding to 402 in Fig. 4; the real-time images may be acquired with a monocular camera.
Step 3: Enhance the acquired real-time images to highlight the feature information in the images, corresponding to 403 in Fig. 4. UAV obstacle avoidance algorithms in the related art acquire real-time images and perform obstacle recognition and spatial perception under ideal conditions; in practical UAV application scenarios, however, the environment often contains many visual disturbances, such as weak illumination, natural shadows, and haze. Such disturbances can significantly affect machine vision and directly cause errors or insufficiencies when extracting features of the spatial environment, so the problem of blurred image feature information in some special environments needs to be addressed.

In the embodiment of the present application, the acquired real-time image is first shifted nonlinearly and its pixel values are redistributed, so that the numbers of pixels within each gray-level range of the real-time image are roughly equal. Then the contrast of the pixel values in the central peak region of the histogram is increased, the contrast of the valley regions on both sides is decreased, and a flat, segmented histogram corresponding to the image is output. After this histogram equalization of the real-time image, the feature information in the image is highlighted and the intensity gradients of key pixels are enlarged, so that more salient feature points can subsequently be extracted from it.
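A minimal sketch of this equalization step on an 8-bit grayscale image, using only NumPy; the application does not specify its exact nonlinear shift, so the classic CDF-based histogram-equalization mapping is shown as a stand-in:

```python
import numpy as np

def equalize_histogram(img):
    """Redistribute pixel values so that counts across the grey range
    become roughly equal, boosting contrast around the histogram peak
    (img is a uint8 grayscale image)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]
    # classic histogram-equalization lookup table via the normalized CDF
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```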
Step 4: Downsample the image along a resolution gradient to construct an image pyramid, extract ORB feature points from each level of the pyramid, and match feature points between image frames, corresponding to 404 in Fig. 4, feature point extraction and feature point matching.

When extracting features from an image frame (that is, the enhanced real-time image, corresponding to the image to be processed in the embodiment of the present application), if too few feature points are extracted or correct feature points cannot be extracted from the obstacle region, errors may occur when constructing spatial map points, which affects the robustness of the UAV obstacle avoidance algorithm. A sufficiently uniform distribution of feature points across the image also benefits the overall efficiency of visual perception. To improve the quality of feature extraction, the embodiment of the present application may downsample the image to be processed based on its resolution to form an 8-level image pyramid and extract ORB feature points at each pyramid level; the image is then divided into grid cells and the number of feature points in each cell is counted, and if the count is insufficient, the corner detection threshold is adjusted until at least 5 feature points can be extracted from the cell. Extracting 5 feature points per cell yields a good feature description effect.
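The multi-scale sampling can be sketched as below; 2x average pooling stands in for the resolution gradient (the application's exact scale factor, and the ORB extraction itself, are omitted):

```python
import numpy as np

def build_pyramid(img, levels=8):
    """Build an image pyramid by repeated 2x downsampling, mirroring
    the 8-level pyramid used for multi-scale ORB extraction."""
    pyramid = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        cur = pyramid[-1]
        if cur.shape[0] < 2 or cur.shape[1] < 2:
            break  # cannot halve any further
        h, w = cur.shape[0] // 2 * 2, cur.shape[1] // 2 * 2
        # 2x2 average pooling as a simple downsampling stand-in
        down = cur[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid
```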
In the embodiment of the present application, the ORB feature algorithm may be used for feature extraction and description of image frames. First, FAST corners are used to detect feature points with intensity differences in the image, and the BRIEF descriptor algorithm is then used to compute the descriptors of the feature points. For a pixel p in the image to be processed, 16 pixels are located on a circle of radius 3 pixels around it; the gray values of these circle pixels are compared with the gray value of p, and if there exist n consecutive pixels whose absolute gray difference from p exceeds a threshold t, pixel p is selected as a candidate corner for screening. If the final computation shows that 10 or more pixels on the circle satisfy the condition, the point is considered a FAST corner. After the FAST corners are extracted, the ORB algorithm describes the feature points with an improved BRIEF algorithm: the image is first Gaussian-filtered to remove noise and smoothed using an integral image; a window of preset size S×S is then taken centered on each image feature point, two pixels x and y are randomly selected from the window as a point pair, their pixel values are compared, and a binary value is assigned.
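The FAST segment test described above can be sketched as follows, using the standard radius-3 circle offsets of the FAST detector; the threshold values are illustrative:

```python
import numpy as np

# the 16 offsets of the radius-3 Bresenham circle used by FAST
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=10):
    """Pixel (y, x) passes the segment test when at least n contiguous
    circle pixels differ from it by more than the threshold t.
    Assumes the pixel lies at least 3 pixels from the image border."""
    p = int(img[y, x])
    diffs = [abs(int(img[y + dy, x + dx]) - p) > t for dx, dy in CIRCLE]
    run, best = 0, 0
    for d in diffs + diffs:  # doubled list handles wrap-around runs
        run = run + 1 if d else 0
        best = max(best, run)
    return min(best, len(diffs)) >= n
```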
The most notable characteristics of the ORB algorithm are its computational speed and its good scale and rotation invariance, owing mainly to the very high speed of the FAST corner detection algorithm; moreover, the distinctive binary-string representation of the BRIEF algorithm not only saves storage space but also greatly shortens the matching time. Using the ORB feature algorithm saves substantial computation for the overall obstacle avoidance algorithm, and for fast-moving objects such as UAVs, ORB is more robust than other feature point algorithms and can continuously extract stable features. All feature points in the image are used for feature matching in subsequent frames.

In addition, inter-image feature matching ensures that the UAV continuously perceives its surroundings in real time during flight, so that an unknown obstacle appearing in the flight path can be discovered in time and its position located accurately. After feature point extraction is completed, the feature points in the image are described as binary strings, and feature matching between image frames can then be performed based on this descriptive information. The main idea of this part is to traverse all map points in the previous image frame, project them all into the current frame, and then find the feature point in the current frame with the closest descriptor distance as the matching point.
For two acquired consecutive image frames, after feature point extraction and feature point description, suppose two image feature points in the two images yield the feature descriptors A and B. Their Hamming distance

D(A, B) = Σ_{i=1}^{n} (a_i ⊕ b_i)

is then computed to judge whether they are matching points, where a_i and b_i are the i-th bits of the two binary descriptors and ⊕ denotes exclusive OR. The smaller the computed value D, the higher the similarity of the two image feature points. If D is smaller than the set threshold, the pair of image feature points is matched, that is, they form an image feature point pair.
Step 5: Restore the depth information of the relevant feature points in the actual space through geometric computation, corresponding to 405 in Fig. 4, computing feature point depth.

If enough matching image feature point pairs are obtained from two temporally adjacent image frames, the motion of the UAV's monocular camera between the two frames can be computed from the correspondences between those pairs. This motion can be expressed by a rotation matrix R and a translation matrix t. Fig. 5 is a schematic diagram, provided by the embodiment of the present application, characterizing the correspondence between image feature point pairs.
For two temporally adjacent image frames I_1 and I_2, p_1 and p_2 lie in frames I_1 and I_2 respectively and form a feature point pair; they are the projections onto I_1 and I_2 of a single point P in space.

The plane formed by point P and the camera optical centers O_1, O_2 is called the epipolar plane. The intersections e_1 and e_2 of the line O_1O_2 with I_1 and I_2 are called the epipoles. From frame I_1 alone, point P may lie anywhere on the ray O_1p_1; that ray projects into I_2 as the epipolar line through e_2 and p_2, and once the exact position of p_2 is found in frame I_2 through feature matching, the coordinates of the spatial point P can be determined. Moreover, p_1 and p_2 satisfy the epipolar constraint, as shown in Formula (1):

p_2^T K^(-T) t^∧ R K^(-1) p_1 = 0    Formula (1);
where K is the camera intrinsic matrix. The constraint can also be rewritten using Formulas (2) and (3):

E = t^∧ R    Formula (2);

F = K^(-T) E K^(-1)    Formula (3);

where E is the essential matrix and F is the fundamental matrix, and the epipolar constraint can then be simplified to Formula (4):

p_2^T F p_1 = 0    Formula (4);

The problem of the camera's pose change can thus be transformed into: computing the matrix E or F from the pixel coordinates of the matched feature points, and then recovering the rotation matrix R and the translation matrix t from the obtained E or F.
In addition, the three-dimensional coordinates of the spatial point P are determined using the fact that the ray through the two-dimensional image point coincides with the ray through the spatial point in the camera coordinate system, as expressed in Formula (5):

x × (RX + t) = 0    Formula (5);

where x denotes p_1 or p_2, × denotes the cross product (the coinciding rays are parallel, so their cross product vanishes), and X denotes the three-dimensional coordinates of the spatial point P in the world coordinate system.
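Formula (5) can be solved for X by stacking the two ray constraints into a homogeneous linear system, the standard DLT triangulation. A sketch, assuming x1 and x2 are normalized image coordinates (pixel coordinates already multiplied by K^(-1)):

```python
import numpy as np

def triangulate(x1, x2, R, t):
    """Recover the world point X from two normalized observations x1, x2
    (3-vectors with z = 1) given the relative motion (R, t) between the
    frames, by linear least squares on the ray-coincidence constraint."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
    P2 = np.hstack([R, t.reshape(3, 1)])           # second camera pose
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates
```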
Step 6: Apply inverse depth parameterization to the feature point depth information and optimize the spatial point cloud using an extended Kalman filter, corresponding to 406 in Fig. 4, optimizing depth information.

The embodiment of the present application may combine inverse depth parameterization with an extended Kalman filter to optimize the camera pose data, that is, to optimize the three-dimensional coordinate information of the spatial point P. Because a small UAV flying fast through narrow spaces places high demands on the computational efficiency of the obstacle perception algorithm, the embodiment of the present application may use the pose data stored in a database to continuously optimize and correct the existing UAV poses and three-dimensional spatial point coordinates.

The vision-based perception method uses the extended Kalman filter to optimize the coordinates in the environment space of the feature points in the image, minimizing the errors accumulated during flight. To ensure that terrain information about obstacles in the space can be generated quickly and accurately, the embodiment of the present application applies the inverse depth parameterization method to achieve rapid depth convergence of the extracted feature points; inverse depth parameterization converges faster than Cartesian parameterization because the uncertainty of the inverse depth is closer to a Gaussian distribution than that of the standard depth.
During parameterization, each feature point stored in the database is represented by a six-dimensional vector defined jointly by the Cartesian coordinates [x_a, y_a, z_a]^T of the feature point P relative to an anchor point, the azimuth ψ, the elevation θ, and the reciprocal ρ of the distance from the feature point P to the anchor point, where the anchor point is the spatial position of the UAV when the database is initialized. The feature point P can thus be expressed as y = [x_a, y_a, z_a, ψ, θ, ρ]^T.
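Reading the six-dimensional vector back into a Cartesian point can be sketched as follows; the convention chosen for the direction vector m(ψ, θ) is an assumption, since the application does not spell it out:

```python
import numpy as np

def inverse_depth_to_point(y):
    """Recover the Cartesian position encoded by
    y = [x_a, y_a, z_a, psi, theta, rho]^T: the anchor plus the viewing
    direction scaled by the depth 1/rho."""
    x_a, y_a, z_a, psi, theta, rho = y
    m = np.array([np.cos(theta) * np.sin(psi),   # assumed m(psi, theta)
                  -np.sin(theta),
                  np.cos(theta) * np.cos(psi)])
    return np.array([x_a, y_a, z_a]) + m / rho
```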
Meanwhile, the state vector of the UAV can be represented with a quaternion as x = [p, v, e, s_b, w_b]^T, where p is the position of the UAV in the spatial coordinate system, v is the velocity, e is the quaternion error, and s_b and w_b are the acceleration and gyroscope biases during flight. In the computation model, the distance between a three-dimensional point in space and the UAV can therefore be expressed as a normalized vector of the UAV state and the feature state, as shown in Formula (6):

h = R (ρ ([x_a, y_a, z_a]^T − p) + m(ψ, θ))    Formula (6);

where R is the rotation matrix from the spatial coordinate system to the camera coordinates and m(ψ, θ) is the unit direction vector determined by the azimuth ψ and the elevation θ. This formulation allows the estimator to handle features at infinite depth, in which case ρ = 0. Therefore, in the wide outdoor spaces where UAVs perform work tasks, the algorithm can, after inverse depth parameterization, also handle distant image feature points; such points often undergo only small displacements during UAV motion, so their parallax is very small, and previous feature point depth computation methods handle them with difficulty. The method of the present application solves this problem well. After the inverse depth parameters are defined, the system executes the mapping algorithm over the image sequence, where each feature point is treated as an independent measurement and the correlation between measured values and true values is ignored. At each timestamp, a subset of the feature point database is examined, corresponding to 407 in Fig. 4, and updated with the feature state; in the process, the extended Kalman filter is used to update the covariance matrix. The depth information of the valid feature points in the database can thereby be computed.
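The covariance update at each timestamp is the standard extended-Kalman-filter measurement update, sketched generically below (the application's state layout and Jacobians are not reproduced):

```python
import numpy as np

def ekf_update(x, P, z, h, H, R_noise):
    """One EKF measurement update: x state estimate, P covariance,
    z actual measurement, h predicted measurement, H measurement
    Jacobian, R_noise measurement noise covariance."""
    S = H @ P @ H.T + R_noise              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - h)                # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P   # updated covariance
    return x_new, P_new
```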
Step 7: Build a terrain map with height information from the depth information of the feature points in space, corresponding to 408 in Fig. 4. Using the three-dimensional coordinates of the converged points in space, a terrain grid represented by heights can be generated; the height information at each position is updated from the point coordinates in the database, and when a new converged point is received, the height of the corresponding grid position is raised or lowered.
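The grid update can be sketched as follows; the cell size and the overwrite policy are assumptions (here a newly converged point simply sets the cell height, which raises or lowers it):

```python
import numpy as np

def update_terrain(grid, point, cell_size=1.0):
    """Write the height of a newly converged 3-D point into the terrain
    grid cell that contains its horizontal position."""
    x, y, z = point
    i, j = int(x // cell_size), int(y // cell_size)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = z  # raise or lower the stored height
    return grid
```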
Step 8: The UAV performs obstacle-avoiding flight according to the constructed three-dimensional terrain map.

The grid terrain map generated by the filter can be used for UAV obstacle avoidance. The obstacle avoidance algorithm decides the next maneuver by considering the grid heights of the terrain map in the direction of the UAV's horizontal velocity vector. The UAV's height over a given grid cell is first compared with the minimum height required for that cell; if this minimum height would obstruct the UAV's original trajectory, the UAV autonomously performs a smooth pull-up maneuver. In a similar way, the algorithm also enables the UAV to return quickly to the desired altitude after passing an obstacle.
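The pull-up decision above reduces to a single clearance check; the clearance margin is illustrative:

```python
def needs_pull_up(uav_altitude, cell_height, min_clearance):
    """Return True when the UAV's height over the terrain cell ahead
    (in the horizontal velocity direction) falls below the minimum
    clearance, triggering the smooth pull-up maneuver."""
    return uav_altitude - cell_height < min_clearance
```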
Based on this, the embodiment of the present application proposes a monocular-camera-based method for obstacle perception during UAV flight that can perceive obstacle height information and ensure that the UAV can avoid obstacles by flying over them. It can shorten the UAV's obstacle avoidance distance and improve its ability to perceive obstacles. In addition, this application designs a multi-scale feature extraction method that extracts ORB features from layers at different resolutions, ensuring a uniform distribution of feature points in the image and thereby achieving a better obstacle perception effect. In some complex UAV working environments (such as dark lighting, shadowed, or weakly textured environments), ordinary visual environment perception methods may also suffer from misrecognition, high error, or interruption; the embodiment of the present application improves the robustness of the obstacle perception method as much as possible through the proposed image preprocessing step. Furthermore, the method of computing the depth information of feature points in the image based on inverse depth parameterization combined with the extended Kalman filter accelerates the convergence of the depth information and better restores the depth of distant points in space.
Based on the foregoing embodiments, the embodiment of the present application further provides a flight control apparatus 6 for a UAV, which can be applied to the flight control method for a UAV provided by the embodiments corresponding to Figs. 1 to 3. Referring to Fig. 6, the flight control apparatus 6 of the UAV includes: an acquisition part 61, a first determination part 62, a second determination part 63, an adjustment part 64, and a third determination part 65, wherein:

the acquisition part 61 is configured to acquire an image to be processed, wherein the picture content of the image to be processed includes forward flight environment information;

the first determination part 62 is configured to determine, based on two temporally adjacent frames of images to be processed, image feature point pairs satisfying a preset condition;

the second determination part 63 is configured to determine, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pairs;

the adjustment part 64 is configured to adjust, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map;

the third determination part 65 is configured to determine the flight trajectory of the UAV based on the three-dimensional map.
在本申请其他实施例中,获取部分61,还被配置为采集所述前方飞行环境信息,得到预设图像;对所述预设图像的图像对比度进行调整,得到所述待处理图像。In other embodiments of the present application, the acquiring part 61 is further configured to acquire the forward flight environment information to obtain a preset image; adjust the image contrast of the preset image to obtain the image to be processed.
在本申请其他实施例中,第一确定部分62,还被配置为确定所述时序上相邻的两帧待处理图像中每帧待处理图像的至少一个图像特征点;确定所述至少一个图像特征点对应的二进制参数;在所述时序上相邻的两帧待处理图像中每帧待处理图像的至少一个图像特征点中,基于所述至少一个图像特征点对应的二进制参数,确定所述图像特征点对。In other embodiments of the present application, the first determining part 62 is further configured to determine at least one image feature point of each frame of the image to be processed in the two adjacent frames of the image to be processed in time sequence; determine the at least one image feature point The binary parameter corresponding to the feature point; in at least one image feature point of each frame of the image to be processed in the two adjacent frames to be processed in the time sequence, based on the binary parameter corresponding to the at least one image feature point, determine the Image feature point pairs.
在本申请其他实施例中,第一确定部分62,还被配置为按照图像分辨率梯度,对所述每帧待处理图像进行图像下采样,生成所述每帧待处理图像对应的图像金字塔;对所述每帧待处理图像对应的图像金字塔中每一层级的图像进行特征提取,得到所述每帧待处理图像的至少一个图像特征点。In other embodiments of the present application, the first determining part 62 is further configured to perform image downsampling on each frame of the image to be processed according to the image resolution gradient, and generate an image pyramid corresponding to each frame of the image to be processed; Feature extraction is performed on images at each level in the image pyramid corresponding to each frame of the image to be processed to obtain at least one image feature point of each frame of the image to be processed.
在本申请其他实施例中,第一确定部分62,还被配置为基于所述图像特征点对应的二进制参数,确定位于所述时序上相邻的两帧待处理图像中的两个图像特征点之间的汉明距离;在所述汉明距离小于所述预设阈值的情况下,将所述两个图像特征点,确定为所述图像特征点对。In other embodiments of the present application, the first determining part 62 is further configured to determine two image feature points in the two temporally adjacent frames of images to be processed based on the binary parameters corresponding to the image feature points Hamming distance between them; if the Hamming distance is less than the preset threshold, the two image feature points are determined as the image feature point pair.
在本申请其他实施例中,第二确定部分63,还被配置为获取所述图像特征点对中每一图像特征点在对应的待处理图像中的二维坐标信息;基于所述二维坐标信息,确定所述图像特征点对中两个图像特征点之间的空间位置关系;基于所述空间位置关系以及所述二维坐标信息,在所述前方飞行环境中确定所述三维坐标信息。In other embodiments of the present application, the second determining part 63 is further configured to obtain the two-dimensional coordinate information of each image feature point in the image feature point pair in the corresponding image to be processed; based on the two-dimensional coordinate information, determining the spatial position relationship between two image feature points in the image feature point pair; based on the spatial position relationship and the two-dimensional coordinate information, determining the three-dimensional coordinate information in the forward flight environment.
In other embodiments of the present application, the second determining part 63 is further configured to: analyze the spatial position relationship to obtain rotation matrix parameters and translation matrix parameters characterizing the flight change parameters; and determine the three-dimensional coordinate information in the forward flight environment based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information.
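A minimal sketch of recovering three-dimensional coordinates from a matched pair follows. It assumes the simplest special case, chosen for illustration only: an identity rotation matrix and a pure horizontal translation (baseline) between the two frames, with a pinhole camera of known focal length; the general case first decomposes the spatial position relationship into the rotation and translation matrix parameters, as the text describes:

```python
def triangulate(u1, v1, u2, v2, f, baseline):
    """Recover the 3D point behind one matched feature pair, assuming
    rotation = identity and a pure translation `baseline` along the
    image x axis (illustrative special case). (u, v) are pixel
    coordinates relative to the principal point, f is the focal length
    in pixels."""
    disparity = u1 - u2              # horizontal shift between the two views
    z = f * baseline / disparity     # depth from similar triangles
    x = u1 * z / f                   # back-project the first view's pixel
    y = v1 * z / f
    return x, y, z
```

A point seen at (50, 20) in the first frame and (40, 20) in the second, with f = 100 and a baseline of 1, has a disparity of 10 pixels and therefore lies at depth 10 units.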
In other embodiments of the present application, the adjustment part 64 is further configured to: acquire initial position information and initial flight attitude parameters of the flight of the unmanned aerial vehicle; determine the distance between the initial position information and the three-dimensional coordinate information; construct, based on the distance, the initial position information and the initial flight attitude parameters, a coordinate vector parameter of preset dimensions matching the three-dimensional coordinate information; and adjust, based on the coordinate vector parameter, the coordinates to be adjusted in the map to be adjusted, to obtain the three-dimensional map.
In other embodiments of the present application, the adjustment part 64 is further configured to: construct an updated covariance matrix based on the coordinate vector parameter; adjust, based on the updated covariance matrix, the coordinates to be adjusted of the map to be adjusted, to obtain corrected three-dimensional coordinate information; and construct the three-dimensional map based on the corrected three-dimensional coordinate information.
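The covariance-driven coordinate correction can be illustrated with a scalar Kalman-style update; reducing the state to one dimension and the specific variance values are assumptions made for this sketch:

```python
def kalman_correct(x_pred, p_pred, z, r):
    """One scalar correction step: blend the predicted map coordinate
    x_pred (variance p_pred) with the triangulated measurement z
    (variance r). Returns the corrected coordinate and the updated
    (shrunken) covariance, mirroring the map-adjustment step above."""
    k = p_pred / (p_pred + r)          # gain: how much to trust z
    x_new = x_pred + k * (z - x_pred)  # corrected coordinate
    p_new = (1.0 - k) * p_pred         # updated covariance
    return x_new, p_new
```

With equal prior and measurement variances the correction lands halfway between prediction and measurement, and the covariance halves, which is the intuition behind iteratively refining the coordinates to be adjusted.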
In other embodiments of the present application, the third determining part 65 is further configured to: determine an avoidance route based on the three-dimensional map; and determine the flight trajectory of the unmanned aerial vehicle based on the avoidance route.
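As a minimal stand-in for the avoidance-route step, assuming for illustration that the three-dimensional map has been reduced to an occupancy grid, a breadth-first search returns a shortest obstacle-free route:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over an occupancy grid (1 = obstacle).
    Returns the list of cells from start to goal avoiding obstacles,
    or None if no route exists. A sketch of the avoidance-route idea,
    not the method claimed in the disclosure."""
    h, w = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:        # walk back to the start
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        y, x = cur
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == 0 \
                    and (ny, nx) not in prev:
                prev[(ny, nx)] = cur
                queue.append((ny, nx))
    return None
```

The flight trajectory would then be obtained by smoothing such a route into a dynamically feasible path for the aircraft.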
For the specific implementation of the steps performed by each part in this embodiment, reference may be made to the implementation of the flight control method for an unmanned aerial vehicle provided in the embodiments corresponding to FIGS. 1 to 3.
The flight control apparatus for an unmanned aerial vehicle provided in the embodiments of the present application determines, through geometric computation and based on image feature point pairs in two temporally adjacent frames of images to be processed, the three-dimensional coordinate information of the spatial points in the forward flight environment associated with those pairs, and then optimizes the initial map based on that three-dimensional coordinate information. In this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map carrying height information can be constructed; at the same time, determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoiding flight reduces the influence of the actual flight environment on the unmanned aerial vehicle during flight.
Based on the foregoing embodiments, an embodiment of the present application further provides an electronic device 7, which can be applied to the flight control method for an unmanned aerial vehicle provided in the embodiments corresponding to FIGS. 1 to 3. As shown in FIG. 7, the electronic device 7 includes a processor 71, a memory 72 and a communication bus 73, wherein:
the communication bus 73 is configured to implement the communication connection between the processor 71 and the memory 72; and
the processor 71 is configured to execute the program of the flight control method for an unmanned aerial vehicle stored in the memory 72, so as to implement the flight control method provided in the embodiments corresponding to FIGS. 1 to 3.
The electronic device provided in the embodiments of the present application determines, through geometric computation and based on image feature point pairs in two temporally adjacent frames of images to be processed, the three-dimensional coordinate information of the spatial points in the forward flight environment associated with those pairs, and then optimizes the initial map based on that three-dimensional coordinate information. In this way, the actual flight environment information can be restored efficiently and accurately, and a three-dimensional topographic map carrying height information can be constructed; at the same time, determining the flight trajectory based on the three-dimensional map to achieve obstacle-avoiding flight reduces the influence of the actual flight environment on the flight of the unmanned aerial vehicle.
Based on the foregoing embodiments, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the flight control method for an unmanned aerial vehicle provided in the embodiments corresponding to FIGS. 1 to 3.
An embodiment of the present application further provides a computer program comprising computer-readable code which, when run in an electronic device, causes a processor of the electronic device to implement the flight control method for an unmanned aerial vehicle provided in the embodiments corresponding to FIGS. 1 to 3.
It should be understood that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic related to the embodiment is included in at least one embodiment of the present application; thus, appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application. The serial numbers of the above embodiments are for description only and do not represent the relative merits of the embodiments. Herein, the terms "comprise", "include" or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article or apparatus comprising that element.
In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be via some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical or in other forms.
The units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units. Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be carried out by hardware related to program instructions; the aforementioned program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk or an optical disc.
Alternatively, if the above integrated unit of the present application is implemented in the form of a software functional part and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a removable storage device, a ROM, a magnetic disk or an optical disc. The above is only a specific implementation of the present application, but the scope of protection of the present application is not limited thereto; any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and these should be covered by the scope of protection of the present application. Therefore, the scope of protection of the present application shall be determined by the scope of protection of the claims.
Industrial Applicability
The embodiments of the present application disclose a flight control method, apparatus, device, medium and program for an unmanned aerial vehicle. The method includes: acquiring an image to be processed, where the picture content of the image to be processed includes forward flight environment information; determining, based on two temporally adjacent frames of images to be processed, an image feature point pair satisfying a preset condition; determining, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair; adjusting, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map; and determining a flight trajectory of the unmanned aerial vehicle based on the three-dimensional map.

Claims (14)

  1. A flight control method for an unmanned aerial vehicle, the method comprising:
    acquiring an image to be processed, where the picture content of the image to be processed includes forward flight environment information;
    determining, based on two temporally adjacent frames of images to be processed, an image feature point pair satisfying a preset condition;
    determining, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair;
    adjusting, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map; and
    determining a flight trajectory of the unmanned aerial vehicle based on the three-dimensional map.
  2. The method according to claim 1, wherein the acquiring an image to be processed comprises:
    collecting the forward flight environment information to obtain a preset image; and
    adjusting an image contrast of the preset image to obtain the image to be processed.
  3. The method according to claim 1 or 2, wherein the determining, based on two temporally adjacent frames of images to be processed, an image feature point pair satisfying a preset condition comprises:
    determining at least one image feature point of each frame of the two temporally adjacent frames of images to be processed;
    determining a binary parameter corresponding to the at least one image feature point; and
    determining, among the image feature points of each frame of the two temporally adjacent frames of images to be processed, the image feature point pair based on the binary parameter corresponding to the at least one image feature point.
  4. The method according to claim 3, wherein the determining at least one image feature point of each frame of the two temporally adjacent frames of images to be processed comprises:
    down-sampling each frame of the images to be processed according to an image resolution gradient to generate an image pyramid corresponding to each frame; and
    performing feature extraction on the image at each level of the image pyramid corresponding to each frame, to obtain the at least one image feature point of each frame of the images to be processed.
  5. The method according to claim 3 or 4, wherein the determining, among the image feature points of each frame of the two temporally adjacent frames of images to be processed, the image feature point pair based on the binary parameter corresponding to the at least one image feature point comprises:
    determining, based on the binary parameters corresponding to the image feature points, a Hamming distance between two image feature points located in the two temporally adjacent frames of images to be processed; and
    where the Hamming distance is less than the preset threshold, determining the two image feature points as the image feature point pair.
  6. The method according to any one of claims 1 to 5, wherein the determining, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair comprises:
    acquiring two-dimensional coordinate information, in the corresponding image to be processed, of each image feature point of the image feature point pair;
    determining, based on the two-dimensional coordinate information, a spatial position relationship between the two image feature points of the image feature point pair; and
    determining the three-dimensional coordinate information in the forward flight environment based on the spatial position relationship and the two-dimensional coordinate information.
  7. The method according to claim 6, wherein the determining the three-dimensional coordinate information in the forward flight environment based on the spatial position relationship and the two-dimensional coordinate information comprises:
    analyzing the spatial position relationship to obtain rotation matrix parameters and translation matrix parameters characterizing the flight change parameters; and
    determining the three-dimensional coordinate information in the forward flight environment based on the rotation matrix parameters, the translation matrix parameters and the two-dimensional coordinate information.
  8. The method according to any one of claims 1 to 7, wherein the adjusting, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map comprises:
    acquiring initial position information and initial flight attitude parameters of the flight of the unmanned aerial vehicle;
    determining a distance between the initial position information and the three-dimensional coordinate information;
    constructing, based on the distance, the initial position information and the initial flight attitude parameters, a coordinate vector parameter of preset dimensions matching the three-dimensional coordinate information; and
    adjusting, based on the coordinate vector parameter, coordinates to be adjusted in the map to be adjusted, to obtain the three-dimensional map.
  9. The method according to claim 8, wherein the adjusting, based on the coordinate vector parameter, coordinates to be adjusted in the map to be adjusted, to obtain the three-dimensional map comprises:
    constructing an updated covariance matrix based on the coordinate vector parameter;
    adjusting, based on the updated covariance matrix, the coordinates to be adjusted of the map to be adjusted, to obtain corrected three-dimensional coordinate information; and
    constructing the three-dimensional map based on the corrected three-dimensional coordinate information.
  10. The method according to any one of claims 1 to 9, wherein the determining a flight trajectory of the unmanned aerial vehicle based on the three-dimensional map comprises:
    determining an avoidance route based on the three-dimensional map; and
    determining the flight trajectory of the unmanned aerial vehicle based on the avoidance route.
  11. A flight control apparatus for an unmanned aerial vehicle, the apparatus comprising:
    an acquisition part, configured to acquire an image to be processed, where the picture content of the image to be processed includes forward flight environment information;
    a first determining part, configured to determine, based on two temporally adjacent frames of images to be processed, an image feature point pair satisfying a preset condition;
    a second determining part, configured to determine, in the forward flight environment, three-dimensional coordinate information associated with the image feature point pair;
    an adjustment part, configured to adjust, based on the three-dimensional coordinate information, a map to be adjusted corresponding to the image to be processed, to obtain a three-dimensional map; and
    a third determining part, configured to determine a flight trajectory of the unmanned aerial vehicle based on the three-dimensional map.
  12. An electronic device, comprising a processor, a memory and a communication bus, wherein the communication bus is configured to implement a communication connection between the processor and the memory; and
    the processor is configured to execute a program in the memory, so as to implement the flight control method for an unmanned aerial vehicle according to any one of claims 1 to 10.
  13. A computer-readable storage medium storing one or more programs which are executable by one or more processors to implement the flight control method for an unmanned aerial vehicle according to any one of claims 1 to 10.
  14. A computer program comprising computer-readable code which, when run in an electronic device, causes a processor of the electronic device to implement the flight control method for an unmanned aerial vehicle according to any one of claims 1 to 10.
PCT/CN2022/113856 2021-09-01 2022-08-22 Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program WO2023030062A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111019049.8 2021-09-01
CN202111019049.8A CN115729250A (en) 2021-09-01 2021-09-01 Flight control method, device and equipment of unmanned aerial vehicle and storage medium

Publications (1)

Publication Number Publication Date
WO2023030062A1 true WO2023030062A1 (en) 2023-03-09

Family

ID=85292015

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/113856 WO2023030062A1 (en) 2021-09-01 2022-08-22 Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program

Country Status (2)

Country Link
CN (1) CN115729250A (en)
WO (1) WO2023030062A1 (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106501829A (en) * 2016-09-26 2017-03-15 北京百度网讯科技有限公司 A kind of Navigation of Pilotless Aircraft method and apparatus
JP2017134617A (en) * 2016-01-27 2017-08-03 株式会社リコー Position estimation device, program and position estimation method
CN107656545A (en) * 2017-09-12 2018-02-02 武汉大学 A kind of automatic obstacle avoiding searched and rescued towards unmanned plane field and air navigation aid
CN108917753A (en) * 2018-04-08 2018-11-30 中国人民解放军63920部队 Method is determined based on the position of aircraft of structure from motion
CN109407705A (en) * 2018-12-14 2019-03-01 厦门理工学院 A kind of method, apparatus, equipment and the storage medium of unmanned plane avoiding barrier
CN110047142A (en) * 2019-03-19 2019-07-23 中国科学院深圳先进技术研究院 No-manned plane three-dimensional map constructing method, device, computer equipment and storage medium
CN112434709A (en) * 2020-11-20 2021-03-02 西安视野慧图智能科技有限公司 Aerial survey method and system based on real-time dense three-dimensional point cloud and DSM of unmanned aerial vehicle
US20210141378A1 (en) * 2018-07-18 2021-05-13 SZ DJI Technology Co., Ltd. Imaging method and device, and unmanned aerial vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058209A (en) * 2023-10-11 2023-11-14 山东欧龙电子科技有限公司 Method for calculating depth information of visual image of aerocar based on three-dimensional map
CN117058209B (en) * 2023-10-11 2024-01-23 山东欧龙电子科技有限公司 Method for calculating depth information of visual image of aerocar based on three-dimensional map

Also Published As

Publication number Publication date
CN115729250A (en) 2023-03-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22863201

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE