WO2018095278A1 - Method, apparatus and device for acquiring aircraft information - Google Patents

Method, apparatus and device for acquiring aircraft information (飞行器的信息获取方法、装置及设备)

Info

Publication number
WO2018095278A1
WO2018095278A1 (PCT/CN2017/111577)
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
aircraft
sub
depth
Prior art date
Application number
PCT/CN2017/111577
Other languages
English (en)
French (fr)
Inventor
黄盈 (Huang Ying)
周大军 (Zhou Dajun)
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date
Filing date
Publication date
Priority claimed from CN201611045197.6A (CN106529495B)
Priority claimed from CN201611100232.XA (CN106767682A)
Priority claimed from CN201611100259.9A (CN106767817B)
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2018095278A1
Priority to US16/296,073 (US10942529B2)


Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • G05D1/106Change initiated in response to external conditions, e.g. avoidance of elevated terrain or of no-fly zones
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/255Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/0047Navigation or guidance aids for a single aircraft
    • G08G5/0069Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker
    • G06T2207/30208Marker matrix
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G5/00Traffic control systems for aircraft, e.g. air-traffic control [ATC]
    • G08G5/04Anti-collision systems
    • G08G5/045Navigation or guidance aids, e.g. determination of anti-collision manoeuvers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Definitions

  • The invention relates to the field of computer technology, and in particular to obstacle detection for an aircraft and to the acquisition of flight positioning information and flight height information of an aircraft.
  • In the following, the unmanned aerial vehicle (drone) is referred to simply as the aircraft.
  • Aircraft have many applications in the national economy and in the military. At present they are widely used in fields such as aerial photography, electric power line inspection, environmental monitoring, forest fire prevention, disaster inspection, anti-terrorism and rescue, military reconnaissance, and battlefield assessment. The aircraft considered here are unmanned aircraft operated by radio remote-control equipment and on-board program control devices. There is no cockpit on the aircraft, but it is equipped with an autopilot, a program control device, an information acquisition device and other equipment. Personnel at the remote station track, locate, remotely control, telemeter and exchange digital data with the aircraft through radar and other equipment.
  • When detecting obstacles, an aircraft usually adopts one of the following two schemes:
  • 1. the aircraft detects obstacles with a laser radar (lidar); 2. the aircraft detects obstacles with ultrasonic waves.
  • In scheme 1, a lidar needs to be installed in the aircraft. Lidar-based obstacle detection is easily affected by sunlight: under strong light the lidar cannot detect obstacles accurately, which reduces the accuracy of obstacle detection.
  • In scheme 2, an ultrasonic generator is required in the aircraft, and obstacles are detected by the ultrasonic waves it emits. This ultrasonic detection method produces large errors when detecting non-vertical surfaces or irregularly shaped objects.
  • Both of the above schemes require additional devices to be installed in the aircraft for obstacle detection, which is disadvantageous to the miniaturization of the aircraft, and they also suffer from low obstacle detection accuracy.
  • The invention provides an obstacle detection method for an aircraft, comprising:
  • a first image and a second image are obtained by real-time image acquisition of a target obstacle with the binocular camera configured on the aircraft, wherein the first image is captured by the left eye of the binocular camera and the second image is captured by the right eye of the binocular camera;
  • real-time detection of forward obstacles is thus realized with the aircraft's built-in binocular camera: no additional device needs to be added to the aircraft, no restriction needs to be placed on the flight scene or on the shape of the obstacle, and by analyzing and computing on the images the depth value from the binocular camera to the target obstacle can be calculated accurately, which reduces the obstacle detection error of the aircraft and improves its obstacle detection accuracy.
  • The invention provides an obstacle detecting device for an aircraft, comprising:
  • an image acquisition module, configured to perform real-time image acquisition on a target obstacle with the binocular camera configured on the aircraft to obtain a first image and a second image, wherein the first image is captured by the left eye of the binocular camera and the second image is captured by the right eye of the binocular camera;
  • a disparity calculation module, configured to determine a first pixel position at which the target obstacle is projected in the first image and a second pixel position at which the target obstacle is projected in the second image, and to calculate a disparity value between the first pixel position and the second pixel position according to the two positions;
  • a depth calculation module, configured to calculate the depth value from the binocular camera to the target obstacle according to the disparity value between the first pixel position and the second pixel position and a preset disparity-to-depth mapping matrix, for detecting whether there is an obstacle in the flight direction of the aircraft.
  • The invention provides an obstacle detection method for an aircraft, comprising:
  • the aircraft obtains a first image and a second image by real-time image acquisition of a target obstacle through the binocular camera configured on the aircraft, wherein the first image is captured by the left eye of the binocular camera and the second image is captured by the right eye of the binocular camera;
  • the aircraft determines a first pixel position at which the target obstacle is projected in the first image and a second pixel position at which the target obstacle is projected in the second image, and calculates a disparity value between the first pixel position and the second pixel position according to the two positions;
  • the aircraft calculates the depth value from the binocular camera to the target obstacle according to the disparity value between the first pixel position and the second pixel position and a preset disparity-to-depth mapping matrix, for detecting whether there is an obstacle in the flight direction of the aircraft.
  • The present invention provides a method for acquiring flight positioning information. The method is applied to an aircraft that includes a first camera and a second camera, wherein the first camera is configured to acquire N first real-time images corresponding to N different moments and the second camera is configured to acquire N second real-time images corresponding to the same N moments, where N is a positive integer greater than or equal to 2; the method includes:
  • acquiring the target flight positioning information corresponding to the ending moment of the N different moments.
  • The aircraft adopts a binocular camera to realize positioning: it acquires images corresponding to different moments in real time, analyzes the translation parameters between each pair of adjacent frames, uses these translation parameters to obtain positioning information for each of the two cameras, and finally corrects the positioning information with preset positioning constraints to obtain target flight positioning information closer to the true value. Accurate positioning information can therefore be obtained without an optical-flow camera or a high-precision inertial sensor, which reduces both the positioning error and the cost of the aircraft.
  • The present invention provides an aircraft comprising a first camera and a second camera, wherein the first camera is used to acquire N first real-time images corresponding to N different moments and the second camera is used to acquire N second real-time images corresponding to the same N moments, N being a positive integer greater than or equal to 2.
  • The aircraft includes:
  • a first determining module, configured to determine (N-1) first eigen parameters according to the N first real-time images and (N-1) second eigen parameters according to the N second real-time images, wherein the first eigen parameters represent the translation parameters between adjacent frames of the N first real-time images and the second eigen parameters represent the translation parameters between adjacent frames of the N second real-time images;
  • a first acquiring module, configured to acquire, through the first camera, first initial positioning information at the starting moment of the N different moments, and to acquire, through the second camera, second initial positioning information at that starting moment;
  • a second determining module, configured to determine (N-1) pieces of first flight positioning information corresponding to (N-1) moments according to the first initial positioning information and the (N-1) first eigen parameters, and to determine (N-1) pieces of second flight positioning information corresponding to the (N-1) moments according to the second initial positioning information and the (N-1) second eigen parameters, wherein the (N-1) moments are the moments other than the starting moment among the N different moments;
  • a second acquiring module, configured to acquire the target flight positioning information corresponding to the ending moment of the N different moments according to the (N-1) pieces of first flight positioning information and the (N-1) pieces of second flight positioning information.
  • The present invention provides a method for acquiring flight positioning information. The method is applied to an aircraft that includes a first camera and a second camera, wherein the first camera is configured to acquire N first real-time images corresponding to N different moments and the second camera is configured to acquire N second real-time images corresponding to the same N moments, where N is a positive integer greater than or equal to 2; the method includes:
  • the aircraft determines (N-1) first eigen parameters according to the N first real-time images and (N-1) second eigen parameters according to the N second real-time images, wherein the first eigen parameters represent the translation parameters between adjacent frames of the N first real-time images and the second eigen parameters represent the translation parameters between adjacent frames of the N second real-time images;
  • the aircraft acquires, through the first camera, first initial positioning information at the starting moment of the N different moments, and acquires, through the second camera, second initial positioning information at that starting moment;
  • the aircraft determines (N-1) pieces of first flight positioning information corresponding to (N-1) moments according to the first initial positioning information and the (N-1) first eigen parameters, and determines (N-1) pieces of second flight positioning information corresponding to the (N-1) moments according to the second initial positioning information and the (N-1) second eigen parameters, wherein the (N-1) moments are the moments other than the starting moment among the N different moments;
  • the aircraft acquires the target flight positioning information corresponding to the ending moment of the N different moments according to the (N-1) pieces of first flight positioning information and the (N-1) pieces of second flight positioning information.
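  • The description above does not fix a particular accumulation procedure. The following Python sketch only illustrates the idea behind these steps, assuming that each eigen parameter reduces to a 3D translation vector between adjacent frames and using plain averaging of the two cameras' trajectories as a stand-in for the preset positioning constraint; both are illustrative assumptions rather than details taken from this text.

```python
import numpy as np

def integrate_positions(initial_position, frame_translations):
    """Accumulate the (N-1) per-frame translation parameters into (N-1) positions."""
    positions = []
    current = np.asarray(initial_position, dtype=np.float64)
    for t in frame_translations:            # one translation per pair of adjacent frames
        current = current + np.asarray(t, dtype=np.float64)
        positions.append(current.copy())
    return positions

def fuse_trajectories(first_positions, second_positions):
    """Combine the two cameras' estimates; plain averaging stands in for the positioning constraint."""
    return [(p1 + p2) / 2.0 for p1, p2 in zip(first_positions, second_positions)]

# Under these assumptions, the target flight positioning information for the ending
# moment would be the last element returned by fuse_trajectories(...).
```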
  • The present invention provides a method for acquiring flight height information. The method is applied to an aircraft that comprises a first camera and a second camera, wherein the first camera is used to acquire a first real-time image and the second camera is used to acquire a second real-time image; the method comprises:
  • acquiring the flight height information according to the depth value corresponding to each preset area and the current flight attitude information of the aircraft.
  • The binocular camera is used to measure the height information of the aircraft; compared with height information from a barometer, the measurement accuracy is not degraded when the aircraft itself is disturbed by airflow.
  • The binocular camera can also capture various kinds of complex terrain and calculate height information adapted to the different terrains, which improves the measurement accuracy, and a binocular camera has a cost advantage over a depth camera.
  • The present invention provides an aircraft comprising a first camera and a second camera, wherein the first camera is configured to acquire a first real-time image and the second camera is configured to acquire a second real-time image; the aircraft further comprises:
  • a first acquiring module, configured to acquire a first depth image according to the first real-time image and a second depth image according to the second real-time image;
  • a first determining module, configured to determine a target fused image according to the first depth image and the second depth image acquired by the first acquiring module, where the target fused image includes at least one preset region;
  • a second determining module, configured to determine the depth value corresponding to each preset area in the target fused image obtained by the first determining module;
  • a second acquiring module, configured to acquire the flight height information according to the depth value corresponding to each preset area determined by the second determining module and the current flight attitude information of the aircraft.
  • The present invention provides a method for acquiring flight height information. The method is applied to an aircraft that includes a first camera and a second camera, wherein the first camera is used to acquire a first real-time image and the second camera is used to acquire a second real-time image; the method includes:
  • the aircraft acquires a first depth image according to the first real-time image and a second depth image according to the second real-time image;
  • the aircraft determines a target fused image according to the first depth image and the second depth image, where the target fused image includes at least one preset region;
  • the aircraft determines the depth value corresponding to each preset area in the target fused image;
  • the aircraft acquires the flight height information according to the depth value corresponding to each preset area and the current flight attitude information of the aircraft.
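  • For illustration only, the following sketch shows one way a per-region depth value and the current attitude could be combined into a height estimate; the downward-looking geometry and the cosine correction are assumptions made for this example rather than a formula given in this description.

```python
import math

def height_from_region_depth(region_depth, pitch_rad, roll_rad):
    """Convert a downward-looking depth value into a flight height estimate.

    Assumes the binocular camera points straight down at zero pitch and roll;
    the cosine correction is an illustrative model, not a formula quoted from this text.
    """
    return region_depth * math.cos(pitch_rad) * math.cos(roll_rad)
```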
  • The invention provides an apparatus comprising:
  • a memory, configured to store program code and transmit the program code to the processor; and
  • a processor, configured to perform, according to the instructions in the program code, the obstacle detection method of the aircraft, the method for acquiring flight positioning information, and the method for acquiring flight height information described above.
  • The present invention provides a storage medium for storing program code for performing the above-described obstacle detection method of an aircraft, method for acquiring flight positioning information, and method for acquiring flight height information.
  • The present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the obstacle detection method of the aircraft, the method for acquiring flight positioning information, and the method for acquiring flight height information described above.
  • FIG. 1 is a flow chart showing an obstacle detecting method of an aircraft according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram showing the entire workflow of binocular stereoscopic obstacle detection according to an embodiment of the present invention
  • FIG. 3 is a flow chart showing an image processing link in binocular stereoscopic obstacle detection according to an embodiment of the present invention
  • FIG. 4 is a flow chart showing a calculation process of a disparity value in binocular stereoscopic obstacle detection according to an embodiment of the present invention
  • FIG. 5a is a schematic structural diagram of an obstacle detecting device of an aircraft according to an embodiment of the present invention.
  • FIG. 5b is a schematic structural diagram of an obstacle detecting device of another aircraft according to an embodiment of the present invention.
  • FIG. 5c is a schematic diagram showing the structure of an obstacle detecting device of another aircraft according to an embodiment of the present invention.
  • Figure 5d is a block diagram showing the structure of an obstacle detecting device of another aircraft according to an embodiment of the present invention.
  • FIG. 5e is a schematic structural diagram of a disparity calculation module according to an embodiment of the present invention.
  • FIG. 5f is a schematic structural diagram of a depth calculation module according to an embodiment of the invention.
  • Figure 5g is a block diagram showing the structure of an obstacle detecting device of another aircraft according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing the structure of an obstacle detecting method of an aircraft applied to an aircraft according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an embodiment of a method for acquiring flight location information according to an embodiment of the present invention.
  • Figure 8 is a schematic view of an aircraft mounted with a binocular camera in accordance with an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing positioning of a binocular camera according to an embodiment of the present invention.
  • FIG. 10 is a schematic flowchart diagram of acquiring target flight positioning information according to an embodiment of the present invention.
  • Figure 11 is a schematic diagram showing the workflow of the binocular camera in the application scenario
  • Figure 12 is a schematic view showing an embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 13 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 14 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 15 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 16 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 17 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 18 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • FIG. 19 is a schematic diagram showing an embodiment of a method for acquiring flight height information according to an embodiment of the present invention.
  • FIG. 20 is a schematic view of an aircraft equipped with a binocular camera in accordance with an embodiment of the present invention.
  • FIG. 21 is a schematic diagram showing obtaining a disparity value between left and right images according to an embodiment of the present invention.
  • FIG. 22 is a schematic flowchart diagram of acquiring image depth values according to an embodiment of the present invention.
  • Figure 23 is a schematic diagram showing the workflow of the binocular camera according to the application scenario.
  • Figure 24 is a schematic view showing an embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 25 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 26 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 27 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • FIG. 28 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 29 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 30 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 31 is a schematic view showing another embodiment of an aircraft according to an embodiment of the present invention.
  • Figure 32 is a block diagram showing the structure of an aircraft in accordance with an embodiment of the present invention.
  • One embodiment of the obstacle detection method for an aircraft provided by the present invention can be specifically applied to a target obstacle avoidance scene during flight of an aircraft.
  • The aircraft (unmanned aerial vehicle, UAV) is a vehicle that uses wireless remote control or program control to perform specific aviation missions. It refers to a powered air vehicle that does not carry an operator, uses aerodynamic forces to provide the required lift, is capable of automatic flight or remote guidance, can be expendable or recoverable, and can carry lethal or non-lethal payloads.
  • the aircraft may specifically be a drone, or may be a remote control aircraft, a model airplane, or the like.
  • Images of the target obstacle are captured by the binocular camera provided on the aircraft, and the distance between the obstacle and the aircraft can then be determined by calculating the disparity value and the depth value from the images taken by the left and right eyes.
  • Because obstacles are detected by analyzing and computing on the images, no additional components need to be built into the aircraft, which is conducive to the miniaturization of the aircraft. Please refer to FIG. 1.
  • FIG. 1 shows an obstacle detecting method of the aircraft of the embodiment, which may include the following steps:
  • the aircraft performs real-time image acquisition on the target obstacle through the binocular camera configured by the aircraft to obtain the first image and the second image.
  • the first image is captured by the left eye of the binocular camera
  • the second image is captured by the right eye of the binocular camera.
  • The aircraft can perform real-time detection of a target obstacle appearing ahead of it. The binocular camera is disposed in the aircraft, and the left and right eyes (i.e. the two cameras) of the binocular camera capture the target obstacle in real time and generate the corresponding images.
  • the aircraft can capture the target obstacle through the existing binocular camera in the aircraft.
  • the binocular camera configured by the aircraft can passively receive visible light, so it is not subject to strong light interference.
  • the depth information of the object can be well estimated, which overcomes the defects of laser radar and ultrasonic waves.
  • the binocular camera used in this embodiment is an ordinary camera, so the hardware cost is much lower than that of the laser radar.
  • The two cameras of the binocular camera capture the same target obstacle at the same time, so that two images are obtained.
  • The image captured by the left eye of the binocular camera is defined as the "first image".
  • The image captured by the right eye of the binocular camera is defined as the "second image".
  • The terms first image and second image are used only to distinguish the images captured by the two cameras.
  • The aircraft determines a first pixel position at which the target obstacle is projected in the first image and a second pixel position at which the target obstacle is projected in the second image, and calculates the disparity value between the first pixel position and the second pixel position according to the two positions.
  • The first image and the second image are the two images obtained by the binocular camera capturing the same target obstacle at the same time; when the same target obstacle is projected into the left and right cameras of the binocular camera, there will be some difference in its location.
  • The projection position of the target obstacle in the first image is defined as the "first pixel position".
  • The projection position of the target obstacle in the second image is defined as the "second pixel position".
  • The projection of the same target obstacle in each camera has a pixel position, and between the pixel positions in the left and right cameras there is an offset, which is the disparity value between the first pixel position and the second pixel position.
  • binocular stereo vision can be used to calculate the disparity value between two pixel positions.
  • the camera can be used to acquire two images of the obstacle to be measured from different positions, and the three-dimensional geometric information of the object can be obtained by calculating the positional deviation between the corresponding points of the image.
  • Binocular stereo vision combines the images obtained by the two cameras and observes the differences between them; it can obtain a clear sense of depth, establish correspondences between features, and map the same physical point in space to corresponding points in the different images. This difference is also called the disparity (parallax) image.
  • In step 102A, determining the first pixel position at which the target obstacle is projected in the first image and the second pixel position at which the target obstacle is projected in the second image may comprise:
  • A1. Determine an image selection window according to the body size image formed by the aircraft in the binocular camera, where the total pixel value of the image selection window is larger than the total pixel value of the body size image, smaller than the total pixel value of the first image, and smaller than the total pixel value of the second image;
  • A2. Use the image selection window to select, from the first image and the second image respectively, a first sub-image and a second sub-image corresponding to the image selection window;
  • A3. Use the Semi-Global Block Matching (SGBM) algorithm to match image points of the target obstacle captured in the first sub-image and the second sub-image respectively, and determine, from the successfully matched image points, the first pixel position at which the target obstacle is projected in the first sub-image and the second pixel position at which it is projected in the second sub-image.
  • The image selection window may be determined according to the body size image formed by the aircraft in the binocular camera. Obstacles outside the flight path do not affect the flight of the aircraft, and the aircraft only needs to ensure that obstacles ahead of it in the flight direction are detected in real time. Therefore, in this embodiment of the present invention, an image selection window may be determined according to the size of the aircraft body, and the image selection window is used to crop the first image and the second image and to select the first sub-image and the second sub-image corresponding to the image selection window.
  • the first sub-image is image content of the same size as the image selection window in the first image
  • the second sub-image is image content of the same size as the image selection window in the second image.
  • The size of the image selection window only needs to be larger than the actual size of the aircraft, so that the aircraft will not collide with anything as long as no obstacle is detected inside the window.
  • In step A3, only the disparity values inside the image selection window need to be calculated; disparity values outside the image selection window need not be calculated, which greatly reduces the overhead of image processing resources.
  • The SGBM algorithm may be used to match image points of the target obstacle captured in the first sub-image and the second sub-image respectively.
  • The SGBM algorithm can perform image point matching between the two images based on OpenCV; combined with the window selection applied to the original images in steps A1 and A2, the SGBM algorithm only needs to calculate disparity values inside the image selection window.
  • Other stereo matching algorithms, such as the BM algorithm and the GC algorithm in OpenCV 2.1, may alternatively be used; this is not limited here.
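  • As a concrete sketch of steps A1 to A3, the following Python code (using OpenCV's StereoSGBM implementation) crops both rectified grayscale images with a selection window and computes disparities only inside it; the window coordinates and the SGBM parameter values are illustrative assumptions rather than values fixed by this description.

```python
import cv2
import numpy as np

def disparity_in_selection_window(left_gray, right_gray, win_x, win_y, win_w, win_h):
    """Crop both rectified grayscale images to the selection window and run SGBM only there.

    The window coordinates and the SGBM parameter values below are illustrative assumptions;
    the description only requires the window to be larger than the aircraft body image.
    """
    left_roi = left_gray[win_y:win_y + win_h, win_x:win_x + win_w]
    right_roi = right_gray[win_y:win_y + win_h, win_x:win_x + win_w]

    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,        # must be a multiple of 16
        blockSize=9,
        P1=8 * 9 * 9,
        P2=32 * 9 * 9,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16
    return sgbm.compute(left_roi, right_roi).astype(np.float32) / 16.0
```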
  • In an embodiment, the obstacle detection method of the aircraft further includes the following.
  • Before determining the first pixel position at which the target obstacle is projected in the first image and the second pixel position at which the target obstacle is projected in the second image, the images may be pre-processed; for example, scaling processing, cropping processing and grayscale histogram equalization processing may be performed.
  • The images obtained by the cameras capturing the target obstacle may each be scaled to a ratio suitable for target obstacle recognition; for example, the images may be enlarged or reduced.
  • When cropping, several pixels at the edges of the left and right images can be removed, which reduces the amount of computation for visual processing.
  • If the pixels of an image occupy a large number of gray levels and are evenly distributed, the image tends to have high contrast and rich gray tones; the grayscale images can therefore be equalized.
  • Taking histogram equalization as an example, a transform function derived from the input image's histogram information can automatically achieve such a processing effect.
  • When the disparity calculation is then performed, the images required are the equalized grayscale images, and the first pixel position and the second pixel position are obtained by detecting the projection of the target obstacle in the grayscale images acquired by the left and right cameras.
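  • A minimal preprocessing sketch in Python with OpenCV, assuming the 320×240 target size and the roughly 20-pixel edge crop that are given as examples later in this description; these particular numbers are illustrative, not required values.

```python
import cv2

def preprocess(image, target_size=(320, 240), edge_crop=20):
    """Scale, crop the left and right edges, convert to grayscale, and equalize the histogram."""
    resized = cv2.resize(image, target_size, interpolation=cv2.INTER_AREA)
    cropped = resized[:, edge_crop:target_size[0] - edge_crop]   # drop hard-to-match edge columns
    gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)                                # grayscale histogram equalization
```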
  • After step 101, in which the first image and the second image are obtained by the binocular camera configured on the aircraft, the obstacle detection method provided by this embodiment of the present invention may further include:
  • acquiring the internal parameter information and the external parameter information of the binocular camera, where the internal parameter information includes: the radial distortion parameters and tangential distortion parameters of the left eye, and the radial distortion parameters and tangential distortion parameters of the right eye; and the external parameter information includes: the rotation matrix and the offset matrix between the left eye and the right eye of the binocular camera;
  • The images acquired by the binocular camera can also be corrected, which includes distortion correction of the images and alignment of the images.
  • The remap function of OpenCV can be used to correct and align the images according to the internal and external parameters obtained from the earlier camera calibration.
  • After the remap function, the left and right eye images lie on the same horizontal line in the mathematical sense.
  • The external parameter information of the binocular camera includes the rotation matrix and the offset matrix; the first image and the second image are aligned and corrected with the rotation matrix and the offset matrix,
  • so that the first image and the second image lie on the same horizontal line.
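  • The correction and alignment step can be sketched with OpenCV as follows; the variable names (camera matrices, distortion coefficients, and the rotation R and offset T from stereo calibration) are placeholders for the internal and external parameters described above, not identifiers taken from this text.

```python
import cv2

def build_rectification_maps(K1, D1, K2, D2, R, T, image_size):
    """Compute undistort-and-rectify maps plus the disparity-to-depth matrix Q.

    K1/K2: left/right camera matrices, D1/D2: distortion coefficients (internal parameters);
    R/T: rotation and offset between the two cameras (external parameters).
    """
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, image_size, R, T)
    left_maps = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
    right_maps = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
    return left_maps, right_maps, Q

def rectify_pair(left, right, left_maps, right_maps):
    """Remap both images so that corresponding rows lie on the same horizontal line."""
    left_rect = cv2.remap(left, left_maps[0], left_maps[1], cv2.INTER_LINEAR)
    right_rect = cv2.remap(right, right_maps[0], right_maps[1], cv2.INTER_LINEAR)
    return left_rect, right_rect
```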
  • The disparity-to-depth mapping matrix of the left and right cameras can be determined in advance; then, according to the inverse relationship between the disparity value and the depth value, the depth value from the binocular camera to the target obstacle can be calculated.
  • The depth value of the target obstacle refers to the perpendicular distance between the plane in which the target obstacle lies and the binocular camera; from the calculated depth value it can be determined how far away, in the flight direction of the aircraft, an obstacle is located.
  • The embodiment can also include:
  • D1. Send the depth value from the binocular camera to the target obstacle to the flight control module of the aircraft, and have the flight control module determine, according to that depth value, whether there is an obstacle in the flight direction.
  • The flight control module can determine, according to the depth value, whether there is an obstacle in its flight direction and, when there is, the distance of the aircraft from the target obstacle.
  • Step 103A, calculating the depth value from the binocular camera to the target obstacle according to the disparity value between the first pixel position and the second pixel position and the preset disparity-to-depth mapping matrix, may include the following steps:
  • The image selection window is determined according to the body size image formed by the aircraft in the binocular camera and is applied to both the first image and the second image, from which it cuts out the first sub-image and the second sub-image respectively. Therefore, in step E1, only the depth values of the pixels in the first sub-image and the second sub-image need to be calculated; the depth values of the pixels outside the image selection window in the first image and the second image need not be calculated, so the computational resources required for the depth calculation can be greatly reduced, for example the load on the Central Processing Unit (CPU).
  • In step E1, the depth value of each pixel in the image selection window may be obtained by a matrix multiplication using the disparity-to-depth mapping matrix, which yields the actual three-dimensional point position.
  • The stereoRectify function provided by OpenCV can be used to obtain the mapping matrix and, from it, the depth values of the pixel points.
  • The image selection window is divided into a plurality of image sub-windows, for example into 4×4 sub-windows.
  • From the depth values of all the pixel points of each image sub-window, the smallest depth value may be selected as the depth value of that sub-window; this value indicates the distance, within the sub-window, to the obstacle closest to the aircraft.
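  • A sketch of steps E1 and E2 in Python: per-pixel depth inside the selection window is recovered from the disparity map with the mapping matrix Q, the window is split into a grid of sub-windows, and the minimum depth of each sub-window is taken. The 4×4 grid and the use of cv2.reprojectImageTo3D for the matrix multiplication are assumptions consistent with, but not mandated by, this description.

```python
import cv2
import numpy as np

def subwindow_min_depths(disparity, Q, grid=(4, 4)):
    """Return the minimum depth (distance to the nearest obstacle) of each image sub-window.

    disparity: float disparity map of the selection window; Q: disparity-to-depth mapping matrix.
    """
    points_3d = cv2.reprojectImageTo3D(disparity, Q)   # per-pixel (X, Y, Z) in camera coordinates
    depth = points_3d[:, :, 2]
    depth = np.where(disparity > 0, depth, np.inf)     # ignore pixels without a valid disparity

    rows, cols = grid
    h, w = depth.shape
    mins = np.empty(grid, dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            block = depth[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            mins[r, c] = block.min()
    return mins
```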
  • the embodiment may further include:
  • E4 Send the depth value of each image sub-window to the flight control module of the aircraft, and select, by the flight control module, the obstacle avoidance direction according to the depth value of each image sub-window, and then adjust the flight attitude of the aircraft.
  • A depth value is calculated for each of the image sub-windows into which the image selection window is divided, and the depth values of all the image sub-windows may also be sent to the flight control module.
  • the flight control module adjusts the flight attitude of the aircraft after selecting the obstacle avoidance direction according to the depth value of each image sub-window.
  • The flight attitude of the aircraft may refer to its orientation, height and position. When the aircraft performs obstacle-avoidance flight, what is mainly controlled is the positional movement of the aircraft so as to maintain an appropriate distance from the target obstacle. For example, adjusting the flight attitude may simply steer the aircraft forward, or may control the aircraft to perform a roll.
  • In summary, the first image and the second image are obtained by real-time image acquisition of the target obstacle with the binocular camera configured on the aircraft, wherein the first image is captured by the left eye of the binocular camera and the second image is captured by the right eye of the binocular camera.
  • Real-time detection of forward obstacles is realized with the aircraft's built-in binocular camera: no additional device needs to be added to the aircraft, no restriction needs to be placed on the flight scene or on the shape of the obstacle, and by analyzing and computing on the images the depth value from the binocular camera to the target obstacle can be calculated accurately, which reduces the obstacle detection error of the aircraft and improves its obstacle detection accuracy.
  • FIG. 2 is a schematic diagram showing the entire workflow of the binocular stereoscopic obstacle detection provided by the embodiment of the present invention.
  • the camera can be calibrated after installing the binocular camera on the drone.
  • a single camera needs to be calibrated for the purpose of obtaining radial distortion (such as barrel distortion) and tangential distortion parameters of the camera, called intrinsic parameters.
  • Binocular stereo vision obstacle avoidance requires that the left-eye and right-eye cameras be mounted on the same horizontal line, spaced between 6 cm and 10 cm apart. If the spacing is less than 6 cm, the disparity in the images is too small to yield a reasonable depth value; if the spacing is too large, objects that are very close to the cameras cannot be matched. A camera pair as physically installed cannot lie on exactly the same horizontal line in the mathematical sense, so stereo calibration must also be performed.
  • stereo calibration can use the Zhang Zhengyou calibration method, so that the rotation matrix and the offset matrix between the two lenses can be obtained.
  • This set of values constitutes the extrinsic parameters of the camera. After an image is acquired, it is first undistorted using the internal parameters, and then the external parameters are used to rotate and translate the image onto the same horizontal line as required mathematically.
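  • As an illustration of this calibration step, the following Python sketch assumes a planar checkerboard target and OpenCV's implementation of the Zhang Zhengyou method; the collection of object and image points is omitted and the variable names are placeholders.

```python
import cv2

def stereo_calibrate(object_points, left_points, right_points, image_size):
    """Calibrate each camera (intrinsics) and then the stereo pair (extrinsics R and T).

    object_points: list of 3D checkerboard corner arrays, one per view;
    left_points / right_points: the corresponding 2D corners detected in each camera.
    """
    # Single-camera calibration yields the camera matrix and the radial/tangential distortion.
    _, K1, D1, _, _ = cv2.calibrateCamera(object_points, left_points, image_size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(object_points, right_points, image_size, None, None)

    # Stereo calibration keeps the intrinsics fixed and estimates the rotation matrix R and
    # offset matrix T between the two lenses (the external parameters).
    _, K1, D1, K2, D2, R, T, E, F = cv2.stereoCalibrate(
        object_points, left_points, right_points,
        K1, D1, K2, D2, image_size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, D1, K2, D2, R, T
```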
  • the drone collects real-time images of the left and right eyes through its binocular camera.
  • the real-time images of the left and right eyes are passed through the image depth calculation module to generate corresponding depth values.
  • The drone determines, based on the depth values, whether there is an obstacle in the flight direction. If there is, the depth value of the current obstacle, i.e. its distance, is sent to the flight control module of the drone.
  • the depth value of the obstacle calculated in this embodiment refers to the vertical distance between the plane where the obstacle is located and the binocular camera.
  • FIG. 3 is a schematic flowchart of an image processing link in binocular stereoscopic obstacle detection according to an embodiment of the present invention.
  • The drone calculates the depth information of the scene through its stereo vision module.
  • The workflow is divided into image scaling and cropping, image distortion compensation, image alignment, disparity calculation and depth value calculation.
  • First, the image is scaled and cropped.
  • Because the UAV uses binocular vision to detect obstacles, high-precision pictures are not needed, so the pictures captured by the binocular camera can be scaled to a 320×240 format. Because of the parallax between the left and right eyes, the edges of the left and right images are difficult to match, so about 20 pixels can be cropped from the edges of both images during processing, which reduces the amount of computation for visual processing.
  • Image correction is then performed, which includes distortion correction of the image and alignment of the image.
  • The remap function of OpenCV can be used to correct and align the images according to the internal and external parameters obtained from the earlier camera calibration.
  • After the remap function, the left and right eye images lie on the same horizontal line in the mathematical sense.
  • Correction therefore does two things: one is to correct the distortion of each single picture, and the other is to translate and rotate the two pictures so that they lie on the same horizontal level in the mathematical sense.
  • the depth value calculation of the binocular vision first needs to obtain the disparity value between the corresponding points of the left and right images.
  • the same object in the real world is projected into the left and right cameras, and there are some differences in pixel positions.
  • The projection of the same point in real space into each camera has a pixel position, and between the pixel positions in the left and right cameras there is an offset, which is the disparity.
  • FIG. 4 is a schematic flowchart of a disparity calculation link in binocular stereoscopic obstacle detection according to an embodiment of the present invention.
  • The projections of a physical point P into the left and right cameras are the points XL and XR, respectively. Because binocular vision requires the two images to lie on the same horizontal line, their Y values are the same.
  • The disparity is XL - XR.
  • f denotes the focal length of the left and right cameras,
  • Tx denotes the displacement (baseline) between the two cameras, and
  • Z denotes the depth value of the point P.
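  • In LaTeX form, the standard rectified-stereo relation behind FIG. 4 (stated here for completeness; it is the usual similar-triangles result rather than text quoted from the figure) is:

```latex
d = X_L - X_R, \qquad Z = \frac{f \, T_x}{d}
```

  • where f is the focal length in pixels, Tx is the baseline between the two cameras, and d is the disparity.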
  • the SGBM algorithm provided by OpenCV is taken as an example to describe the matching of image points and the calculation of disparity values.
  • To ensure real-time performance of the image processing calculation on the embedded device, SGBM matching is not performed on the entire image.
  • A three-dimensional projection calculation can be used to obtain an image selection window whose size only needs to be larger than the actual size of the drone, so that the drone will not collide with an obstacle as long as no obstacle is detected inside the window. Only the disparity values inside the window need to be calculated; disparity values outside the window need not be, which greatly reduces the CPU overhead.
  • The depth value is obtained by a matrix multiplication of the disparity value with the disparity-to-depth mapping matrix, which yields the actual three-dimensional point position.
  • The calculation multiplies the homogeneous pixel vector (x, y, disparity(x, y), 1) by the Q matrix, as written out below.
  • x, y are the coordinates at which the point in actual three-dimensional space is projected in the image, in pixels;
  • disparity(x, y) represents the disparity value at the pixel point (x, y);
  • the Q matrix is the disparity-to-depth mapping matrix, which is calculated from the camera's internal and external parameters.
  • Q is expressed in terms of Tx, f, Cx and Cy and is obtained from the calibration of the camera, where Tx is the horizontal offset between the two cameras, f is the focal length, and Cx and Cy are internal parameters indicating the offsets of the optical center and the principal point.
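  • Written out, the reprojection and the form of Q produced by OpenCV's stereoRectify (given here as the commonly used form consistent with the variables named above, since the original equation is not reproduced in this text) are:

```latex
\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix}
= Q \begin{bmatrix} x \\ y \\ \mathrm{disparity}(x,y) \\ 1 \end{bmatrix},
\qquad
Q = \begin{bmatrix}
1 & 0 & 0 & -C_x \\
0 & 1 & 0 & -C_y \\
0 & 0 & 0 & f \\
0 & 0 & -1/T_x & (C_x - C_x')/T_x
\end{bmatrix}
```

  • Dividing by W gives the metric coordinates (X/W, Y/W, Z/W); when the principal points of the two rectified cameras coincide this reduces, up to sign convention, to Z = f·Tx / disparity(x, y), matching the inverse relationship between disparity and depth mentioned earlier.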
  • The depth values of all the pixels in the image selection window (in physical units, for example meters) are obtained by the binocular vision module, and the image selection window is equally divided into 3×3 image sub-windows.
  • For each sub-window, the minimum depth value is obtained: the minimum of the depth values of all the pixel points in the sub-window is the depth minimum of that sub-window, and it indicates the distance to the obstacle nearest to the drone within the sub-window, where the distance between an obstacle and the camera is measured along a line perpendicular to the plane of the obstacle and parallel to the main optical axis.
  • If the distance is less than a certain threshold (for example, 1 meter), it means that the drone is about to collide with the obstacle.
  • the minimum depth value for each sub-window may be different, which can help the drone flight control system determine which direction to avoid.
  • the depth values of all sub-windows can be sent to the flight control system.
  • For example, a threshold such as 1.5 meters can be set: as long as there is an image sub-window with a depth value of less than 1.5 meters, it means that the drone would hit an obstacle after flying 1.5 meters. Then, according to the situation in the other image sub-windows, the drone can decide in which direction to turn to avoid the obstacle; for example, if the left sub-window reads 3 meters, the drone can avoid the obstacle by turning left. If all image sub-windows read less than 1.5 meters, random steering is used to avoid the obstacle.
  • The above obstacle avoidance strategy is only the simplest one; obstacle avoidance can also be realized by combining artificial intelligence, positioning, and maps.
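  • The simple threshold strategy described above can be sketched as follows; the 1.5 meter threshold and the left or right choice follow the example in the text, the use of the leftmost and rightmost sub-window columns as the decision cue is an illustrative assumption, and the random steering fallback mirrors the last sentence.

```python
import random

def choose_avoidance_direction(subwindow_depths, threshold=1.5):
    """Pick a steering direction from a grid of minimum sub-window depths (in meters)."""
    flat = [d for row in subwindow_depths for d in row]
    if min(flat) >= threshold:
        return "forward"                        # nothing within the threshold, keep flying straight

    cols = len(subwindow_depths[0])
    left = min(row[0] for row in subwindow_depths)          # clearance in the leftmost column
    right = min(row[cols - 1] for row in subwindow_depths)  # clearance in the rightmost column
    if left >= threshold and left >= right:
        return "left"                           # e.g. the left sub-window reads 3 m: turn left
    if right >= threshold:
        return "right"
    return random.choice(["left", "right"])     # all sub-windows blocked: random steering
```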
  • the real-time detection of the forward obstacle is realized by the built-in binocular camera of the drone.
  • By dividing the image into sub-windows, depth values from the front of the drone to different positions are obtained, and the flight control module of the drone uses them to control the steering of the drone.
  • An obstacle detection device 500A for an aircraft may include: an image acquisition module 501A, a disparity calculation module 502A, and a depth calculation module 503A.
  • the image acquisition module 501A is configured to perform real-time image acquisition on the target obstacle by the binocular camera configured by the aircraft to obtain the first image and the second image, wherein the first image is captured by the left eye of the binocular camera, The second image is taken by the right eye in the binocular camera;
  • the disparity calculation module 502A is configured to determine a first pixel position in which the target obstacle is projected in the first image, and a second pixel position in which the target obstacle is projected in the second image, and according to the first pixel position and the second pixel position Calculating a disparity value between the first pixel location and the second pixel location;
  • a depth calculation module 503A, configured to calculate the depth value from the binocular camera to the target obstacle according to the disparity value between the first pixel position and the second pixel position and a preset disparity-to-depth mapping matrix, for detecting whether there is an obstacle in the flight direction of the aircraft.
  • the obstacle detecting device 500A of the aircraft further includes: an image preprocessing module 504A, wherein
  • The image pre-processing module 504A is configured to, after the binocular camera configured on the aircraft performs real-time image acquisition on the target obstacle to obtain the first image and the second image, perform scaling processing and cropping processing on the first image and the second image respectively; and to convert the processed first image and second image into a first grayscale image and a second grayscale image respectively, and perform equalization processing on the first grayscale image and the second grayscale image respectively;
  • the disparity calculation module 502A is specifically configured to determine, from the equalized first grayscale image, the first pixel position at which the target obstacle is projected, and to determine, from the equalized second grayscale image, the second pixel position at which the target obstacle is projected.
  • the obstacle detecting device 500A of the aircraft further includes:
  • an acquiring module 504A, configured to, after the binocular camera configured on the aircraft performs real-time image acquisition on the target obstacle to obtain the first image and the second image, acquire the internal parameter information and the external parameter information of the binocular camera, where the internal parameter information includes: the radial distortion parameters and tangential distortion parameters of the left eye, and the radial distortion parameters and tangential distortion parameters of the right eye; and the external parameter information includes: the rotation matrix and the offset matrix between the left eye and the right eye of the binocular camera;
  • the distortion compensation module 505A is configured to perform distortion compensation on the first image and the second image respectively according to the internal parameter information, to obtain a first image after the distortion compensation is completed and a second image after the distortion compensation is completed;
  • the correction module 506A is configured to perform, according to the external parameter information, image correction processing on the distortion-compensated first image and the distortion-compensated second image so that they lie on the same horizontal plane.
  • the obstacle detecting device 500A of the aircraft further includes:
  • the first sending module 507A is configured to: after the depth calculating module 503A calculates the depth value of the binocular camera from the target obstacle according to the disparity value between the first pixel position and the second pixel position and the preset parallax depth mapping matrix, The depth value of the binocular camera from the target obstacle is sent to the flight control module of the aircraft, and the flight control module determines whether there is an obstacle blocking in the flight direction according to the depth value of the binocular camera from the target obstacle.
  • the disparity calculation module 502A includes:
  • the window determining unit 5021A is configured to determine an image selection window according to the body size image formed by the aircraft in the binocular camera, wherein the total pixel value of the image selection window is greater than the total pixel value of the body size image, smaller than the total pixel value of the first image, and smaller than the total pixel value of the second image;
  • the image area selecting unit 5022A is configured to select a first sub image and a second sub image corresponding to the image selection window from the first image and the second image respectively by using an image selection window;
  • the image matching unit 5023A is configured to match image points of the target obstacle captured in the first sub-image and the second sub-image respectively by using the semi-global block matching SGBM algorithm, and to determine, from the successfully matched image points, the first pixel position at which the target obstacle is projected in the first sub-image and the second pixel position at which it is projected in the second sub-image.
  • the depth calculation module 503A includes:
  • a pixel depth value calculation unit 5031A configured to respectively calculate depth values of all pixel points corresponding to the image selection window according to a disparity value between the first pixel position and the second pixel position and a preset parallax depth mapping matrix;
  • a sub-window depth value calculation unit 5032A configured to divide the image selection window into a plurality of image sub-windows, and respectively calculate depth values of each image sub-window according to depth values of all pixel points corresponding to the image selection window;
  • the depth value determining unit 5033A is configured to select an image sub-window with the smallest depth value from the depth values of each image sub-window, and determine that the depth value of the image sub-window with the smallest depth value is the depth value of the binocular camera from the target obstacle.
  • the obstacle detection device 500A of the aircraft further includes:
• the second sending module 508A is configured to: after the depth value determining module determines that the depth value of the image sub-window with the smallest depth value is the depth value of the target obstacle, send the depth value of each image sub-window to the flight control module of the aircraft, so that the flight control module selects an obstacle avoidance direction according to the depth value of each image sub-window and adjusts the flight attitude of the aircraft accordingly.
• As can be seen from the above embodiment, the first image and the second image are obtained by real-time image acquisition of the target obstacle with the binocular camera mounted on the aircraft, where the first image is captured by the left eye of the binocular camera and the second image is captured by the right eye. The first pixel position at which the target obstacle is projected in the first image and the second pixel position at which it is projected in the second image are then determined, the disparity value between the first pixel position and the second pixel position is calculated, and finally the depth value from the binocular camera to the target obstacle is calculated according to this disparity value and the preset disparity-to-depth mapping matrix.
• The embodiment of the invention thus realizes real-time detection of forward obstacles with the binocular camera built into the aircraft, without adding extra devices to the aircraft and without restricting the flight scene or the shape of the obstacles; by analyzing and computing on the images, the depth value from the binocular camera to the target obstacle can be calculated accurately, which reduces the obstacle detection error of the aircraft and improves its obstacle detection accuracy.
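• The pipeline described above (distortion compensation, rectification onto one horizontal plane, SGBM matching inside a selection window, disparity-to-depth conversion, and taking the smallest sub-window depth) can be illustrated with a short OpenCV sketch. This is only an illustrative sketch, not the patented implementation: the calibration dictionary keys (M1, D1, M2, D2, R, T), the window size and the SGBM settings are assumptions, and color input images are assumed.

    import cv2
    import numpy as np

    def obstacle_depth(left_img, right_img, calib, half_win=(80, 80)):
        """Estimate the nearest obstacle depth from one stereo pair (sketch)."""
        h, w = left_img.shape[:2]
        # Rectify both images onto the same horizontal plane using the
        # intrinsic (M, D) and extrinsic (R, T) calibration of the stereo rig.
        R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
            calib["M1"], calib["D1"], calib["M2"], calib["D2"], (w, h),
            calib["R"], calib["T"])
        m1x, m1y = cv2.initUndistortRectifyMap(calib["M1"], calib["D1"], R1, P1, (w, h), cv2.CV_32FC1)
        m2x, m2y = cv2.initUndistortRectifyMap(calib["M2"], calib["D2"], R2, P2, (w, h), cv2.CV_32FC1)
        left_r = cv2.remap(left_img, m1x, m1y, cv2.INTER_LINEAR)
        right_r = cv2.remap(right_img, m2x, m2y, cv2.INTER_LINEAR)

        # Semi-global matching to obtain a disparity map (OpenCV returns
        # fixed-point disparities scaled by 16).
        sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
        disp = sgbm.compute(cv2.cvtColor(left_r, cv2.COLOR_BGR2GRAY),
                            cv2.cvtColor(right_r, cv2.COLOR_BGR2GRAY)).astype(np.float32) / 16.0

        # Disparity-to-depth conversion with the reprojection matrix Q.
        depth = cv2.reprojectImageTo3D(disp, Q)[:, :, 2]

        # Central selection window slightly larger than the body-size image,
        # split into sub-windows; the smallest sub-window depth is reported.
        cy, cx = h // 2, w // 2
        win = depth[cy - half_win[0]:cy + half_win[0], cx - half_win[1]:cx + half_win[1]]
        sub_depths = []
        for s in np.array_split(win, 4, axis=1):
            valid = s[np.isfinite(s) & (s > 0)]   # drop invalid disparities
            if valid.size:
                sub_depths.append(float(np.median(valid)))
        return min(sub_depths) if sub_depths else float("inf")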
  • FIG. 6 is a schematic structural diagram of an aircraft according to an embodiment of the present invention.
• the aircraft 1100 may vary considerably in configuration or performance, and may include one or more central processing units (CPU) 1122 (e.g., one or more processors), a memory 1132, one or more storage media 1130 (e.g., one or more mass storage devices) storing application programs 1142 or data 1144, a camera 1152 and a sensor 1162.
  • the memory 1132 and the storage medium 1130 may be short-term storage or persistent storage.
  • the program stored on storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations in the aircraft.
  • central processor 1122 can be configured to communicate with storage medium 1130 to perform a series of instruction operations in storage medium 1130 on aircraft 1100.
• the aircraft structure illustrated in FIG. 6 does not constitute a limitation on the aircraft; the aircraft may include more or fewer components than illustrated, combine some components, or use a different arrangement of components.
  • Aircraft 1100 may also include one or more power sources 1126, one or more wireless network interfaces 1150, one or more input and output interfaces 1158, and/or one or more operating systems 1141, such as an Android system or the like.
  • the camera 1152 is included in the aircraft.
  • the camera may be a digital camera or an analog camera.
  • the camera 1152 is specifically a binocular camera.
  • the resolution of the camera may be selected according to actual needs.
• the structural components of the camera may include a lens and an image sensor, which can be configured according to the specific scenario.
  • the aircraft may also include sensors 1162, such as motion sensors and other sensors.
• As one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration along each direction (usually three axes) and, at rest, the magnitude and direction of gravity; this can be used to identify the attitude of the aircraft (e.g., measurement of the yaw, roll and pitch angles, and magnetometer attitude calibration) and for related recognition functions.
• the aircraft can also be configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and other sensors, which are not described here again.
• the obstacle detection method steps performed by the aircraft in the above embodiment may be based on the aircraft structure shown in FIG. 6.
• Due to the lack of externally assisted navigation, it is difficult for the aircraft to estimate its own position and motion in an unknown environment; this critical problem needs to be solved during autonomous navigation of the aircraft.
  • the solution to this problem is closely related to the type of aircraft airborne sensor.
• the positioning information of the aircraft can be measured by installing a monocular camera, an optical flow camera or an inertial sensor on the aircraft body, and the positioning information is used for flight control of the aircraft.
• This embodiment provides a method for acquiring flight positioning information and an aircraft, which can obtain target flight positioning information closer to the true value; accurate positioning information can still be obtained without using an optical flow camera or a high-precision inertial sensor, reducing the error while also reducing the cost of the aircraft.
• Positioning hover of a drone can realize automatic hovering with an error within about 10 cm vertically and 1 m horizontally; beyond that range, manual fine-tuning is required.
• Automatic hovering of the drone essentially means fixing it at a preset height and horizontal position. In other words, to realize the hovering action, the drone must first read its own position, i.e., a set of three-dimensional coordinates; accurately determining the position information of the drone is the premise and basis for it to complete the positioning hover.
• Positioning with a Global Positioning System (GPS) module: after integrating the position information of at least 4 satellites, GPS can realize spatial positioning of the drone.
  • the use of GPS-centric, assisted positioning methods with various sensors is the mainstream positioning solution adopted by today's drones.
• In order to cope with the error caused by the Selective Availability (SA) technology in the GPS system, the GPS module carried by the drone usually uses differential GPS technology to improve the positioning accuracy.
• Positioning with a vision system: continuous shooting by the onboard camera provides continuous image frames for the navigation system. A feature tracker obtains natural landmark information from two consecutive image frames and measures the displacement of pairs of natural features. By periodically recording new feature points and comparing repeated feature points, the homography matrix describing the three-dimensional geometric projection between the captured image sequences can be estimated, so that positioning of the drone can be realized.
• Radio positioning receives, with a receiver, the radio signal sent by a navigation station whose position is precisely known, and calculates the time interval between transmission and reception of the signal to obtain the relative distance between the navigation station and the target, thereby determining the location.
• Since the vision system does not need to receive GPS signals, the UAV can be kept stable by cooperating with components such as inertial sensors even without a GPS signal, so UAVs using this scheme can work in areas with obvious environmental features, such as rivers, houses and other working environments.
  • the present invention mainly uses a vision system for positioning, which will be described in detail below.
  • FIG. 7 is a schematic flowchart diagram of a method for acquiring flight location information according to an embodiment of the present invention, where the method includes:
• The aircraft, which includes a first camera and a second camera, determines (N-1) first eigen parameters according to the N first real-time images and determines (N-1) second eigen parameters according to the N second real-time images, where the first camera is configured to acquire N first real-time images corresponding to N different times, the second camera is configured to acquire N second real-time images corresponding to the same N different times, and N is a positive integer greater than or equal to 2; the first eigen parameters represent the translation parameters between adjacent frames of the N first real-time images, and the second eigen parameters represent the translation parameters between adjacent frames of the N second real-time images.
  • the aircraft includes a set of binocular cameras, that is, two cameras, which are respectively defined as a first camera and a second camera.
  • the binocular camera can provide depth information and positioning information at the same time.
  • the depth information mainly refers to the height information.
• the depth information may be obtained by installing the binocular camera facing vertically downward on the aircraft, so that height variations can be better captured.
  • the first camera and the second camera are respectively located at two different positions of the aircraft, and simultaneously capture N frames of images, and N is a positive integer greater than or equal to 2, so as to ensure that two frames of images are obtained before and after, so that feature comparison can be performed.
  • the real-time images respectively corresponding to the N times acquired by the first camera are referred to as a first real-time image
  • the real-time images respectively corresponding to the N times acquired by the second camera are referred to as a second real-time image.
  • the N first real-time images acquired by the first camera are respectively N-frame images corresponding to N times, and the adjacent two frames of the N-frame images are compared by features to obtain (N-1) translation parameters
  • the (N-1) translation parameters are respectively referred to as first eigen parameters.
• the N second real-time images acquired by the second camera are likewise N frames corresponding to the N times; comparing features between adjacent frames yields (N-1) translation parameters, which are referred to as the second eigen parameters.
  • the aircraft acquires first initial positioning information of the starting time of the N different times through the first camera, and acquires second initial positioning information of the starting time of the N different times by the second camera.
  • the first initial positioning information is positioning information captured by the first camera at the starting time of the N different times
• the second initial positioning information is the positioning information captured by the second camera at the starting time of the N different moments. If the entire space of the aircraft's flight is regarded as a three-dimensional coordinate system, the first initial positioning information can be taken as the origin position of the first camera in that coordinate system, and the second initial positioning information as the origin position of the second camera.
• the aircraft determines (N-1) first flight positioning information corresponding to (N-1) times according to the first initial positioning information and the (N-1) first eigen parameters, and determines (N-1) second flight positioning information corresponding to the (N-1) moments according to the second initial positioning information and the (N-1) second eigen parameters; wherein
  • the (N-1) time points are (N-1) times other than the start time among the N different time points.
• the aircraft has acquired the first initial positioning information and has calculated the (N-1) first eigen parameters, so the first initial positioning information and the (N-1) first eigen parameters can be used to determine the (N-1) pieces of first flight positioning information corresponding to the (N-1) times.
  • (N-1) second flight positioning information corresponding to (N-1) times may also be determined.
• For example, suppose the first initial positioning information is X1, i.e., the positioning information at time N1 is X1; the first eigen parameter at time N2 is a, at time N3 it is b, at time N4 it is c, and at time N5 it is d. Then the first flight positioning information at time N2 is aX1, at time N3 it is abX1, at time N4 it is abcX1, and at time N5 it is abcdX1. In this way, the first flight positioning information corresponding to each of the times N2 to N5 (i.e., N-1 times) is obtained.
  • the aircraft acquires target flight positioning information corresponding to the ending time of the N different times according to the (N-1) first flight positioning information and the (N-1) second flight positioning information.
• the aircraft may use the preset positioning constraint to correct and adjust the obtained (N-1) pieces of first flight positioning information and (N-1) pieces of second flight positioning information, so that the error between the adjusted (N-1) first flight positioning information and the adjusted (N-1) second flight positioning information is minimized.
  • the optimal solution of the adjusted first flight positioning information and the second flight positioning information is calculated by using the solver, thereby obtaining target flight positioning information, and the target flight positioning information is used as flight positioning information at the end time of the N different times. .
  • the target flight location information is sent to the aircraft's flight control module to use the information to fly or hover.
• In this embodiment, the aircraft includes a first camera and a second camera; the first camera is configured to acquire N first real-time images corresponding to N different times, and the second camera is configured to acquire N second real-time images corresponding to the same N different times. When obtaining flight positioning information, the aircraft determines (N-1) first eigen parameters according to the N first real-time images and (N-1) second eigen parameters according to the N second real-time images, obtains the first initial positioning information of the first camera and the second initial positioning information of the second camera at the starting time, then determines (N-1) pieces of first flight positioning information corresponding to (N-1) times according to the first initial positioning information and the (N-1) first eigen parameters, and (N-1) pieces of second flight positioning information corresponding to the (N-1) times according to the second initial positioning information and the (N-1) second eigen parameters, and finally acquires the target flight positioning information by applying the preset positioning constraint to the (N-1) first flight positioning information and the (N-1) second flight positioning information.
• The binocular camera is thus used to realize positioning of the aircraft: images corresponding to different times are acquired in real time, the translation parameters between successive frames are analyzed, the two cameras each use their translation parameters to obtain corresponding positioning information, and finally the positioning information is corrected with the preset positioning constraint to obtain target flight positioning information closer to the true value.
• Accurate positioning information can therefore still be obtained without an optical flow camera or a high-precision inertial sensor, which reduces the error and also reduces the cost of the aircraft.
  • the first initial positioning information of the first camera and the second initial positioning information of the second camera are obtained at the start time in an embodiment of the method for acquiring flight positioning information provided by the embodiment of the present invention. Previously, it could also include:
  • the first camera and the second camera are disposed on the same horizontal line of the aircraft within a preset camera distance range.
  • FIG. 8 is a schematic diagram of an aircraft with a binocular camera installed in an embodiment of the present invention. As shown in FIG. 8, the first camera and the second camera are mounted on the same horizontal line of the aircraft, and the separation distance between the two is within the preset camera distance range. It should be noted that the two camera positions in FIG. 8 are only one schematic and should not be construed as limiting the present invention.
  • the preset camera distance can range from 6 cm to 10 cm. In practical applications, some adjustments can also be made, which are not limited herein.
• In practical applications, the two cameras cannot be installed on exactly the same horizontal line in the mathematical sense; therefore, the two cameras need to be stereo-calibrated, which can be done with the Zhang Zhengyou calibration method.
  • the implementation process of the Zhang Zhengyou calibration method may include the following steps:
  • the parameters that the binocular camera needs to be calibrated include, but are not limited to, a parameter matrix within the camera, a matrix of distortion coefficients, an eigenmatrix, a base matrix, a rotation matrix, and a translation matrix.
  • the parameter matrix and the distortion coefficient matrix in the camera can be calibrated by a single target method.
  • the main difference between binocular camera calibration and monocular camera calibration is that the binocular camera needs to calibrate the relative relationship between the left and right camera coordinate systems.
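• The stereo step of this calibration can be sketched with OpenCV as follows. This is only an illustrative sketch: the per-camera intrinsics M1/D1 and M2/D2 are assumed to have been obtained beforehand with single-camera (chessboard) calibration, and the argument names are placeholders, not identifiers from the original text.

    import cv2

    def stereo_calibrate(obj_points, img_points_l, img_points_r,
                         M1, D1, M2, D2, image_size):
        """Estimate R, T (plus E, F) of the right camera relative to the left."""
        # Keep the previously obtained single-camera intrinsics fixed and only
        # solve for the relative rotation R and translation T between the eyes.
        flags = cv2.CALIB_FIX_INTRINSIC
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-6)
        _, M1, D1, M2, D2, R, T, E, F = cv2.stereoCalibrate(
            obj_points, img_points_l, img_points_r,
            M1, D1, M2, D2, image_size, criteria=criteria, flags=flags)
        return R, T, E, F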
  • the vertically downward binocular cameras are mounted on the same horizontal line, and the distance between the two cameras is within the preset camera distance. It can be understood that if the interval between the two cameras is too small, it is difficult to obtain reasonable depth information and positioning information, and if the distance between the two cameras is too large, the near objects may not be photographed, and thus the reference object is lacking.
  • the first camera and the second camera can both capture the real-time images that meet the requirements.
• Optionally, before the first initial positioning information at the starting time of the N different times is acquired by the first camera and the second initial positioning information at the starting time of the N different moments is acquired by the second camera, the method may further include:
• the first sub-image corresponding to the first moment and the second sub-image corresponding to the second moment are acquired by the first camera, wherein the first moment and the second moment are two of the N different moments, and both the first sub-image and the second sub-image belong to the first real-time images; the third sub-image corresponding to the first moment and the fourth sub-image corresponding to the second moment are acquired by the second camera, and both belong to the second real-time images;
• the first depth information and the second depth information are obtained in a binocular stereo vision manner, wherein the first depth information is obtained according to the first sub-image and the second sub-image, and the second depth information is obtained according to the third sub-image and the fourth sub-image.
• In this embodiment, the aircraft may acquire, through the first camera, the first sub-image corresponding to the first time, and acquire the second sub-image corresponding to the next time, i.e., the second time.
  • the second camera is used to acquire the corresponding third sub-image at the first time
  • the fourth sub-image is acquired at the second time.
  • both the first sub-image and the second sub-image belong to the first real-time image
  • the third sub-image and the fourth sub-image all belong to the second real-time image.
  • binocular stereo vision is an important form of machine vision. It is based on the parallax principle and uses the imaging device to acquire two images of the measured object from different positions, and obtains the object by calculating the positional deviation between the corresponding points of the image. The method of 3D geometric information.
• Comparing the first sub-image with the third sub-image is analogous to fusing the images obtained by two eyes: by observing the difference between the first sub-image and the third sub-image, a clear sense of depth is obtained, the correspondence between the features of the first sub-image and the third sub-image is established, and the image points of the same spatial physical point in the different images are associated, so as to obtain the first depth information.
• Similarly, the second depth information can be obtained by comparing the second sub-image at the second moment with the fourth sub-image at the second moment.
  • the binocular stereo vision measurement method has the advantages of high efficiency, appropriate precision, simple system structure and low cost, and is very suitable for on-line, non-contact product detection and quality control in the manufacturing field.
  • the stereoscopic method is a more effective measurement method because image acquisition is done in an instant.
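• For reference (this relation is not quoted from the original text but is the standard form of the parallax principle mentioned above): for a rectified stereo pair with focal length f and baseline b, an image point with disparity d corresponds to depth

    Z = \frac{f \cdot b}{d}

so larger disparities correspond to nearer objects.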
• In this embodiment, the first sub-image corresponding to the first time and the second sub-image corresponding to the second time are acquired by the first camera, the third sub-image corresponding to the first time and the fourth sub-image corresponding to the second time are acquired by the second camera, and then, in a binocular stereo vision manner, the first depth information is obtained according to the first sub-image and the second sub-image and the second depth information is obtained according to the third sub-image and the fourth sub-image.
  • the first camera and the second camera can also acquire the depth information, that is, the height information, which overcomes the shortcomings that the monocular camera and the optical flow camera cannot provide the depth information, thereby enhancing the practicability of the solution, and at the same time, obtaining the depth information. It can also be used for terrain recognition, object recognition and height setting to enhance the diversity of the solution.
  • the first eigen parameter may include a first rotation matrix and a first translation vector, where the second eigen parameter includes a second rotation matrix and a second translation vector.
  • the first rotation matrix is used to represent the angle change of the first camera
  • the second rotation matrix is used to represent the angle change of the second camera
  • the first translation vector is used to indicate the height change of the first camera
  • the second translation vector is used for Indicates the height change of the second camera.
• In this embodiment, the aircraft acquires the first eigen parameter and the second eigen parameter; both belong to the eigen parameters, and an eigen parameter includes a rotation matrix and a translation vector, which are introduced separately below.
  • R and T are used to describe the relative relationship between the left and right camera coordinate systems, specifically, the coordinates under the left camera are converted to the coordinates under the right camera, that is, the coordinates under the first camera are converted to the coordinates under the second camera.
  • binocular camera analysis often uses the left camera, that is, the first camera as the main coordinate system, but R and T are left-to-right conversion, so T x is negative.
• The external parameters obtained by single-camera calibration are R l , T l , R r and T r ; by substituting them into formula (3), the rotation matrix R and translation matrix T between the cameras can be obtained, and the translation vector t can be obtained from the translation matrix T.
• The essential parameters composed of the rotation matrix and the translation vector are very important for the epipolar geometry of the binocular problem and can simplify problems such as stereo matching; however, applying epipolar geometry to solve problems such as finding epipolar lines requires knowing these parameters, so they are also determined from the rotation matrix R and the translation matrix T during stereo calibration.
• The essential matrix is usually denoted by the letter E; its physical meaning is a parameter for converting between the left and right coordinate systems, and it can describe the relationship between corresponding points on the image planes of the left and right cameras.
• From the above description of this embodiment, the rotation matrix and the translation vector of the binocular camera can be obtained, and the eigen parameters are constructed from the rotation matrix and the translation vector.
• Each camera in the binocular camera needs to be calibrated separately to obtain the rotation matrix and the translation vector that describe the relative positional relationship between the two cameras; these can also constitute the eigen parameters, thereby ensuring the feasibility and practicability of the scheme.
• Optionally, determining the (N-1) first eigen parameters according to the N first real-time images and the (N-1) second eigen parameters according to the N second real-time images may include calculating them with a formula in which:
• α1 represents the first depth information and α2 represents the second depth information; C represents the pre-measured internal parameter; R1 represents the first rotation matrix; t1 represents the first translation vector; α3 represents the third depth information; α4 represents the fourth depth information; R2 represents the second rotation matrix; t2 represents the second translation vector.
• FIG. 9 is a schematic diagram of positioning with a binocular camera according to an embodiment of the present invention, in which R denotes the (N-1) first eigen parameters, L denotes the (N-1) second eigen parameters, and E denotes the preset positioning constraint.
• A rotation matrix and a translation vector between real-time images may be calculated based on the ORB (Oriented FAST and Rotated BRIEF) feature extraction algorithm: first the ORB feature points of each real-time image are extracted, and then they are matched with the ORB feature points of the real-time image of the previous frame, thereby obtaining the ORB feature point sets corresponding to two of the N time instants:
• where z1 is the feature point set of the image at the previous time and z2 is the feature point set of the image at the current time (only one pair of point sets is used here as an illustration). If z1 and z2 are perfectly matched, the following formula should be satisfied between each pair of points:
• α1 represents the first depth information and α2 represents the second depth information; C represents the pre-measured internal parameter; R1 represents the first rotation matrix; t1 represents the first translation vector; α3 represents the third depth information; α4 represents the fourth depth information; R2 represents the second rotation matrix; t2 represents the second translation vector.
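• The formula itself is rendered as an image in the original publication and is not reproduced in this text. A plausible form, consistent with the symbol list above (writing the garbled depth factors as α and treating z as homogeneous pixel coordinates, with C the intrinsic matrix), would be the standard reprojection relation for each matched point pair:

    \alpha_2 z_2 = C \left( R_1 C^{-1} \alpha_1 z_1 + t_1 \right)

with the analogous relation \alpha_4 z'_2 = C ( R_2 C^{-1} \alpha_3 z'_1 + t_2 ) for the point sets of the second camera; this is an assumption offered for illustration, not the patent's exact equation.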
• From this, the first eigen parameter and the second eigen parameter can be calculated, that is, the first rotation matrix and the first translation vector, and the second rotation matrix and the second translation vector, are obtained.
• In this embodiment, a corresponding calculation formula is provided for determining the (N-1) first eigen parameters and the (N-1) second eigen parameters, so that the eigen parameters can be calculated with that formula; this provides a feasible basis for realizing the scheme and thereby increases its feasibility.
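• The per-camera, frame-to-frame estimation described above can be sketched with OpenCV as follows. This is an illustrative sketch only: K stands for the pre-measured intrinsic matrix, the parameter values are placeholders, and cv2.recoverPose returns the translation only up to scale, which in the scheme above would be fixed with the binocular depth information.

    import cv2
    import numpy as np

    def frame_motion(prev_img, cur_img, K):
        """Estimate the rotation matrix and (unit-scale) translation between frames."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(prev_img, None)
        kp2, des2 = orb.detectAndCompute(cur_img, None)

        # Match ORB descriptors between the previous and current frame.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Essential matrix from the matched point sets, then decompose into R, t.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t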
• Optionally, acquiring the target flight positioning information corresponding to the ending time of the N different moments by using the preset positioning constraint may include a calculation in which:
• X represents the first flight positioning information and Y represents the second flight positioning information; N represents the Nth time and j represents the jth of the N times; X j represents the first flight positioning information corresponding to the jth time and Y j represents the second flight positioning information corresponding to the jth time; R ext represents the pre-measured rotation matrix between the first camera and the second camera, and t ext represents the pre-measured translation vector between the first camera and the second camera.
• In this way, N sets of adjusted flight positioning information can be obtained; for example, the first flight positioning information and the second flight positioning information together form {X1, Y1}, {X2, Y2}, ..., {Xn, Yn}, and after adjustment each pair {Xj, Yj} will be closer to the minimum of the error, which makes the measurement results more accurate.
• R ext represents the pre-measured rotation matrix between the first camera and the second camera, t ext represents the pre-measured translation vector between the first camera and the second camera, and R ext and t ext together serve as the external parameters of the camera pair, which can be obtained by stereo calibration.
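• The constraint itself is shown as an image in the original publication. A plausible reconstruction, consistent with the symbol definitions above and with the later description of minimizing the variance between the two cameras' positioning information, is the least-squares objective

    \min_{\{X_j, Y_j\}} \sum_{j=1}^{N} \left\| Y_j - \left( R_{ext} X_j + t_{ext} \right) \right\|^2

i.e., the second camera's position should agree with the first camera's position transformed by the pre-measured extrinsics; this is offered as an assumption for illustration rather than the patent's exact formula.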
  • FIG. 10 is a schematic flowchart showing the acquisition of target flight positioning information in the embodiment of the present invention.
  • the aircraft calculates the current pose of the left and right cameras, that is, the current flight positioning information.
  • the flight positioning information may specifically include a coordinate point position and a flight direction in a three-dimensional space coordinate system;
• the aircraft constructs a graph relationship by using the general graph optimization (g2o) algorithm, and uses the binocular constraint, that is, the preset positioning constraint, to correct the flight positioning information.
• g2o is an implementation of a collection of algorithms, from which the most suitable algorithm is selected according to the specific problem. It is a platform to which a linear equation solver can be added, and on which one can write one's own optimization objective function and determine the update method.
• the aircraft obtains the optimal solution by using the g2o solver, and finally, in step 204B, the aircraft uses the optimal solution to update the current pose information, that is, the current flight positioning information; the updated flight positioning information is the target flight positioning information.
  • a constraint between the binocular camera flight positioning information is established, and the optimal flight positioning information of the aircraft can be solved by the constraint.
  • the target flight location information is obtained, thereby reducing errors and improving the accuracy of the positioning.
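• The original scheme builds this constraint as a g2o graph and solves it with the g2o solver. As a simplified stand-in, the following numpy sketch fuses the two cameras' position estimates under the hard constraint Y_j = R_ext · X_j + t_ext; the function name and inputs are illustrative assumptions, not identifiers from the original text.

    import numpy as np

    def fuse_binocular_positions(X, Y, R_ext, t_ext):
        """Least-squares fusion of per-camera positions under the binocular constraint.

        X, Y: (N, 3) arrays of positions estimated independently by the first
        and second camera; R_ext, t_ext: pre-measured extrinsics, so ideally
        Y_j = R_ext @ X_j + t_ext.  For each time j the fused point minimizes
        ||X* - X_j||^2 + ||R_ext X* + t_ext - Y_j||^2, which has the closed
        form below for an orthonormal R_ext."""
        X = np.asarray(X, dtype=float)
        Y = np.asarray(Y, dtype=float)
        # Map the second camera's estimates back into the first camera's frame.
        Y_in_X = (Y - t_ext) @ R_ext              # row-wise R_ext.T @ (Y_j - t_ext)
        X_fused = 0.5 * (X + Y_in_X)              # least-squares average
        Y_fused = X_fused @ R_ext.T + t_ext       # re-impose the constraint exactly
        return X_fused, Y_fused

For example, with R_ext = np.eye(3) and t_ext = np.array([0.08, 0.0, 0.0]) (an 8 cm baseline, within the 6-10 cm range mentioned earlier), noisy left- and right-camera trajectories are pulled onto a single consistent trajectory.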
• Optionally, after the aircraft acquires the target flight positioning information according to the (N-1) first flight positioning information and the (N-1) second flight positioning information, the method may further include:
  • the aircraft determines the first sub-flight positioning information corresponding to the (N+1)th time according to the target flight positioning information, and the first sub-flight positioning information is one piece of information in the target flight positioning information;
  • the aircraft uses the preset positioning constraint and the first sub-flight positioning information to obtain the second sub-flight positioning information corresponding to the (N+1)th time;
• the aircraft determines the third sub-flight positioning information corresponding to the (N+2)th time according to the first sub-flight positioning information and the first eigen parameter, and then uses the preset positioning constraint and the third sub-flight positioning information to obtain the fourth sub-flight positioning information corresponding to the (N+2)th time;
• the aircraft calculates a first optimal solution of the first sub-flight positioning information and the third sub-flight positioning information, and calculates a second optimal solution of the second sub-flight positioning information and the fourth sub-flight positioning information; the first optimal solution and the second optimal solution constitute the flight positioning information at the (N+2)th time.
  • the target flight positioning information may also be used to calculate the subsequent flight positioning information.
  • the target flight positioning information includes positioning information of the first camera and positioning information of the second camera, and it is assumed that only one positioning information X1 corresponding to the (N+1)th time is selected, and X1 is called Positioning information for the first sub-flight, and then using the preset positioning constraint to reversely obtain the positioning information Y1 corresponding to the (N+1)th time, that is, the second sub-flight positioning information, and thus, the acquisition of a set of sub-flight positioning information is completed. And then start the acquisition of the next set of sub-flight positioning information.
  • the aircraft calculates the third sub-flight positioning information corresponding to the (N+2)th time, that is, X2. Similarly, using the preset positioning constraint and X2, the N+ is calculated. 2) The fourth sub-flight positioning information corresponding to the moment, that is, Y2. At this point, the next set of sub-flight positioning information is also acquired, and then the subsequent sub-flight positioning information acquisition can be continued, and no further description is made here.
  • the two cameras obtain the optimal solution according to the calculated X and Y respectively, for example, the optimal solution obtained by the least squares method, and the two optimal solutions can form the (N+2) time. Flight positioning information.
  • the target flight location information and the preset positioning constraint may be utilized to predict optimal flight positioning information for a period of time in the future.
• Optionally, the aircraft determining the third sub-flight positioning information corresponding to the (N+2)th time according to the first sub-flight positioning information and the first eigen parameter may include a calculation in which:
• X N+2 represents the third sub-flight positioning information corresponding to the (N+2)th time; R N+1 represents the rotation matrix at the (N+1)th time in the first eigen parameter; t N+1 represents the translation vector at the (N+1)th time in the first eigen parameter; X N+1 represents the first sub-flight positioning information corresponding to the (N+1)th time.
• Here, how to calculate the third sub-flight positioning information corresponding to the (N+2)th time is specifically introduced: since the eigen parameters have already been obtained, and each eigen parameter includes a rotation matrix and a translation vector, the rotation matrix and the translation vector can be used to obtain the third sub-flight positioning information.
  • the third sub-flight positioning information corresponding to the (N+2)th time is calculated by the following formula:
  • X N+2 in the formula represents the third sub-flight positioning information corresponding to the (N+2)th time
  • R N+1 represents the rotation matrix at the (N+1)th time in the first eigen-parameter
  • t N+1 represents the translation vector at the (N+1)th moment of the first eigenvalue
  • XN+1 represents the first sub-flight positioning information corresponding to the (N+1)th moment.
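• The formula referred to above appears as an image in the original publication. A plausible reconstruction consistent with the symbol definitions just given is

    X_{N+2} = R_{N+1} X_{N+1} + t_{N+1}

i.e., the previous position is rotated by the frame-to-frame rotation matrix and shifted by the frame-to-frame translation vector; this is an assumed form for illustration, not a quotation of the patent's exact equation.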
• In this way, the sub-flight positioning information of the current time can be calculated from the sub-flight positioning information of the previous time. The calculated series of sub-flight positioning information and the external parameters of the binocular camera are then input into the g2o graph, the g2o solver is called to obtain the least-squares optimal solution, and finally the target flight positioning information is updated with the optimal solution and sent to the flight control module of the aircraft.
• That is, the third sub-flight positioning information corresponding to the next moment is calculated by using the first sub-flight positioning information corresponding to the previous moment with the corresponding formula; the practicability and feasibility of the scheme can be improved in this way.
  • FIG. 11 is a schematic diagram of a workflow of a binocular camera in an application scenario, including:
  • step 301B it is assumed that the aircraft used is a drone, and firstly, the drone collects real-time images of the left and right eyes through the vertically downward binocular cameras mounted thereon;
  • step 302B the real-time image of the left and right eyes is used to calculate the depth value of the image
• step 303B the rotation matrices and translation vectors of the left and right cameras are calculated separately based on the ORB image feature points; because the images acquired by the left and right cameras are different, their image feature points differ, so there will be errors between the rotation matrices and translation vectors calculated for the left and right cameras;
  • step 304B a constraint condition between the two sets of rotation matrix and the translation vector is established according to the constraint between the binocular cameras, and the least square method is used to obtain the optimal solution of the UAV pose.
  • the optimal solution is the positioning information of the drone;
  • step 305B the information is sent to the drone flight control system, so that the drone can obtain more accurate positioning information.
• The aircraft in this embodiment includes a first camera and a second camera, where the first camera is used to acquire N first real-time images corresponding to N different times, the second camera is configured to acquire N second real-time images corresponding to the N different times, and N is a positive integer greater than or equal to 2; the aircraft includes:
  • a first determining module 401B configured to determine (N-1) first eigen parameters according to the N first real-time images, and determine (N-1) second eigen parameters according to the N second real-time images;
  • the first eigen parameter is used to represent the translation parameters of the adjacent two frames of the N first real-time images
  • the second eigen parameter is used to represent the translation parameters of the adjacent two frames of the N second real-time images.
  • the first obtaining module 402B is configured to acquire first initial positioning information of the starting time of the N different times by using the first camera, and acquire second initial positioning information of the starting time of the N different times by using the second camera. .
  • the second determining module 403B is configured to determine (N-1) times corresponding to the (N-1) first eigen parameters determined by the first determining module 401B according to the first initial positioning information acquired by the first obtaining module 402B.
  • (N-1) first flight positioning information, and determined according to the second initial positioning information acquired by the first obtaining module 402B and the (N-1) second eigen parameters determined by the first determining module 401B (N -1) (N-1) second flight positioning information corresponding to one time; wherein (N-1) times are (N-1) times other than the starting time among N different times.
• the second obtaining module 404B is configured to acquire, by using the preset positioning constraint, the target flight positioning information corresponding to the ending time of the N different times according to the (N-1) first flight positioning information and the (N-1) second flight positioning information determined by the second determining module 403B.
• In this embodiment, the first determining module 401B determines (N-1) first eigen parameters according to the N first real-time images and (N-1) second eigen parameters according to the N second real-time images; the first obtaining module 402B acquires the first initial positioning information at the starting time of the N different times through the first camera and the second initial positioning information at the starting time through the second camera; the second determining module 403B determines the (N-1) pieces of first flight positioning information corresponding to the (N-1) times according to the first initial positioning information acquired by the first obtaining module 402B and the (N-1) first eigen parameters determined by the first determining module 401B, and the (N-1) pieces of second flight positioning information corresponding to the (N-1) times according to the second initial positioning information acquired by the first obtaining module 402B and the (N-1) second eigen parameters determined by the first determining module 401B; and the second obtaining module 404B acquires, by using the preset positioning constraint, the target flight positioning information corresponding to the ending time of the N different times according to the (N-1) first flight positioning information and (N-1) second flight positioning information determined by the second determining module 403B.
• In this way, the aircraft can use the binocular camera to realize positioning: images corresponding to different times are acquired in real time, the translation parameters between successive frames are analyzed, the two cameras each use these translation parameters to obtain corresponding positioning information, and finally the positioning information is corrected by the preset positioning constraint to obtain target flight positioning information closer to the true value. Accurate positioning information can thus be obtained without an optical flow camera or a high-precision inertial sensor, reducing the error while also reducing the cost of the aircraft.
  • the aircraft further includes:
  • the setting module 405B is configured to acquire, by the first acquiring module 402B, the first initial positioning information of the starting time of the N different times by the first camera, and acquire the second initial positioning of the starting time of the N different times by the second camera. Before the information, the first camera and the second camera are placed on the same horizontal line of the aircraft within the preset camera distance range.
• The vertically downward binocular cameras are required to be installed on the same horizontal line, with the distance between the two cameras within the preset camera distance range, so that both the first camera and the second camera can capture real-time images that meet the requirements. If the interval between the two cameras is too small, it is difficult to obtain reasonable depth information and positioning information; if the two cameras are too far apart, near objects may not be photographed, so reference objects are lacking.
  • the aircraft further includes:
• the third obtaining module 406B is configured to, before the first determining module 401B determines the (N-1) first eigen parameters according to the N first real-time images and the (N-1) second eigen parameters according to the N second real-time images, acquire, through the first camera, the first sub-image corresponding to the first moment and the second sub-image corresponding to the second moment, where the first moment and the second moment are two of the N different moments and both the first sub-image and the second sub-image belong to the first real-time images;
• the fourth obtaining module 407B is configured to acquire, by the second camera, the third sub-image corresponding to the first moment and the fourth sub-image corresponding to the second moment, where the third sub-image and the fourth sub-image both belong to the second real-time images;
  • the measurement module 408B is configured to measure the first depth information and the second depth information by using a binocular stereo vision method, wherein the first depth information is obtained according to the first sub image and the second sub image.
  • the second depth information is obtained according to the third sub image and the fourth sub image.
• In this embodiment, the first sub-image corresponding to the first time and the second sub-image corresponding to the second time are acquired by the first camera, and the third sub-image corresponding to the first time and the fourth sub-image corresponding to the second time are acquired by the second camera; then, based on binocular stereo vision, the first depth information of the first sub-image, the second depth information of the second sub-image, the third depth information of the third sub-image and the fourth depth information of the fourth sub-image are measured.
  • the first camera and the second camera can also acquire the depth information, that is, the height information, which overcomes the shortcomings that the monocular camera and the optical flow camera cannot provide the depth information, thereby enhancing the practicability of the solution, and at the same time, obtaining the depth information. It can also be used for terrain recognition, object recognition and height setting to enhance the diversity of the solution.
• Optionally, the first eigen parameter includes a first rotation matrix and a first translation vector, and the second eigen parameter includes a second rotation matrix and a second translation vector;
  • the first rotation matrix is used to represent the angle change of the first camera
  • the second rotation matrix is used to represent the angle change of the second camera
  • the first translation vector is used to indicate the height change of the first camera
  • the second The translation vector is used to indicate the height change of the second camera.
• From the above, the binocular camera can acquire the rotation matrix and the translation vector, and the eigen parameters are constructed from the rotation matrix and the translation vector.
• Each camera in the binocular camera needs to be calibrated separately; the rotation matrix and the translation vector obtained describe the relative positional relationship between the two cameras and can also constitute the eigen parameters, thereby ensuring the feasibility and practicability of the scheme.
• Referring to FIG. 15, in another embodiment of the aircraft provided by the embodiment of the present invention, the first determining module 401B includes:
• the first calculating unit 4011B, configured to calculate the (N-1) first eigen parameters with a formula in which:
• α1 represents the first depth information and α2 represents the second depth information; C represents the pre-measured internal parameter; R1 represents the first rotation matrix; t1 represents the first translation vector; α3 represents the third depth information; α4 represents the fourth depth information; R2 represents the second rotation matrix; t2 represents the second translation vector.
  • a corresponding calculation formula is provided for determining (N-1) first eigen parameters and (N-1) second eigen parameters, so that the eigen parameters can be calculated by using corresponding formulas. It provides a feasible basis for the realization of the program, thereby increasing the feasibility of the program.
• Referring to FIG. 16, in another embodiment of the aircraft provided by the embodiment of the present invention, the second obtaining module 404B includes:
  • the second calculating unit 4041B is configured to calculate a minimum variance between the second flight positioning information and the first flight positioning information under the preset positioning constraint condition as follows:
• X represents the first flight positioning information and Y represents the second flight positioning information; N represents the Nth time and j represents the jth of the N times; X j represents the first flight positioning information corresponding to the jth time and Y j represents the second flight positioning information corresponding to the jth time; R ext represents the pre-measured rotation matrix between the first camera and the second camera, and t ext represents the pre-measured translation vector between the first camera and the second camera;
  • the third calculating unit 4042B is configured to calculate target flight positioning information according to the variance minimum calculated by the second calculating unit 4041B.
  • a constraint between the binocular camera flight positioning information is established, and the optimal flight positioning information of the aircraft can be solved by the constraint.
  • the target flight location information is obtained, thereby reducing errors and improving the accuracy of the positioning.
  • the aircraft further includes:
• the third determining module 4091B is configured to, after the second obtaining module 404B acquires the target flight positioning information corresponding to the ending time of the N different times according to the (N-1) first flight positioning information and the (N-1) second flight positioning information, determine the first sub-flight positioning information corresponding to the (N+1)th time according to the target flight positioning information, where the first sub-flight positioning information is one piece of information in the target flight positioning information;
  • the fifth obtaining module 4092B is configured to acquire the second sub-flight positioning information corresponding to the (N+1)th time by using the preset positioning constraint and the first sub-flight positioning information determined by the third determining module 4091B;
  • the fourth determining module 4093B is configured to determine the third sub-flight positioning information corresponding to the (N+2)th time according to the first sub-flight positioning information and the first eigen-parameter determined by the third determining module 4091B;
  • the sixth obtaining module 4094B is configured to acquire the fourth sub-flight positioning information corresponding to the (N+2)th time by using the preset positioning constraint and the third sub-flight positioning information determined by the fourth determining module 4093B;
• the calculating module 4095B is configured to calculate a first optimal solution of the first sub-flight positioning information determined by the third determining module 4091B and the third sub-flight positioning information determined by the fourth determining module 4093B, and to calculate a second optimal solution of the second sub-flight positioning information acquired by the fifth obtaining module 4092B and the fourth sub-flight positioning information acquired by the sixth obtaining module 4094B; the first optimal solution and the second optimal solution constitute the flight positioning information at the (N+2)th time.
  • the target flight location information and the preset positioning constraint may be utilized to predict optimal flight positioning information for a period of time in the future.
• This provides a feasible means of obtaining accurate flight positioning information, thereby increasing the flexibility of the solution; the subsequently acquired flight positioning information places more emphasis on global considerations, which is beneficial to determining the positioning information of the aircraft in a global coordinate system.
• Referring to FIG. 18, in another embodiment of the aircraft provided by the embodiment of the present invention, the fourth determining module 4093B includes:
  • the fourth calculating unit 4093B1 is configured to calculate the third sub-flight positioning information corresponding to the (N+2)th time as follows:
• X N+2 represents the third sub-flight positioning information corresponding to the (N+2)th time; R N+1 represents the rotation matrix at the (N+1)th time in the first eigen parameter; t N+1 represents the translation vector at the (N+1)th time in the first eigen parameter; X N+1 represents the first sub-flight positioning information corresponding to the (N+1)th time.
• That is, the third sub-flight positioning information corresponding to the next moment is calculated by using the first sub-flight positioning information corresponding to the previous moment with the corresponding formula; the practicability and feasibility of the scheme can be improved in this way.
• The altitude information of the aircraft can be measured by installing a barometer, an ultrasonic device or a depth camera on the aircraft fuselage, and the altitude information is used for flight control of the aircraft.
• Using a barometer to measure the flying height is affected by the airflow generated by the aircraft's own flight, which makes the reading fluctuate and results in poor measurement accuracy.
• Although an ultrasonic device has high measurement accuracy, when it encounters complicated terrain such as a bump or a slope on the ground, the echo may not be received, resulting in inaccurate measurement results.
  • the use of a depth camera will increase the cost of the aircraft.
  • the embodiment of the invention further provides a method for acquiring flight height information, which can improve the accuracy of height information measurement.
  • the binocular camera can acquire various complex terrains and calculate height information according to different terrains, thereby improving measurement accuracy, and the binocular camera has a lower cost advantage than the depth camera.
  • the flying height information measured by the solution may be a real height. It should be noted that the flying height information may also be an absolute height, a standard air pressure height or a relative height.
  • the absolute height represents the vertical distance from the aircraft to the sea level.
  • the radar can be used to measure the absolute altitude directly at sea.
• The standard barometric altitude represents the vertical distance from the aircraft in the air to the standard pressure plane (i.e., the plane where the atmospheric pressure equals 760 mm Hg). Atmospheric pressure often changes, so the distance between the standard pressure plane and sea level also changes; if the standard pressure plane coincides with sea level, the standard barometric altitude equals the absolute altitude. Civil aviation aircraft flying on routes and military aircraft need to use the standard barometric altitude so as not to collide with other aircraft.
  • the relative height indicates the vertical distance of the aircraft to a specified horizontal plane (airport, shooting range, battlefield, etc.).
  • the air pressure scale of the altimeter is adjusted to the air pressure value of the airport, that is, the field pressure.
  • the relative height of the aircraft from the airport can be displayed by the altimeter.
  • the true height represents the vertical distance of the aircraft from the air to the ground target directly below.
  • bombing and photographic reconnaissance you must know the true height of the aircraft. You need to know the true height when performing tasks such as bombing, ground attack, photographic reconnaissance, search and rescue, and agriculture and forestry operations.
  • the true height can be measured with a movie theodolite or a radar altimeter. Certain aircraft can only fly within a certain pre-designed height range.
  • FIG. 19 is a schematic flowchart of a method for acquiring flight height information according to an embodiment of the present invention, including:
  • the aircraft including the first camera and the second camera acquires a first depth image according to the first real-time image, and acquires a second depth image according to the second real-time image, where the first camera is used to acquire the first real-time image, and the second camera is used for the second camera. Obtaining a second real-time image;
  • the aircraft includes a set of binocular cameras, that is, two cameras, which are respectively defined as a first camera and a second camera.
  • the binocular camera can capture the image in real time, and at a certain moment, the first camera captures the first real-time image, and the second camera captures the second real-time image.
  • the binocular camera can still acquire two real-time images on the left and right in one time.
  • the flight height information of the current time aircraft can be calculated by using two real-time images corresponding to a certain moment.
  • the two real-time images are processed to obtain a first depth image corresponding to the first real-time image and a second depth image corresponding to the second real-time image.
  • the aircraft determines a target fused image according to the first depth image and the second depth image, where the target fused image includes at least one preset region;
  • the first depth image and the second depth image are not symmetric images due to the deviation of the left and right viewing angles, and processing is required to make the two depth images Combine the two into one and get a target fusion image.
  • the target fused image includes a plurality of pixel points, and the target fused image can be divided into at least one preset area, so that the number of pixels in the preset area is reduced.
  • the aircraft determines a depth value corresponding to each preset area in the target fused image
  • the aircraft needs to separately calculate the depth value corresponding to each preset area in the target fused image.
  • the aircraft acquires the flying height information according to the depth value corresponding to each preset area and the current flight attitude information of the aircraft.
  • the binocular camera mounted on the aircraft is not perpendicular to the ground. Therefore, the aircraft also needs to pass
  • the device such as the sensor acquires the current flight attitude information, such as the pitch angle and the roll angle, and uses the current flight attitude information and the depth value of each preset area to calculate the flight height information of each preset area. After the flight altitude information in the area is calculated, all the flight altitude information can be sent to the aircraft control module, and the flight control module controls the flight of the aircraft according to the flight altitude information.
  • the aircraft includes a first camera and a second camera.
  • the first camera acquires the first real-time image
  • the second camera acquires the second real-time image.
  • the specific process is: the aircraft acquires the first depth image according to the first real-time image, and Obtaining a second depth image according to the second real-time image, and then determining the target fused image according to the first depth image and the second depth image, and then the aircraft may determine a depth value corresponding to each preset region in the target fused image, and finally according to each The depth value corresponding to the preset area and the current flight attitude information of the aircraft acquire flight height information.
  • the binocular camera is used to measure the altitude information of the aircraft, and the accuracy of the height information measurement is not reduced because the aircraft itself is affected by the airflow, and the binocular camera can obtain various kinds of information.
  • Complex terrain, and height information based on different terrain calculations, to improve the accuracy of the measurement, and binocular camera has a lower cost advantage than the depth camera.
  • the method for acquiring the flying height information obtained by the embodiment of the present invention obtains the first depth image according to the first real-time image, and before acquiring the second depth image according to the second real-time image, It can also include:
  • the first camera and the second camera are disposed on the same horizontal line of the aircraft within a preset camera distance range.
  • FIG. 20 is a schematic diagram of an aircraft equipped with a binocular camera according to an embodiment of the present invention. As shown in FIG. 20, the first camera and the second camera need to be installed on the same horizontal line of the aircraft. Above, and to ensure that the separation distance between the two meets the preset camera distance range, and the two camera positions in FIG. 20 are only one indication, and should not be construed as limiting the present invention.
  • the preset camera distance range is usually 6 cm to 10 cm. In practical applications, some adjustments may also be made, which are not limited herein.
  • the two cameras installed in the actual application can not be mathematically realized to the same horizontal line. Therefore, the two cameras need to be stereo-calibrated separately, and the stereo calibration can be performed by Zhang Zhengyou calibration method.
  • the parameters that the binocular camera needs to be calibrated include, but are not limited to, a parameter matrix within the camera, a matrix of distortion coefficients, an eigenmatrix, a base matrix, a rotation matrix, and a translation matrix.
  • the parameter matrix and the distortion coefficient matrix in the camera can be calibrated by a single target method.
  • the main difference between binocular camera calibration and monocular camera calibration is that the binocular camera needs to calibrate the relative relationship between the left and right camera coordinate systems.
  • the vertically downward binocular cameras are required to be installed on the same horizontal line, and the distance between the two cameras is within the preset camera distance range.
  • the first camera and the second camera can be made both. It is possible to capture real-time images that meet the requirements. If the interval between the two cameras is too small, it is difficult to obtain reasonable depth information and positioning information. If the distance between the two cameras is too large, the near objects will not be captured, and the reference object will be lacking. Therefore, a more reasonable image can be obtained by using a preset camera distance range.
  • the aircraft acquires the first depth image according to the first real-time image, and acquires the second depth image according to the second real-time image.
  • the aircraft performs scaling processing on the first real-time image and the second real-time image according to a preset image specification
  • the aircraft performs image correction on the scaled first real-time image and the second real-time image by using pre-acquired internal parameters and external parameters, and obtains a first depth image and a second depth image.
  • the aircraft may perform the following two steps in the process of converting the first implementation image and the second real-time image into the first depth image and the second depth image, specifically:
  • the aircraft uses binocular vision to calculate the flight height information, high-precision pictures are usually not required, so the real-time images captured by the binocular camera are first scaled according to the preset image specifications.
  • the preset image specification may be 320 ⁇ 240, wherein 320 ⁇ 240 refers to resolution, 240 represents 240 pixels, and 320 represents 320 pixels. Because the left and right cameras have parallax, the edges of the two real-time images can not be matched.
  • the first real-time image and the second real-time image edge can be trimmed according to certain pixels, for example, 20 pixels of each trimming edge. In practical applications, other reasonable pixels can also be tailored, which is not limited herein.
  • image correction may be further performed on the first real-time image and the second real-time image after the scaling process, the image correction includes image distortion correction and image alignment correction, respectively, using internal parameters obtained by calibrating the camera and external
  • the image can be corrected by the parameter, and the first depth image and the second depth image are obtained after the correction, wherein the first depth image and the second depth image are all images that can be used to calculate the depth value.
  • the aircraft should also process the first real-time image and the second real-time image after acquiring the first real-time image and the second real-time image.
  • the first real-time image and the second real-time image are scaled according to the preset image specification, and then pre-acquired.
  • the internal parameters and the external parameters to the image are corrected for the first real-time image and the second real-time image after the scaling process.
  • scaling and cropping the real-time image can reduce the image edge mismatch, and can also reduce the amount of computation of the visual processing, thereby improving the processing efficiency.
  • correcting the real-time image can obtain images on the same horizontal plane. , thereby improving the accuracy of image processing.
  • the aircraft uses the pre-acquired internal parameters and external parameters, and the scaled processed first real-time image and Performing image correction on the second real-time image may include:
  • the aircraft uses the pre-acquired internal parameters to perform distortion compensation on the scaled first real-time image and the second real-time image, wherein the internal parameters include the barrel distortion parameter and the tangential distortion parameter of the first camera, and the second Barrel distortion parameters and tangential distortion parameters of the camera;
  • the aircraft rotates and translates the scaled first real-time image and the second real-time image by using pre-acquired external parameters, wherein the external parameters include a translation parameter and a rotation parameter of the first camera, and a translation of the second camera Parameters and rotation parameters.
  • the external parameters include a translation parameter and a rotation parameter of the first camera, and a translation of the second camera Parameters and rotation parameters.
  • the aircraft can perform image correction on the real-time image by using internal parameters and external parameters, including:
  • the aircraft uses internal parameters to scale the first real-time image and the second real-time image.
  • the internal parameters are the parameters obtained after the calibration of the single camera in the binocular camera.
  • the barrel distortion parameter and the tangential distortion parameter of the first camera are obtained, and the second camera is obtained after the second camera is obtained.
  • the first real-time image is corrected by using the barrel distortion parameter and the tangential distortion parameter of the first camera respectively
  • the second real-time image is corrected by the barrel distortion parameter and the tangential distortion parameter of the second camera.
  • the rotation matrix and the translation matrix between the two cameras are external
  • the image correction is performed on the real-time image, that is, the pre-acquired internal parameters are used to perform distortion compensation on the first real-time image and the second real-time image after the scaling process, and the pre-acquired external is used.
  • the parameter rotates and translates the scaled first real-time image and the second real-time image.
  • the real-time image can be corrected and aligned according to the internal parameters and external parameters obtained by the camera calibration, so that the real-time image satisfies the requirements of the same horizontal line in a mathematical sense, thereby facilitating acquisition of the two cameras in subsequent processing.
  • the resulting image is fused to obtain a target fused image.
  • the determining, by the aircraft, the target fused image according to the first depth image and the second depth image may include:
  • the aircraft determines a disparity value between the first depth image and the second depth image using a stereo vision algorithm
  • the aircraft combines the first depth image and the second depth image into a target fused image according to the disparity value.
  • the depth image is a real-time image. After the processing, it is possible to use the depth image to synthesize the desired target fused image.
  • the depth value calculation of the binocular vision first requires the disparity value between the corresponding points of the left and right images, and the same object in the real space is projected into the left and right cameras, and the position thereof may have some differences.
  • FIG. 21 is a schematic diagram of obtaining a disparity value between left and right images according to an embodiment of the present invention.
  • a physical point P (X, Y, Z) is projected on two left and right cameras.
  • the dimension of f is the pixel point
  • the dimension of Tx is determined by the actual size of the checkerboard grid.
  • the dimension of Z is the same as T, and the following relationship is satisfied between d and Z:
  • the process of determining the target fused image by the aircraft further includes: first determining a disparity value between the first depth image and the second depth image by using a stereo vision algorithm, and then, according to the disparity value, the first depth image and the first The two depth images are synthesized into a target fused image.
  • the target fused image can be synthesized according to the calculated disparity value, thereby improving the accuracy of the target fused image.
  • the method for obtaining the flying height information provided by the embodiment of the present invention the determining, by the aircraft, the depth value corresponding to each preset area in the target fused image may include:
  • the aircraft determines a depth value of each pixel in the target fused image according to the disparity value
  • the aircraft determines the depth value corresponding to each preset area according to the depth value of each pixel.
  • the aircraft may further determine the depth value of each pixel of the target fused image by using the obtained disparity value of each pixel, and calculate each preset area according to the depth value of each pixel. Depth value.
  • the aircraft can obtain depth values (in units of physical values, such as meters) of all pixels in the image through the binocular vision module. Because the terrain is more complicated, the image does not have a consistent depth value, so the image is divided into multiple meshes, which are divided into multiple preset areas, such as a 6x6 grid, each grid is calculated separately for a depth. value.
  • the depth value of each grid is calculated using the median average filtering method to calculate its depth value. For example, the depth value of all valid points in the grid can be removed from the top 5% maximum and the last 5% minimum, and then averaged. In the case where the mesh is sufficiently small, the resulting mean can accurately describe the height of the terrain.
  • the determining, by the aircraft, the depth value corresponding to each preset area in the target fused image may be further divided into two steps. First, determining the depth value of each pixel in the target fused image according to the disparity value, and then according to each The depth values of the pixels determine the depth values corresponding to each of the preset regions, respectively.
  • the depth value corresponding to each preset area is predicted by the minimum unit pixel depth value, and the obtained depth value corresponding to each preset area is more accurate, thereby improving the feasibility and practicability of the scheme. .
  • the determining, by the aircraft, the depth value of each pixel in the target fused image according to the disparity value may include:
  • the depth value of each pixel is calculated as follows:
  • x represents the projected abscissa of the pixel in the target fused image in three-dimensional space
  • y represents the projected ordinate of the pixel in the target fused image in three-dimensional space
  • disparity(x, y) represents at the pixel (x) , y) disparity value
  • Q represents the parallax depth mapping matrix
  • [XYZW] T represents the target matrix
  • [X Y Z W] is the transposed matrix of the target matrix
  • Z (x, y) represents the depth of the pixel point (x, y)
  • Z is a sub-matrix composed of the third column in the transposed matrix
  • W is a sub-matrix composed of the fourth column in the transposed matrix.
  • the depth value is obtained by matrix multiplication using a disparity value and a disparity depth mapping matrix (disparity-to-depth mapping matrix) to obtain an actual three-dimensional point position. Its calculation formula is as follows:
  • x, y is the projected coordinates of the point in the actual three-dimensional space in the image, in pixels.
  • Disparity(x, y) represents the disparity value at the pixel point (x, y).
  • the Q matrix is a parallax depth mapping matrix, which is calculated by the internal and external parameters of the camera.
  • the mapping matrix is obtained by using the stereoRectify function provided by OpenCV in this scheme.
  • the parameters to be obtained are the focal length f, the parallax d, and the camera center distance Tx. If it is also necessary to obtain the X coordinate and the Y coordinate, then it is necessary to additionally know the offsets cx and cy of the coordinate system of the left and right image planes and the origin in the solid coordinate system.
  • f, Tx, cx and cy can obtain the initial value by stereo calibration, and optimize by stereo calibration, so that the two cameras are placed in parallel completely mathematically, and the cx, cy and f of the left and right cameras are the same.
  • the work done by stereo matching is to obtain the last variable, the disparity value d, on the basis of the previous one. This completes the preparatory work required to find a three-dimensional coordinate.
  • FIG. 22 is a schematic flowchart of acquiring image depth values according to an embodiment of the present invention, as shown in FIG. 22:
  • step 201C the aircraft firstly scales and crops the collected real-time images corresponding to the left and right eyes to obtain an image of a certain pixel size
  • step 202C the aircraft obtains internal parameters by calibrating a single camera, and performs distortion compensation on the real-time image by using internal parameters;
  • step 203C the aircraft obtains external parameters by stereo calibration of the binocular camera, performs alignment correction on the real-time image by using external parameters, and steps 201 to 202 are used to map the real-time image.
  • steps 201 to 202 are used to map the real-time image.
  • step 204C the aircraft uses the SGBM algorithm provided by OpenCV to implement image point matching and disparity value calculation;
  • step 205C the aircraft calculates a depth value of the image using a parallax depth transformation matrix.
  • the aircraft acquires the flying height information according to the depth value corresponding to each preset area and the current flight attitude information of the aircraft.
  • the flight height information is calculated as follows:
  • represents a tilt angle formed by the ground and the normal of the aircraft
  • represents a roll angle in the current flight attitude information
  • represents a pitch angle in the current flight attitude information
  • d represents a depth value corresponding to each preset region
  • h represents the flight height information.
  • the pitch angle ⁇ and the roll angle ⁇ of the aircraft can be obtained from the aircraft control module, and the angle ⁇ can be calculated by the following formula:
  • the calculated height values of all preset areas are sent to the aircraft control module for processing.
  • FIG. 23 is a schematic diagram of a workflow of a binocular camera in an application scenario, including:
  • step 301C the drone collects real-time images of the left and right eyes respectively through the vertically downward binocular camera mounted thereon;
  • the real-time image of the left and right eyes can be used to generate a depth image after image scaling and cropping, and image correction processing, and the depth image of the left and right eyes is subjected to parallax processing to obtain a target fused image, and the target fusion is calculated.
  • step 303C the body posture information of the current drone is acquired, and information such as the number of pitch angles and the number of roll angles is used;
  • step 304C the current attitude angle and the image depth value of the drone are used to calculate the altitude value of the drone, because the terrain of the ground may be complicated, so that a single height value is not obtained, and the image is divided into multiple Grid, which calculates the height of the grid separately, so that you can get a rough terrain height value.
  • step 305C the set of height values is finally sent to the flight control system of the drone.
  • the aircraft in the embodiment of the present invention includes a first camera and a second camera, wherein the first camera is used to acquire the first real-time image, and the second The camera is used to acquire a second real-time image, and the aircraft 40C includes:
  • the first obtaining module 401C is configured to acquire a first depth image according to the first real-time image, and acquire a second depth image according to the second real-time image.
  • the first determining module 402C is configured to determine a target fused image according to the first depth image acquired by the first acquiring module 401 and the second depth image, where the target fused image includes at least one preset region.
  • the second determining module 403C is configured to determine a depth value corresponding to each preset area in the target fused image obtained by the first determining module 402.
  • the second obtaining module 404C is configured to acquire the flying height information according to the depth value corresponding to each preset area determined by the second determining module 403 and the current flight attitude information of the aircraft.
  • the aircraft includes a first camera and a second camera, wherein the first camera is configured to acquire a first real-time image, the second camera is configured to acquire a second real-time image, and the first acquiring module 401C is configured according to the first real-time image.
  • the first determining module 402C determining the target fused image according to the first depth image acquired by the first acquiring module 401C and the second depth image, where the target fused image includes at least a preset area
  • the second determining module 403C determines a depth value corresponding to each preset area in the target fused image obtained by the first determining module 402C
  • the second obtaining module 404C determines each preset area according to the second determining module 403C.
  • the corresponding depth value and the current flight attitude information of the aircraft acquire flight height information.
  • the aircraft includes a first camera and a second camera.
  • the first camera acquires the first real-time image
  • the second camera acquires the second real-time image.
  • the specific process may be: the aircraft acquires the first depth image according to the first real-time image. And acquiring a second depth image according to the second real-time image, and then determining the target fused image according to the first depth image and the second depth image, and then the aircraft may determine a depth value corresponding to each preset region in the target fused image, and finally according to each The depth value corresponding to the preset area and the current flight attitude information of the aircraft acquire the flying height information.
  • the binocular camera is used to measure the altitude information of the aircraft, and the accuracy of the height information measurement is not reduced because the aircraft itself is affected by the airflow, and the binocular camera can obtain various kinds of information.
  • Complex terrain, and height information based on different terrain calculations, to improve the accuracy of the measurement, and binocular camera has a lower cost advantage than the depth camera.
  • the aircraft 40C further includes:
  • the setting module 405C is configured to: acquire, by the first acquiring module 401C, the first depth image according to the first real-time image, and obtain the first camera and the second within a preset camera distance range before acquiring the second depth image according to the second real-time image.
  • the camera is placed on the same horizontal line of the aircraft.
  • the vertically downward binocular cameras are required to be mounted on the same horizontal line, and The distance between the two cameras is within the preset camera distance.
  • the first camera and the second camera can both capture the real-time image that meets the requirements. If the interval between the two cameras is too small, it is difficult to obtain reasonable depth information and positioning information, and the two cameras are too far apart.
  • the near object can not be photographed, and thus the reference object is lacking, so that a more reasonable image can be obtained by using the preset camera distance range.
  • the first obtaining module 401C includes:
  • the scaling unit 4011C is configured to perform scaling processing on the first real-time image and the second real-time image according to a preset image specification
  • the correcting unit 4012C is configured to perform image correction on the first real-time image and the second real-time image that have been scaled by the scaling unit 4011C by using the pre-acquired internal parameters and the external parameters, and obtain the first depth image and the second depth image. .
  • the aircraft should also process the first real-time image and the second real-time image after acquiring the first real-time image and the second real-time image.
  • the first real-time image and the second real-time image are scaled according to the preset image specification, and then pre-acquired.
  • the internal parameters and the external parameters to the image are corrected for the first real-time image and the second real-time image after the scaling process.
  • scaling and cropping the real-time image can reduce the image edge mismatch, and can also reduce the amount of computation of the visual processing, thereby improving the processing efficiency.
  • correcting the real-time image can obtain images on the same horizontal plane. , thereby improving the accuracy of image processing.
  • the correcting unit 4012C includes:
  • the first processing sub-unit 40121C is configured to perform distortion compensation on the scaled first real-time image and the second real-time image by using pre-acquired internal parameters, where the internal parameters include a barrel distortion parameter of the first camera and Tangential distortion parameters, and barrel distortion parameters and tangential distortion parameters of the second camera;
  • the second processing sub-unit 40122C is configured to rotate and translate the scaled first real-time image and the second real-time image by using pre-acquired external parameters, where the external parameters include translation parameters and rotation of the first camera Parameters, as well as translation parameters and rotation parameters for the second camera.
  • the image correction is performed on the real-time image, that is, the pre-acquired internal parameters are used to perform distortion compensation on the first real-time image and the second real-time image after the scaling process, and the pre-acquired external is used.
  • the parameter rotates and translates the scaled first real-time image and the second real-time image.
  • the real-time image can be corrected and aligned according to the internal parameters and external parameters obtained by the camera calibration, so that the real-time image satisfies the requirements of the same horizontal line in a mathematical sense, thereby facilitating acquisition of the two cameras in subsequent processing.
  • the resulting image is fused to obtain a target fused image.
  • the first determining module 402C includes:
  • a first determining unit 4021C configured to determine, by using a stereo vision algorithm, a disparity value between the first depth image and the second depth image;
  • the synthesizing unit 4022C is configured to synthesize the first depth image and the second depth image into a target fused image according to the disparity value determined by the first determining unit 4021C.
  • the process of determining, by the aircraft, the target fused image further includes: first determining, by using a stereo vision algorithm, a disparity value between the first depth image and the second depth image, and then, according to the disparity value, the first depth image and the second The depth image is synthesized into a target fused image.
  • the target fused image can be synthesized according to the calculated disparity value, thereby improving the accuracy of the target fused image.
  • the second determining module 403C includes:
  • a second determining unit 4031C configured to determine a depth value of each pixel in the target fused image according to the disparity value
  • the third determining unit 4032C is configured to respectively determine a depth value corresponding to each preset area according to the depth value of each pixel determined by the second determining unit 4031.
  • the determining, by the aircraft, the depth value corresponding to each preset area in the target fused image may be further divided into two steps. First, determining the depth value of each pixel in the target fused image according to the disparity value, and then according to each The depth values of the pixels determine the depth values corresponding to each of the preset regions, respectively.
  • the depth value corresponding to each preset area is predicted by the minimum unit pixel depth value, and the obtained depth value corresponding to each preset area is more accurate, thereby improving the feasible scheme. Sex and practicality.
  • the second determining unit 4031C includes:
  • the calculation subunit 40311C is configured to calculate the depth value of each pixel point as follows:
  • x is the projected abscissa of the pixel in the target fused image in the three-dimensional space
  • y is the projected ordinate of the pixel in the target fused image in the three-dimensional space
  • disparity(x, y) is expressed at the pixel (x, y)
  • the disparity value of Q, Q represents the parallax depth mapping matrix
  • [XYZW] T represents the target matrix
  • [X Y Z W] is the transposed matrix of the target matrix
  • Z (x, y) represents the depth value of the pixel point (x, y)
  • Z is a sub-matrix composed of the third column in the transposed matrix
  • W is a sub-matrix composed of the fourth column in the transposed matrix.
  • the second obtaining module 404C includes:
  • the calculating unit 4041C is configured to calculate the flying height information as follows:
  • represents a tilt angle formed by the ground and the normal of the aircraft
  • represents a roll angle in the current flight attitude information
  • represents a pitch angle in the current flight attitude information
  • d represents a depth value corresponding to each preset region
  • h represents the flight height information.
  • an embodiment of the present invention further provides an apparatus, including:
  • the memory is configured to store the program code and transmit the program code to the processor
  • the processor is configured to perform the method for detecting obstacles of the aircraft according to the instructions in the program code, the method for acquiring flight positioning information, and the method for acquiring flight height information.
  • an embodiment of the present invention further provides a storage medium for storing program code, the program code is used to execute the method for detecting obstacles of the aircraft, the method for acquiring flight positioning information, and the method for acquiring flight height information. .
  • an embodiment of the present invention further provides a computer program product including instructions, when it is run on a computer, causing a computer to perform the method for detecting obstacles of the aircraft, acquiring a method for acquiring flight position information, and acquiring flight height information. method.
  • an embodiment of the present invention provides another aircraft, as shown in FIG. 32.
  • FIG. 32 For the convenience of description, only parts related to the embodiment of the present invention are shown. For details not disclosed, refer to the method of the embodiment of the present invention. section. Take the aircraft as an unmanned aerial vehicle as an example:
  • FIG. 32 is a block diagram showing a portion of the structure of a drone associated with an aircraft provided by an embodiment of the present invention.
  • the drone includes: radio frequency (English full name: Radio Frequency, English abbreviation: RF) circuit 510, memory 520, input unit 530, display unit 540, sensor 550, audio circuit 560, wireless fidelity (English full name: Wireless fidelity, abbreviation: WiFi) module 570, processor 580, and power supply 590 and other components.
  • radio frequency English full name: Radio Frequency, English abbreviation: RF
  • memory 520 input unit 530
  • display unit 540 sensor
  • audio circuit 560 audio circuit 560
  • wireless fidelity English full name: Wireless fidelity, abbreviation: WiFi
  • the RF circuit 510 can be used for transmitting and receiving information or during a call, and receiving and transmitting the signal. Specifically, after receiving the downlink information of the aircraft control device, the processor 580 processes the data; and, in addition, transmits the designed uplink data to the aircraft control device.
  • RF circuit 510 includes, but is not limited to, an antenna, to One less amplifier, transceiver, coupler, low noise amplifier (English name: Low Noise Amplifier, English abbreviation: LNA), duplexer, etc.
  • RF circuitry 510 can also communicate with the network and other devices via wireless communication.
  • the above wireless communication may use any communication standard or protocol, including but not limited to the global mobile communication system (English full name: Global System of Mobile communication, English abbreviation: GSM), general packet radio service (English full name: General Packet Radio Service, GPRS) ), code division multiple access (English full name: Code Division Multiple Access, English abbreviation: CDMA), wideband code division multiple access (English full name: Wideband Code Division Multiple Access, English abbreviation: WCDMA), long-term evolution (English full name: Long Term Evolution, English abbreviation: LTE), e-mail, short message service (English full name: Short Messaging Service, SMS).
  • GSM Global System of Mobile communication
  • GPRS General Packet Radio Service
  • CDMA Code Division Multiple Access
  • WCDMA Wideband Code Division Multiple Access
  • LTE Long Term Evolution
  • SMS Short Messaging Service
  • the memory 520 can be used to store software programs and modules, and the processor 580 executes various functional applications and data processing of the drone by running software programs and modules stored in the memory 520.
  • the memory 520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of drones (such as audio data, phone books, etc.).
  • memory 520 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the input unit 530 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the drone.
  • the input unit 530 can include a touch panel 531 and other input devices 532.
  • the touch panel 531 also referred to as a touch screen, can collect touch operations on or near the user (such as the user using a finger, a stylus, or the like on the touch panel 531 or near the touch panel 531. Operation), and drive the corresponding connecting device according to a preset program.
  • the touch panel 531 can include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the touch orientation of the user, and detects a signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts the touch information into contact coordinates, and sends the touch information.
  • the processor 580 is provided and can receive commands from the processor 580 and execute them.
  • the touch panel 531 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • input unit 530 may also include other input devices 532.
  • other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 540 can be used to display information input by the user or information provided to the user as well as various menus of the drone.
  • the display unit 540 can include a display panel 541.
  • a liquid crystal display (English name: Liquid Crystal Display, English abbreviation: LCD), an organic light emitting diode (English name: Organic Light-Emitting Diode, English abbreviation) can be used.
  • the display panel 541 is configured in the form of OLED or the like.
  • the touch panel 531 can cover the display panel 541. When the touch panel 531 detects a touch operation on or near it, the touch panel 531 transmits to the processor 580 to determine the type of the touch event, and then the processor 580 according to the type of the touch event.
  • a corresponding visual output is provided on display panel 541.
  • the touch panel 531 and the display panel 541 are used as two independent components to implement the input and input functions of the mobile phone in FIG. 13, in some embodiments, the touch panel 531 and the display panel 541 may be integrated. Realize the input and output functions of the phone.
  • the drone may also include at least one type of sensor 550, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of the ambient light, and the proximity sensor may turn off the display when the drone moves to the light. Panel 541 and/or backlight.
  • the accelerometer sensor can detect the acceleration of each direction (usually three axes). When it is stationary, it can detect the magnitude and direction of gravity. It can be used to identify the attitude of the drone (such as horizontal and vertical screen switching).
  • the mobile phone can also be configured with gyroscopes, barometers, hygrometers, thermometers, infrared sensors and other sensors, here No longer.
  • Audio circuit 560, speaker 561, and microphone 562 provide an audio interface between the user and the drone.
  • the audio circuit 560 can transmit the converted electrical data of the received audio data to the speaker 561, and convert it into a sound signal output by the speaker 561.
  • the microphone 562 converts the collected sound signal into an electrical signal, and the audio circuit 560 is used by the audio circuit 560. After receiving, it is converted into audio data, and then processed by the audio data output processor 580, sent to the other mobile phone via the RF circuit 510, or outputted to the memory 520 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • the UAV can help users send and receive e-mail, browse web pages and access streaming media through the WiFi module 570. It provides users with wireless broadband Internet access.
  • FIG. 13 shows the WiFi module 570, it can be understood that it does not belong to the essential configuration of the mobile phone, and can be omitted as needed within the scope of not changing the essence of the invention.
  • the processor 580 is the control center of the drone, interconnecting various portions of the entire drone using various interfaces and lines, by running or executing software programs and/or modules stored in the memory 520, and recalling stored in the memory 520.
  • processor 580 can include one or more processing units; for example, processor 580 can integrate an application processor and a modem processor, where the application processor primarily processes an operating system, user interface, and applications Etc.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 580.
  • the drone also includes a power source 590 (such as a battery) that supplies power to various components.
  • a power source 590 such as a battery
  • the power source can be logically coupled to the processor 580 through a power management system to manage functions such as charging, discharging, and power management through the power management system. .
  • the drone may also include a camera, a Bluetooth module, etc., and will not be described herein.
  • the processor 580 included in the terminal further has a function corresponding to the method for detecting obstacles of the aircraft and/or a method for acquiring flight positioning information and/or a method for acquiring flight height information.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division.
  • there may be another division manner for example, multiple units or components may be combined or Can be integrated into another The system, or some features can be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention which is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a U disk, a mobile hard disk, a read only memory (English full name: Read-Only Memory, English abbreviation: ROM), a random access memory (English full name: Random Access Memory, English abbreviation: RAM), magnetic A variety of media that can store program code, such as a disc or a disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Astronomy & Astrophysics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

一种飞行器的障碍物检测方法和装置,用于减少飞行器的障碍物检测误差,提高飞行器的障碍物检测精度,该方法包括:通过双目摄像头对目标障碍物进行图像采集,得到第一图像和第二图像(101A);确定目标障碍物投影在第一图像中的第一像素位置,以及投影在第二图像中的第二像素位置,并计算第一像素位置和所述第二像素位置之间的视差值(102A);根据该视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值,以用于检测飞行器的飞行方向上是否有障碍物阻挡(103A)。以及一种获取飞行定位信息及获取飞行高度信息的方法和飞行器,分别用于得到精确的定位信息以及得到精度的高度信息。

Description

飞行器的信息获取方法、装置及设备
本申请要求于2016年11月24日提交中国专利局、申请号为201611045197.6、发明名称为“一种飞行器的障碍物检测方法和装置”的中国专利申请的优先权,以及要求于2016年12月01日提交中国专利局、申请号为201611100259.9、发明名称为“一种获取飞行定位信息的方法及飞行器”的中国专利申请的优先权,以及要求于2016年12月01日提交中国专利局、申请号为201611100232.X、发明名称为“一种获取飞行高度信息的方法及飞行器”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及计算机技术领域,尤其涉及飞行器的障碍物检测及飞行定位信息、高度信息获取。
背景技术
无人驾驶飞机简称为飞行器,飞行器在国民经济、军事上都有很多应用,目前飞行器己被广泛应用于航拍摄影、电力巡检、环境监测、森林防火、灾情巡查、防恐救生、军事侦察、战场评估等领域,飞行器是利用无线电遥控设备和自备的程序控制装置操纵的不载人飞机。机上无驾驶舱,但安装有自动驾驶仪、程序控制装置、信息采集装置等设备,遥控站人员通过雷达等设备,对其进行跟踪、定位、遥控、遥测和数字传输。
飞行器在检测障碍物时通常采用如下两种方案:1、飞行器基于激光雷达进行障碍物的检测,2、飞行器基于超声波检测障碍物。对于方法1,飞行器中需要安装激光雷达,激光雷达检测障碍物易受到太阳光的影响,在强光下面,激光雷达无法准确探测障碍物,降低了障碍物检测的精确度。对于方法2,飞行器中需要安装超声波发生器,通过其发射的超声波来检测障碍物,这种超声波检测的方式对于非垂直平面或者异形物体的检测会出现很大的误差。
综上,上述方案需要在飞行器安装额外的器件用于障碍物检测,这不利 于飞行器的小型化发展,并且还存在障碍物检测精确度低的问题。
发明内容
有鉴于此,本发明实施例提供如下技术方案:
本发明提供了一种飞行器的障碍物检测方法,包括:
通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到;
确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置,计算第一像素位置和第二像素位置之间的视差值;
根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵,计算双目摄像头距离目标障碍物的深度值,以用于检测飞行器的飞行方向上是否有障碍物阻挡。
上述过程中,通过飞行器的内置双目摄像头实现前向障碍物的实时检测,不需要在飞行器中增加额外的器件设备,对于飞行器的飞行场景和障碍物的形状都不需要限制,通过图像的分析和计算可以准确的计算出双目摄像头距离目标障碍物的深度值,减少飞行器的障碍物检测误差,提高飞行器的障碍物检测精度。
本发明提供了一种飞行器的障碍物检测装置,包括:
图像采集模块,用于通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到;
视差计算模块,用于确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置,计算第一像素位置和第二像素位置之间的视差值;
深度计算模块,用于根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵,计算双目摄像头距离目标障碍物的深度值,以用于检测飞行器的飞行方向上是否有障碍物阻挡。
该实施方式的有益效果,参见上述与之对应的方法的有益效果。
本发明提供了一种飞行器的障碍物检测方法,包括:
飞行器通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到;
飞行器确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置,计算第一像素位置和第二像素位置之间的视差值;
飞行器根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵,计算双目摄像头距离目标障碍物的深度值,以用于检测飞行器的飞行方向上是否有障碍物阻挡。
该实施方式的有益效果,参见上述与之对应的方法的有益效果。
本发明提供了一种获取飞行定位信息的方法,该方法应用于飞行器,该飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,第二摄像头用于获取N个不同时刻对应的N个第二实时图像,N为大于或等于2的正整数,该方法包括:
根据N个第一实时图像,确定(N-1)个第一本征参数,以及根据N个第二实时图像,确定(N-1)个第二本征参数;其中,第一本征参数用于表征N个第一实时图像中相邻两帧图像的平移参数,第二本征参数用于表征N个第二实时图像中相邻两帧图像的平移参数;
通过第一摄像头,获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头,获取N个不同时刻中起始时刻的第二初始定位信息;
根据第一初始定位信息与(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,以及根据第二初始定位信息与(N-1)个第二本征参数,确定(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中,(N-1)个时刻为N个不同时刻中除起始时刻之外的(N-1)个时刻;
根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,获取N个不同时刻中结束时刻所对应的目标飞行定位信息。
通过上述过程,飞行器采用双目摄像头实现飞行器定位,可以实时获取多个不同时刻对应的图像,进而分析得到每帧图像之间的平移参数,两个摄像头分别利用平移参数获取对应的定位信息,最后采用预置定位约束条件修正定位信息,以得到更接近真实值的目标飞行定位信息,在不采用光流摄像头或者高精度惯性传感器的情况下,仍然可以得到精确的定位信息,减小误差值,同时还减少了飞行器的成本
本发明提供了一种飞行器,该飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,第二摄像头用于获取N个不同时刻对应的N个第二实时图像,N为大于或等于2的正整数,该飞行器包括:
第一确定模块,用于根据N个第一实时图像,确定(N-1)个第一本征参数,以及根据N个第二实时图像,确定(N-1)个第二本征参数;其中,第一本征参数用于表征N个第一实时图像中相邻两帧图像的平移参数,第二本征参数用于表征N个第二实时图像中相邻两帧图像的平移参数;
第一获取模块,用于通过第一摄像头,获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头,获取N个不同时刻中起始时刻的第二初始定位信息;
第二确定模块,用于根据第一初始定位信息与(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,以及根据第二初始定位信息与(N-1)个第二本征参数,确定(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中,(N-1)个时刻为N个不同时刻中除起始时刻之外的(N-1)个时刻;
第二获取模块,用于根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,获取N个不同时刻中结束时刻所对应的目标飞行定位信息。
该实施方式的有益效果,参见上述与之对应的方法的有益效果。
本发明提供了一种获取飞行定位信息的方法,该方法应用于飞行器,该飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,第二摄像头用于获取N个不同时刻对应的N个第二实时图像,N为大于或等于2的正整数,该方法包括:
飞行器根据N个第一实时图像,确定(N-1)个第一本征参数,以及根据N个第二实时图像,确定(N-1)个第二本征参数;其中,第一本征参数用于表征N个第一实时图像中相邻两帧图像的平移参数,第二本征参数用于表征N个第二实时图像中相邻两帧图像的平移参数;
飞行器通过第一摄像头,获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头,获取N个不同时刻中起始时刻的第二初始定位信息;
飞行器根据第一初始定位信息与(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,以及根据第二初始定位信息与(N-1)个第二本征参数,确定(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中,(N-1)个时刻为N个不同时刻中除起始时刻之外的(N-1)个时刻;
飞行器根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,获取N个不同时刻中结束时刻所对应的目标飞行定位信息。
该实施方式的有益效果,参见上述与之对应的方法的有益效果。
本发明提供了一种获取飞行高度信息的方法,该方法应用于飞行器,该飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取第一实时图像,第二摄像头用于获取第二实时图像,该方法包括:
根据第一实时图像获取第一深度图像,以及根据第二实时图像获取第二深度图像;
根据第一深度图像以及第二深度图像确定目标融合图像,目标融合图像中包含至少一个预设区域;
确定目标融合图像中每个预设区域对应的深度值;
根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息,获取飞行高度信息。
通过上述过程,采用双目摄像头测量飞行器的高度信息,与气压计测量高度信息相比,不会因为飞行器自身受到气流影响而导致高度信息测量的精度降低,此外,双目摄像头可以获取到各种复杂地形,并根据不同地形计算得到高度信息,从而提升测量的准确性,而且双目摄像头与深度摄像头相比,还具有成本较低的优势。
本发明提供了一种飞行器,该飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取第一实时图像,第二摄像头用于获取第二实时图像,该飞行器还包括:
第一获取模块,用于根据第一实时图像获取第一深度图像,以及根据第二实时图像获取第二深度图像;
第一确定模块,用于根据第一获取模块获取的第一深度图像以及第二深度图像确定目标融合图像,目标融合图像中包含至少一个预设区域;
第二确定模块,用于确定第一确定模块得到的目标融合图像中每个预设区域对应的深度值;
第二获取模块,用于根据第二确定模块确定的每个预设区域对应的深度值以及飞行器的当前飞行姿态信息,获取飞行高度信息。
该实施方式的有益效果,参见上述与之对应的方法的有益效果。
本发明提供了一种获取飞行高度信息的方法,方法应用于飞行器,飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取第一实时图像,第二摄像头用于获取第二实时图像,方法包括:
飞行器根据第一实时图像获取第一深度图像,以及根据第二实时图像获取第二深度图像;
飞行器根据第一深度图像以及第二深度图像确定目标融合图像,目标融合图像中包含至少一个预设区域;
飞行器确定目标融合图像中每个预设区域对应的深度值;
飞行器根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息,获取飞行高度信息。
该实施方式的有益效果,参见上述与之对应的方法的有益效果。
本发明提供了一种设备,包括:
处理器以及存储器;
存储器用于存储程序代码,并将程序代码传输给处理器;
处理器用于根据该程序代码中的指令执行上述飞行器的障碍物检测的方法,获取飞行定位信息的方法,获取飞行高度信息的方法。
本发明提供了一种存储介质,存储介质用于存储程序代码,该程序代码用于执行上述飞行器的障碍物检测的方法,获取飞行定位信息的方法,获取飞行高度信息的方法。
本发明提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述飞行器的障碍物检测的方法,获取飞行定位信息的方法,获取飞行高度信息的方法。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域的技术人员来讲,还可以根据这些附图获得其他的附图。
图1所示为根据本发明实施例的一种飞行器的障碍物检测方法的流程示意图;
图2所示为根据本发明实施例的双目立体视觉障碍物检测的整个工作流程示意图;
图3所示为根据本发明实施例的双目立体视觉障碍物检测中图像处理环节的流程示意图;
图4所示为根据本发明实施例的双目立体视觉障碍物检测中视差值计算环节的流程示意图;
图5a所示为根据本发明实施例的一种飞行器的障碍物检测装置的组成结构示意图;
图5b所示为根据本发明实施例的另一种飞行器的障碍物检测装置的组成结构示意图;
图5c所示为根据本发明实施例的另一种飞行器的障碍物检测装置的组成结构示意图;
图5d所示为根据本发明实施例的另一种飞行器的障碍物检测装置的组成结构示意图;
图5e所示为根据本发明实施例的一种视差计算模块的组成结构示意图;
图5f所示为根据本发明实施例的一种深度计算模块的组成结构示意图;
图5g所示为根据本发明实施例的另一种飞行器的障碍物检测装置的组成结构示意图;
图6所示为根据本发明实施例的飞行器的障碍物检测方法应用于飞行器的组成结构示意图。
图7所示为根据本发明实施例的获取飞行定位信息的方法一个实施例示意图;
图8所示为根据本发明实施例的安装有双目摄像头的飞行器示意图;
图9所示为根据本发明实施例的双目摄像头进行定位的示意图;
图10所示为根据本发明实施例的获取目标飞行定位信息的一个流程示意图;
图11所示为应用场景中双目摄像头的工作流程示意图;
图12所示为根据本发明实施例的飞行器一个实施例示意图;
图13所示为根据本发明实施例的飞行器另一个实施例示意图;
图14所示为根据本发明实施例的飞行器另一个实施例示意图;
图15所示为根据本发明实施例的飞行器另一个实施例示意图;
图16所示为根据本发明实施例的飞行器另一个实施例示意图;
图17所示为根据本发明实施例的飞行器另一个实施例示意图;
图18所示为根据本发明实施例的飞行器另一个实施例示意图;
图19所示为根据本发明实施例中获取飞行高度信息的方法一个实施例示意图;
图20所示为根据本发明实施例中安装有双目摄像头的飞行器示意图;
图21所示为根据本发明实施例中获取左右图像之间视差值的一个示意图;
图22所示为根据本发明实施例中获取图像深度值的一个流程示意图;
图23所示为根据应用场景中双目摄像头的工作流程示意图;
图24所示为根据本发明实施例的飞行器一个实施例示意图;
图25所示为根据本发明实施例的飞行器另一个实施例示意图;
图26所示为根据本发明实施例的飞行器另一个实施例示意图;
图27所示为根据本发明实施例的飞行器另一个实施例示意图;
图28所示为根据本发明实施例的飞行器另一个实施例示意图;
图29所示为根据本发明实施例的飞行器另一个实施例示意图;
图30所示为根据本发明实施例的飞行器另一个实施例示意图;
图31所示为根据本发明实施例的飞行器另一个实施例示意图;
图32所示为根据本发明实施例的飞行器一个结构示意图。
具体实施方式
为使得本发明的发明目的、特征、优点能够更加的明显和易懂,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,下面所描述的实施例仅仅是本发明一部分实施例,而非全部实施例。基于本发明中的实施例,本领域的技术人员所获得的所有其他实施例,都属于本发明保护的范围。
本发明的说明书和权利要求书及上述附图中的术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、***、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
以下分别进行详细说明。
本发明提供的飞行器的障碍物检测方法的一个实施例,具体可以应用于飞行器飞行过程中的目标障碍物避障场景中。
其中,飞行器(英文全称:Unmanned Aerial Vehicle,英文缩写:UAV)是利用无线遥控或程序控制来执行特定航空任务的飞行器,指不搭载操作人员的一种动力空中飞行器,采用空气动力为飞行器提供所需的升力,能够自动飞行或远程引导,既能一次性使用也能进行回收,又能够携带致命性和非致命性有效负载。
需要说明的是,飞行器具体可以是无人机、也可以是遥控飞机、航模飞机等。本实施例中,通过飞行器自带的双目摄像头实现对目标障碍物的图像拍摄,再通过对左右眼拍摄的图像进行视差值、深度值的计算,就可以确定出障碍物与飞行器之间的深度值,通过图像的分析计算就可以检测出障碍物, 无需在飞行器中内置额外器件,有利于飞行器的小型化发展。请参阅图1,图1示出了本实施例的飞行器的障碍物检测方法,可以包括如下步骤:
101A、飞行器通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像。
其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到。
在本实施例中,飞行器可以对前方出现的目标障碍物进行实时的检测,飞行器中设置有双目摄像头,该双目摄像头的左右眼(即两个摄像头)实时拍摄目标障碍物,并生成在不同时刻拍摄的图像,本实施例中,飞行器可以通过飞行器中已有的双目摄像头拍摄目标障碍物。飞行器配置的双目摄像头可以被动接收可见光,因此不会受到强光干扰,在复杂场景下,也可以很好的估算物体的深度信息,很好的克服了激光雷达和超声波的缺陷。同时,本实施例中使用的双目摄像头是普通的摄像头,因此其硬件成本比激光雷达要低很多。
本实施例中每对摄像头在同一时刻采集相同的目标障碍物,从而可以得到两个图像。其中,为了区别上述两个图像,由双目摄像头中的左眼拍摄得到的图像定义为“第一图像”,由双目摄像头中的右眼拍摄的图像定义为得到“第二图像”,第一图像和第二图像只是用于区分两个摄像头分别拍摄到的图像。
102A、飞行器确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置,计算第一像素位置和第二像素位置之间的视差值。
在本实施例中,第一图像和第二图像,是双目摄像头对同一时刻的同一个目标障碍物进行拍摄而得到的两个图像,同一目标障碍物投影到双目摄像头的左右两个摄像头中,其位置会有一些差别。为了区别上述两个投影位置,将目标障碍物在第一图像中的投影位置定义为“第一像素图像位置”,将目标障碍物在第二图像中的投影位置定义为“第二像素图像位置”。针对同一目标障碍物在摄像头中的投影会有一个像素位置,左右两个摄像头的像素位 置会有一个偏移值,这个偏移值就是第一像素位置和第二像素位置之间的视差值。
本实施例中可以使用双目立体视觉(Binocular Stereo Vision)来计算两个像素位置之间的视差值。基于视差原理,可以利用摄像头从不同的位置获取被测目标障碍物的两幅图像,通过计算图像对应点间的位置偏差,来获取物体三维几何信息。双目立体视觉融合两个摄像头获得的图像并观察它们之间的差别,可以获得明显的深度感,建立特征间的对应关系,将同一空间物理点在不同图像中的映像点对应起来,这个差别也可以称作视差图像。
在一些可能的实施方式中,步骤102A中的确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,包括:
A1、根据飞行器在双目摄像头中形成的机身尺寸图像确定图像选择窗口,图像选择窗口的总像素值大于机身尺寸图像的总像素值、且小于第一图像的总像素值、且小于第二图像的总像素值;
A2、使用图像选择窗口分别从第一图像、第二图像中选择出与图像选择窗口对应的第一子图像和第二子图像;
A3、使用全局匹配(Semi-Global Block Matching,SGBM)算法,对第一子图像和第二子图像分别拍摄到的目标障碍物进行图像点的匹配,通过匹配成功的图像点确定目标障碍物投影在第一子图像中的第一像素位置,以及目标障碍物投影在第二子图像中的第二像素位置。
其中,为了提高对图像的处理速度,满足飞行器的实时计算需求,可以根据飞行器在双目摄像头中形成的机身尺寸图像确定图像选择窗口。在飞行轨迹以外的障碍物不会影响到飞行器的飞行,飞行器在飞行方向上只需要保证实时的探测正前方的障碍物。因此,本发明实施例中可以预先根据飞行器的机身尺寸确定出图像选择窗口,该图像选择窗口用于对第一图像、第二图像进行裁剪,选择出与图像选择窗口对应的第一子图像和第二子图像。其中,第一子图像是第一图像中与图像选择窗口相同大小的图像内容,第二子图像是第二图像中与图像选择窗口相同大小的图像内容。图像选择窗口的大小只需要大于飞行器的实际大小均可,即可保证飞行器在未探测到障碍时不会碰 撞到障碍,则在步骤A3中,只需要计算该图像选择窗口内的视差值,不需要计算图像选择窗口之外的视差值,从而可以大大减少图像处理资源的开销。
本实施例中,在步骤A3中可以使用SGBM算法对第一子图像和第二子图像分别拍摄到的目标障碍物进行图像点的匹配。SGBM算法可以基于Open CV完成两个图像中的图像点匹配,并结合步骤A1和步骤A2中对原有图像的窗口选择,因此SGBM算法只需要计算图像选择窗口内的视差值即可。需要说明的是,在其它一些可能的实施方式中,还可以使用其它的立体匹配算法,例如OpenCV2.1中的BM算法和GC算法,此处不做限定。
在一些可能的实施方式中,步骤101A通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像之后,本实施例提供的飞行器的障碍物检测方法还包括:
B1、对第一图像和第二图像分别进行缩放处理和裁剪处理;
B2、将处理后的第一图像、第二图像分别转换为第一灰度图和第二灰度图,并对第一灰度图和第二灰度图分别进行均衡化处理;
在执行步骤B1和B2的实现场景下,步骤102中的确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,包括:
B3、从均衡化处理后的第一灰度图中确定出目标障碍物投影到的第一像素位置,以及从均衡化处理后的第二灰度图中确定出目标障碍物投影到的第二图像位置。
其中,对于双目摄像头采集目标对象得到的图像,若存在干扰情况,还可以对图像进行预处理,例如可以进行缩放处理、裁剪处理和灰度直方图的均衡化处理。
其中,在对图像进行缩放处理时,可以将摄像头采集目标对象得到的图像分别缩放到一个适合进行目标障碍物识别的比例,例如可以放大图像也可以缩小图像。在对于图像进行裁剪处理时,可以剪裁掉左右两幅图像边缘的多个像素点,这样可以减少视觉处理的计算量。在一些实施方式中,如果一副图像的像素占有很多的灰度级而且分布均匀,那么这样的图像往往有高对比度和多变的灰度色调,则可以对灰度图进行均衡化处理,也称为直方图均 衡化。作为一种示例,可以利用一种能仅靠输入图像直方图信息自动达到这种处理效果的变换函数来实现。它的基本思想是对图像中像素个数多的灰度级进行展宽,而对图像中像素个数少的灰度进行压缩,从而扩展像素取值的动态范围,提高了对比度和灰度色调的变化,使图像更加清晰。通过前述对图像的预处理,还可以使图像的光照均衡,图像大小适合移动设备处理。
在本实施例执行前述步骤B1和步骤B2的实现场景下,对于飞行器中目标摄像头实时采集目标障碍物得到的图像,若先对该图像转换得到的灰度图进行了均衡化处理,则视差计算时需要的图像就是均衡化处理后的灰度图,对左右两个摄像头采集到的灰度图中投影到的目标障碍物进行检测可以得到第一像素位置和第二像素位置。
在一些可能的实施方式中，步骤101A通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集，得到第一图像和第二图像之后，本发明实施例提供的飞行器的障碍物检测方法还包括:
C1、获取双目摄像头的内参信息和外参信息,内参信息包括:左眼的径向畸变参数和切向畸变参数、右眼的径向畸变参数和切向畸变参数,外参信息包括:双目摄像头中左眼和右眼之间的旋转矩阵和偏移矩阵;
C2、根据内参信息分别对第一图像和第二图像进行畸变补偿,得到畸变补偿完成后的第一图像和畸变补偿完成后的第二图像;
C3、根据外参信息,对畸变补偿完成后的第一图像和畸变补偿完成后的第二图像进行同一水平面上的图像校正处理。
其中，为了提高对图像计算的准确度，若双目摄像头没有进行提前标定，还可以对双目摄像头采集的图像进行校正，其中包括图像的畸变校正和图像的对准。例如，在对图像进行裁剪之后，可以使用Open CV的remap函数根据之前摄像头标定得到的内参和外参，对图像做畸变校正和对准，经过remap函数后的左右眼图像就满足数学意义上处于同一水平线上的要求。其中，双目摄像头的外参信息中包括旋转矩阵和偏移矩阵，通过旋转矩阵和偏移矩阵对第一图像和第二图像的校正，可以对第一图像和第二图像进行对准校正，使得第一图像和第二图像满足处于同一水平线上的要求。
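作为一种示意（Python + OpenCV），可以按如下方式利用标定得到的内参和外参完成畸变补偿与同一水平面上的对准，其中K_l、D_l、K_r、D_r、R、T等参数名均为假设的标定结果:

import cv2

def rectify_pair(img_l, img_r, K_l, D_l, K_r, D_r, R, T):
    size = (img_l.shape[1], img_l.shape[0])
    # stereoRectify同时给出视差深度映射矩阵Q
    R_l, R_r, P_l, P_r, Q, _, _ = cv2.stereoRectify(K_l, D_l, K_r, D_r, size, R, T)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R_l, P_l, size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R_r, P_r, size, cv2.CV_32FC1)
    # remap完成畸变校正与对准，使左右图像处于同一水平线上
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q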
103A、根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵,计算双目摄像头距离目标障碍物的深度值,以用于检测飞行器的飞行方向上是否有障碍物阻挡。
在本实施例中，计算出第一像素位置和第二像素位置之间的视差值之后，通过对双目摄像头的摄像头参数进行计算，可以预先确定出左右两个摄像头的视差深度映射矩阵，然后再根据视差值和深度值之间的反比关系，可以计算出双目摄像头距离目标障碍物的深度值。其中，目标障碍物的深度值是指目标障碍物所在的平面与双目摄像头之间的垂直距离，通过计算出的深度值可以在飞行器的飞行方向上确定距离飞行器多远的距离会出现障碍物。
在一些可能的实施方式中,在步骤103A根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值之后,本实施例还可以包括:
D1、将双目摄像头距离目标障碍物的深度值发送给飞行器的飞行控制模块,由飞行控制模块根据双目摄像头距离目标障碍物的深度值判断在其飞行方向上是否有障碍物阻挡。
其中,通过步骤103A计算出双目摄像头距离目标障碍物的深度值之后,飞行控制模块可以根据该深度值判断在其飞行方向上是否有障碍物阻挡,以及在其飞行方向上存在障碍物阻挡时,飞行器距离目标障碍物的距离。
作为一种示例性的实现方式,在前述执行步骤A1至步骤A3的实现场景下,步骤103A根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值,可以包括如下步骤:
E1、根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵分别计算出与图像选择窗口对应的所有像素点的深度值;
E2、将图像选择窗口划分为多个图像子窗口,根据与图像选择窗口对应的所有像素点的深度值分别计算出每个图像子窗口的深度值;
E3、从每个图像子窗口的深度值中选择深度值最小的图像子窗口,确定深度值最小的图像子窗口的深度值为双目摄像头距离目标障碍物的深度值。
其中，在前述步骤A1至步骤A3的实现场景下，根据飞行器在双目摄像头中形成的机身尺寸图像确定图像选择窗口，对于第一图像和第二图像都使用图像选择窗口分别划分出第一子图像和第二子图像。因此在步骤E1中，只需要对第一子图像和第二子图像中每个像素点的深度值进行计算，不需要对第一图像和第二图像中处于图像选择窗口以外的像素点的深度值计算，从而可以大大减少计算深度值所需要的计算资源开销，例如可以减少中央处理器(Central Processing Unit,CPU)的计算负荷。
其中,步骤E1中计算图像选择窗口内的像素点的深度值可以是用视差值和视差深度映射矩阵(Disparity-to-Depth Mapping Matrix)做矩阵乘法来获得实际的三维点位置。作为一种示例,可以使用OpenCV提供的stereoRectify函数来获得该映射矩阵和像素点的深度值。计算出与图像选择窗口对应的所有像素点的深度值之后,将图像选择窗口划分为多个图像子窗口,例如将其等分成4×4的子窗口。
在步骤E2中计算每个图像子窗口的深度值时,可以从每个图像子窗口的所有像素点的深度值中,选择深度值最小的作为该图像子窗口的深度值,这表示在该子窗口内距离飞行器最近的障碍物的距离。
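步骤E1至步骤E3的计算可以参考如下示意（Python + NumPy），其中4×4的划分仅沿用上文示例，depth为图像选择窗口内每个像素点的深度值矩阵:

import numpy as np

def subwindow_min_depth(depth, rows=4, cols=4):
    h, w = depth.shape
    mins = np.full((rows, cols), np.inf)
    for i in range(rows):
        for j in range(cols):
            block = depth[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            valid = block[np.isfinite(block) & (block > 0)]   # 剔除无效深度
            if valid.size:
                mins[i, j] = valid.min()                      # 子窗口内距离飞行器最近的障碍物
    return mins    # 所有子窗口深度值中的最小值即双目摄像头距目标障碍物的深度值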
在一些可能的实施方式中,在步骤E3确定深度值最小的图像子窗口的深度值为双目摄像头距离目标障碍物的深度值之后,本实施例还可以包括:
E4、将每个图像子窗口的深度值均发送给飞行器的飞行控制模块,由飞行控制模块根据每个图像子窗口的深度值选择避障方向后再调整飞行器的飞行姿态。
其中,在前述执行步骤E1至步骤E3的实现场景下,将图像选择窗口划分成的多个图像子窗口都会计算出深度值,则也可以将所有图像子窗口的深度值发送给飞行控制模块,由飞行控制模块根据每个图像子窗口的深度值选择避障方向后再调整飞行器的飞行姿态。其中,飞行器的飞行姿态可以指的是飞行器的朝向,高度和位置,在使用飞行器避障飞行的实现过程中,主要控制飞行器与目标障碍物保持适当距离进行的位置移动。例如,调整飞行姿态可以只是控制飞行器往前飞行,也可以指控制飞行器实现翻滚等飞行动作。
通过以上实施例的描述可知,首先通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到, 然后确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置计算第一像素位置和第二像素位置之间的视差值,最后根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值。上述过程中,通过飞行器的内置双目摄像头实现前向障碍物的实时检测,不需要在飞行器中增加额外的器件设备,对于飞行器的飞行场景和障碍物的形状都不需要限制,通过图像的分析和计算可以准确的计算出双目摄像头距离目标障碍物的深度值,减少飞行器的障碍物检测误差,提高飞行器的障碍物检测精度。
为便于更好的理解和实施本发明实施例的技术方案,下面以结合应用场景为例来进行具体说明。
以飞行器具体为无人机为例说明本实施例中的障碍物检测方法,请一并参阅图2,图2示出了本发明实施例提供的双目立体视觉障碍物检测的整个工作流程示意图。
需要说明的是，在无人机上安装双目摄像头后可以进行摄像头的标定。对单个摄像头需要做标定，其目的是为了得到摄像头的径向畸变(例如桶形畸变)和切向畸变的参数，称为内参(intrinsic parameters)。双目立体视觉避障要求左右两个眼的摄像头安装在同一个水平线上面，并且间隔在6cm~10cm左右。小于6cm的间隔，图像的视差值太小，不能得到合理的深度值；间隔太大，近处的物体会无法匹配。安装好的摄像头无法在数学上实现精确的同一水平线，因此需要对其做立体标定。作为一种示例，立体标定可以使用张正友标定法，这样可以求出两个镜头之间的旋转矩阵和偏移矩阵，这组值称为摄像头的外参(extrinsic parameters)。图像在被采集到后会使用内参对其进行畸变补偿，然后使用外参来旋转和平移图像，使其达到数学上要求的同一水平线上。
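下面给出摄像头标定环节的一个简化示意（Python + OpenCV），棋盘格规格、方格边长以及calibration_pairs（预先采集的左右灰度图对）均为假设输入，仅用于说明先单目标定、再立体标定的流程:

import cv2
import numpy as np

def stereo_calibrate(calibration_pairs, pattern=(9, 6), square=0.025):
    # calibration_pairs: 预先采集的(左灰度图, 右灰度图)列表，棋盘格内角点数与方格边长均为假设值
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, pts_l, pts_r = [], [], []
    size = None
    for gray_l, gray_r in calibration_pairs:
        ok_l, c_l = cv2.findChessboardCorners(gray_l, pattern)
        ok_r, c_r = cv2.findChessboardCorners(gray_r, pattern)
        if ok_l and ok_r:
            obj_pts.append(objp)
            pts_l.append(c_l)
            pts_r.append(c_r)
            size = gray_l.shape[::-1]
    # 先对单个摄像头做标定，得到内参（含畸变参数）
    _, K_l, D_l, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
    _, K_r, D_r, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
    # 再做立体标定，求出两个镜头之间的旋转矩阵R与偏移矩阵T，即外参
    _, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, pts_l, pts_r, K_l, D_l, K_r, D_r, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, D_l, K_r, D_r, R, T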
接下来介绍双目立体视觉障碍物检测的工作流程。
无人机通过其搭载的双目摄像头采集左右眼的实时图像。左右眼的实时图像会经过图像深度计算模块来生成对应的深度值。无人机根据深度值来确定其飞行方向上是否有障碍物阻挡。如果有障碍物阻挡，会把距离当前障碍物的深度值发送给无人机的飞行控制模块。本实施例中计算出的障碍物的深度值，是指障碍物所在的平面与双目摄像头之间的垂直距离。
如图3所示,图3为本发明实施例的双目立体视觉障碍物检测中图像处理环节的流程示意图。无人机中可以通过立体视觉模块负责计算场景的深度信息,其工作流程分为图像的缩放和裁剪,图像畸变补偿,图像的对准,视差计算和深度值计算,接下来分别对各个过程进行举例说明。
首先说明图像的缩放与裁剪,无人机使用双目视觉探测障碍物时,不需要高精度图片,因此双目摄像头采集的图片可以缩放到320x240的格式。因为左右眼的视差存在,所以左右两幅图像的边缘是难以匹配的,在处理时可以剪裁掉左右两幅图像边缘20个像素左右,这样可以减少视觉处理的计算量。
然后进行图像校正，图像校正包括图像的畸变校正和图像的对准。在图像裁剪以后，可以使用openCV的remap函数根据之前摄像头标定得到的内参和外参对图像做畸变校正和对准。经过remap函数后的左右眼图像就满足数学意义上处于同一水平线上的要求。其中一步是对单张图片进行畸变校正，另外一步是对两张图片做平移和旋转，使其在数学意义上处于同一个水平面上。
接下来说明视差值的计算过程。双目视觉的深度值计算要先求取左右图像对应点之间的视差值。实际世界中的同一物体投影到左右两个摄像头中,其像素位置会有一些差别。针对同一个实际空间中的点在摄像头中的投影会有一个像素位置,左右两个摄像头的像素位置会有一个偏移值,这个值就是视差。
如图4所示,图4为本发明实施例的双目立体视觉障碍物检测中视差值计算环节的流程示意图,物理点P在左右摄像头中的投影分别是点XL和XR。因为双目视觉要求在同一水平线上面,所以其Y值都相同。视差值(disparity)即是XL-XR。图4中,f表示左右摄像头的焦点位置,Tx表示两个摄像头之间的位移,Z就是P点的深度值。在本实施例中,以使用OpenCV提供的SGBM算法为例,来说明图像点的匹配和视差值的计算。
为了减少SGBM算法的运算量、提高处理速度，在嵌入式设备上保证图像处理计算的实时性，本实施例中没有对整张图像做SGBM。根据无人机运动的特性，由于在飞行轨迹以外的障碍物不会影响到无人机的飞行，因此可以只对其飞行轨迹的正前方的障碍物做探测。作为一种示例，可以利用3维投影的计算方法得到一个图像选择窗口，该图像选择窗口的大小只需要大于无人机的实际大小，即可保证无人机在未探测到障碍时不会碰撞到障碍，只需要计算该窗口内的视差值，不需要计算窗口之外的视差值，这可以大大减少CPU的开销。
深度值是用视差值和视差深度映射矩阵做矩阵乘法来获得实际的三维点位置,其计算公式如下:
[X Y Z W]T=Q*[x y disparity(x,y) 1]T
其中，x,y是实际三维空间中的点在图像中的投影坐标，单位是像素。disparity(x,y)表示在像素点(x,y)处的视差值，Q矩阵是视差深度映射矩阵，它是通过摄像头内参和外参计算得到，即由摄像头的标定和校准获得，其元素由Tx、f、Cx、Cy等参数确定。其中，Tx是两个摄像头之间的水平偏移，f是焦距，Cx和Cy是内参，用于表示光心和焦点的位置偏移。
在本实施例中,可以使用OpenCV提供的stereoRectify函数来获得该映射矩阵。通过矩阵乘法得到的是实际的三维点的齐次坐标,计算出的深度值是Zc=Z/W。
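上述矩阵乘法在OpenCV中也可以由reprojectImageTo3D一步完成，下面给出一个示意（Python），disparity与Q分别为前述视差图与视差深度映射矩阵:

import cv2

def disparity_to_depth(disparity, Q):
    points_3d = cv2.reprojectImageTo3D(disparity, Q)   # 每个像素对应的三维点(X, Y, Z)
    return points_3d[:, :, 2]                          # 第三个分量即深度值Zc=Z/W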
最后对无人机的障碍检测进行说明，通过双目视觉模块得到了图像选择窗口中的所有像素点的深度值(单位是物理值单位，例如米)，将图像选择窗口等分成3x3的图像子窗口。对每个子窗口求其深度值的最小值，则子窗口中所有像素点的深度值的最小值就是该子窗口的深度最小值，该深度最小值表示在该子窗口内离无人机最近的障碍物的距离，其中，障碍物与摄像头的距离是与主光轴平行的垂直于障碍物平面的连线。如果距离小于某个门限值(例如1米)，那么表示无人机将会碰撞到该障碍物。每个子窗口的最小深度值可能不一样，这可以帮助无人机飞控系统来决定该往哪个方向避障。作为一种示例，可以将所有子窗口的深度值都发送给飞控系统。
对于障碍物的探测过程举例说明如下:可以设定一个阈值，比如1.5米，那么只要有一个图像子窗口的深度值小于1.5米，就意味着无人机再飞1.5米就会碰到障碍物。则可以根据其它图像子窗口情况来判断向哪个方向转向来避开，比如左边的子窗口是3米，那么可以向左避障，如果所有图像子窗口都是1.5米，那么采用随机转向来避障。上述避障策略只是最简单的避障策略，避障策略还可以结合人工智能，定位，地图等来实现。
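该最简单避障策略可以用如下示意代码表达（Python），阈值1.5米、按左右半区取最小值的划分方式等均为示例假设:

import random
import numpy as np

def choose_avoid_direction(subwindow_depths, threshold=1.5):
    # subwindow_depths为各图像子窗口的深度值矩阵（单位:米）
    depths = np.asarray(subwindow_depths, dtype=float)
    if np.nanmin(depths) >= threshold:
        return "forward"                               # 前方无障碍，继续前飞
    half = depths.shape[1] // 2
    left = np.nanmin(depths[:, :half])
    right = np.nanmin(depths[:, half:])
    if left >= threshold and left >= right:
        return "left"                                  # 左侧更空旷，向左避障
    if right >= threshold:
        return "right"
    return random.choice(["left", "right"])            # 各方向都被挡住时随机转向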
本实施例中,通过无人机的内置双目摄像头实现前向障碍物的实时检测。通过设定图像选择窗口来减少双目匹配算法的运算量,达到无人机障碍物探测的实时性要求。通过划分图像子窗口来获取无人机前向不同位置的深度值来帮助无人机的飞行控制模块控制无人机的转向。
需要说明的是,对于前述的各方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本发明并不受所描述的动作顺序的限制,因为依据本发明,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述的实施例均属于优选实施例,所涉及的动作和模块并不一定是本发明所必须的。
为便于更好的实施本发明实施例的上述方案,下面还提供用于实施上述方案的相关装置。
请参阅图5a所示,本发明实施例提供的一种飞行器的障碍物检测装置500A,可以包括:图像采集模块501A、视差计算模块502A、深度计算模块503A,其中,
图像采集模块501A,用于通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到;
视差计算模块502A,用于确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置计算第一像素位置和第二像素位置之间的视差值;
深度计算模块503A,用于根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值,以用于检测所述飞行器的飞行方向上是否有障碍物阻挡。
在一些可能的实施方式中,请参阅图5b所示,飞行器的障碍物检测装置500A还包括:图像预处理模块504A,其中,
图像预处理模块504A,用于图像采集模块501A通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像之后,对第一图像和第二图像分别进行缩放处理和裁剪处理;将处理后的第一图像、第二图像分别转换为第一灰度图和第二灰度图,并对第一灰度图和第二灰度图分别进行均衡化处理;
视差计算模块502A，具体用于从均衡化处理后的第一灰度图中确定出目标障碍物投影到的第一像素位置，从均衡化处理后的第二灰度图中确定出目标障碍物投影到的第二像素位置。
在本发明的一些实施例中,请参阅图5c所示,相对于图5a所示,飞行器的障碍物检测装置500A,还包括:
获取模块504A，用于图像采集模块501A通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集，得到第一图像和第二图像之后，获取双目摄像头的内参信息和外参信息，内参信息包括:左眼的径向畸变参数和切向畸变参数、右眼的径向畸变参数和切向畸变参数，外参信息包括:双目摄像头中左眼和右眼之间的旋转矩阵和偏移矩阵;
畸变补偿模块505A,用于根据内参信息分别对第一图像和第二图像进行畸变补偿,得到畸变补偿完成后的第一图像和畸变补偿完成后的第二图像;
校正模块506A,用于根据外参信息对畸变补偿完成后的第一图像和畸变补偿完成后的第二图像进行同一水平面上的图像校正处理。
在一些可能的实施方式中,请参阅图5d所示,相对于图5a所示,飞行器的障碍物检测装置500A,还包括:
第一发送模块507A,用于深度计算模块503A根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值之后,将双目摄像头距离目标障碍物的深度值发送给飞行器的飞行控制模块,由飞行控制模块根据双目摄像头距离目标障碍物的深度值判断在其飞行方向上是否有障碍物阻挡。
在一些可能的实施方式中,请参阅图5e所示,视差计算模块502A,包括:
窗口确定单元5021A,用于根据飞行器在双目摄像头中形成的机身尺寸图像确定图像选择窗口,图像选择窗口的总像素值大于机身尺寸图像的总像素值、且小于第一图像的总像素值、且小于第二图像的总像素值;
图像区域选择单元5022A,用于使用图像选择窗口分别从第一图像、第二图像中选择出与图像选择窗口对应的第一子图像和第二子图像;
图像匹配单元5023A,用于使用全局匹配SGBM算法对第一子图像和第二子图像分别拍摄到的目标障碍物进行图像点的匹配,通过匹配成功的图像点确定目标障碍物投影在第一子图像中的第一像素位置,以及目标障碍物投影在第二子图像中的第二像素位置。
在一些可能的实施方式中,请参阅图5f所示,深度计算模块503A,包括:
像素点深度值计算单元5031A,用于根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵分别计算出与图像选择窗口对应的所有像素点的深度值;
子窗口深度值计算单元5032A,用于将图像选择窗口划分为多个图像子窗口,根据与图像选择窗口对应的所有像素点的深度值分别计算出每个图像子窗口的深度值;
深度值确定单元5033A,用于从每个图像子窗口的深度值中选择深度值最小的图像子窗口,确定深度值最小的图像子窗口的深度值为双目摄像头距离目标障碍物的深度值。
在一些可能的实施方式中,在深度计算模块503A如图5f的实现场景下,请参阅图5g所示,相对于如图5a所示,飞行器的障碍物检测装置500A,还包括:
第二发送模块508A,用于深度值确定模块确定深度值最小的图像子窗口的深度值为双目摄像头距离目标障碍物的深度值之后,将每个图像子窗口的深度值都发送给飞行器的飞行控制模块,由飞行控制模块根据每个图像子窗口的深度值选择避障方向后再调整飞行器的飞行姿态。
上述实施例的描述可知,首先通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,第一图像由双目摄像头中的左眼拍摄得到,第二图像由双目摄像头中的右眼拍摄得到,然后 确定目标障碍物投影在第一图像中的第一像素位置,以及目标障碍物投影在第二图像中的第二像素位置,并根据第一像素位置和第二像素位置计算第一像素位置和第二像素位置之间的视差值,最后根据第一像素位置和第二像素位置之间的视差值、预置的视差深度映射矩阵计算双目摄像头距离目标障碍物的深度值。本发明实施例通过飞行器的内置双目摄像头实现前向障碍物的实时检测,不需要在飞行器中增加额外的器件设备,对于飞行器的飞行场景和障碍物的形状都不需要限制,通过图像的分析和计算可以准确的计算出双目摄像头距离目标障碍物的深度值,减少飞行器的障碍物检测误差,提高飞行器的障碍物检测精度。
请参阅图6，图6示出了本发明实施例的一种飞行器的结构示意图，该飞行器1100可因配置或性能不同而产生比较大的差异，可以包括一个或一个以上中央处理器(central processing units,CPU)1122(例如，一个或一个以上处理器)和存储器1132，一个或一个以上存储应用程序1142或数据1144的存储介质1130(例如一个或一个以上海量存储设备)、摄像头1152、传感器1162。其中，存储器1132和存储介质1130可以是短暂存储或持久存储。存储在存储介质1130的程序可以包括一个或一个以上模块(图示没标出)，每个模块可以包括对飞行器中的一系列指令操作。此外，中央处理器1122可以设置为与存储介质1130通信，在飞行器1100上执行存储介质1130中的一系列指令操作。本领域技术人员可以理解，图6中示出的飞行器结构并不构成对飞行器的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。
飞行器1100还可以包括一个或一个以上电源1126，一个或一个以上无线网络接口1150，一个或一个以上输入输出接口1158，和/或，一个或一个以上操作系统1141，例如安卓系统等等。
飞行器中包括的摄像头1152，该摄像头具体可以是数字摄像头，也可以是模拟摄像头，摄像头1152具体为双目摄像头，摄像头的分辨率可以根据实际需要来选择，摄像头的结构组件可以包括镜头、图像传感器等，可以结合具体场景来配置。
飞行器还可以包括:传感器1162,比如运动传感器以及其他传感器。作为一种示例,作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别飞行器姿态的应用(比如飞行器偏航角、横滚角、俯仰角的测算、磁力计姿态校准)、识别相关功能等;至于飞行器还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
上述实施例中由飞行器所执行的飞行器的障碍物检测方法步骤可以基于该图6所示的飞行器结构。
此外,由于缺乏外界辅助导航,飞行器难以在未知环境下估计飞行器的定位与运动,飞行器自主导航的过程中需要解决这个关键问题。而这个问题的解决方法与飞行器机载传感器的类型紧密联系,现有方案中,可以通过在飞行器机身上安装单目摄像头、光流摄像头或者惯性传感器测量得到飞行器的定位信息,利用该定位信息对飞行器进行飞行控制。
然而,在实际应用中,单目摄像头和惯性传感器的定位的精度较差,累积误差大,而光流摄像头或者高精度的惯性传感器通常成本较高,从而导致飞行器的成本的增加,不利于飞行器应用的普遍性。
因此,本实施例提供了一种获取飞行定位信息的方法及飞行器,可以得到更接近真实值的目标飞行定位信息,在不采用光流摄像头或者高精度惯性传感器的情况下,仍然可以得到精确的定位信息,减小误差值,同时还减少了飞行器的成本。
现如今,无人机定位悬停可实现误差在垂直10厘米,水平1米精度范围内的自动悬停,当需要更高精度时,便需手动来进行微调。无人机实现自动悬停实质上便是将其固定在预先设定好的高度位置与水平位置上,这也就是说,要实现悬停这一动作,事先读取自身的位置,即产生一组三维坐标这一步显得至关重要。比较准确地确定无人机的位置信息是无人机完成定位悬停这一动作的前提与基础。
无人机所采用的定位技术中较为常见的有如下几种:
一、以全球定位系统(英文全称:Global Positioning System,英文缩写:GPS)模块为主的定位。GPS在综合至少4颗卫星的位置信息后，可实现无人机的空间定位。利用以GPS为中心，辅助以各种传感器的定位方法是如今无人机所采用的主流定位方案。为了应对GPS系统中选择可用性技术(英文全称:Selective availability,英文缩写:SA)造成的误差，无人机所搭载的GPS通常利用差分GPS技术来提高定位精度。
二、运用视觉系统的定位。机载摄像机的持续拍摄，为导航系统提供连续的图像帧，在图像特征匹配的计算程序中，特征追踪器从连续的两个图像帧中获取自然地标信息，并在一对自然特征中测出位移。通过周期性地记录新特征点，并比较重复的特征点，便可以测算出各图像捕捉序列之间用作三维几何投影的单应性矩阵，从而可以实现对无人机的定位。
三、无线电加激光定点的高精度定位方案。无线电定位是在已知导航台的精确位置下,通过接收器对导航台所发出的无线电信号进行接收,计算信号发出到接收之间间隔的时间,以处理得到导航台至目标物之间的相对距离来达成位置的确定。
然而，在这三种方式中，由于视觉系统摆脱了需要接收GPS信号的束缚，可以在没有GPS信号的情况下通过与惯性传感器等部件的配合，保持无人机的稳定，所以使用该方案的无人机可以运用在一些环境特征明显的地区，如附近有河流，房屋等一些工作环境中。本发明主要采用视觉系统进行定位，下面将详细进行说明。
请一并参阅图7,图7示出了本发明实施例的获取飞行定位信息的方法流程示意图,该方法包括:
101B、包含第一摄像头以及第二摄像头的飞行器根据N个第一实时图像确定(N-1)个第一本征参数，并根据N个第二实时图像确定(N-1)个第二本征参数，其中，第一摄像头用于获取N个不同时刻所对应的N个第一实时图像，第二摄像头用于获取该N个不同时刻对应的N个第二实时图像，N为大于或等于2的正整数;第一本征参数用于表征N个第一实时图像中相邻两帧图像的平移参数，第二本征参数用于表征N个第二实时图像中相邻两帧图像的平移参数。
本实施例中,飞行器包括了一组双目摄像头,即具有两个摄像头,分别定义为第一摄像头和第二摄像头。双目摄像头可以同时提供深度信息以及定位信息,其中,深度信息主要是指高度信息,获取深度信息的方法可以是将双目摄像头安装在飞行器的垂直向下处,这样就能更好地捕捉高度变化。
第一摄像头和第二摄像头分别位于飞行器的两个不同位置,并且同时抓拍N帧图像,N是大于或等于2的正整数,这样才能保证得到前后时刻的两帧图像,从而可以进行特征比对。对于第一摄像头获取到的N个时刻分别对应的实时图像均称之为第一实时图像,而第二摄像头获取到的N个时刻分别对应的实时图像均称之为第二实时图像。
第一摄像头获取到的N个第一实时图像分别为N个时刻对应的N帧图像,将N帧图像中相邻前后两帧图像通过特征比对后得到(N-1)个平移参数,本实施例中将该(N-1)个平移参数分别称为第一本征参数。同样地,第二摄像头获取到的N个第二实时图像也分别为N个时刻对应的N帧图像,将N帧图像中相邻前后两帧图像通过特征比对后也得到(N-1)个平移参数。同样的,本实施例将该(N-1)个平移参数分别称为第二本征参数。
102B、飞行器通过第一摄像头获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头获取N个不同时刻中起始时刻的第二初始定位信息。
其中,第一初始定位信息是第一摄像头在该N个不同时刻中起始时刻所拍摄得到的定位信息,第二初始定位信息是第二摄像头在该N个不同时刻中起始时刻所拍摄得到的定位信息。假设将飞行器飞行的整个空间看作是一个三维坐标系,那么第一初始定位信息可以作为第一摄像头拍摄的三维坐标系中原点的位置,第二初始定位信息可以作为第二摄像头拍摄的三维坐标系中原点的位置。
103B、飞行器根据第一初始定位信息与(N-1)个第一本征参数，确定(N-1)个时刻对应的(N-1)个第一飞行定位信息，并根据第二初始定位信息与(N-1)个第二本征参数，确定(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中，该(N-1)个时刻为N个不同时刻中除起始时刻之外的(N-1)个时刻。
本实施例中,飞行器已经获取到第一初始定位信息,并计算得到了(N-1)个第一本征参数,从而可以利用第一初始定位信息和(N-1)个第一本征参数确定(N-1)个时刻对应的(N-1)个第一飞行定位信息。同样地,根据第二初始定位信息与(N-1)个第二本征参数,也可以确定(N-1)个时刻对应的(N-1)个第二飞行定位信息。
作为一种示例,以获取第一飞行定位信息为例,假设N为5,包括N1~N5,第一初始定位信息为X1,即N1时刻的定位信息为X1,N2时刻的第一本征参数为a,N3时刻的第一本征参数为b、N4时刻的第一本征参数为c和N5时刻的第一本征参数为d,那么N2时刻的第一飞行定位信息为a X1,N3时刻的第一飞行定位信息为ab X1,N4时刻的第一飞行定位信息为abc X1,N5时刻的第一飞行定位信息为abcdX1,从而得到N2~N5(即N-1)时刻分别对应的第一飞行定位信息。
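上述逐时刻叠加的过程可以用如下极简示意（Python + NumPy）表达，其中把各时刻的第一本征参数简化成4×4齐次变换矩阵，T_list与X1均为假设输入，叠加顺序按矩阵左乘的写法给出:

import numpy as np

def chain_poses(X1, T_list):
    # X1: 起始时刻的定位（4x1齐次坐标）；T_list: N2~N5各时刻的本征参数（4x4矩阵列表）
    poses = [np.asarray(X1, dtype=float)]
    for T in T_list:
        poses.append(np.asarray(T) @ poses[-1])   # N2对应a·X1，N3对应在其上再叠加b，依此类推
    return poses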
104B、飞行器根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,获取N个不同时刻中结束时刻所对应的目标飞行定位信息。
本实施例中,飞行器可以采用预置定位约束条件,对得到的(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息进行修正和调整,调整后的(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息之间的误差为最小值。最后利用求解器对调整后的第一飞行定位信息和第二飞行定位信息进行最优解的计算,从而得到目标飞行定位信息,该目标飞行定位信息作为N个不同时刻中结束时刻的飞行定位信息。
将目标飞行定位信息发送给飞行器的飞控模块,使其利用该信息进行飞行或者悬停。
本实施例中,飞行器包括第一摄像头以及第二摄像头,第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,第二摄像头用于获取N个不同时刻对应的N个第二实时图像,利用上述飞行器可以获取飞行定位信息,根据N个第一实时图像确定(N-1)个第一本征参数,并根据N个第二实时图像确定(N-1)个第二本征参数,获取起始时刻第一摄像头的第一初始定位信息以及第二摄像头的第二初始定位信息,然后根据第一初始定位信息与(N-1)个第 一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,并根据第二初始定位信息与(N-1)个第二本征参数,确定(N-1)个时刻对应的(N-1)个第二飞行定位信息,根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,最后采用预置定位约束条件获取N个不同时刻中结束时刻所对应的目标飞行定位信息。通过上述方式,采用双目摄像头实现飞行器定位,可以实时获取多个不同时刻对应的图像,进而分析得到每帧图像之间的平移参数,两个摄像头分别利用平移参数获取对应的定位信息,最后采用预置定位约束条件修正定位信息,以得到更接近真实值的目标飞行定位信息,在不采用光流摄像头或者高精度惯性传感器的情况下,仍然可以得到精确的定位信息,减小误差值,同时还减少了飞行器的成本。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的一个实施例中,在获取起始时刻第一摄像头的第一初始定位信息以及第二摄像头的第二初始定位信息之前,还可以包括:
在预置摄像头距离范围内,将第一摄像头与第二摄像头设置于飞行器的同一水平线上。
本实施例中,请一并参阅图8,图8为本发明实施例中安装有双目摄像头的飞行器示意图。如图8所示,第一摄像头和第二摄像头安装在飞行器的同一水平线上,且两者之间的间隔距离满足预设摄像头距离范围之内。需要说明的是,图8中的两个摄像头位置仅为一个示意,不应理解为对本案的限定。
一般情况下,预置摄像头距离范围可以为6厘米至10厘米,在实际应用中,也可以进行一些调整,此处不做限定。
然而,在实际应用中安装好的两个摄像头无法在数学上真正实现精确到同一水平线上,因此需要分别对两个摄像头进行立体标定,立体标定可以采用张正友标定法。
作为一种示例,张正友标定法的实施过程可以包括以下步骤:
1、打印一张棋盘格,把它贴在一个平面上,作为标定物;
2、通过调整标定物或摄像机的方向,为标定物拍摄一些不同方向的照片;
3、从照片中提取特征点(如角点);
4、估算理想无畸变的情况下,五个内参和所有外参;
5、应用最小二乘法估算实际存在径向畸变下的畸变系数;
6、应用极大似然法优化估计，提升估计精度。
通过这样的过程,就获得了具有高估计精度的五个内参,三个外参和两个畸变系数。利用这些信息,可以进行畸变矫正、图像校正和最终的三维信息恢复。
双目摄像机需要标定的参数包括但不限于摄像机内参数矩阵、畸变系数矩阵、本征矩阵、基础矩阵、旋转矩阵以及平移矩阵。其中摄像机内参数矩阵和畸变系数矩阵可以通过单目标定的方法标定出来。双目摄像机标定和单目摄像机标定最主要的区别就是双目摄像机需要标定出左右摄像机坐标系之间的相对关系。
其次，本实施例中，垂直向下的双目摄像头安装在同一个水平线上，并且两个摄像头间隔的距离在预置摄像头距离范围内。可以理解，如果两个摄像头间隔太小，则难以得到合理的深度信息以及定位信息，而两个摄像头间隔太大又会导致近处的物体拍摄不到，从而缺乏参照物。通过上述安装方式，可以使得第一摄像头和第二摄像头都能够拍摄到符合要求的实时图像。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的另一个实施例中,通过第一摄像头获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头获取N个不同时刻中起始时刻的第二初始定位信息之前,还可以包括:
飞行器通过第一摄像头获取第一时刻对应的第一子图像以及第二时刻对应的第二子图像,其中,第一时刻与第二时刻均为N个不同时刻中的两个时刻,第一子图像与第二子图像均属于第一实时图像;
飞行器通过第二摄像头获取第一时刻对应的第三子图像以及第二时刻对应的第四子图像,其中,第三子图像与第四子图像均属于第二实时图像;
飞行器采用基于双目立体视觉方式,测量得到第一深度信息以及第二深度信息,其中,第一深度信息为根据第一子图像和第二子图像得到,第二深度信息为根据第三子图像和第四子图像得到。
本实施例中,飞行器在获取第一初始定位信息和第二初始定位信息之前,还可以利用第一摄像头获取第一时刻对应的第一子图像,在下一个时刻,即第二时刻获取对应的第二子图像。同样地,利用第二摄像头在第一时刻获取对应的第三子图像,并在第二时刻获取第四子图像。当然,第一子图像与第二子图像均属于第一实时图像,且第三子图像与第四子图像均属于第二实时图像。
然后采用基于双目立体视觉方式,可以分别测量得到第一深度信息和第二深度信息。其中,双目立体视觉是机器视觉的一种重要形式,它是基于视差原理并利用成像设备从不同的位置获取被测物体的两幅图像,通过计算图像对应点间的位置偏差,来获取物体三维几何信息的方法。
作为一种示例,对比第一时刻的第一子图像和第一时刻的第三子图像,将第一子图像和第三子图像融合,融合两只眼睛获得的图像后观察第一子图像和第三子图像之间的差别,可以获得明显的深度感,建立第一子图像和第三子图像特征间的对应关系,将同一空间物理点在不同图像中的映像点对应起来,即可得到第一深度信息。同样地,对比第二时刻的第二子图像和第二时刻的第四子图像,可得到第二深度信息。
需要说明的是,双目立体视觉测量方法具有效率高、精度合适、***结构简单且成本低等优点,非常适合于制造现场的在线、非接触产品检测和质量控制。对运动物体测量中,由于图像获取是在瞬间完成的,因此立体视觉方法是一种更有效的测量方法。
本实施例中，飞行器通过第一摄像头获取第一时刻对应的第一子图像以及第二时刻对应的第二子图像，且通过第二摄像头获取第一时刻对应的第三子图像以及第二时刻对应的第四子图像，然后采用基于双目立体视觉方式，根据第一子图像和第二子图像得到第一深度信息，以及根据第三子图像和第四子图像得到第二深度信息。通过上述方式，第一摄像头和第二摄像头还可以获取深度信息，即高度信息，克服了单目摄像头和光流摄像头无法提供深度信息的缺点，从而增强了方案的实用性，同时，得到深度信息后还可用于地形识别、物体识别以及定高，以此提升方案的多样性。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的又一个实施例中,第一本征参数可以包括第一旋转矩阵以及第一平移向量,第二本征参数包括第二旋转矩阵以及第二平移向量。其中,第一旋转矩阵用于表示第一摄像头的角度变化,第二旋转矩阵用于表示第二摄像头的角度变化,第一平移向量用于表示第一摄像头的高度变化,第二平移向量用于表示第二摄像头的高度变化。
本实施例中,飞行器获取第一本征参数和第二本征参数,第一本征参数与第二本征参数均属于本征参数,且本征参数包括了旋转矩阵和平移向量,下面将分别介绍旋转矩阵和平移向量。
任意两个坐标系之间的相对位置关系都可以通过两个矩阵来描述:旋转矩阵R和平移矩阵T。此处用R和T来描述左右两个摄像机坐标系的相对关系,具体为将左摄像机下的坐标转换到右摄像机下的坐标,即将第一摄像机下的坐标转换到第二摄像机下的坐标。
假设空间中有一点P，其在世界坐标系下的坐标为PW，l表示左摄像头，r表示右摄像头，其在左右摄像机坐标系下的坐标可以表示为:
Pl=RlPW+Tl，Pr=RrPW+Tr     (1)
其中，Pl和Pr又具有如下的关系:
Pr=RPl+T     (2)
其中,双目摄像机分析中往往以左摄像机,即第一摄像头为主坐标系,但是R和T却是左向右转换,所以Tx为负数。综合(1)和(2)两式,可以推导得出下式:
R=Rr·RlT，T=Tr-R·Tl     (3)，其中RlT表示Rl的转置。
单目标定中相机外参数就是此处的Rl,Tl,Rr和Tr,代入(3)式就可以求出旋转矩阵R和平移矩阵T,根据平移矩阵T可以得到平移向量t。
由旋转矩阵和平移向量构成的本征参数与对极几何在双目问题中非常重要，可以简化立体匹配等问题，而要应用对极几何去解决问题，比如求极线，需要知道本征参数，因此双目标定过程中也会把本征参数根据旋转矩阵R和平移矩阵T确定出来。
本征参数常用字母E来表示,其物理意义是左右坐标系相互转换的参数,可以描述左右摄像机图像平面上对应点之间的关系。
本实施例中,可以获取到双目摄像头的旋转矩阵和平移向量,利用旋转矩阵和平移向量构建得到本征参数,通过上述方式,分别需要对双目摄像头中每个摄像头进行标定,得到旋转矩阵和平移向量来描述两个摄像头之间相对位置关系,并且还可以构成本征参数,从而保证方案的可行性和实用性。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的再一个实施例中,根据N个第一实时图像确定(N-1)个第一本征参数,并根据N个第二实时图像确定(N-1)个第二本征参数,可以包括:
按照如下方式计算任一个第一本征参数:
λ1x1j=CXj
λ2x2j=C(R1Xj+t1)
其中，λ1表示第一深度信息，λ2表示第二深度信息，x1j表示第一子图像中目标点Xj的三维空间，x2j表示所述第二子图像中目标点Xj的三维空间，C表示预先测量的内部参数，R1表示第一旋转矩阵，t1表示第一平移向量;
按照如下方式计算任一个第二本征参数:
λ3y1k=CYk
λ4y2k=C(R2Yk+t2)
其中，λ3表示第三深度信息，λ4表示第四深度信息，y1k表示第三子图像中目标点Yk的三维空间，y2k表示第四子图像中目标点Yk的三维空间，R2表示第二旋转矩阵，t2表示第二平移向量。
本实施例中,请一并参阅图9,图9示出了本发明实施例的双目摄像头进行定位的示意图,其中,第(N-1)个第一本征参数,即图9中的R,第(N-1)个第二本征参数,即图9中的L,E为预置定位约束条件。
作为一种示例,每个摄像头拍摄的各时刻对应的实时图像中,可以采用基于特征提取算法(英文全称:ORiented Brief,英文缩写:ORB)来计算实时图像的旋转矩阵和平移向量。首先提取每帧实时图像的ORB特征点,然后与上一帧实时图像的ORB特征点做匹配,由此可以得到N个时刻内的其中两个时刻分别对应的ORB特征点集合:
z1={x1j|j=1,2,…,n}
z2={x2j|j=1,2,…,n}
z1为前一个时刻图像的特征点集合，z2为当前时刻图像的特征点集合。在实际应用中会有n组匹配的点，此处仅用一组集合点作为示意，如果z1和z2是完美匹配，那么每组点之间应该满足如下公式:
λ1x1j=CXj     (6)
λ2x2j=C(R1Xj+t1)     (7)
其中，λ1表示第一深度信息，λ2表示第二深度信息，x1j表示第一子图像中目标点Xj的三维空间，x2j表示所述第二子图像中目标点Xj的三维空间，C表示预先测量的内部参数，R1表示第一旋转矩阵，t1表示第一平移向量。
当然,在第二摄像头中同样采用上述方式确定每组点之间满足如下公式:
λ3y1k=CYk     (8)
λ4y2k=C(R2Yk+t2)     (9)
其中，λ3表示第三深度信息，λ4表示第四深度信息，y1k表示第三子图像中目标点Yk的三维空间，y2k表示第四子图像中目标点Yk的三维空间，R2表示第二旋转矩阵，t2表示第二平移向量。
结合式(6)、式(7)、式(8)以及式(9)组成的方程组,可以计算得到第一本征参数和第二本征参数,即获取第一旋转矩阵、第一平移向量、第二旋转矩阵以及第二平移向量。
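为便于理解相邻两帧之间旋转矩阵与平移向量的求取，下面给出提取并匹配ORB特征点、再由本质矩阵分解恢复R、t的一个常见写法作为示意（Python + OpenCV）。其中本质矩阵分解属于一种常用的替代求解手段，内参矩阵K为假设的标定结果，并非对上述公式联立求解过程的复述:

import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray, K):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)   # 上一帧实时图像的ORB特征点
    kp2, des2 = orb.detectAndCompute(curr_gray, None)   # 当前帧实时图像的ORB特征点
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t      # 相邻两帧之间的旋转矩阵与平移向量（平移仅确定到尺度）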
需要说明的是,本实施例中,为确定(N-1)个第一本征参数和(N-1)个第二本征参数提供了相应的计算公式,从而通过相应的公式可以计算得到本征参数,为方案的实现提供了可行的依据,进而增加方案的可行性。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的再一个实施例中,根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,采用预置定位约束条件获取N个不同时刻中结束时刻所对应的目标飞行定位信息,可以包括:
按照如下方式,计算在满足预置定位约束条件下,同一时刻对应的第二飞行定位信息与第一飞行定位信息之间的方差最小值:
min Σ(j=1…N)‖Yj-(RextXj+text)‖²     (10)
其中，X表示第一飞行定位信息，Y表示第二飞行定位信息，N表示第N个时刻，j表示N个时刻中的第j个时刻，Xj表示第j个时刻对应的第一飞行定位信息，Yj表示第j个时刻对应的第二飞行定位信息，Rext表示预先测量的第一摄像头与第二摄像头之间的旋转矩阵，text表示预先测量的第一摄像头与第二摄像头之间的平移向量。
也就是可以得到N组被调整过的飞行定位信息,比如第一飞行定位信息与第二飞行定位信息共同构成{X1,Y1},{X2,Y2},……,{Xn,Yn},调整后每组的{X1,Y1},{X2,Y2},……,{Xn,Yn}会更接近极小值,从而使得测量结果也更为准确。
其中,Rext表示预先测量的第一摄像头与第二摄像头之间的旋转矩阵,text表示预先测量的第一摄像头与第二摄像头之间的平移向量,Rext和text共同作为摄像头的外部参数,可以通过立体标定来获取。
为了便于介绍,请一并参阅图10,图10示出了本发明实施例中获取目标飞行定位信息的流程示意图,步骤201B中,飞行器分别计算左右摄像头当前的位姿,即当前的飞行定位信息,飞行定位信息具体可以包括三维空间坐标系中的坐标点位置以及飞行方向;步骤202B中,飞行器利用通用图优化算法(英文全称:General Graph Optimization,英文缩写:g2o)构造图关系,并利用双目约束,即预置定位约束条件修正飞行定位信息,其中,g2o是一个算法集的实现,根据求解非线性最小二乘的理论,根据具体的问题选用最合适的算法。它是一个平台,可以加入线性方程求解器,编写自己的优化目标函数,确定更新的方式;步骤203B中,飞行器采用g2o的求解器求解得到最优解,最后在步骤204B中飞行器利用最优解更新当前位姿信息,即更新当前的飞行定位信息,更新后的飞行定位信息就是目标飞行定位信息。
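为便于理解“利用双目约束修正两路定位信息”的思想，下面给出一个不依赖g2o、直接用最小二乘求解的极简示意（Python + SciPy）。其中X0、Y0为左右摄像头各自估计出的位置序列，Rext、text为立体标定得到的外参，均为假设输入，该写法并非对g2o实现方式的描述:

import numpy as np
from scipy.optimize import least_squares

def refine_positions(X0, Y0, R_ext, t_ext):
    # X0, Y0: 形状为(N, 3)的左右摄像头位置序列；返回修正后的X, Y
    n = X0.shape[0]

    def residual(z):
        X = z[:3 * n].reshape(n, 3)
        Y = z[3 * n:].reshape(n, 3)
        cons = (Y - (X @ R_ext.T + t_ext)).ravel()                      # 双目约束项
        prior = np.concatenate([(X - X0).ravel(), (Y - Y0).ravel()])    # 不偏离原始测量太远
        return np.concatenate([cons, prior])

    z0 = np.concatenate([X0.ravel(), Y0.ravel()])
    sol = least_squares(residual, z0)
    X = sol.x[:3 * n].reshape(n, 3)
    Y = sol.x[3 * n:].reshape(n, 3)
    return X, Y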
本实施例中,基于双目摄像头分别测量得到的第一飞行定位信息和第二飞行定位信息,建立双目摄像头飞行定位信息之间的约束,通过该约束就能够求解飞行器的最佳飞行定位信息,即得到目标飞行定位信息,从而减少误差,提升定位的准确性。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的再一个实施例中,飞行器根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,获取N个不同时刻中结束时刻所对应的目标飞行定位信息之后,还可以包括:
飞行器根据目标飞行定位信息,确定第(N+1)时刻所对应的第一子飞行定位信息,第一子飞行定位信息为目标飞行定位信息中的一个信息;
飞行器采用预置定位约束条件以及第一子飞行定位信息,获取第(N+1)时刻所对应的第二子飞行定位信息;
飞行器根据第一子飞行定位信息以及第一本征参数,确定第(N+2)时刻所对应的第三子飞行定位信息;
飞行器采用预置定位约束条件以及第三子飞行定位信息,获取第(N+2) 时刻所对应的第四子飞行定位信息;
飞行器计算第一子飞行定位信息与第三子飞行定位信息的第一最优解，并计算第二子飞行定位信息与第四子飞行定位信息的第二最优解，第一最优解与第二最优解构成第(N+2)时刻的飞行定位信息。
本实施例中,在飞行器采用预置定位约束条件获取N个不同时刻中结束时刻所对应的目标飞行定位信息之后,还可以采用目标飞行定位信息来计算出后续的飞行定位信息。
作为一种示例,已知目标飞行定位信息中包括了第一摄像头的定位信息,以及第二摄像头的定位信息,假设只选择第(N+1)时刻所对应其中一个定位信息X1,X1称之为第一子飞行定位信息,然后采用预置定位约束条件倒推得到第(N+1)时刻所对应的定位信息Y1,即第二子飞行定位信息,至此,一组子飞行定位信息获取完毕,进而开始下一组子飞行定位信息的获取。
飞行器根据XI以及第一本征参数,计算得到第(N+2)时刻所对应的第三子飞行定位信息,即X2,同样地,采用预置定位约束条件以及X2,计算出第(N+2)时刻所对应的第四子飞行定位信息,即Y2,至此,下一组子飞行定位信息也获取完毕,于是还可以继续进行后续子飞行定位信息的获取,此处不做赘述。
在实际应用中，两个摄像头分别根据计算得到的X和Y求得最优解，例如采用最小二乘法来求得最优解，两个最优解即可构成第(N+2)时刻的飞行定位信息。
此外，本实施例中，在得到最优的目标飞行定位信息之后，可以利用该目标飞行定位信息和预置定位约束条件来预测未来一段时间内最优的飞行定位信息。通过上述方式，一方面为获取准确飞行定位信息的方式提供了一种可行的手段，以此增加方案的灵活性，另一方面，后续获取的飞行定位信息更侧重于全局性的考虑，有利于在全局坐标系中确定飞行器的定位信息。
在一些可能的实施方式中,本发明实施例提供的获取飞行定位信息的方法的再一个实施例中,飞行器根据第一子飞行定位信息以及第一本征参数,确定第(N+2)时刻所对应的第三子飞行定位信息,可以包括:
按照如下方式计算第(N+2)时刻所对应的第三子飞行定位信息:
XN+2=RN+1XN+1+tN+1
其中,XN+2表示第(N+2)时刻所对应的第三子飞行定位信息,RN+1表示第一本征参数中第(N+1)时刻的旋转矩阵,tN+1表示第一本征参数中第(N+1)时刻的平移向量,XN+1表示第(N+1)时刻所对应的第一子飞行定位信息。
本实施例中,将具体介绍如何计算第(N+2)时刻所对应的第三子飞行定位信息,由于已经得到了本征参数,且本征参数中包括了旋转矩阵和平移向量,利用旋转矩阵和平移向量即可进行得到第三子飞行定位信息。
采用如下公式计算第(N+2)时刻所对应的第三子飞行定位信息:
XN+2=RN+1XN+1+tN+1    (11)
其中,公式中的XN+2表示第(N+2)时刻所对应的第三子飞行定位信息,RN+1表示第一本征参数中第(N+1)时刻的旋转矩阵,tN+1表示第一本征参数中第(N+1)时刻的平移向量,XN+1表示第(N+1)时刻所对应的第一子飞行定位信息。
通过上述方式,即可每次都利用上一个时刻的子飞行定位信息计算得到当前的时刻的子飞行定位信息。然后,将计算出来的一系列子飞行定位信息与双目摄像头的外部参数输入至g2o构建关系,然后调用g2o的求解器求得其最小二乘法的最优解,最后用该最优解更新目标飞行定位信息,同时,也将最优解发送至飞行器的飞控模块。
本实施例中,利用上一时刻对应的第一子飞行定位信息计算得到后一个时刻对应的第三子飞行定位信息,即采用相应的公式即可进行计算,通过上述方式,可以提升方案的实用性和可行性。
为便于理解,下面以一个具体应用场景对本发明中一种获取飞行定位信息的方法进行详细描述,请一并参阅图11,图11为应用场景中双目摄像头的工作流程示意图,包括:
步骤301B中,假设采用的飞行器即为无人机,首先无人机通过其搭载的垂直向下的双目摄像头分别采集左右眼的实时图像;
步骤302B中,利用左右眼的实时图像来计算图像的深度值;
步骤303B中,基于ORB图像特征点分别计算左右两个摄像头的旋转矩阵和平移向量,因为左右摄像头采集的图像不同,其图像特征点就会不同,因此左右摄像头计算出来的旋转矩阵和平移向量之间会有误差;
步骤304B中，根据双目摄像头之间的约束建立两组旋转矩阵和平移向量之间的限制条件，采用最小二乘法来求得无人机位姿的最优解。该最优解即是无人机的定位信息;
步骤305B中，把该信息发送给无人机飞控系统，从而使得无人机可以得到更准确的定位信息。
下面对本发明中的飞行器进行详细描述,请一并参阅图12,本实施例中的飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,第二摄像头用于获取N个不同时刻对应的N个第二实时图像,N为大于或等于2的正整数,飞行器包括:
第一确定模块401B,用于根据N个第一实时图像确定(N-1)个第一本征参数,以及根据N个第二实时图像,确定(N-1)个第二本征参数;其中,第一本征参数用于表征N个第一实时图像中相邻两帧图像的平移参数,第二本征参数用于表征N个第二实时图像中相邻两帧图像的平移参数。
第一获取模块402B,用于通过第一摄像头,获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头,获取N个不同时刻中起始时刻的第二初始定位信息。
第二确定模块403B,用于根据第一获取模块402B获取的第一初始定位信息与第一确定模块401B确定的(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,并根据第一获取模块402B获取的第二初始定位信息与第一确定模块401B确定的(N-1)个第二本征参数,确定(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中,(N-1)个时刻为N个不同时刻中除起始时刻之外的(N-1)个时刻。
第二获取模块404B,用于根据第二确定模块403B确定的(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,采用预置定位约束条件获取N个 不同时刻中结束时刻所对应的目标飞行定位信息。
本实施例中,第一确定模块401B根据N个第一实时图像确定(N-1)个第一本征参数,并根据N个第二实时图像确定(N-1)个第二本征参数,第一获取模块402B通过第一摄像头获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头获取N个不同时刻中起始时刻的第二初始定位信息,第二确定模块403B根据第一获取模块402B获取的第一初始定位信息与第一确定模块401B确定的(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,并根据第一获取模块402B获取的第二初始定位信息与第一确定模块401B确定的(N-1)个第二本征参数,确定(N-1)个时刻对应的(N-1)个第二飞行定位信息,第二获取模块404B根据第二确定模块403B确定的(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,采用预置定位约束条件获取N个不同时刻中结束时刻所对应的目标飞行定位信息。
本实施例中,飞行器可以采用双目摄像头实现飞行器定位,实时获取多个不同时刻对应的图像,进而分析得到每帧图像之间的平移参数,两个摄像头分别利用平移参数获取对应的定位信息,最后采用预置定位约束条件修正定位信息,以得到更接近真实值的目标飞行定位信息,在不采用光流摄像头或者高精度惯性传感器的情况下,仍然可以得到精确的定位信息,减小误差值,同时还减少了飞行器的成本。
在一些可能的实施方式中,请一并参阅图13,本发明实施例提供的飞行器的另一实施例中,飞行器还包括:
设置模块405B,用于第一获取模块402B通过第一摄像头获取N个不同时刻中起始时刻的第一初始定位信息,以及通过第二摄像头获取N个不同时刻中起始时刻的第二初始定位信息之前,在预置摄像头距离范围内,将第一摄像头与第二摄像头设置于飞行器的同一水平线上。
本发明实施例中,垂直向下的双目摄像头要求安装在同一个水平线上,并且两个摄像头间隔的距离在预置摄像头距离范围内。通过上述安装方式,可以使得第一摄像头和第二摄像头都能够拍摄到符合要求的实时图像,如果两个摄像头间隔太小,则难以得到合理的深度信息以及定位信息,而两个摄像头间隔太大又会导致近处的物体拍摄不到,从而缺乏参照物。
在一些可能的实施方式中,请一并参阅图14,本发明实施例提供的飞行器的另一实施例中,飞行器还包括:
第三获取模块406B，用于第一确定模块401B根据N个第一实时图像确定(N-1)个第一本征参数，并根据N个第二实时图像确定(N-1)个第二本征参数之前，通过第一摄像头获取第一时刻对应的第一子图像以及第二时刻对应的第二子图像，其中，第一时刻与第二时刻均为N个不同时刻中的两个时刻，第一子图像与第二子图像均属于第一实时图像;
第四获取模块407B，用于通过第二摄像头获取第一时刻对应的第三子图像以及第二时刻对应的第四子图像，其中，第三子图像与第四子图像均属于第二实时图像;
测量模块408B,用于采用基于双目立体视觉方式测量得到第一深度信息以及第二深度信息,其中,所述第一深度信息为根据所述第一子图像和所述第二子图像得到,所述第二深度信息为根据所述第三子图像和所述第四子图像得到。
本实施例中，飞行器通过第一摄像头获取第一时刻对应的第一子图像以及第二时刻对应的第二子图像，且通过第二摄像头获取第一时刻对应的第三子图像以及第二时刻对应的第四子图像，然后采用基于双目立体视觉方式测量得到第一子图像的第一深度信息，第二子图像的第二深度信息，第三子图像的第三深度信息以及第四子图像的第四深度信息。通过上述方式，第一摄像头和第二摄像头还可以获取深度信息，即高度信息，克服了单目摄像头和光流摄像头无法提供深度信息的缺点，从而增强了方案的实用性，同时，得到深度信息后还可用于地形识别、物体识别以及定高，以此提升方案的多样性。
在一些可能的实施方式中,本发明实施例提供的飞行器的另一实施例中,第一本征参数包括第一旋转矩阵以及第一平移向量,第二本征参数包括第二旋转矩阵以及第二平移向量,其中,第一旋转矩阵用于表示第一摄像头的角度变化,第二旋转矩阵用于表示第二摄像头的角度变化,第一平移向量用于表示第一摄像头的高度变化,第二平移向量用于表示第二摄像头的高度变化。
本实施例中，双目摄像头可以获取到旋转矩阵和平移向量，利用旋转矩阵和平移向量构建得到本征参数。通过上述方式，分别需要对双目摄像头中每个摄像头进行标定，得到旋转矩阵和平移向量来描述两个摄像头之间相对位置关系，并且还可以构成本征参数，从而保证方案的可行性和实用性。
在一些可能的实施方式中,请一并参阅图15,本发明实施例提供的飞行器另一实施例中,
第一确定模块401B包括:
第一计算单元4011B,用于按照如下方式计算第(N-1)个第一本征参数:
λ1x1j=CXj
λ2x2j=C(R1Xj+t1)
其中，λ1表示第一深度信息，λ2表示第二深度信息，x1j表示第一子图像中目标点Xj的三维空间，x2j表示第二子图像中目标点Xj的三维空间，C表示预先测量的内部参数，R1表示第一旋转矩阵，t1表示第一平移向量;
按照如下方式计算第(N-1)个第二本征参数:
λ3y1k=CYk
λ4y2k=C(R2Yk+t2)
其中，λ3表示第三深度信息，λ4表示第四深度信息，y1k表示第三子图像中目标点Yk的三维空间，y2k表示第四子图像中目标点Yk的三维空间，R2表示第二旋转矩阵，t2表示第二平移向量。
本发明实施例中,为确定(N-1)个第一本征参数和(N-1)个第二本征参数提供了相应的计算公式,从而通过相应的公式可以计算得到本征参数,为方案的实现提供了可行的依据,进而增加方案的可行性。
在一些可能的实施方式中,请一并参阅图16,本发明实施例提供的飞行器的另一实施例中,
第二获取模块404B包括:
第二计算单元4041B,用于按照如下方式计算在满足预置定位约束条件下第二飞行定位信息与第一飞行定位信息之间的方差最小值:
min Σ(j=1…N)‖Yj-(RextXj+text)‖²
其中，X表示第一飞行定位信息，Y表示第二飞行定位信息，min Σ(j=1…N)‖Yj-(RextXj+text)‖²表示在满足预置定位约束条件下第二飞行定位信息与第一飞行定位信息之间的方差最小值，N表示第N个时刻，j表示N个时刻中的第j个时刻，Xj表示第j个时刻对应的第一飞行定位信息，Yj表示第j个时刻对应的第二飞行定位信息，Rext表示预先测量的第一摄像头与第二摄像头之间的旋转矩阵，text表示预先测量的第一摄像头与第二摄像头之间的平移向量;
第三计算单元4042B,用于根据第二计算单元4041B计算的方差最小值计算目标飞行定位信息。
本实施例中,基于双目摄像头分别测量得到的第一飞行定位信息和第二飞行定位信息,建立双目摄像头飞行定位信息之间的约束,通过该约束就能够求解飞行器的最佳飞行定位信息,即得到目标飞行定位信息,从而减少误差,提升定位的准确性。
在一些可能的实施方式中,请一并参阅图17,本发明实施例提供的飞行器的另一实施例中,飞行器还包括:
第三确定模块4091B,用于第二获取模块404B根据(N-1)个第一飞行定位信息以及(N-1)个第二飞行定位信息,采用预置定位约束条件获取N个不同时刻中结束时刻所对应的目标飞行定位信息之后,根据目标飞行定位信息,确定第(N+1)时刻所对应的第一子飞行定位信息,第一子飞行定位信息为目标飞行定位信息中的一个信息;
第五获取模块4092B,用于采用预置定位约束条件以及第三确定模块4091B确定的第一子飞行定位信息,获取第(N+1)时刻所对应的第二子飞行定位信息;
第四确定模块4093B,用于根据第三确定模块4091B确定的第一子飞行定位信息以及第一本征参数,确定第(N+2)时刻所对应的第三子飞行定位信息;
第六获取模块4094B,用于采用预置定位约束条件以及第四确定模块4093B确定的第三子飞行定位信息,获取第(N+2)时刻所对应的第四子飞行定位信息;
计算模块4095B，用于计算第三确定模块4091B确定的第一子飞行定位信息与第四确定模块4093B确定的第三子飞行定位信息的第一最优解，并计算第五获取模块4092B获取的第二子飞行定位信息与第六获取模块4094B获取的第四子飞行定位信息的第二最优解，第一最优解与第二最优解构成第(N+2)时刻的飞行定位信息。
本实施例中，在得到最优的目标飞行定位信息之后，可以利用该目标飞行定位信息和预置定位约束条件来预测未来一段时间内最优的飞行定位信息。通过上述方式，一方面为获取准确飞行定位信息的方式提供了一种可行的手段，以此增加方案的灵活性，另一方面，后续获取的飞行定位信息更侧重于全局性的考虑，有利于在全局坐标系中确定飞行器的定位信息。
在一些可能的实施方式中,请一并参阅图18,本发明实施例提供的飞行器的另一实施例中,
第四确定模块4093B包括:
第四计算单元4093B1,用于按照如下方式计算第(N+2)时刻所对应的第三子飞行定位信息:
XN+2=RN+1XN+1+tN+1
其中,XN+2表示第(N+2)时刻所对应的第三子飞行定位信息,RN+1表示第一本征参数中第(N+1)时刻的旋转矩阵,tN+1表示第一本征参数中第(N+1)时刻的平移向量,XN+1表示第(N+1)时刻所对应的第一子飞行定位信息。
本实施例中,利用上一时刻对应的第一子飞行定位信息计算得到后一个时刻对应的第三子飞行定位信息,即采用相应的公式即可进行计算,通过上述方式,可以提升方案的实用性和可行性。
此外,由于飞行器难以在未知环境下估计飞行器距离地面的高度,现有方案中,可以通过在飞行器机身上安装气压计、超声波装置或者深度摄像头测量得到飞行器的高度信息,并且利用该高度信息对飞行器进行飞行控制。
然而,在实际应用中,采用气压计测量飞行高度会受到飞行器本身飞行产生的气流影响,从而会出现高度变化的情况,导致测量精度较差。超声波装置虽然测量精度较高,但是遇到地面有凸起或者斜面等复杂地形时,超声波装置会接收不到,导致测量结果不准确。而采用深度摄像头则会使得飞行器的成本提升。
因此,本发明实施例还提供了一种获取飞行高度信息的方法,可以提升高度信息测量的精度。而且,双目摄像头可以获取到各种复杂地形,并根据不同地形计算得到高度信息,从而提升测量的准确性,并且双目摄像头与深度摄像头相比,还具有成本较低的优势。
本方案所测定的飞行高度信息具体可以为真实高度,需要说明的是,飞行高度信息也可以是绝对高度、标准气压高度或者相对高度。
其中,绝对高度表示飞行器到海平面的垂直距离。在海上飞行用雷达可直接测出绝对高度。
标准气压高度表示飞行器从空中到标准气压平面(即大气压力等于760毫米汞柱的水平面)的垂直距离，叫做标准气压高度。大气压力经常发生变化，因此，标准气压平面与海平面的垂直距离也经常改变。如果标准气压平面恰好与海平面相重合，则标准气压高度等于绝对高度。民航飞机在航线上飞行和军用飞机转场飞行时，都需要利用标准气压高度，以免飞机相撞。
相对高度表示飞行器到某指定的水平面(机场、靶场或者战场等)的垂直距离。飞机在起飞和着陆时需要知道飞机对机场的相对高度,这时把高度表的气压刻度调到该机场的气压值即场压,飞机距机场的相对高度即可由高度表显示出来。
真实高度表示飞行器从空中到正下方地面目标的垂直距离。进行轰炸和照相侦察时，必须知道飞机的真实高度。在执行轰炸、对地攻击、照相侦察、搜索和救援以及农林作业等任务时需要知道真实高度。真实高度可用电影经纬仪或雷达高度表测出。一定的飞行器只能在预先设计的某高度范围内飞行。
下面将以飞行器的角度介绍获取飞行高度信息的方式,请一并参阅图19, 图19示出了本发明实施例的获取飞行高度信息的方法流程示意图,包括:
101C、包含第一摄像头以及第二摄像头的飞行器根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像,第一摄像头用于获取第一实时图像,第二摄像头用于获取第二实时图像;
本实施例中,飞行器包括了一组双目摄像头,即具有两个摄像头,分别定义为第一摄像头和第二摄像头。双目摄像头可以实时抓拍到图像,在某一个时刻第一摄像头拍摄到第一实时图像,而第二摄像头拍摄到第二实时图像。此外,在后续的时间里双目摄像头仍旧可以在一个时刻内采集到左右两幅实时图像,在本发明中,采用某一时刻对应的两幅实时图像即可计算当前时刻飞行器的飞行高度信息。
在飞行器采集到第一实时图像和第二实时图像后,将这两幅实时图像进行处理,得到第一实时图像对应的第一深度图像,以及第二实时图像对应的第二深度图像。
102C、飞行器根据第一深度图像以及第二深度图像确定目标融合图像,目标融合图像中包含至少一个预设区域;
本实施例中,飞行器获取到第一深度图像和第二深度图像后,由于左右视角的偏差,因此,第一深度图像和第二深度图像并非对称图像,还需要进行处理才能使得两幅深度图像合二为一,得到一幅目标融合图像。其中,目标融合图像包括了很多像素点,可以将目标融合图像划分为至少一个预设区域,这样的话,预设区域内的像素点就会变少。
103C、飞行器确定目标融合图像中每个预设区域对应的深度值;
本实施例中,飞行器需要分别计算目标融合图像中每个预设区域所对应的深度值。
104C、飞行器根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息获取飞行高度信息。
可以理解,由于飞行器在飞行时不一定是垂直于地面飞行的,所以安装在飞行器上的双目摄像头也并非与地面保持垂直。因此,飞行器还需要通过 传感器等装置获取当前飞行姿态信息,例如俯仰角以及翻滚角等,并利用这些当前飞行姿态信息以及每个预设区域的深度值,可以计算得到每个预设区域的飞行高度信息,当所有预设区域内的飞行高度信息都计算完毕后,可以将所有的飞行高度信息发送给飞行器控制模块,由飞行控制模块根据飞行高度信息对飞行器进行飞行控制。
本实施例中,飞行器包括第一摄像头以及第二摄像头,第一摄像头获取第一实时图像,第二摄像头获取第二实时图像,具体过程为,飞行器根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像,然后根据第一深度图像以及第二深度图像确定目标融合图像,接下来飞行器可以确定目标融合图像中每个预设区域对应的深度值,最后根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息获取飞行高度信息。通过上述方式,采用双目摄像头测量飞行器的高度信息,与气压计测量高度信息相比,不会因为飞行器自身受到气流影响而导致高度信息测量的精度降低,此外,双目摄像头可以获取到各种复杂地形,并根据不同地形计算得到高度信息,从而提升测量的准确性,而且双目摄像头与深度摄像头相比,还具有成本较低的优势。
在一些可能的实施方式中,本发明实施例提供的获取飞行高度信息的方法的一个实施例中,根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像之前,还可以包括:
在预置摄像头距离范围内,将第一摄像头与第二摄像头设置于飞行器的同一水平线上。
本实施例中,请一并参阅图20,图20为本发明实施例中安装有双目摄像头的飞行器示意图,如图20所示,需要将第一摄像头和第二摄像头安装在飞行器的同一水平线上,且保证两者之间的间隔距离满足预设摄像头距离范围之内,而图20中的两个摄像头位置仅为一个示意,不应理解为对本案的限定。
需要说明的是,预置摄像头距离范围通常为6厘米至10厘米,在实际应用中,也可以进行一些调整,此处不做限定。
然而,在实际应用中安装好的两个摄像头无法在数学上真正实现精确到同一水平线上,因此需要分别对两个摄像头进行立体标定,立体标定可以采用张正友标定法。
通过这样的过程,可以获得了具有高估计精度的五个内参,三个外参和两个畸变系数。利用这些信息,可以进行畸变矫正、图像校正和最终的三维信息恢复。
双目摄像机需要标定的参数包括但不限于摄像机内参数矩阵、畸变系数矩阵、本征矩阵、基础矩阵、旋转矩阵以及平移矩阵。其中摄像机内参数矩阵和畸变系数矩阵可以通过单目标定的方法标定出来。双目摄像机标定和单目摄像机标定最主要的区别就是,双目摄像机需要标定出左右摄像机坐标系之间的相对关系。
本实施例中,垂直向下的双目摄像头要求安装在同一个水平线上,并且两个摄像头间隔的距离在预置摄像头距离范围内,通过上述安装方式,可以使得第一摄像头和第二摄像头都能够拍摄到符合要求的实时图像,如果两个摄像头间隔太小,则难以得到合理的深度信息以及定位信息,而两个摄像头间隔太大又会导致近处的物体拍摄不到,从而缺乏参照物,因此采用预置摄像头距离范围可以获取到更合理的图像。
在一些可能的实施方式中,本发明实施例提供的获取飞行高度信息的方法的另一个实施例中,飞行器根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像,可以包括:
飞行器按照预设图像规格对第一实时图像以及第二实时图像进行缩放处理;
飞行器采用预先获取到的内部参数以及外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行图像校正,并得到第一深度图像以及第二深度图像。
本实施例中，飞行器在将第一实时图像和第二实时图像转换为第一深度图像以及第二深度图像的过程中，还可以进行如下两个步骤，具体为:
由于飞行器使用双目视觉计算飞行高度信息时,通常情况下不需要高精度图片,因此双目摄像头采集的实时图像首先会按照预设图像规格进行缩放, 例如预设图像规格可以是320×240,其中,320×240是指分辨率,240表示240个像素点,320表示320个像素点。因为左右摄像头存在视差,所以左右两幅实时图像的边缘也是不能匹配的,在处理时还可以按照一定的像素剪裁掉第一实时图像以及第二实时图像边缘,例如各剪裁边缘的20个像素,在实际应用中,还可以剪裁其他合理的像素,此处不作限定。
接下来,可以进一步对经过缩放处理后的第一实时图像以及第二实时图像进行图像校正,图像校正包括图像的畸变校正以及图像的对准校正,分别利用对摄像头标定后得到的内部参数和外部参数即可实现图像校正,校正后即得到第一深度图像以及第二深度图像,其中,第一深度图像以及第二深度图像均为可以用于计算深度值的图像。
本实施例中,飞行器在获取到第一实时图像和第二实时图像后还应该对其进行处理,首先需要按照预设图像规格对第一实时图像以及第二实时图像进行缩放,然后采用预先获取到的内部参数以及外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行图像校正。通过上述方式,对实时图像进行缩放和剪裁可以降低图像边缘不匹配的情况,同时还可以减少视觉处理的计算量,从而提升处理的效率,此外,对实时图像进行校正能够得到同一水平面上的图像,由此提升图像处理的准确性。
在一些可能的实施方式中,本发明实施例提供的获取飞行高度信息的方法的又一个实施例中,飞行器采用预先获取到的内部参数以及外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行图像校正,可以包括:
飞行器采用预先获取到的内部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行畸变补偿,其中,内部参数包含第一摄像头的桶形畸变参数和切向畸变参数,以及第二摄像头的桶形畸变参数和切向畸变参数;
飞行器采用预先获取到的外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行旋转和平移,其中,外部参数包含第一摄像头的平移参数和旋转参数,以及第二摄像头的平移参数和旋转参数。
本实施例中,飞行器利用内部参数和外部参数可以对实时图像进行图像校正,包括:
飞行器采用内部参数对经过缩放处理后的第一实时图像以及第二实时图像进行畸变补偿，内部参数是对双目摄像头中单个摄像头做标定后得到的参数，标定第一摄像头后得到第一摄像头的桶形畸变参数和切向畸变参数，标定第二摄像头后得到第二摄像头的桶形畸变参数和切向畸变参数。分别采用第一摄像头的桶形畸变参数和切向畸变参数对第一实时图像进行畸变校正，采用第二摄像头的桶形畸变参数和切向畸变参数对第二实时图像进行畸变校正。
其中,在物平面内放置均匀方格,把它照亮作为物,若把光阑放在物和透镜之间,可以看出,远离光轴区域的放大率比光轴附近的低,在像平面内出现图中所示的外凸情景,称为桶形畸变。而切向畸变就是矢量端点沿切线方向发生的变化。
采用外部参数对经过缩放处理后的第一实时图像以及第二实时图像进行对准校正,通过对第一摄像头和第二摄像头进行立体标定,两个摄像头之间的旋转矩阵和平移矩阵即为外部参数,其中,旋转参数即为旋转矩阵,而平移参数即为平移矩阵。
本实施例中,具体说明了如何对实时图像进行图像校正,即采用预先获取到的内部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行畸变补偿,采用预先获取到的外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行旋转和平移。通过上述方式,根据摄像头标定得到的内部参数和外部参数可以对实时图像进行校正和对准,使得实时图像在数学意义上满足处于同一水平线的要求,从而便于在后续的处理中对两个摄像头获取到的图像进行融合,以得到目标融合图像。
在一些可能的实施方式中,本发明实施例提供的获取飞行高度信息的方法的再一个实施例中,飞行器根据第一深度图像以及第二深度图像确定目标融合图像,可以包括:
飞行器采用立体视觉算法确定第一深度图像以及第二深度图像之间的视差值;
飞行器根据视差值将第一深度图像以及第二深度图像合成为目标融合图像。
本实施例中，通过上述实施例描述的内容可知，深度图像是实时图像经过处理后得到的，也就是可以利用深度图像来合成所需的目标融合图像。
需要说明的是,双目视觉的深度值计算先要求取左右图像对应点之间的视差值,在实际空间中的同一物体投影到左右摄像头中,其位置会有一些差别。针对同一个实际空间中的点在摄像头中的投影会有一个像素位置,左右两个摄像头的像素位置会存在一个偏移值,即视差值。
请一并参阅图21,图21为本发明实施例中获取左右图像之间视差值的一个示意图,如图21所示,物理点P(X,Y,Z)在左右两个摄像头的投影分别为Xl和Xr,因为双目视觉要求在同一个水平线上面,所以其Y值都相同,视差值即为d=Xl-Xr。
如图21所示,在开源计算机视觉库(英文全称:Open Source Computer Vision Library,英文缩写:OpenCV)中,f的量纲是像素点,Tx的量纲由定标板棋盘格的实际尺寸和用户输入值确定,一般是以毫米为单位(当然为了精度提高也可以设置为0.1毫米量级),d=Xl-Xr的量纲也是像素点。因此分子分母约去,Z的量纲与T相同,d与Z之间满足下列关系:
Z=f×Tx/d
采用OpenCV提供的半全局匹配和互信息(英文全称:Semiglobal Matching and Mutual Information,英文缩写:SGBM)算法计算第一深度图像以及第二深度图像之间的视差值,然后根据视差值可以将第一深度图像以及第二深度图像合成为目标融合图像。
本实施例中,飞行器在确定目标融合图像的过程还包括,先采用立体视觉算法确定第一深度图像以及第二深度图像之间的视差值,然后根据视差值将第一深度图像以及第二深度图像合成为目标融合图像。通过上述方式,可以根据计算得到的视差值来合成目标融合图像,从而提升目标融合图像的准确性。
在一些可能的实施方式中，本发明实施例提供的获取飞行高度信息的方法的再一个实施例中，飞行器确定目标融合图像中每个预设区域对应的深度值，可以包括:
飞行器根据视差值确定目标融合图像中每个像素点的深度值;
飞行器根据每个像素点的深度值分别确定每个预设区域对应的深度值。
本实施例中,飞行器还可以进一步利用获取的每个像素点的视差值,确定出目标融合图像各个像素点的深度值,根据每个像素点的深度值分别计算每个预设区域对应的深度值。
作为一种示例,飞行器可以通过双目视觉模块得到了图像中的所有像素点的深度值(单位是物理值单位,例如米)。因为地形会比较复杂,所以图像不会有一个一致的深度值,因此将图像划分成多个网格,即划分为多个预设区域,例如6x6的网格,每个网格单独计算一个深度值。
每个网格的深度值采用中位值平均滤波法来计算其深度值。例如,可以把网格内所有有效点的深度值去掉其前5%的最大值和后5%的最小值,再求平均。在网格划分得足够小的情况下,得到的均值可以准确的描述地形的高度。
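中位值平均滤波的网格深度计算可以参考如下示意（Python + NumPy），其中6×6的网格划分与去掉前后各5%的比例沿用上文示例:

import numpy as np

def grid_depth(depth, rows=6, cols=6, trim=0.05):
    h, w = depth.shape
    result = np.full((rows, cols), np.nan)
    for i in range(rows):
        for j in range(cols):
            block = depth[i * h // rows:(i + 1) * h // rows,
                          j * w // cols:(j + 1) * w // cols]
            vals = np.sort(block[np.isfinite(block)].ravel())
            k = int(len(vals) * trim)
            if len(vals) > 2 * k:
                vals = vals[k:len(vals) - k]      # 去掉最大的5%与最小的5%的有效点
            if vals.size:
                result[i, j] = vals.mean()        # 该网格的深度值取剩余有效点的平均
    return result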
本实施例中,飞行器确定目标融合图像中每个预设区域对应的深度值具体可以分为两个步骤,首先根据视差值确定目标融合图像中每个像素点的深度值,然后根据每个像素点的深度值分别确定每个预设区域对应的深度值。通过上述方法,由最小单位的像素点深度值来预计每个预设区域所对应的深度值,所得到的每个预设区域对应的深度值更为准确,从而提升方案的可行性和实用性。
在一些可能的实施方式中,本发明实施例提供的获取飞行高度信息的方法的再一个实施例中,飞行器根据视差值确定目标融合图像中每个像素点的深度值,可以包括:
按照如下方式计算所述每个像素点的深度值:
[X Y Z W]T=Q×[x y disparity(x,y) 1]T
Z(x,y)=Z/W;
其中，x表示三维空间中像素点在目标融合图像中的投影横坐标，y表示三维空间中所述像素点在目标融合图像中的投影纵坐标，disparity(x,y)表示在像素点(x,y)的视差值，Q表示视差深度映射矩阵，[X Y Z W]T表示目标矩阵，[X Y Z W]为目标矩阵的转置矩阵，Z(x,y)表示像素点(x,y)的深度值，Z为转置矩阵中第三列组成的子矩阵，W为转置矩阵中第四列组成的子矩阵。
本实施例中,深度值是用视差值和视差深度映射矩阵(英文全称:disparity-to-depth mapping matrix)做矩阵乘法来获得实际的三维点位置。其计算公式如下:
[X Y Z W]T=Q×[x y disparity(x,y) 1]T
x,y是实际三维空间中的点在图像中的投影坐标,单位是像素。disparity(x,y)表示在像素点(x,y)处的视差值。Q矩阵是视差深度映射矩阵,它是通过摄像头内参和外参计算得到。在本方案中使用OpenCV提供的stereoRectify函数来获得该映射矩阵。通过矩阵乘法得到的[X Y Z W]T即是实际的三维点的齐次坐标,其实际的深度值是Z(x,y)=Z/W。
其中,为了精确地求得某个点在三维空间里的距离Z,需要获得的参数有焦距f、视差d和摄像头中心距Tx。如果还需要获得X坐标和Y坐标的话,那么还需要额外知道左右像平面的坐标系与立体坐标系中原点的偏移cx和cy。其中f、Tx、cx和cy可以通过立体标定获得初始值,并通过立体校准优化,使得两个摄像头在数学上完全平行放置,并且左右摄像头的cx,cy和f相同。而立体匹配所做的工作,就是在之前的基础上,求取最后一个变量,即视差值d。从而最终完成求一个点三维坐标所需要的准备工作。
为了便于介绍,请一并参阅图22,图22为本发明实施例中获取图像深度值的一个流程示意图,如图22所示:
步骤201C中,飞行器首先对采集到的左右眼对应的实时图像进行缩放和剪裁,得到一定像素大小的图像;
步骤202C中,飞行器通过对单个摄像头进行标定后得到内部参数,利用内部参数对实时图像进行畸变补偿;
步骤203C中，飞行器通过对双目摄像头进行立体标定后得到外部参数，利用外部参数对实时图像进行对准校正，步骤201C至步骤203C用于对实时图像进行初步处理，并得到可以用于计算深度值的深度图像;
步骤204C中,飞行器使用OpenCV提供的SGBM算法来实现图像点的匹配和视差值的计算;
步骤205C中,飞行器采用视差深度变换矩阵计算图像的深度值。
本实施例中,介绍了如何根据视差值,计算得到目标融合图像中每个像素点的深度值的方法,即采用相关的公式即可计算出所需的结果,由此可以提升方案的实用性和可行性,增加方案的可操作性。
在一些可能的实施方式中,本发明实施例提供的获取飞行高度信息的方法的再一个实施例中,飞行器根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息获取飞行高度信息,可以包括:
按照如下方式计算所述飞行高度信息:
β=arcsin(cosα×cosγ);
h=d sinβ;
其中,β表示地面与飞行器的法线所构成的倾斜角,α表示当前飞行姿态信息中的翻滚角,γ表示当前飞行姿态信息中的俯仰角,d表示每个预设区域对应的深度值,h表示飞行高度信息。
本实施例中,因为飞行器在飞行时垂直向下的摄像头并不是垂直于地面的,地面与飞行器机身上摄像头的法线有一个倾斜的角度β,因此图像的深度值d还需要做一次三角函数的变换来求得每个网格的实际高度值h。其计算公式如下:
h=d sinβ
可以从飞行器控制模块中获取飞行器的俯仰角γ和翻滚角α,角度β可以通过如下公式来计算:
β=arcsin(cosα×cosγ)
计算得到的所有预设区域的高度值后会送给飞行器控制模块进行处理。
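下面给出由网格深度值与当前姿态角求高度值的一个示意（Python + NumPy）。其中β采用按上文几何关系整理出的β=arcsin(cosα·cosγ)这一假设形式，角度以弧度为单位，若实际实现采用其它姿态约定需相应调整:

import numpy as np

def grid_height(grid_depth_values, roll_alpha, pitch_gamma):
    # grid_depth_values: 各网格的深度值d（米）；roll_alpha、pitch_gamma: 翻滚角与俯仰角（弧度）
    beta = np.arcsin(np.cos(roll_alpha) * np.cos(pitch_gamma))   # 假设的β计算形式
    return np.asarray(grid_depth_values) * np.sin(beta)          # h = d·sinβ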
本实施例中,介绍了如何根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息,计算得到飞行高度信息的方法,即采用相关的公式即可计算出所需的结果,由此提升方案的实用性和可行性,增加方案的可操作性。
为便于理解，下面以一个具体应用场景对本发明中一种获取飞行高度信息的方法进行详细描述，请一并参阅图23，图23为应用场景中双目摄像头的工作流程示意图，包括:
步骤301C中,无人机通过其搭载的垂直向下的双目摄像头,分别采集左右眼的实时图像;
步骤302C中,接下来可以利用左右眼的实时图像,经过图像缩放和裁剪,以及图像校正的处理后生成深度图像,左右眼的深度图像在经过视差处理后得到目标融合图像,并计算出目标融合图像中各个像素点的深度值;
步骤303C中,获取当前无人机的机身姿态信息,利用俯仰角度数以及翻滚角度数等信息;
步骤304C中,利用无人机当前的姿态角和图像深度值来计算无人机的高度值,因为地面的地形可能会很复杂,因此不会得到一个单一的高度值,将图像划分成多个网格,分别计算网格内的高度,这样可以得到一个粗略的地形高度值。
步骤305C中，最后把这组高度值送给无人机的飞行控制系统。
下面对本发明实施例中的飞行器进行详细描述,请一并参阅图24,本发明实施例中的飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取第一实时图像,第二摄像头用于获取第二实时图像,飞行器40C包括:
第一获取模块401C，用于根据第一实时图像获取第一深度图像，并根据第二实时图像获取第二深度图像;
第一确定模块402C，用于根据第一获取模块401C获取的第一深度图像以及第二深度图像确定目标融合图像，目标融合图像中包含至少一个预设区域;
第二确定模块403C，用于确定第一确定模块402C得到的目标融合图像中每个预设区域对应的深度值;
第二获取模块404C，用于根据第二确定模块403C确定的每个预设区域对应的深度值以及飞行器的当前飞行姿态信息获取飞行高度信息。
本实施例中,飞行器包括第一摄像头以及第二摄像头,其中,第一摄像头用于获取第一实时图像,第二摄像头用于获取第二实时图像,第一获取模块401C根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像,第一确定模块402C根据第一获取模块401C获取的第一深度图像以及第二深度图像确定目标融合图像,目标融合图像中包含至少一个预设区域,第二确定模块403C确定第一确定模块402C得到的目标融合图像中每个预设区域对应的深度值,第二获取模块404C根据第二确定模块403C确定的每个预设区域对应的深度值以及飞行器的当前飞行姿态信息获取飞行高度信息。
本实施例中,飞行器包括第一摄像头以及第二摄像头,第一摄像头获取第一实时图像,第二摄像头获取第二实时图像,具体过程可以为,飞行器根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像,然后根据第一深度图像以及第二深度图像确定目标融合图像,接下来飞行器可以确定目标融合图像中每个预设区域对应的深度值,最后根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息获取飞行高度信息。通过上述方式,采用双目摄像头测量飞行器的高度信息,与气压计测量高度信息相比,不会因为飞行器自身受到气流影响而导致高度信息测量的精度降低,此外,双目摄像头可以获取到各种复杂地形,并根据不同地形计算得到高度信息,从而提升测量的准确性,而且双目摄像头与深度摄像头相比,还具有成本较低的优势。
在一些可能的实施方式中,请一并参阅图25,本发明实施例提供的飞行器的另一实施例中,飞行器40C还包括:
设置模块405C,用于第一获取模块401C根据第一实时图像获取第一深度图像,并根据第二实时图像获取第二深度图像之前,在预置摄像头距离范围内,将第一摄像头与第二摄像头设置于飞行器的同一水平线上。
本实施例中,垂直向下的双目摄像头要求安装在同一个水平线上,并且 两个摄像头间隔的距离在预置摄像头距离范围内。通过上述安装方式,可以使得第一摄像头和第二摄像头都能够拍摄到符合要求的实时图像,如果两个摄像头间隔太小,则难以得到合理的深度信息以及定位信息,而两个摄像头间隔太大又会导致近处的物体拍摄不到,从而缺乏参照物,因此采用预置摄像头距离范围可以获取到更合理的图像。
在一些可能的实施方式中,请一并参阅图26,本发明实施例提供的飞行器的另一实施例中,第一获取模块401C包括:
缩放单元4011C,用于按照预设图像规格对第一实时图像以及第二实时图像进行缩放处理;
校正单元4012C,用于采用预先获取到的内部参数以及外部参数,对经过缩放单元4011C缩放处理后的第一实时图像以及第二实时图像进行图像校正,并得到第一深度图像以及第二深度图像。
本实施例中,飞行器在获取到第一实时图像和第二实时图像后还应该对其进行处理,首先需要按照预设图像规格对第一实时图像以及第二实时图像进行缩放,然后采用预先获取到的内部参数以及外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行图像校正。通过上述方式,对实时图像进行缩放和剪裁可以降低图像边缘不匹配的情况,同时还可以减少视觉处理的计算量,从而提升处理的效率,此外,对实时图像进行校正能够得到同一水平面上的图像,由此提升图像处理的准确性。
在一些可能的实施方式中,请一并参阅图27,本发明实施例提供的飞行器的另一实施例中,校正单元4012C包括:
第一处理子单元40121C,用于采用预先获取到的内部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行畸变补偿,其中,内部参数包含第一摄像头的桶形畸变参数和切向畸变参数,以及第二摄像头的桶形畸变参数和切向畸变参数;
第二处理子单元40122C,用于采用预先获取到的外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行旋转和平移,其中,外部参数包含第一摄像头的平移参数和旋转参数,以及第二摄像头的平移参数和旋转参数。
本实施例中,具体说明了如何对实时图像进行图像校正,即采用预先获取到的内部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行畸变补偿,采用预先获取到的外部参数,对经过缩放处理后的第一实时图像以及第二实时图像进行旋转和平移。通过上述方式,根据摄像头标定得到的内部参数和外部参数可以对实时图像进行校正和对准,使得实时图像在数学意义上满足处于同一水平线的要求,从而便于在后续的处理中对两个摄像头获取到的图像进行融合,以得到目标融合图像。
在一些可能的实施方式中,请一并参阅图28,本发明实施例提供的飞行器的另一实施例中,第一确定模块402C包括:
第一确定单元4021C,用于采用立体视觉算法确定第一深度图像以及第二深度图像之间的视差值;
合成单元4022C,用于根据第一确定单元4021C确定的视差值将第一深度图像以及第二深度图像合成为目标融合图像。
本实施例中,飞行器确定目标融合图像的过程还包括,先采用立体视觉算法确定第一深度图像以及第二深度图像之间的视差值,然后根据视差值将第一深度图像以及第二深度图像合成为目标融合图像。通过上述方式,可以根据计算得到的视差值来合成目标融合图像,从而提升目标融合图像的准确性。
在一些可能的实施方式中,请参阅图29,本发明实施例提供的飞行器另一实施例中,第二确定模块403C包括:
第二确定单元4031C,用于根据视差值确定目标融合图像中每个像素点的深度值;
第三确定单元4032C，用于根据第二确定单元4031C确定的每个像素点的深度值分别确定每个预设区域对应的深度值。
本实施例中,飞行器确定目标融合图像中每个预设区域对应的深度值具体可以分为两个步骤,首先根据视差值确定目标融合图像中每个像素点的深度值,然后根据每个像素点的深度值分别确定每个预设区域对应的深度值。通过上述方法,由最小单位的像素点深度值来预计每个预设区域所对应的深度值,所得到的每个预设区域对应的深度值更为准确,从而提升方案的可行 性和实用性。
在一些可能的实施方式中,请参阅图30,本发明实施例提供的飞行器另一实施例中,第二确定单元4031C包括:
计算子单元40311C，用于按照如下方式计算每个像素点的深度值:
[X Y Z W]T=Q×[x y disparity(x,y) 1]T
Z(x,y)=Z/W;
其中,x表示三维空间中像素点在目标融合图像中的投影横坐标,y表示三维空间中像素点在目标融合图像中的投影纵坐标,disparity(x,y)表示在像素点(x,y)的视差值,Q表示视差深度映射矩阵,[X Y Z W]T表示目标矩阵,[X Y Z W]为目标矩阵的转置矩阵,Z(x,y)表示像素点(x,y)的深度值,Z为转置矩阵中第三列组成的子矩阵,W为转置矩阵中第四列组成的子矩阵。
本实施例中,介绍了如何根据视差值,计算得到目标融合图像中每个像素点的深度值的方法,即采用相关的公式即可计算出所需的结果,由此提升方案的实用性和可行性,增加方案的可操作性。
在一些可能的实施方式中,请参阅图31,本发明实施例提供的飞行器另一实施例中,第二获取模块404C包括:
计算单元4041C,用于按照如下方式计算飞行高度信息:
β=arcsin(cosα×cosγ);
h=d sinβ;
其中,β表示地面与飞行器的法线所构成的倾斜角,α表示当前飞行姿态信息中的翻滚角,γ表示当前飞行姿态信息中的俯仰角,d表示每个预设区域对应的深度值,h表示飞行高度信息。
本实施例中,介绍了如何根据每个预设区域对应的深度值以及飞行器的当前飞行姿态信息,计算得到飞行高度信息的方法,即采用相关的公式即可计算出所需的结果,由此提升方案的实用性和可行性,增加方案的可操作性。
此外,本发明实施例还提供了一种设备,包括:
处理器以及存储器;
存储器用于存储程序代码,并将程序代码传输给处理器;
处理器用于根据该程序代码中的指令执行上述飞行器的障碍物检测的方法,获取飞行定位信息的方法,获取飞行高度信息的方法。
另外,本发明实施例还提供了一种存储介质,存储介质用于存储程序代码,该程序代码用于执行上述飞行器的障碍物检测的方法,获取飞行定位信息的方法,获取飞行高度信息的方法。
另外,本发明实施例还提供了一种包括指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述飞行器的障碍物检测的方法,获取飞行定位信息的方法,获取飞行高度信息的方法。
另外本发明实施例还提供了另一种飞行器,如图32所示,为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。以飞行器为无人机为例:
图32示出的是与本发明实施例提供的飞行器相关的无人机的部分结构的框图。参考图32，无人机包括:射频(英文全称:Radio Frequency,英文缩写:RF)电路510、存储器520、输入单元530、显示单元540、传感器550、音频电路560、无线保真(英文全称:wireless fidelity,英文缩写:WiFi)模块570、处理器580、以及电源590等部件。本领域技术人员可以理解，图32中示出的无人机结构并不构成对无人机的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。
下面结合图32对无人机的各个构成部件进行具体的介绍:
RF电路510可用于收发信息或通话过程中，信号的接收和发送，特别地，将飞行器控制装置的下行信息接收后，给处理器580处理;另外，将设计上行的数据发送给飞行器控制装置。通常，RF电路510包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(英文全称:Low Noise Amplifier,英文缩写:LNA)、双工器等。此外，RF电路510还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议，包括但不限于全球移动通讯系统(英文全称:Global System of Mobile communication,英文缩写:GSM)、通用分组无线服务(英文全称:General Packet Radio Service,GPRS)、码分多址(英文全称:Code Division Multiple Access,英文缩写:CDMA)、宽带码分多址(英文全称:Wideband Code Division Multiple Access,英文缩写:WCDMA)、长期演进(英文全称:Long Term Evolution,英文缩写:LTE)、电子邮件、短消息服务(英文全称:Short Messaging Service,SMS)等。
存储器520可用于存储软件程序以及模块，处理器580通过运行存储在存储器520的软件程序以及模块，从而执行无人机的各种功能应用以及数据处理。存储器520可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据无人机的使用所创建的数据(比如音频数据、电话本等)等。此外，存储器520可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元530可用于接收输入的数字或字符信息,以及产生与无人机的用户设置以及功能控制有关的键信号输入。作为一种示例,输入单元530可包括触控面板531以及其他输入设备532。触控面板531,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板531上或在触控面板531附近的操作),并根据预先设定的程式驱动相应的连接装置。在一种可能的实施方式中,触控面板531可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器580,并能接收处理器580发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板531。除了触控 面板531,输入单元530还可以包括其他输入设备532。比如,其他输入设备532可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元540可用于显示由用户输入的信息或提供给用户的信息以及无人机的各种菜单。显示单元540可包括显示面板541，在一些可能的实施方式中，可以采用液晶显示器(英文全称:Liquid Crystal Display,英文缩写:LCD)、有机发光二极管(英文全称:Organic Light-Emitting Diode,英文缩写:OLED)等形式来配置显示面板541。并且，触控面板531可覆盖显示面板541，当触控面板531检测到在其上或附近的触摸操作后，传送给处理器580以确定触摸事件的类型，随后处理器580根据触摸事件的类型在显示面板541上提供相应的视觉输出。虽然在图32中，触控面板531与显示面板541是作为两个独立的部件来实现手机的输入和输出功能，但是在某些实施例中，可以将触控面板531与显示面板541集成而实现手机的输入和输出功能。
无人机还可包括至少一种传感器550,比如光传感器、运动传感器以及其他传感器。作为一种示例,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板541的亮度,接近传感器可在无人机移动到光亮处时,关闭显示面板541和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别无人机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路560、扬声器561,传声器562可提供用户与无人机之间的音频接口。音频电路560可将接收到的音频数据转换后的电信号,传输到扬声器561,由扬声器561转换为声音信号输出;另一方面,传声器562将收集的声音信号转换为电信号,由音频电路560接收后转换为音频数据,再将音频数据输出处理器580处理后,经RF电路510以发送给比如另一手机,或者将音频数据输出至存储器520以便进一步处理。
WiFi属于短距离无线传输技术，无人机通过WiFi模块570可以帮助用户收发电子邮件、浏览网页和访问流式媒体等，它为用户提供了无线的宽带互联网访问。虽然图32示出了WiFi模块570，但是可以理解的是，其并不属于手机的必须构成，完全可以根据需要在不改变发明的本质的范围内而省略。
处理器580是无人机的控制中心，利用各种接口和线路连接整个无人机的各个部分，通过运行或执行存储在存储器520内的软件程序和/或模块，以及调用存储在存储器520内的数据，执行无人机的各种功能和处理数据，从而对无人机进行整体监控。在一种示例中，处理器580可包括一个或多个处理单元;比如，处理器580可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器580中。
无人机还包括给各个部件供电的电源590(比如电池)，优选的，电源可以通过电源管理系统与处理器580逻辑相连，从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
尽管未示出,无人机还可以包括摄像头、蓝牙模块等,在此不再赘述。
在本发明实施例中,该终端所包括的处理器580还具有上述飞行器的障碍物检测的方法和/或获取飞行定位信息的方法和/或获取飞行高度信息的方法所对应的功能。
所属领域的技术人员可以清楚地了解到，为描述的方便和简洁，上述描述的系统，装置和单元的具体工作过程，可以参考前述方法实施例中的对应过程，在此不再赘述。
在本申请所提供的几个实施例中，应该理解到，所揭露的系统，装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(英文全称:Read-Only Memory,英文缩写:ROM)、随机存取存储器(英文全称:Random Access Memory,英文缩写:RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本发明的技术方案,而非对其限制;尽管参照前述实施例对本发明进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方案的精神和范围。

Claims (42)

  1. 一种飞行器的障碍物检测方法,包括:
    通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,所述第一图像由所述双目摄像头中的左眼拍摄得到,所述第二图像由所述双目摄像头中的右眼拍摄得到;
    确定所述目标障碍物投影在所述第一图像中的第一像素位置,以及所述目标障碍物投影在所述第二图像中的第二像素位置,并根据所述第一像素位置和所述第二像素位置,计算所述第一像素位置和所述第二像素位置之间的视差值;
    根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,计算所述双目摄像头距离所述目标障碍物的深度值,以用于检测所述飞行器的飞行方向上是否有障碍物阻挡。
  2. 根据权利要求1所述的方法,所述通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像之后,所述方法还包括:
    对所述第一图像和所述第二图像分别进行缩放处理和裁剪处理;
    将处理后的第一图像、第二图像分别转换为第一灰度图和第二灰度图,并对所述第一灰度图和所述第二灰度图分别进行均衡化处理;
    所述确定所述目标障碍物投影在所述第一图像中的第一像素位置,以及所述目标障碍物投影在所述第二图像中的第二像素位置,包括:
    从均衡化处理后的第一灰度图中确定出所述目标障碍物投影到的第一像素位置，以及从均衡化处理后的第二灰度图中确定出所述目标障碍物投影到的第二像素位置。
  3. 根据权利要求1或2所述的方法,所述通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像之后,所述方法还包括:
    获取所述双目摄像头的内参信息和外参信息,所述内参信息包括:所述左眼的径向畸变参数和切向畸变参数、所述右眼的径向畸变参数和切向畸变参数,所述外参信息包括:所述双目摄像头中左眼和右眼之间的旋转矩阵和 偏移矩阵;
    根据所述内参信息分别对所述第一图像和所述第二图像进行畸变补偿,得到畸变补偿完成后的第一图像和畸变补偿完成后的第二图像;
    根据所述外参信息,对所述畸变补偿完成后的第一图像和所述畸变补偿完成后的第二图像,进行同一水平面上的图像校正处理。
  4. 根据权利要求1所述的方法,所述根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,计算所述双目摄像头距离所述目标障碍物的深度值之后,所述方法还包括:
    将所述双目摄像头距离所述目标障碍物的深度值,发送给所述飞行器的飞行控制模块,由所述飞行控制模块根据所述双目摄像头距离所述目标障碍物的深度值,判断在其飞行方向上是否有障碍物阻挡。
  5. 根据权利要求1至4中任一项所述的方法,所述确定所述目标障碍物投影在所述第一图像中的第一像素位置,以及所述目标障碍物投影在所述第二图像中的第二像素位置,包括:
    根据所述飞行器在所述双目摄像头中形成的机身尺寸图像,确定图像选择窗口,所述图像选择窗口的总像素值大于所述机身尺寸图像的总像素值、且小于所述第一图像的总像素值、且小于所述第二图像的总像素值;
    使用所述图像选择窗口分别从所述第一图像、所述第二图像中选择出与所述图像选择窗口对应的第一子图像和第二子图像;
    使用全局匹配SGBM算法,对所述第一子图像和所述第二子图像分别拍摄到的所述目标障碍物进行图像点的匹配,通过匹配成功的图像点确定所述目标障碍物投影在所述第一子图像中的第一像素位置,以及所述目标障碍物投影在所述第二子图像中的第二像素位置。
  6. 根据权利要求5所述的方法,所述根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,计算所述双目摄像头距离所述目标障碍物的深度值,包括:
    根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,分别计算出与所述图像选择窗口对应的所有像素点的深度值;
    将所述图像选择窗口划分为多个图像子窗口,根据与所述图像选择窗口 对应的所有像素点的深度值,分别计算出每个图像子窗口的深度值;
    从所述每个图像子窗口的深度值中选择深度值最小的图像子窗口,确定所述深度值最小的图像子窗口的深度值为所述双目摄像头距离所述目标障碍物的深度值。
  7. 根据权利要求6所述的方法,所述确定所述深度值最小的图像子窗口的深度值为所述双目摄像头距离所述目标障碍物的深度值之后,所述方法还包括:
    将所述每个图像子窗口的深度值均发送给所述飞行器的飞行控制模块,由所述飞行控制模块根据所述每个图像子窗口的深度值选择避障方向后再调整所述飞行器的飞行姿态。
  8. 一种飞行器的障碍物检测装置,包括:
    图像采集模块,用于通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像,其中,所述第一图像由所述双目摄像头中的左眼拍摄得到,所述第二图像由所述双目摄像头中的右眼拍摄得到;
    视差计算模块,用于确定所述目标障碍物投影在所述第一图像中的第一像素位置,以及所述目标障碍物投影在所述第二图像中的第二像素位置,并根据所述第一像素位置和所述第二像素位置,计算所述第一像素位置和所述第二像素位置之间的视差值;
    深度计算模块,用于根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,计算所述双目摄像头距离所述目标障碍物的深度值,以用于检测所述飞行器的飞行方向上是否有障碍物阻挡。
  9. 根据权利要求8所述的装置,所述飞行器的障碍物检测装置还包括:图像预处理模块,其中,
    所述图像预处理模块,用于所述图像采集模块通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像之后,对所述第一图像和所述第二图像分别进行缩放处理和裁剪处理;将处理后的第一图像、第二图像分别转换为第一灰度图和第二灰度图,并对所述第一灰度图和所述第二灰度图分别进行均衡化处理;
    所述视差计算模块，具体用于从均衡化处理后的第一灰度图中确定出所述目标障碍物投影到的第一像素位置，以及从均衡化处理后的第二灰度图中确定出所述目标障碍物投影到的第二像素位置。
  10. 根据权利要求8或9所述的装置,所述飞行器的障碍物检测装置,还包括:
    获取模块,用于所述图像采集模块通过飞行器配置的双目摄像头对目标障碍物进行实时的图像采集,得到第一图像和第二图像之后,获取所述双目摄像头的内参信息和外参信息,所述内参信息包括:所述左眼的径向畸变参数和切向畸变参数、所述右眼的径向畸变参数和切向畸变参数,所述外参信息包括:所述双目摄像头中左眼和右眼之间的旋转矩阵和偏移矩阵;
    畸变补偿模块,用于根据所述内参信息分别对所述第一图像和所述第二图像进行畸变补偿,得到畸变补偿完成后的第一图像和畸变补偿完成后的第二图像;
    校正模块,用于根据所述外参信息,对所述畸变补偿完成后的第一图像和所述畸变补偿完成后的第二图像,进行同一水平面上的图像校正处理。
  11. 根据权利要求8所述的装置,所述飞行器的障碍物检测装置,还包括:
    第一发送模块,用于所述深度计算模块根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,计算所述双目摄像头距离所述目标障碍物的深度值之后,将所述双目摄像头距离所述目标障碍物的深度值,发送给所述飞行器的飞行控制模块,由所述飞行控制模块根据所述双目摄像头距离所述目标障碍物的深度值,判断在其飞行方向上是否有障碍物阻挡。
  12. 根据权利要求8至11中任一项所述的装置,所述视差计算模块,包括:
    窗口确定单元,用于根据所述飞行器在所述双目摄像头中形成的机身尺寸图像,确定图像选择窗口,所述图像选择窗口的总像素值大于所述机身尺寸图像的总像素值、且小于所述第一图像的总像素值、且小于所述第二图像的总像素值;
    图像区域选择单元,用于使用所述图像选择窗口分别从所述第一图像、所述第二图像中选择出与所述图像选择窗口对应的第一子图像和第二子图像;
    图像匹配单元,用于使用全局匹配SGBM算法,对所述第一子图像和所述第二子图像分别拍摄到的所述目标障碍物进行图像点的匹配,通过匹配成功的图像点确定所述目标障碍物投影在所述第一子图像中的第一像素位置,以及所述目标障碍物投影在所述第二子图像中的第二像素位置。
  13. 根据权利要求12所述的装置,所述深度计算模块,包括:
    像素点深度值计算单元,用于根据所述第一像素位置和所述第二像素位置之间的视差值、预置的视差深度映射矩阵,分别计算出与所述图像选择窗口对应的所有像素点的深度值;
    子窗口深度值计算单元,用于将所述图像选择窗口划分为多个图像子窗口,根据与所述图像选择窗口对应的所有像素点的深度值,分别计算出每个图像子窗口的深度值;
    深度值确定单元,用于从所述每个图像子窗口的深度值中选择深度值最小的图像子窗口,确定所述深度值最小的图像子窗口的深度值为所述双目摄像头距离所述目标障碍物的深度值。
  14. 根据权利要求13所述的装置,所述飞行器的障碍物检测装置,还包括:
    第二发送模块，用于所述深度值确定模块确定所述深度值最小的图像子窗口的深度值为所述双目摄像头距离所述目标障碍物的深度值之后，将所述每个图像子窗口的深度值均发送给所述飞行器的飞行控制模块，由所述飞行控制模块根据所述每个图像子窗口的深度值选择避障方向后再调整所述飞行器的飞行姿态。
  15. 一种飞行器的障碍物检测方法,包括:
    飞行器执行上述权利要求1-7任意一项所述飞行器的障碍物检测方法。
  16. 一种获取飞行定位信息的方法,所述方法应用于飞行器,所述飞行器包括第一摄像头以及第二摄像头,其中,所述第一摄像头用于获取N个不 同时刻所对应的N个第一实时图像,所述第二摄像头用于获取所述N个不同时刻对应的N个第二实时图像,所述N为大于或等于2的正整数,所述方法包括:
    根据所述N个第一实时图像,确定(N-1)个第一本征参数,以及根据所述N个第二实时图像,确定(N-1)个第二本征参数;其中,所述第一本征参数用于表征所述N个第一实时图像中相邻两帧图像的平移参数,所述第二本征参数用于表征所述N个第二实时图像中相邻两帧图像的平移参数;
    通过所述第一摄像头,获取所述N个不同时刻中起始时刻的第一初始定位信息,以及通过所述第二摄像头,获取所述N个不同时刻中起始时刻的第二初始定位信息;
    根据所述第一初始定位信息与所述(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,以及根据所述第二初始定位信息与所述(N-1)个第二本征参数,确定所述(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中,所述(N-1)个时刻为所述N个不同时刻中除起始时刻之外的(N-1)个时刻;
    根据所述(N-1)个第一飞行定位信息以及所述(N-1)个第二飞行定位信息,获取所述N个不同时刻中结束时刻所对应的目标飞行定位信息。
  17. 根据权利要求16所述的方法,所述第一摄像头与所述第二摄像头在预置摄像头距离范围内设置于所述飞行器的同一水平线上。
  18. 根据权利要求16所述的方法,所述根据所述N个第一实时图像,确定(N-1)个第一本征参数,以及根据所述N个第二实时图像,确定(N-1)个第二本征参数之前,所述方法还包括:
    通过所述第一摄像头获取第一时刻对应的第一子图像以及第二时刻对应的第二子图像,其中,所述第一时刻与所述第二时刻均为所述N个不同时刻中的两个时刻,所述第一子图像与所述第二子图像均属于所述第一实时图像;
    通过所述第二摄像头获取所述第一时刻对应的第三子图像以及所述第二时刻对应的第四子图像,其中,所述第三子图像与所述第四子图像均属于所述第二实时图像;
    采用基于双目立体视觉方式,测量得到第一深度信息以及第二深度信息, 其中,所述第一深度信息为根据所述第一子图像和所述第二子图像得到,所述第二深度信息为根据所述第三子图像和所述第四子图像得到。
  19. 根据权利要求18所述的方法,所述第一本征参数包括第一旋转矩阵以及第一平移向量,所述第二本征参数包括第二旋转矩阵以及第二平移向量,其中,所述第一旋转矩阵用于表示所述第一摄像头的角度变化,所述第二旋转矩阵用于表示所述第二摄像头的角度变化,所述第一平移向量用于表示所述第一摄像头的高度变化,所述第二平移向量用于表示所述第二摄像头的高度变化。
  20. 根据权利要求16至19中任一项所述的方法,所述根据所述(N-1)个第一飞行定位信息以及所述(N-1)个第二飞行定位信息,获取所述N个不同时刻中结束时刻所对应的目标飞行定位信息之后,所述方法还包括:
    根据所述目标飞行定位信息,确定第(N+1)时刻所对应的第一子飞行定位信息;
    采用预置定位约束条件以及所述第一子飞行定位信息,获取所述第(N+1)时刻所对应的第二子飞行定位信息;
    根据所述第一子飞行定位信息以及所述第一本征参数,确定第(N+2)时刻所对应的第三子飞行定位信息;
    采用所述预置定位约束条件以及所述第三子飞行定位信息,获取所述第(N+2)时刻所对应的第四子飞行定位信息;
    计算所述第一子飞行定位信息与所述第三目标飞行定位信息的第一最优解,并计算所述第二子飞行定位信息与所述第四子飞行定位信息的第二最优解,所述第一最优解与所述第二最优解构成所述第(N+2)时刻的飞行定位信息。
  21. 一种飞行器,所述飞行器包括第一摄像头以及第二摄像头,其中,所述第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,所述第二摄像头用于获取所述N个不同时刻对应的N个第二实时图像,所述N为大于或等于2的正整数,所述飞行器包括:
    第一确定模块,用于根据所述N个第一实时图像,确定(N-1)个第一本征参数,以及根据所述N个第二实时图像,确定(N-1)个第二本征参数;其中, 所述第一本征参数用于表征所述N个第一实时图像中相邻两帧图像的平移参数,所述第二本征参数用于表征所述N个第二实时图像中相邻两帧图像的平移参数;
    第一获取模块,用于通过所述第一摄像头,获取所述N个不同时刻中起始时刻的第一初始定位信息,以及通过所述第二摄像头,获取所述N个不同时刻中起始时刻的第二初始定位信息;
    第二确定模块,用于根据所述第一初始定位信息与所述(N-1)个第一本征参数,确定(N-1)个时刻对应的(N-1)个第一飞行定位信息,以及根据所述第二初始定位信息与所述(N-1)个第二本征参数,确定所述(N-1)个时刻对应的(N-1)个第二飞行定位信息;其中,所述(N-1)个时刻为所述N个不同时刻中除起始时刻之外的(N-1)个时刻;
    第二获取模块,用于根据所述(N-1)个第一飞行定位信息以及所述(N-1)个第二飞行定位信息,获取所述N个不同时刻中结束时刻所对应的目标飞行定位信息。
  22. 根据权利要求21所述的飞行器,所述飞行器还包括:
    设置模块,用于在预置摄像头距离范围内,将所述第一摄像头与所述第二摄像头设置于所述飞行器的同一水平线上。
  23. 根据权利要求21所述的飞行器,所述飞行器还包括:
    第三获取模块,用于所述第一确定模块根据所述N个第一实时图像,确定(N-1)个第一本征参数,以及根据所述N个第二实时图像,确定(N-1)个第二本征参数之前,通过所述第一摄像头获取第一时刻对应的第一子图像以及第二时刻对应的第二子图像,其中,所述第一时刻与所述第二时刻均为所述N个不同时刻中的两个时刻,所述第一子图像与所述第二子图像均属于所述第一实时图像;
    第四获取模块,用于通过所述第二摄像头获取所述第一时刻对应的第三子图像以及所述第二时刻对应的第四子图像,其中,所述第三子图像与所述第四子图像均属于所述第二实时图像;
    测量模块,用于采用基于双目立体视觉方式,测量得到第一深度信息以及第二深度信息,其中,所述第一深度信息为根据所述第一子图像和所述第 二子图像得到,所述第二深度信息为根据所述第三子图像和所述第四子图像得到。
  24. 根据权利要求23所述的飞行器,所述第一本征参数包括第一旋转矩阵以及第一平移向量,所述第二本征参数包括第二旋转矩阵以及第二平移向量,其中,所述第一旋转矩阵用于表示所述第一摄像头的角度变化,所述第二旋转矩阵用于表示所述第二摄像头的角度变化,所述第一平移向量用于表示所述第一摄像头的高度变化,所述第二平移向量用于表示所述第二摄像头的高度变化。
  25. 根据权利要求21至24中任一项所述的飞行器,所述飞行器还包括:
    第三确定模块,用于所述第二获取模块根据所述(N-1)个第一飞行定位信息以及所述(N-1)个第二飞行定位信息,获取所述N个不同时刻中结束时刻所对应的目标飞行定位信息之后,根据所述目标飞行定位信息,确定第(N+1)时刻所对应的第一子飞行定位信息;
    第五获取模块,用于采用预置定位约束条件以及所述第一子飞行定位信息,获取所述第(N+1)时刻所对应的第二子飞行定位信息;
    第四确定模块,用于根据所述第一子飞行定位信息以及所述第一本征参数,确定第(N+2)时刻所对应的第三子飞行定位信息;
    第六获取模块,用于采用所述预置定位约束条件以及所述第三子飞行定位信息,获取所述第(N+2)时刻所对应的第四子飞行定位信息;
    计算模块,用于计算所述第一子飞行定位信息与所述第三目标飞行定位信息的第一最优解,并计算所述第二子飞行定位信息与所述第四子飞行定位信息的第二最优解,所述第一最优解与所述第二最优解构成所述第(N+2)时刻的飞行定位信息。
  26. 一种获取飞行定位信息的方法,所述方法应用于飞行器,所述飞行器包括第一摄像头以及第二摄像头,其中,所述第一摄像头用于获取N个不同时刻所对应的N个第一实时图像,所述第二摄像头用于获取所述N个不同时刻对应的N个第二实时图像,所述N为大于或等于2的正整数,所述方法包括:
    所述飞行器执行上述权利要求16-20任意一项所述的获取飞行定位信息 的方法。
  27. 一种获取飞行高度信息的方法,所述方法应用于飞行器,所述飞行器包括第一摄像头以及第二摄像头,其中,所述第一摄像头用于获取第一实时图像,所述第二摄像头用于获取第二实时图像,所述方法包括:
    根据所述第一实时图像获取第一深度图像,以及根据所述第二实时图像获取第二深度图像;
    根据所述第一深度图像以及所述第二深度图像确定目标融合图像,所述目标融合图像中包含至少一个预设区域;
    确定所述目标融合图像中每个预设区域对应的深度值;
    根据所述每个预设区域对应的深度值以及所述飞行器的当前飞行姿态信息,获取飞行高度信息。
  28. 根据权利要求27所述的方法,所述根据所述第一实时图像获取第一深度图像,以及根据所述第二实时图像获取第二深度图像之前,所述方法还包括:
    在预置摄像头距离范围内,将所述第一摄像头与所述第二摄像头设置于所述飞行器的同一水平线上。
  29. 根据权利要求27或28所述的方法,所述根据所述第一实时图像获取第一深度图像,以及根据所述第二实时图像获取第二深度图像,包括:
    按照预设图像规格,对所述第一实时图像以及所述第二实时图像进行缩放处理;
    采用预先获取到的内部参数以及外部参数,对经过缩放处理后的所述第一实时图像以及所述第二实时图像进行图像校正,得到第一深度图像以及第二深度图像。
  30. 根据权利要求29所述的方法,所述采用预先获取到的内部参数以及外部参数,对经过缩放处理后的所述第一实时图像以及所述第二实时图像进行图像校正,包括:
    采用预先获取到的内部参数,对经过缩放处理后的所述第一实时图像以及所述第二实时图像进行畸变补偿,其中,所述内部参数包含所述第一摄像头的桶形畸变参数和切向畸变参数,以及所述第二摄像头的桶形畸变参数和 切向畸变参数;
    采用预先获取到的外部参数,对经过缩放处理后的所述第一实时图像以及所述第二实时图像进行旋转和平移,其中,所述外部参数包含所述第一摄像头的平移参数和旋转参数,以及所述第二摄像头的平移参数和旋转参数。
  31. 根据权利要求30所述的方法，所述根据所述第一深度图像以及所述第二深度图像确定目标融合图像，包括:
    采用立体视觉算法,确定所述第一深度图像以及所述第二深度图像之间的视差值;
    根据所述视差值,将所述第一深度图像以及所述第二深度图像合成为目标融合图像。
  32. 根据权利要求31所述的方法，所述确定所述目标融合图像中每个预设区域对应的深度值，包括:
    根据所述视差值,确定所述目标融合图像中每个像素点的深度值;
    根据每个像素点的深度值,分别确定每个预设区域对应的深度值。
  33. 一种飞行器,所述飞行器包括第一摄像头以及第二摄像头,其中,所述第一摄像头用于获取第一实时图像,所述第二摄像头用于获取第二实时图像,所述飞行器还包括:
    第一获取模块,用于根据所述第一实时图像获取第一深度图像,以及根据所述第二实时图像获取第二深度图像;
    第一确定模块,用于根据所述第一获取模块获取的所述第一深度图像以及所述第二深度图像确定目标融合图像,所述目标融合图像中包含至少一个预设区域;
    第二确定模块,用于确定所述第一确定模块得到的所述目标融合图像中每个预设区域对应的深度值;
    第二获取模块,用于根据所述第二确定模块确定的所述每个预设区域对应的深度值以及所述飞行器的当前飞行姿态信息,获取飞行高度信息。
  34. 根据权利要求33所述的飞行器,所述飞行器还包括:
    设置模块,用于所述第一获取模块根据所述第一实时图像获取第一深度图像,以及根据所述第二实时图像获取第二深度图像之前,在预置摄像头距 离范围内,将所述第一摄像头与所述第二摄像头设置于所述飞行器的同一水平线上。
  35. 根据权利要求33或34所述的飞行器,所述第一获取模块包括:
    缩放单元,用于按照预设图像规格,对所述第一实时图像以及所述第二实时图像进行缩放处理;
    校正单元,用于采用预先获取到的内部参数以及外部参数,对经过所述缩放单元缩放处理后的所述第一实时图像以及所述第二实时图像进行图像校正,并得到所述第一深度图像以及所述第二深度图像。
  36. 根据权利要求35所述的飞行器,所述校正单元包括:
    第一处理子单元,用于采用预先获取到的内部参数,对经过缩放处理后的所述第一实时图像以及所述第二实时图像进行畸变补偿,其中,所述内部参数包含所述第一摄像头的桶形畸变参数和切向畸变参数,以及所述第二摄像头的桶形畸变参数和切向畸变参数;
    第二处理子单元,用于采用预先获取到的外部参数,对经过缩放处理后的所述第一实时图像以及所述第二实时图像进行旋转和平移,其中,所述外部参数包含所述第一摄像头的平移参数和旋转参数,以及所述第二摄像头的平移参数和旋转参数。
  37. 根据权利要求36所述的飞行器,所述第一确定模块包括:
    第一确定单元,用于采用立体视觉算法,确定所述第一深度图像以及所述第二深度图像之间的视差值;
    合成单元，用于根据所述视差值，将所述第一深度图像以及所述第二深度图像合成为目标融合图像。
  38. 根据权利要求37所述的飞行器,所述第二确定模块包括:
    第二确定单元,用于根据所述视差值,确定所述目标融合图像中每个像素点的深度值;
    第三确定单元,用于根据每个像素点的深度值,分别确定每个预设区域对应的深度值。
  39. 一种获取飞行高度信息的方法,所述方法应用于飞行器,所述飞行器包括第一摄像头以及第二摄像头,其中,所述第一摄像头用于获取第一实 时图像,所述第二摄像头用于获取第二实时图像,所述方法包括:
    所述飞行器执行上述权利要求27-32任意一项所述的获取飞行高度信息的方法。
  40. 一种设备,所述设备包括:
    处理器以及存储器;
    所述存储器用于存储程序代码,并将所述程序代码传输给所述处理器;
    所述处理器用于根据所述程序代码中的指令执行权利要求1-7、16-20、27-32任一项所述的方法。
  41. 一种存储介质,所述存储介质用于存储程序代码,所述程序代码用于执行权利要求1-7、16-20、27-32任一项所述的方法。
  42. 一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行权利要求1-7、16-20、27-32任一项所述的方法。
PCT/CN2017/111577 2016-11-24 2017-11-17 飞行器的信息获取方法、装置及设备 WO2018095278A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/296,073 US10942529B2 (en) 2016-11-24 2019-03-07 Aircraft information acquisition method, apparatus and device

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201611045197.6 2016-11-24
CN201611045197.6A CN106529495B (zh) 2016-11-24 2016-11-24 一种飞行器的障碍物检测方法和装置
CN201611100232.XA CN106767682A (zh) 2016-12-01 2016-12-01 一种获取飞行高度信息的方法及飞行器
CN201611100232.X 2016-12-01
CN201611100259.9A CN106767817B (zh) 2016-12-01 2016-12-01 一种获取飞行定位信息的方法及飞行器
CN201611100259.9 2016-12-01

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/296,073 Continuation US10942529B2 (en) 2016-11-24 2019-03-07 Aircraft information acquisition method, apparatus and device

Publications (1)

Publication Number Publication Date
WO2018095278A1 true WO2018095278A1 (zh) 2018-05-31

Family

ID=62195736

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/111577 WO2018095278A1 (zh) 2016-11-24 2017-11-17 飞行器的信息获取方法、装置及设备

Country Status (2)

Country Link
US (1) US10942529B2 (zh)
WO (1) WO2018095278A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407103A (zh) * 2018-09-07 2019-03-01 昆明理工大学 一种无人机雾天障碍物识别装置及其识别方法
CN112489140A (zh) * 2020-12-15 2021-03-12 北京航天测控技术有限公司 姿态测量方法
CN112862687A (zh) * 2021-02-24 2021-05-28 之江实验室 一种基于二维特征点的双目内窥图像三维拼接方法
CN113099204A (zh) * 2021-04-13 2021-07-09 北京航空航天大学青岛研究院 一种基于vr头戴显示设备的远程实景增强现实方法

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110119154A (zh) * 2016-11-30 2019-08-13 深圳市大疆创新科技有限公司 飞行器的控制方法、装置和设备以及飞行器
JP6699902B2 (ja) * 2016-12-27 2020-05-27 株式会社東芝 画像処理装置及び画像処理方法
EP3550506B1 (en) * 2018-04-05 2021-05-12 Everdrone AB A method for improving the interpretation of the surroundings of a uav, and a uav system
EP3761220A1 (en) 2019-07-05 2021-01-06 Everdrone AB Method for improving the interpretation of the surroundings of a vehicle
US11022972B2 (en) * 2019-07-31 2021-06-01 Bell Textron Inc. Navigation system with camera assist
CN111274959B (zh) * 2019-12-04 2022-09-16 北京航空航天大学 一种基于可变视场角的加油锥套位姿精确测量方法
CN111089564B (zh) * 2019-12-16 2021-12-07 上海航天控制技术研究所 一种动平台动目标双目测距***及方法
TWI726536B (zh) * 2019-12-16 2021-05-01 財團法人工業技術研究院 影像擷取方法及影像擷取設備
CN111105462B (zh) * 2019-12-30 2024-05-28 联想(北京)有限公司 位姿确定方法及装置、增强现实设备和可读存储介质
US20230106278A1 (en) 2020-02-21 2023-04-06 Harman International Industries, Incorporated Image processing method, apparatus, device and medium
US11232315B2 (en) 2020-04-28 2022-01-25 NextVPU (Shanghai) Co., Ltd. Image depth determining method and living body identification method, circuit, device, and medium
CN111572790A (zh) * 2020-05-07 2020-08-25 重庆交通大学 一种无人机可伸缩全面保护控制***及方法
CN111666876B (zh) * 2020-06-05 2023-06-09 阿波罗智联(北京)科技有限公司 用于检测障碍物的方法、装置、电子设备和路侧设备
CN111932622B (zh) * 2020-08-10 2022-06-28 浙江大学 一种无人机的飞行高度的确定装置、方法及***
CN112184832B (zh) * 2020-09-24 2023-01-17 中国人民解放军军事科学院国防科技创新研究院 一种基于增强现实技术的可见光相机与雷达联合探测方法
CN112489186B (zh) * 2020-10-28 2023-06-27 中汽数据(天津)有限公司 一种自动驾驶双目数据感知方法
CN113312979B (zh) * 2021-04-30 2024-04-16 阿波罗智联(北京)科技有限公司 图像处理方法、装置、电子设备、路侧设备及云控平台
CN114565570B (zh) * 2022-02-18 2024-03-15 成都飞机工业(集团)有限责任公司 一种弱刚性蒙皮锪窝孔深度测量方法、装置、设备及介质
CN114676379B (zh) * 2022-02-25 2023-05-05 中国人民解放军国防科技大学 高超声速巡航飞行器整体红外辐射特性计算方法及装置
US20230316740A1 (en) * 2022-03-31 2023-10-05 Wing Aviation Llc Method for Controlling an Unmanned Aerial Vehicle to Avoid Obstacles
CN117308969B (zh) * 2023-09-27 2024-05-14 广东电网有限责任公司汕尾供电局 一种启发式快速探索随机树的电力杆塔三维航线规划方法

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007047953A2 (en) * 2005-10-20 2007-04-26 Prioria, Inc. System and method for onboard vision processing
CN101419055A (zh) * 2008-10-30 2009-04-29 北京航空航天大学 基于视觉的空间目标位姿测量装置和方法
CN101504287A (zh) * 2009-01-22 2009-08-12 浙江大学 基于视觉信息的无人飞行器自主着陆的姿态参数估算方法
CN102313536A (zh) * 2011-07-21 2012-01-11 清华大学 基于机载双目视觉的障碍物感知方法
CN101527046B (zh) * 2009-04-28 2012-09-05 青岛海信数字多媒体技术国家重点实验室有限公司 一种运动检测方法、装置和***
CN102779347A (zh) * 2012-06-14 2012-11-14 清华大学 一种用于飞行器的目标跟踪与定位方法和装置
CN102939763A (zh) * 2010-06-14 2013-02-20 高通股份有限公司 计算三维图像的视差
CN105222760A (zh) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 一种基于双目视觉的无人机自主障碍物检测***及方法
CN105974938A (zh) * 2016-06-16 2016-09-28 零度智控(北京)智能科技有限公司 避障方法、装置、载体及无人机
CN106127788A (zh) * 2016-07-04 2016-11-16 触景无限科技(北京)有限公司 一种视觉避障方法和装置
CN106529495A (zh) * 2016-11-24 2017-03-22 腾讯科技(深圳)有限公司 一种飞行器的障碍物检测方法和装置
CN106767682A (zh) * 2016-12-01 2017-05-31 腾讯科技(深圳)有限公司 一种获取飞行高度信息的方法及飞行器
CN106767817A (zh) * 2016-12-01 2017-05-31 腾讯科技(深圳)有限公司 一种获取飞行定位信息的方法及飞行器

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5145585B2 (zh) * 1972-12-11 1976-12-04
US6405975B1 (en) * 1995-12-19 2002-06-18 The Boeing Company Airplane ground maneuvering camera system
JP5145585B2 (ja) * 2007-06-08 2013-02-20 国立大学法人 熊本大学 物標検出装置
US7864096B2 (en) * 2008-01-23 2011-01-04 Aviation Communication & Surveillance Systems Llc Systems and methods for multi-sensor collision avoidance
CN100554877C (zh) 2008-02-19 2009-10-28 Harbin Engineering University Real-time binocular vision guidance method for underwater vehicles
CN102435174B (zh) 2011-11-01 2013-04-10 Tsinghua University Obstacle detection method and apparatus based on hybrid binocular vision
CA2833985C (en) * 2012-11-19 2020-07-07 Rosemount Aerospace, Inc. Collision avoidance system for aircraft ground operations
US9583012B1 (en) * 2013-08-16 2017-02-28 The Boeing Company System and method for detection and avoidance
CN103679707A (zh) 2013-11-26 2014-03-26 Xi'an Jiaotong University Road obstacle detection system and detection method based on binocular camera disparity maps
US9047771B1 (en) * 2014-03-07 2015-06-02 The Boeing Company Systems and methods for ground collision avoidance
US9870617B2 (en) * 2014-09-19 2018-01-16 Brain Corporation Apparatus and methods for saliency detection based on color occurrence analysis
CN112859899A (zh) * 2014-10-31 2021-05-28 SZ DJI Technology Co., Ltd. Systems and methods for monitoring with visual markers
US9955056B2 (en) * 2015-03-16 2018-04-24 Qualcomm Incorporated Real time calibration for multi-camera wireless device
US10565887B2 (en) * 2015-03-16 2020-02-18 Sikorsky Aircraft Corporation Flight initiation proximity warning system
US9933264B2 (en) * 2015-04-06 2018-04-03 Hrl Laboratories, Llc System and method for achieving fast and reliable time-to-contact estimation using vision and range sensor data for autonomous navigation
US10019907B2 (en) * 2015-09-11 2018-07-10 Qualcomm Incorporated Unmanned aerial vehicle obstacle detection and avoidance
US10665115B2 (en) * 2016-01-05 2020-05-26 California Institute Of Technology Controlling unmanned aerial vehicles to avoid obstacle collision
CN105787447A (zh) 2016-02-26 2016-07-20 Autel Robotics Co., Ltd. Omnidirectional obstacle avoidance method and system for an unmanned aerial vehicle based on binocular vision
CN105807786A (zh) 2016-03-04 2016-07-27 Autel Robotics Co., Ltd. Method and system for automatic obstacle avoidance of an unmanned aerial vehicle
CN105913474A (zh) 2016-04-05 2016-08-31 Graduate School at Shenzhen, Tsinghua University Binocular three-dimensional reconstruction apparatus, three-dimensional reconstruction method thereof, and an Android application
CN106020232B (zh) 2016-07-07 2019-08-02 Tianjin Aerospace Zhongwei Data System Technology Co., Ltd. Obstacle avoidance apparatus and method for an unmanned aerial vehicle
US10126722B2 (en) * 2016-08-01 2018-11-13 Qualcomm Incorporated System and method of dynamically controlling parameters for processing sensor output data for collision avoidance and path planning

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007047953A2 (en) * 2005-10-20 2007-04-26 Prioria, Inc. System and method for onboard vision processing
CN101419055A (zh) * 2008-10-30 2009-04-29 Beihang University Vision-based apparatus and method for measuring the pose of a space target
CN101504287A (zh) * 2009-01-22 2009-08-12 Zhejiang University Attitude parameter estimation method for autonomous landing of an unmanned aerial vehicle based on visual information
CN101527046B (zh) * 2009-04-28 2012-09-05 Qingdao Hisense Digital Multimedia Technology State Key Laboratory Co., Ltd. Motion detection method, apparatus and system
CN102939763A (zh) * 2010-06-14 2013-02-20 Qualcomm Incorporated Computing disparity for three-dimensional images
CN102313536A (zh) * 2011-07-21 2012-01-11 Tsinghua University Obstacle perception method based on airborne binocular vision
CN102779347A (zh) * 2012-06-14 2012-11-14 Tsinghua University Target tracking and positioning method and apparatus for an aircraft
CN105222760A (zh) * 2015-10-22 2016-01-06 EFY Intelligent Control (Tianjin) Tech Co., Ltd. Autonomous obstacle detection system and method for an unmanned aerial vehicle based on binocular vision
CN105974938A (zh) * 2016-06-16 2016-09-28 ZEROTECH (Beijing) Intelligence Technology Co., Ltd. Obstacle avoidance method, apparatus, carrier and unmanned aerial vehicle
CN106127788A (zh) * 2016-07-04 2016-11-16 Senscape Technologies (Beijing) Co., Ltd. Visual obstacle avoidance method and apparatus
CN106529495A (zh) * 2016-11-24 2017-03-22 Tencent Technology (Shenzhen) Co., Ltd. Obstacle detection method and apparatus for an aircraft
CN106767682A (zh) * 2016-12-01 2017-05-31 Tencent Technology (Shenzhen) Co., Ltd. Method for obtaining flight altitude information, and aircraft
CN106767817A (zh) * 2016-12-01 2017-05-31 Tencent Technology (Shenzhen) Co., Ltd. Method for obtaining flight positioning information, and aircraft

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SU, DONG: "Navigation and Obstacle Avoidance for Miniature UAV Based on Binocular Stereo Vision", University of Electronic Science and Technology of China Master's Thesis, 1 March 2014 (2014-03-01) *
ZHANG, LIANG ET AL.: "Pose Estimation Algorithm and Verification Based on Binocular Stereo Vision for Unmanned Aerial Vehicle", Journal of Harbin Institute of Technology, vol. 46, no. 5, 31 May 2014 (2014-05-31), pages 66-72 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407103A (zh) * 2018-09-07 2019-03-01 Kunming University of Science and Technology Obstacle recognition apparatus for an unmanned aerial vehicle in foggy weather, and recognition method thereof
CN112489140A (zh) * 2020-12-15 2021-03-12 Beijing Aerospace Measurement & Control Technology Co., Ltd. Attitude measurement method
CN112489140B (zh) * 2020-12-15 2024-04-05 Beijing Aerospace Measurement & Control Technology Co., Ltd. Attitude measurement method
CN112862687A (zh) * 2021-02-24 2021-05-28 Zhejiang Lab Three-dimensional stitching method for binocular endoscopic images based on two-dimensional feature points
CN112862687B (zh) * 2021-02-24 2023-10-31 Zhejiang Lab Three-dimensional stitching method for binocular endoscopic images based on two-dimensional feature points
CN113099204A (zh) * 2021-04-13 2021-07-09 Qingdao Research Institute of Beihang University Remote real-scene augmented reality method based on a VR head-mounted display device
CN113099204B (zh) * 2021-04-13 2022-12-13 Qingdao Research Institute of Beihang University Remote real-scene augmented reality method based on a VR head-mounted display device

Also Published As

Publication number Publication date
US20190206073A1 (en) 2019-07-04
US10942529B2 (en) 2021-03-09

Similar Documents

Publication Publication Date Title
WO2018095278A1 (zh) Information acquisition method, apparatus and device for aircraft
US11649052B2 (en) System and method for providing autonomous photography and videography
CN106529495B (zh) Obstacle detection method and apparatus for an aircraft
WO2018023492A1 (zh) Gimbal control method and system
US11019322B2 (en) Estimation system and automobile
EP3008695B1 (en) Robust tracking using point and line features
WO2018227350A1 (zh) Return-to-home control method for unmanned aerial vehicle, unmanned aerial vehicle, and machine-readable storage medium
WO2019076304A1 (zh) Binocular camera-based visual SLAM method for unmanned aerial vehicle, unmanned aerial vehicle, and storage medium
CN110793544B (zh) Parameter calibration method, apparatus, device and storage medium for roadside perception sensors
JP2019049457A (ja) Image processing device and distance measuring device
WO2018159168A1 (en) System and method for virtually-augmented visual simultaneous localization and mapping
WO2019100219A1 (zh) Output image generation method, device, and unmanned aerial vehicle
WO2019155335A1 (en) Unmanned aerial vehicle including an omnidirectional depth sensing and obstacle avoidance aerial system and method of operating same
CN111226154B (zh) 自动对焦相机和***
CN106767682A (zh) Method for obtaining flight altitude information, and aircraft
WO2019051832A1 (zh) Movable object control method, device and system
CN106767817B (zh) Method for obtaining flight positioning information, and aircraft
EP3005238B1 (en) Method and system for coordinating between image sensors
WO2020019175A1 (zh) Image processing method and device, camera apparatus, and unmanned aerial vehicle
WO2022246812A1 (zh) Positioning method and apparatus, electronic device, and storage medium
CN116295340A (zh) Binocular vision SLAM method for unmanned aerial vehicle based on a panoramic camera
CN113706391B (zh) Real-time stitching method, system, device and storage medium for aerial images captured by unmanned aerial vehicle
WO2020119572A1 (zh) Shape inference device, shape inference method, program, and recording medium
JP7242822B2 (ja) Estimation system and automobile
WO2023047799A1 (ja) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 17874997
Country of ref document: EP
Kind code of ref document: A1
NENP Non-entry into the national phase
Ref country code: DE
122 EP: PCT application non-entry in European phase
Ref document number: 17874997
Country of ref document: EP
Kind code of ref document: A1