WO2020114433A1 - Depth perception method and apparatus, and depth perception device - Google Patents


Info

Publication number
WO2020114433A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
group
images
divided
correction
Prior art date
Application number
PCT/CN2019/123072
Other languages
French (fr)
Chinese (zh)
Inventor
Zheng Xin (郑欣)
Original Assignee
Shenzhen Autel Intelligent Aviation Technology Co., Ltd. (深圳市道通智能航空技术有限公司)
Priority date
Filing date
Publication date
Application filed by Shenzhen Autel Intelligent Aviation Technology Co., Ltd. (深圳市道通智能航空技术有限公司)
Publication of WO2020114433A1

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 3/00: Geometric image transformations in the plane of the image
            • G06T 3/04: Context-preserving transformations, e.g. by using an importance map
              • G06T 3/047: Fisheye or wide-angle transformations
          • G06T 5/00: Image enhancement or restoration
            • G06T 5/80: Geometric correction
          • G06T 7/00: Image analysis
            • G06T 7/10: Segmentation; Edge detection
              • G06T 7/11: Region-based segmentation
            • G06T 7/50: Depth or shape recovery
              • G06T 7/55: Depth or shape recovery from multiple images
                • G06T 7/593: Depth or shape recovery from multiple images from stereo images
            • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
              • G06T 7/85: Stereo camera calibration
          • G06T 2207/00: Indexing scheme for image analysis or image enhancement
            • G06T 2207/10: Image acquisition modality
              • G06T 2207/10004: Still image; Photographic image
                • G06T 2207/10012: Stereo images
            • G06T 2207/20: Special algorithmic details
              • G06T 2207/20092: Interactive image processing based on input by user
                • G06T 2207/20104: Interactive definition of region of interest [ROI]
              • G06T 2207/20228: Disparity calculation for image-based rendering

Definitions

  • Embodiments of the present invention relate to the field of computer vision technology, and in particular, to a depth perception method, apparatus, and depth perception equipment.
  • With the growing popularity of robots and drones, obstacle-sensing sensors have been used more and more widely.
  • Binocular sensors are widely used for obstacle sensing due to their low cost, wide range of application scenarios, long detection range, and high efficiency. Because of the wide field of view of fisheye lenses, there is more and more research on binocular depth perception based on fisheye lenses.
  • When a binocular fisheye lens is used for obstacle depth perception, the severe deformation at the periphery of the fisheye image causes the edges of the corrected fisheye image to be greatly distorted, which leads to inaccurate stereo matching.
  • In one existing approach, the Taylor series model is used for calibration and depth measurement, and the spherical image is expressed as a rectangular image parameterized by latitude and longitude. This method can reduce the measurement error caused by distortion to a certain extent, but a large error still remains at the edge of the image.
  • An object of the embodiments of the present invention is to provide a depth sensing method, device, and depth sensing device with high edge depth detection accuracy.
  • an embodiment of the present invention provides a depth perception method, which is used for a binocular system to perceive the depth of a target area.
  • the binocular system includes a first camera device and a second camera device, and the method includes:
  • in each group of divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image forms a preset angle with the center line;
  • the depth information of the region corresponding to each group of the divided images is obtained.
  • the performing image segmentation and correction on the first target image to obtain multiple first segmented images includes:
  • a piecewise algorithm is used to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images.
  • the performing image segmentation and correction on the second target image to obtain multiple second segmented images respectively corresponding to the multiple first segmented images includes:
  • a piecewise algorithm is used to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.
  • the calibrating each group of divided images separately to obtain the calibration parameters corresponding to each group of the divided images includes:
  • the Zhang Zhengyou method or the Faugeras method is used to calibrate each group of segmented images to obtain the calibration parameters corresponding to each group of segmented images.
  • performing binocular matching on each group of the divided images to obtain a disparity map corresponding to each group of the divided images includes:
  • the obtaining depth information of the area corresponding to each group of the segmented images according to the disparity map and the calibration parameters includes:
  • where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters;
  • the connection line between the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • the line between the first camera device and the second camera device is parallel to the horizontal direction
  • the image segmentation and correction of the first target image includes:
  • the image segmentation and correction of the second target image includes:
  • Image segmentation and correction are performed on the second target image in a direction at a preset angle with the horizontal direction, and the preset angle is not 90°.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • an embodiment of the present invention provides a depth sensing device, which is used for a binocular system to sense the depth of a target area.
  • the binocular system includes a first camera device and a second camera device, and the device includes:
  • An obtaining module configured to obtain a first target image of the target area obtained by the first camera device and a second target image of the target area obtained by the second camera device;
  • a segmentation and correction module is used for:
  • each first segmented image and the second segmented image corresponding to it form a group of segmented images; in each group of the segmented images, the center line of the first segmented image is parallel to the center line of the second segmented image, and the line connecting the first segmented image and the second segmented image forms a preset angle with the center line;
  • a calibration module configured to calibrate each group of segmented images separately to obtain calibration parameters corresponding to each group of segmented images;
  • a binocular matching module configured to perform binocular matching on each group of the divided images to obtain a disparity map corresponding to each group of the divided images
  • a depth information acquisition module configured to acquire depth information of the region corresponding to each group of segmented images according to the disparity map and the calibration parameters.
  • the segmentation and correction module is specifically used to:
  • a piecewise algorithm is used to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images.
  • the segmentation and correction module is specifically used to:
  • a piecewise algorithm is used to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.
  • the calibration module is specifically used to:
  • the Zhang Zhengyou method or the Faugeras method is used to calibrate each group of segmented images to obtain the calibration parameters corresponding to each group of segmented images.
  • the binocular matching module is specifically used to:
  • the depth information acquisition module is specifically used to:
  • where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters;
  • the connection line between the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • the line between the first camera device and the second camera device is parallel to the horizontal direction
  • the segmentation and correction module is specifically used for:
  • the preset angle is not 90°.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • an embodiment of the present invention provides a depth sensing device, including:
  • a main body;
  • a binocular system provided on the main body, the binocular system including a first camera device and a second camera device; and
  • a controller provided in the main body, the controller including:
  • at least one processor; and
  • a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions being executed by the at least one processor so that the at least one processor can execute the method described above.
  • connection between the first camera device and the second camera device is at a predetermined angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • the depth sensing device is a drone.
  • The depth perception method, apparatus, and depth perception device of the embodiments of the present invention perform image segmentation and correction on the first target image of the target area captured by the first camera device to obtain multiple first segmented images, and perform image segmentation and correction on the second target image of the target area to obtain multiple second segmented images respectively corresponding to the multiple first segmented images; each first segmented image and its corresponding second segmented image form a group of segmented images. Because the image is segmented first and then corrected, a corrected image with relatively small image-quality loss can be obtained, thereby improving the accuracy of binocular stereo matching and also improving the edge depth detection accuracy. Moreover, the multiple groups of segmented images constitute multiple groups of binocular systems, which can perceive depth information in multiple directions.
  • FIG. 1 is a schematic structural diagram of an embodiment of a depth perception device according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram of an application scenario of a depth sensing device according to an embodiment of the present invention
  • FIG. 3a is a schematic diagram of the positions of the camera devices in an embodiment of the depth perception device of the present invention.
  • FIG. 3b is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 3c is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 5 is a schematic flowchart of an embodiment of the depth perception method of the present invention.
  • FIG. 6 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an embodiment of a depth perception device of the present invention.
  • FIG. 8 is a schematic diagram of a hardware structure of a controller in an embodiment of a depth perception device of the present invention.
  • the depth sensing method and apparatus provided by the embodiments of the present invention may be applied to the depth sensing device 100 shown in FIG. 1.
  • the depth sensing device 100 includes a main body (not shown in the figure), a binocular system 101 for sensing an image of a target area, and a controller 10.
  • the binocular system 101 includes a first camera 20 and a second camera 30, both of which are used to acquire a target image of a target area.
  • the controller 10 is used to process the target images acquired by the first camera 20 and the second camera 30 to acquire depth information.
  • the controller 10 performs image segmentation and correction on the first target image acquired by the first camera 20 and the second target image acquired by the second camera 30 to obtain a plurality of first segmented images and a plurality of second segmented images respectively corresponding to the first segmented images, and each first segmented image and the second segmented image corresponding to it constitute a group of segmented images.
  • the target image is divided into multiple small images, and after correcting the multiple small images, a corrected image with relatively small image quality loss can be obtained.
  • Stereo matching and depth calculation using images with a small loss of image quality improve the accuracy of binocular stereo matching and the accuracy of edge depth detection.
  • multiple sets of segmented images constitute multiple sets of binocular systems, which can perceive depth information in multiple directions.
  • the depth sensing device 100 can be used in various situations where depth sensing is required, such as three-dimensional reconstruction, obstacle detection, and path planning.
  • the depth sensing device 100 is used in drones, robots, and the like.
  • FIG. 2 shows an application scenario of the depth sensing device 100 as an unmanned aerial vehicle for obstacle detection and path planning.
  • the first camera 20 and the second camera 30 can be any suitable lens.
  • the depth sensing method and device of the embodiments of the present invention are more suitable for the case where the first camera 20 and the second camera 30 are wide-angle lenses such as fisheye lenses, or panoramic lenses such as fold-back and omnidirectional lenses. Since the edges of the images captured by these lenses are severely deformed and the edge depth detection accuracy is lower, the depth perception method and device of the embodiments of the present invention can obtain edge correction images with relatively small image quality loss, greatly improving the edge depth detection accuracy.
  • the first camera 20 and the second camera 30 can be arranged in any suitable manner.
  • the first camera 20 and the second camera 30 can be staggered.
  • the first camera 20 and the second camera 30 are arranged diagonally, with the first camera 20 located at the upper diagonal point and the second camera 30 located at the lower diagonal point.
  • the connection between the first camera 20 and the second camera 30 is at a predetermined angle A with the horizontal direction, so that the first camera 20 and the second camera 30 do not block each other.
  • the images obtained by dividing and correcting the first target image and the second target image obtained by the first camera 20 and the second camera 30 are shown in FIG.
  • the first camera device 20 and the second camera device 30 may also be arranged in parallel horizontally, and the first camera device 20 and the second camera device 30 are located on the same horizontal line.
  • When performing image segmentation on the first target image and the second target image, image segmentation and correction may be performed in a direction at a preset angle A with the horizontal direction (for example, direction B in FIG. 4). It can be seen from FIG. 4 that the two segmented images in each group of binocular systems are staggered from each other, and no dead-zone area appears.
  • the center line of the first segmented image (for example, line a or line b) is parallel to the center line of the second segmented image.
  • the first divided image and the second divided image need to be staggered from each other, that is, the connection line between the first divided image and the second divided image and the center line form a preset angle A.
  • the preset included angle may be any suitable angle other than 90 degrees, for example, 45 degrees or 135 degrees.
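The relationship between the camera offsets and the preset angle can be illustrated with a short sketch; the offset values W and D below are hypothetical and only demonstrate the geometry, not values from the patent:

```python
import math

# Hypothetical horizontal (W) and vertical (D) offsets, in metres, between
# the two camera devices of a diagonally arranged binocular pair.
W, D = 0.12, 0.12

# Angle between the line connecting the two cameras and the horizontal
# direction; 45 degrees when the two offsets are equal.
A = math.degrees(math.atan2(D, W))
print(A)

# The staggered arrangement requires any angle other than 90 degrees,
# i.e. the two cameras must not be stacked strictly vertically.
assert A != 90.0
```

With unequal offsets the angle simply follows atan2(D, W), so 45° and 135° are just the symmetric special cases mentioned above.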
  • FIG. 5 is a schematic flowchart of a depth sensing method according to an embodiment of the present invention. The method may be executed by the depth sensing device 100 in FIG. 1. As shown in FIG. 5, the method includes:
  • a first target image of the target area is obtained by the first camera 20, and a second target image of the target area is obtained by the second camera 30.
  • the first camera device 20 and the second camera device 30 can be any suitable lens, such as a wide-angle lens such as a fisheye lens, or a panoramic lens such as a fold-back or omnidirectional lens.
  • a wide-angle lens such as a fisheye lens
  • a panoramic lens such as a fold-back or omnidirectional lens.
  • each first segmented image and the second segmented image corresponding to it form a group of segmented images; in each group of the segmented images, the center line of the first segmented image is parallel to the center line of the second segmented image, and the line connecting the first segmented image and the second segmented image forms a preset angle with the center line.
  • the piecewise algorithm may be used to segment and correct the first target image and the second target image to obtain multiple first segmented images and multiple second segmented images.
  • the number of the first divided images and the second divided images may be any appropriate number, for example, 2, 3, 4, 5, or more. Because the middle area of the target image is less deformed and the surrounding area is more deformed, in order to further improve the edge depth detection accuracy, the middle area can be treated as a single area and the area around it divided into multiple peripheral areas. The number of peripheral areas may be 4, 6, 8, or another number.
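A minimal sketch of the one-middle-plus-four-peripheral division described above, using plain NumPy slicing; the image size and the border fraction `margin` are hypothetical, and the subsequent piecewise correction of each region is omitted:

```python
import numpy as np

def split_five_regions(img, margin=0.25):
    """Split an image into a middle region and four peripheral regions
    (upper, lower, left, right). `margin` is the assumed fraction of the
    image treated as peripheral along each border."""
    h, w = img.shape[:2]
    mh, mw = int(h * margin), int(w * margin)
    return {
        "upper":  img[:mh, :],          # top strip
        "lower":  img[h - mh:, :],      # bottom strip
        "left":   img[:, :mw],          # left strip
        "right":  img[:, w - mw:],      # right strip
        "middle": img[mh:h - mh, mw:w - mw],  # less-deformed center
    }

regions = split_five_regions(np.zeros((400, 600), dtype=np.uint8))
for name, r in regions.items():
    print(name, r.shape)
```

Each region would then be rectified independently, so the strong peripheral distortion never has to be flattened into a single wide corrected image.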
  • FIG. 6 shows a scenario where the first camera 20 and the second camera 30 are fisheye lenses and there are four peripheral areas.
  • the left image is an image before correction
  • the right image is an image after correction.
  • the above four peripheral areas are an upper area, a lower area, a left area, and a right area (corresponding to the numbers 1, 5, 2, and 4 in FIG. 6, respectively).
  • the upper area, the left area, the middle area, the right area, and the lower area represent the images of the front, left, top, right, and rear directions, respectively, and the binocular systems formed from them can obtain depth information in five directions: front, left, top, right, and rear.
  • alternatively, the upper area, the left area, the middle area, the right area, and the lower area represent the images of the front-left, front-right, top, back-left, and back-right directions, respectively, and the binocular systems formed from them can obtain depth information in five directions: front left, front right, top, back left, and back right.
  • depth perception in the five directions of front, left, top, right, and rear can be achieved, and the perception angle is 180 degrees.
  • two or more sets of depth sensing devices 100 may be installed on the drone or robot to achieve 360-degree depth sensing.
  • Compared with solutions that use multiple pairs of lenses, the embodiments of the present invention use fewer lenses and save space. They also reduce the calibration error caused by deformation of connecting components, reduce the complexity and time of the calibration algorithm, and the calibration parameters are not prone to change, offering the advantage of zero delay.
  • the Zhang Zhengyou method or the Faugeras method may be used to separately calibrate each group of segmented images to obtain the calibration parameters (including intrinsic and extrinsic parameters) corresponding to each group of the segmented images.
  • the BM (Block Matching) algorithm or the SGBM (Semi-Global Block Matching) algorithm can be used to perform binocular matching on each group of the segmented images to obtain the disparity map corresponding to each group of the segmented images.
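To illustrate the block-matching principle behind such stereo matchers, the toy sketch below does a sum-of-absolute-differences (SAD) search for one pixel along a rectified scanline; this is a didactic simplification, not the production BM/SGBM implementation:

```python
import numpy as np

def block_match(left, right, y, x, block=5, max_disp=16):
    """Estimate the disparity at pixel (y, x) of the left image by sliding a
    small block along the same row of the right image and minimizing the
    sum of absolute differences (SAD) cost."""
    h = block // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(max_disp):
        if x - d - h < 0:          # candidate block would leave the image
            break
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.int32)
        cost = int(np.abs(ref - cand).sum())
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# Synthetic rectified pair: a non-repeating test pattern, with the right
# view equal to the left view shifted 4 pixels (true disparity = 4).
ys, xs = np.mgrid[0:40, 0:60]
left = ((3 * xs + 7 * ys) % 251).astype(np.uint8)
right = np.roll(left, -4, axis=1)
print(block_match(left, right, 20, 30))  # recovers the disparity 4
```

Real matchers add cost aggregation (and, for SGBM, semi-global smoothness penalties) on top of exactly this per-block comparison.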
  • for the images obtained after the first target image and the second target image are divided and corrected, refer to FIG. 3b.
  • the first target image is divided into five divided images numbered 1-5
  • the second target image is divided into five divided images numbered 6-10.
  • 1 and 6 form the front binocular system
  • 2 and 7 form the left binocular system
  • 3 and 8 form the upper binocular system
  • 4 and 9 form the right binocular system
  • 5 and 10 form the rear binocular system.
  • an existing stereo matching algorithm obtains the disparity of each feature point in the first target image and the second target image, and the disparities of the feature points constitute the disparity map corresponding to each group of divided images.
  • The following formulas can be used to obtain the three-dimensional coordinates of each point in the area corresponding to each group of the divided images:
  • z_i = fx x baseline / disparity;  x_i = (u_i - cx) x z_i / fx;  y_i = (v_i - cy) x z_i / fy
  • where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, (u_i, v_i) are the pixel coordinates of the point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters. According to the three-dimensional coordinates of each point in the area, the depth information of the area can be obtained.
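This depth computation can be sketched numerically with the standard stereo triangulation relations z = fx·baseline/disparity, x = (u - cx)·z/fx, y = (v - cy)·z/fy; the calibration values below (focal lengths, principal point, baseline) are hypothetical, chosen only to make the arithmetic easy to follow:

```python
def reproject(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel (u, v) with the given disparity into 3-D camera
    coordinates using the stereo triangulation relations above."""
    z = fx * baseline / disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Hypothetical calibration: 400 px focal lengths, principal point (320, 240),
# 0.1 m baseline. A point at the principal point with an 8 px disparity
# lies 5 m straight ahead of the camera.
x, y, z = reproject(u=320, v=240, disparity=8.0,
                    fx=400.0, fy=400.0, cx=320.0, cy=240.0, baseline=0.1)
print(x, y, z)  # 0.0 0.0 5.0
```

Applying this per pixel over the disparity map of each group of segmented images yields the depth information of the corresponding region.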
  • the baseline distance between area 1 and area 6 and the baseline distance between area 5 and area 10 are D; the baseline distance between area 2 and area 7 is D; the baseline distance between area 4 and area 9 is W; and the baseline distance between areas 3 and 8 is determined by the camera arrangement (it can also be W or D).
  • W is the horizontal distance between the first camera 20 and the second camera 30
  • D is the vertical distance between the first camera 20 and the second camera 30.
  • the first target image of the target area captured by the first camera device is image-segmented and corrected to obtain multiple first segmented images
  • the second target image of the target area captured by the second camera device is image-segmented and corrected to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images, and each first segmented image and the second segmented image corresponding to it constitute a group of segmented images.
  • a corrected image with relatively small image quality loss can be obtained, thereby improving the accuracy of binocular stereo matching and also improving the edge depth detection accuracy.
  • multiple sets of segmented images constitute multiple sets of binocular systems, which can perceive depth information in multiple directions.
  • an embodiment of the present invention further provides a depth sensing device, which is used in the depth sensing device 100 in FIG. 1, and the depth sensing device 700 includes:
  • An acquiring module 701 configured to acquire a first target image of the target area acquired by the first camera device and a second target image of the target area acquired by the second camera device.
  • the segmentation and correction module 702 is used to:
  • each first segmented image and the second segmented image corresponding to it form a group of segmented images; in each group of the segmented images, the center line of the first segmented image is parallel to the center line of the second segmented image, and the line connecting the first segmented image and the second segmented image forms a preset angle with the center line.
  • the calibration module 703 is configured to calibrate each group of divided images separately to obtain calibration parameters corresponding to each group of the divided images.
  • the binocular matching module 704 is configured to perform binocular matching on each group of the divided images to obtain a disparity map corresponding to each group of the divided images.
  • the depth information obtaining module 705 is configured to obtain the depth information of the area corresponding to each group of the divided images according to the disparity map and the calibration parameters.
  • the first target image of the target area captured by the first camera device is image-segmented and corrected to obtain a plurality of first segmented images
  • the second target image of the target area captured by the second camera device is image-segmented and corrected to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images, and each first segmented image and the second segmented image corresponding to it constitute a group of segmented images.
  • a corrected image with relatively small image quality loss can be obtained, thereby improving the accuracy of binocular stereo matching and also improving the edge depth detection accuracy.
  • multiple sets of segmented images constitute multiple sets of binocular systems, which can perceive depth information in multiple directions.
  • the segmentation and correction module 702 is specifically used to:
  • a piecewise algorithm is used to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images; and
  • a piecewise algorithm is used to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.
  • the calibration module 703 is specifically used to:
  • the Zhang Zhengyou method or the Faugeras method is used to calibrate each group of segmented images to obtain the calibration parameters corresponding to each group of segmented images.
  • the binocular matching module 704 is specifically used to:
  • the depth information acquisition module 705 is specifically used to:
  • where x_i, y_i, and z_i represent the three-dimensional coordinates of each point, baseline is the length of the baseline, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters;
  • the connection between the first camera device and the second camera device is at a preset angle with the horizontal direction.
  • the preset included angle is 45° or 135°.
  • connection line between the first camera device and the second camera device is parallel to the horizontal direction
  • the segmentation and correction module 702 is specifically used for:
  • the preset angle is not 90°.
  • the preset included angle is 45° or 135°.
  • both the first camera device and the second camera device are fisheye lenses.
  • the above-mentioned device can execute the method provided by the embodiments of the present application, and has functional modules and beneficial effects corresponding to the execution method.
  • for technical details not exhaustively described in this embodiment, refer to the methods provided in the embodiments of the present application.
  • FIG. 8 is a schematic diagram of the hardware structure of the controller 10 in the depth perception device 100 according to an embodiment of the present invention. As shown in FIG. 8, the controller 10 includes:
  • One or more processors 11 and memory 12, one processor 11 is taken as an example in FIG. 8.
  • the processor 11 and the memory 12 may be connected through a bus or in other ways. In FIG. 8, the connection through a bus is used as an example.
  • the memory 12, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the depth perception method in the embodiments of the present application (for example, the acquisition module 701, segmentation and correction module 702, calibration module 703, binocular matching module 704, and depth information acquisition module 705 shown in FIG. 7).
  • the processor 11 executes various functional applications and data processing of the depth-aware device 100 by running non-volatile software programs, instructions, and modules stored in the memory 12, that is, implements the depth-aware method of the foregoing method embodiments.
  • the memory 12 may include a storage program area and a storage data area, where the storage program area may store an operating system and application programs required by at least one function; the storage data area may store data created according to the use of a depth-aware device, and the like.
  • the memory 12 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
  • the memory 12 may optionally include memories remotely provided with respect to the processor 11, and these remote memories may be connected to a depth-aware device through a network. Examples of the above network include but are not limited to the Internet, intranet, local area network, mobile communication network, and combinations thereof.
  • the one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, execute the depth perception method in any of the above method embodiments, for example, performing steps 101 to 106 of the method in FIG. 5 described above and implementing the functions of modules 701-705 in FIG. 7.
  • An embodiment of the present application provides a non-volatile computer-readable storage medium that stores computer-executable instructions. When executed by one or more processors, for example by one processor 11 in FIG. 8, the instructions may enable the one or more processors to execute the depth perception method in any of the above method embodiments, for example, to perform method steps 101 to 106 in FIG. 5 described above and to implement the functions of the modules 701-705 in FIG. 7.
  • when the depth perception device is a drone, the main body is the body of the drone, and the first camera 20 and the second camera 30 of the depth perception device 100 may be provided on the drone.
  • the controller 10 may be a separate controller, or the control may be performed by the drone's flight control chip.
  • the device embodiments described above are only illustrative. The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each embodiment can be implemented by means of software plus a general-purpose hardware platform, and of course can also be implemented by hardware.
  • the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the foregoing method embodiments.
  • the storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Processing (AREA)

Abstract

A depth perception method and apparatus, and a depth perception device. The method comprises: obtaining a first target image of a target region by means of a first photographing unit, and obtaining a second target image of the target region by means of a second photographing unit (101); performing image segmentation and correction on the first target image so as to obtain multiple first segmented images (102); performing image segmentation and correction on the second target image so as to obtain multiple second segmented images respectively corresponding to the multiple first segmented images (103); calibrating each group of segmented images so as to obtain calibration parameters corresponding to each group of segmented images (104); performing binocular matching on each group of segmented images so as to obtain a disparity map corresponding to each group of segmented images (105); and obtaining, according to the disparity map and the calibration parameters, the depth information of the region corresponding to each group of segmented images (106). By means of the method, edge depth detection precision is improved. Moreover, the multiple groups of segmented images constitute multiple groups of binocular systems, so depth information in multiple directions can be perceived.

Description

A depth perception method, apparatus, and depth perception device

Technical Field

Embodiments of the present invention relate to the field of computer vision technology, and in particular to a depth perception method, an apparatus, and a depth perception device.
Background

With the popularization of robots and unmanned aerial vehicles, obstacle-sensing sensors have found increasingly wide application. Among them, binocular sensors are widely used for obstacle sensing because of their low cost, broad range of applicable scenarios, long detection range, and high efficiency. Owing to the wide field of view of fisheye lenses, there is a growing body of research on binocular depth perception based on fisheye lenses.
In the course of implementing the present invention, the inventor found that the related art has at least the following problems:

When binocular fisheye lenses are used for obstacle depth perception, the periphery of a fisheye image is severely deformed, so the edges of the corrected fisheye image still exhibit large distortion and stereo matching is inaccurate. Some existing techniques reduce the binocular stereo matching error by using special calibration models. For example, the patent document with publication number CN102005039B uses a Taylor series model to calibrate and measure depth, representing the spherical image as a rectangular image parameterized by latitude and longitude. This method reduces the measurement error caused by distortion to a certain extent, but a large error still remains at the image edges.
Summary

An object of the embodiments of the present invention is to provide a depth perception method, an apparatus, and a depth perception device with high edge depth detection accuracy.
In a first aspect, an embodiment of the present invention provides a depth perception method. The method is used by a binocular system to perceive the depth of a target area, the binocular system comprising a first camera device and a second camera device. The method comprises:

obtaining a first target image of the target area through the first camera device, and obtaining a second target image of the target area through the second camera device;

performing image segmentation and correction on the first target image to obtain a plurality of first segmented images;

performing image segmentation and correction on the second target image to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images, wherein each first segmented image and the second segmented image corresponding to it form a group of segmented images;

wherein, in each group of segmented images, the center line of the first segmented image is parallel to the center line of the second segmented image, and the line connecting the first segmented image and the second segmented image forms a preset angle with the center line;

calibrating each group of segmented images separately to obtain calibration parameters corresponding to each group of segmented images;

performing binocular matching on each group of segmented images to obtain a disparity map corresponding to each group of segmented images; and

obtaining, according to the disparity map and the calibration parameters, depth information of the area corresponding to each group of segmented images.
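The claimed steps can be sketched end to end as a small pipeline. The following Python illustration is an assumption-laden sketch, not the patent's implementation: every helper name, the stub calibration values, and the toy image geometry are made up for illustration, and the segmentation stub simply cuts vertical strips where the patent's piecewise correction would operate.

```python
# Hypothetical sketch of the claimed pipeline; all helper names are illustrative.

def segment_and_correct(image, num_regions=5):
    """Steps 102/103: split a target image into sub-images and correct each.
    Here we just cut a 2-D list into equal vertical strips as a stand-in."""
    step = len(image[0]) // num_regions
    return [[row[i * step:(i + 1) * step] for row in image]
            for i in range(num_regions)]

def calibrate(first, second):
    """Step 104 stand-in: per-group stereo calibration (e.g. Zhang's method)
    would go here; we return fixed illustrative parameters instead."""
    return {"fx": 300.0, "fy": 300.0, "cx": 32.0, "cy": 32.0, "baseline": 0.25}

def binocular_match(first, second):
    """Step 105 stand-in for BM/SGBM; returns a constant disparity map."""
    return [[16.0 for _ in row] for row in first]

def depth_from_disparity(disparity_map, params):
    """Step 106: depth z = fx * baseline / disparity for each pixel."""
    return [[params["fx"] * params["baseline"] / d for d in row]
            for row in disparity_map]

def perceive_depth(first_image, second_image):
    firsts = segment_and_correct(first_image)             # step 102
    seconds = segment_and_correct(second_image)           # step 103
    depths = []
    for f, s in zip(firsts, seconds):                     # one binocular group each
        params = calibrate(f, s)                          # step 104
        disparity = binocular_match(f, s)                 # step 105
        depths.append(depth_from_disparity(disparity, params))  # step 106
    return depths

img = [[0] * 100 for _ in range(40)]
depths = perceive_depth(img, img)
print(len(depths))      # 5 groups, i.e. depth in 5 directions
print(depths[0][0][0])  # 300 * 0.25 / 16 = 4.6875
```

Each group of segmented images is processed as its own binocular system, which is why the result is one depth map per direction rather than a single map.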
In some embodiments, performing image segmentation and correction on the first target image to obtain a plurality of first segmented images includes:

using a piecewise algorithm to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images.

In some embodiments, performing image segmentation and correction on the second target image to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images includes:

using a piecewise algorithm to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.

In some embodiments, calibrating each group of segmented images separately to obtain the calibration parameters corresponding to each group of segmented images includes:

using the Zhang Zhengyou method or the Faugeras method to calibrate each group of segmented images separately, so as to obtain the calibration parameters corresponding to each group of segmented images.

In some embodiments, performing binocular matching on each group of segmented images to obtain the disparity map corresponding to each group of segmented images includes:

using the BM algorithm or the SGBM algorithm to perform binocular matching on each group of segmented images, so as to obtain the disparity map corresponding to each group of segmented images.
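The BM family of matchers named above works by sliding a small window along the epipolar line and keeping the shift that best matches. As a toy illustration of that idea only, far simpler than production BM/SGBM and not the patent's implementation, here is a sum-of-absolute-differences search over a single synthetic scanline:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def match_scanline(left, right, window=3, max_disp=8):
    """For each window position in the left scanline, find the horizontal
    shift (disparity) into the right scanline that minimizes SAD."""
    disparities = []
    for x in range(max_disp, len(left) - window):
        ref = left[x:x + window]
        best = min(range(max_disp + 1),
                   key=lambda d: sad(ref, right[x - d:x - d + window]))
        disparities.append(best)
    return disparities

# A synthetic pair: the right view is the left view shifted by 2 pixels.
left = [0, 0, 10, 40, 90, 40, 10, 0, 0, 0, 5, 70, 30, 0, 0, 0]
right = left[2:] + [0, 0]
print(match_scanline(left, right))  # [2, 2, 2, 2, 2]
```

Real block matchers additionally handle texture-poor regions, sub-pixel refinement, and (in SGBM) smoothness penalties aggregated along several paths, which is why rectified, low-distortion input such as the corrected segmented images matters for accuracy.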
In some embodiments, obtaining the depth information of the area corresponding to each group of segmented images according to the disparity map and the calibration parameters includes:

obtaining the three-dimensional coordinates of each point in the area corresponding to each group of segmented images using the following formulas:

z_i = (fx · baseline) / disparity

x_i = (u_i − cx) · z_i / fx

y_i = (v_i − cy) · z_i / fy

where x_i, y_i, and z_i denote the three-dimensional coordinates of each point, (u_i, v_i) are the pixel coordinates of the point, baseline is the baseline length, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters; and

obtaining the depth information of the area according to the three-dimensional coordinates of each point in the area.
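Assuming the standard pinhole stereo triangulation that these variables correspond to (depth from disparity, then back-projection through the intrinsics), the computation can be sketched as follows; the numeric calibration values are illustrative only:

```python
def triangulate(u, v, disparity, fx, fy, cx, cy, baseline):
    """Standard pinhole stereo triangulation: depth from disparity,
    then back-projection of the pixel through the intrinsics."""
    z = fx * baseline / disparity   # depth along the optical axis
    x = (u - cx) * z / fx           # lateral offset
    y = (v - cy) * z / fy           # vertical offset
    return x, y, z

# Illustrative parameters: fx = fy = 300 px, baseline 0.25 m,
# principal point at (320, 240):
x, y, z = triangulate(u=470, v=240, disparity=30.0,
                      fx=300.0, fy=300.0, cx=320.0, cy=240.0, baseline=0.25)
print(z)  # 300 * 0.25 / 30 = 2.5
print(x)  # (470 - 320) * 2.5 / 300 = 1.25
print(y)  # 0.0
```

Note that depth is inversely proportional to disparity, so matching errors at small disparities (distant points) translate into large depth errors, which is one reason reducing edge distortion before matching helps.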
In some embodiments, the line connecting the first camera device and the second camera device forms a preset angle with the horizontal direction.

In some embodiments, the preset angle is 45° or 135°.

In some embodiments, the line connecting the first camera device and the second camera device is parallel to the horizontal direction;

performing image segmentation and correction on the first target image includes:

performing image segmentation and correction on the first target image in a direction that forms a preset angle with the horizontal direction; and

performing image segmentation and correction on the second target image includes:

performing image segmentation and correction on the second target image in a direction that forms a preset angle with the horizontal direction, the preset angle not being 90°.

In some embodiments, the preset angle is 45° or 135°.

In some embodiments, both the first camera device and the second camera device are fisheye lenses.
In a second aspect, an embodiment of the present invention provides a depth perception apparatus. The apparatus is used by a binocular system to perceive the depth of a target area, the binocular system comprising a first camera device and a second camera device. The apparatus comprises:

an acquisition module, configured to acquire a first target image of the target area acquired by the first camera device and a second target image of the target area acquired by the second camera device;

a segmentation and correction module, configured to:

perform image segmentation and correction on the first target image to obtain a plurality of first segmented images; and

perform image segmentation and correction on the second target image to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images, wherein each first segmented image and the second segmented image corresponding to it form a group of segmented images; in each group of segmented images, the center line of the first segmented image is parallel to the center line of the second segmented image, and the line connecting the first segmented image and the second segmented image forms a preset angle with the center line;

a calibration module, configured to calibrate each group of segmented images separately to obtain calibration parameters corresponding to each group of segmented images;

a binocular matching module, configured to perform binocular matching on each group of segmented images to obtain a disparity map corresponding to each group of segmented images; and

a depth information acquisition module, configured to obtain, according to the disparity map and the calibration parameters, depth information of the area corresponding to each group of segmented images.
In some embodiments, the segmentation and correction module is specifically configured to:

use a piecewise algorithm to perform image segmentation and correction on the first target image to obtain the plurality of first segmented images.

In some embodiments, the segmentation and correction module is specifically configured to:

use a piecewise algorithm to perform image segmentation and correction on the second target image to obtain the plurality of second segmented images.

In some embodiments, the calibration module is specifically configured to:

use the Zhang Zhengyou method or the Faugeras method to calibrate each group of segmented images separately, so as to obtain the calibration parameters corresponding to each group of segmented images.

In some embodiments, the binocular matching module is specifically configured to:

use the BM algorithm or the SGBM algorithm to perform binocular matching on each group of segmented images, so as to obtain the disparity map corresponding to each group of segmented images.
In some embodiments, the depth information acquisition module is specifically configured to:

obtain the three-dimensional coordinates of each point in the area corresponding to each group of segmented images using the following formulas:

z_i = (fx · baseline) / disparity

x_i = (u_i − cx) · z_i / fx

y_i = (v_i − cy) · z_i / fy

where x_i, y_i, and z_i denote the three-dimensional coordinates of each point, (u_i, v_i) are the pixel coordinates of the point, baseline is the baseline length, disparity is the disparity data obtained from the disparity map, and cx, cy, fx, and fy are the calibration parameters; and

obtain the depth information of the area according to the three-dimensional coordinates of each point in the area.
In some embodiments, the line connecting the first camera device and the second camera device forms a preset angle with the horizontal direction.

In some embodiments, the preset angle is 45° or 135°.

In some embodiments, the line connecting the first camera device and the second camera device is parallel to the horizontal direction;

and the segmentation and correction module is specifically configured to:

perform image segmentation and correction on the first target image in a direction that forms a preset angle with the horizontal direction, and perform image segmentation and correction on the second target image in a direction that forms a preset angle with the horizontal direction, the preset angle not being 90°.

In some embodiments, the preset angle is 45° or 135°.

In some embodiments, both the first camera device and the second camera device are fisheye lenses.
In a third aspect, an embodiment of the present invention provides a depth perception device, comprising:

a main body;

a binocular system provided on the main body and comprising a first camera device and a second camera device; and

a controller provided on the main body, the controller comprising:

at least one processor, and

a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the method described above.

In some embodiments, the line connecting the first camera device and the second camera device forms a preset angle with the horizontal direction.

In some embodiments, the preset angle is 45° or 135°.

In some embodiments, both the first camera device and the second camera device are fisheye lenses.

In some embodiments, the depth perception device is an unmanned aerial vehicle.
With the depth perception method, apparatus, and depth perception device of the embodiments of the present invention, the first target image of the target area captured by the first camera device is segmented and corrected to obtain a plurality of first segmented images, and the second target image of the target area captured by the second camera device is segmented and corrected to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images, each first segmented image and its corresponding second segmented image forming a group of segmented images. Segmenting the images before correcting them yields corrected images with relatively small loss of image quality, which improves the accuracy of binocular stereo matching and therefore the accuracy of edge depth detection. Moreover, the multiple groups of segmented images constitute multiple binocular systems, which can perceive depth information in multiple directions.
Brief Description of the Drawings

One or more embodiments are exemplarily illustrated by the figures in the corresponding drawings. These exemplary illustrations do not constitute a limitation on the embodiments; elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1 is a schematic structural diagram of an embodiment of a depth perception device according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of an application scenario of a depth perception device according to an embodiment of the present invention;

FIG. 3a is a schematic diagram of the positions of the camera devices in an embodiment of the depth perception device of the present invention;

FIG. 3b is a schematic diagram of image segmentation of a target image in an embodiment of the present invention;

FIG. 3c is a schematic diagram of image segmentation of a target image in an embodiment of the present invention;

FIG. 4 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention;

FIG. 5 is a schematic flowchart of an embodiment of the depth perception method of the present invention;

FIG. 6 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of an embodiment of the depth perception apparatus of the present invention;

FIG. 8 is a schematic diagram of the hardware structure of the controller in an embodiment of the depth perception device of the present invention.
Detailed Description

To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The depth perception method and apparatus provided by the embodiments of the present invention may be applied to the depth perception device 100 shown in FIG. 1. The depth perception device 100 includes a main body (not shown in the figure), a binocular system 101 for perceiving images of a target area, and a controller 10. The binocular system 101 includes a first camera device 20 and a second camera device 30, both of which are used to acquire target images of the target area. The controller 10 is used to process the target images acquired by the first camera device 20 and the second camera device 30 to obtain depth information. Specifically, the controller 10 performs image segmentation and correction on the first target image acquired by the first camera device 20 and on the second target image acquired by the second camera device 30 to obtain a plurality of first segmented images and a plurality of second segmented images respectively corresponding to the plurality of first segmented images, where each first segmented image and the second segmented image corresponding to it form a group of segmented images. Dividing a target image into multiple small images and correcting each of them separately yields corrected images with relatively small loss of image quality. Using images with small quality loss for stereo matching and depth calculation improves the accuracy of binocular stereo matching and therefore the accuracy of edge depth detection. Moreover, the multiple groups of segmented images constitute multiple binocular systems, which can perceive depth information in multiple directions.
The depth perception device 100 may be used in various applications requiring depth perception, such as three-dimensional reconstruction, obstacle detection, and path planning; for example, it may be used in unmanned aerial vehicles, robots, and the like. FIG. 2 shows an application scenario in which the depth perception device 100, as an unmanned aerial vehicle, performs obstacle detection and path planning.
The first camera device 20 and the second camera device 30 may be any suitable lenses. The depth perception method and apparatus of the embodiments of the present invention are particularly suitable when the first camera device 20 and the second camera device 30 are wide-angle lenses such as fisheye lenses, or panoramic lenses such as catadioptric or omnidirectional lenses. Because the edges of images captured by such lenses are severely deformed, edge depth detection accuracy is otherwise low; the depth perception method and apparatus of the embodiments of the present invention can obtain edge-corrected images with relatively small loss of image quality, greatly improving edge depth detection accuracy.
The first camera device 20 and the second camera device 30 may be arranged in any suitable manner. In some of these embodiments, in order to avoid dead zones (i.e., areas that cannot be measured) in the multiple binocular systems formed by the multiple groups of segmented images, the first camera device 20 and the second camera device 30 may be arranged in a staggered fashion. For example, as shown in FIG. 3a, the first camera device 20 and the second camera device 30 are arranged diagonally: the first camera device 20 is located at the upper corner, the second camera device 30 is located at the lower corner, and the line connecting the first camera device 20 and the second camera device 30 forms a preset angle A with the horizontal direction, so that the two camera devices do not occlude each other. The images obtained by segmenting and correcting the first target image and the second target image acquired by the first camera device 20 and the second camera device 30 are shown in FIG. 3b (taking division of each image into five sub-images as an example). Here the images are segmented in the normal direction (i.e., direction a in FIG. 3c). As can be seen from FIG. 3b and FIG. 3c, the two segmented images in each binocular group (for example, 1 and 6, 2 and 7, 3 and 8, and so on) are staggered with respect to each other, so no dead zone appears.
In other embodiments, the first camera device 20 and the second camera device 30 may also be arranged horizontally in parallel, located on the same horizontal line. To avoid dead zones in each binocular group, referring to FIG. 4, when segmenting the first target image and the second target image, the image segmentation and correction may be performed in a direction that forms a preset angle A with the horizontal direction (for example, direction b in FIG. 4). As can be seen from FIG. 4, the two segmented images in each binocular group are staggered with respect to each other, so no dead zone appears.
As can be seen from both FIG. 3c and FIG. 4, in each binocular group the center line of the first segmented image (for example, line a or line b) is parallel to the center line of the second segmented image. To avoid dead zones in each binocular group, the first segmented image and the second segmented image must be staggered with respect to each other, that is, the line connecting the first segmented image and the second segmented image forms a preset angle A with the center line. The preset angle may be any suitable angle other than 90 degrees, for example 45 degrees or 135 degrees.
FIG. 5 is a schematic flowchart of a depth perception method provided by an embodiment of the present invention. The method may be executed by the depth perception device 100 in FIG. 1. As shown in FIG. 5, the method includes:

101: Obtain a first target image of the target area through the first camera device 20, and obtain a second target image of the target area through the second camera device 30.

The first camera device 20 and the second camera device 30 may be any suitable lenses, for example wide-angle lenses such as fisheye lenses, or panoramic lenses such as catadioptric or omnidirectional lenses.
102: Perform image segmentation and correction on the first target image to obtain a plurality of first segmented images.

103: Perform image segmentation and correction on the second target image to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images, wherein each first segmented image and the second segmented image corresponding to it form a group of segmented images. In each group of segmented images, the center line of the first segmented image is parallel to the center line of the second segmented image, and the line connecting the first segmented image and the second segmented image forms a preset angle with the center line.
In some embodiments, a piecewise algorithm may be used to segment and correct the first target image and the second target image, so as to obtain the plurality of first segmented images and the plurality of second segmented images.
The number of first segmented images and second segmented images may be any suitable number, for example 2, 3, 4, 5, or more. Since the middle area of the target image is only slightly deformed while the surrounding area is heavily deformed, to further improve edge depth detection accuracy the middle area may be treated as a single region, and the area around the middle region may be divided into multiple peripheral regions. The number of peripheral regions may be 4, 6, 8, or otherwise.
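One hypothetical way to realize the "one middle region plus four peripheral regions" layout described above is to cut the image plane into a central rectangle and four flanking bands. The geometry below is purely illustrative (the patent's piecewise correction additionally undistorts each region); the function name and the 50% central ratio are assumptions:

```python
def split_regions(width, height, ratio=0.5):
    """Return (name, x0, y0, x1, y1) rectangles: a central region covering
    `ratio` of each image dimension, plus top/bottom/left/right bands."""
    cw, ch = int(width * ratio), int(height * ratio)
    x0, y0 = (width - cw) // 2, (height - ch) // 2
    x1, y1 = x0 + cw, y0 + ch
    return [
        ("top",    0,  0,  width, y0),
        ("bottom", 0,  y1, width, height),
        ("left",   0,  y0, x0,    y1),
        ("right",  x1, y0, width, y1),
        ("middle", x0, y0, x1,    y1),
    ]

regions = split_regions(640, 480)
areas = sum((r[3] - r[1]) * (r[4] - r[2]) for r in regions)
print(len(regions))        # 5 regions: middle plus 4 peripheral bands
print(areas == 640 * 480)  # the five regions tile the image exactly
```

Keeping the regions non-overlapping and exhaustive means every pixel of the original fisheye image belongs to exactly one binocular group after correction.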
图6示出了第一摄像装置20和第二摄像装置30为鱼眼镜头、周边区域为4个的情景,其中,左侧图像为矫正前的图像,右侧图像为矫正后的图像。在图3b、图3c和图4所示的实施例中,上述4个周边区域分别为上区域、下区域、左区域和右区域(分别对应图6中的数字1、5、 2、4代表的区域)。在图3b和图3c所示的实施例中,上区域、左区域、中间区域、右区域和下区域分别代表前、左、上、右、后方的图像,由此组成的双目***可以获得前、左、上、右、后五个方向的深度信息。在图4所示的实施例中,上区域、左区域、中间区域、右区域和下区域分别代表左前、右前、上、左后、右后方向的图像,由此组成的双目***可以获得左前、右前、上、左后、右后五个方向的深度信息。FIG. 6 shows a scenario where the first camera 20 and the second camera 30 are fisheye lenses and there are four peripheral areas. The left image is an image before correction, and the right image is an image after correction. In the embodiments shown in FIG. 3b, FIG. 3c and FIG. 4, the above four peripheral areas are an upper area, a lower area, a left area and a right area (corresponding to the numbers 1, 5, 2, 4 in FIG. 6 respectively) Area). In the embodiments shown in FIGS. 3b and 3c, the upper area, the left area, the middle area, the right area, and the lower area represent the images of the front, left, top, right, and rear, respectively, and the binocular system formed thereby can obtain Depth information in five directions: front, left, top, right, and back. In the embodiment shown in FIG. 4, the upper area, the left area, the middle area, the right area, and the lower area respectively represent the images of the front left, front right, top, left, and back right directions, and the binocular system composed thereby can obtain Depth information in five directions: front left, front right, top, back left, and back right.
In the embodiment shown in FIG. 6, depth perception in the five directions of front, left, top, right and rear can be achieved, with a perception angle of 180 degrees. In practical applications, two or more sets of depth perception devices 100 may be installed on an unmanned aerial vehicle or a robot to achieve 360-degree depth perception. Compared with an omnidirectional perception system using five groups of lenses, the embodiments of the present invention use fewer lenses and save space. They also reduce the calibration error caused by deformation of connecting components, reduce the complexity of the calibration algorithm and the calibration time; moreover, the calibration parameters are not prone to drift, and the system has the advantage of zero delay.
104: Calibrate each group of divided images separately to obtain the calibration parameters corresponding to each group of divided images.
In some of these embodiments, Zhang's calibration method (the Zhang Zhengyou method) or the Faugeras method may be used to calibrate each group of divided images separately, so as to obtain the calibration parameters (including intrinsic parameters and extrinsic parameters) corresponding to each group of divided images.
105: Perform binocular matching on each group of divided images to obtain a disparity map corresponding to each group of divided images.
In some of these embodiments, the BM (Block Matching) algorithm or the SGBM (Semi-Global Block Matching) algorithm may be used to perform binocular matching on each group of divided images, so as to obtain the disparity map corresponding to each group of divided images.
Specifically, taking the case where the first target image and the second target image are each divided into five divided images as shown in FIG. 6 as an example, the divided and corrected images are shown in FIG. 3b. In the example shown in FIG. 3b, the first target image is divided into five divided images 1-5, and the second target image is divided into five divided images 6-10. Images 1 and 6 form the front binocular system, 2 and 7 form the left binocular system, 3 and 8 form the upper binocular system, 4 and 9 form the right binocular system, and 5 and 10 form the rear binocular system. Feature points are first extracted from divided images 1-10, and feature-point matching is then performed on the pairs 1 and 6, 2 and 7, 3 and 8, 4 and 9, and 5 and 10 respectively. After feature-point matching, an existing stereo matching algorithm can use the matching results to obtain the disparity of each feature point between the first target image and the second target image; the disparities of the feature points form the disparity map corresponding to each group of divided images.
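The core of the block-matching step above can be illustrated with a minimal disparity search for a single pixel on a rectified scanline pair. This is a deliberately simplified sum-of-absolute-differences (SAD) sketch, not the BM/SGBM algorithms named in the text, which add block aggregation over the full image, smoothness costs and sub-pixel refinement; the toy row data below is invented for illustration.

```python
def sad_disparity(left_row, right_row, x, window=2, max_disp=16):
    """For one pixel at column x on a rectified scanline pair, find the
    disparity d in [0, max_disp) minimizing the sum of absolute
    differences between the window around x in the left row and the
    window around x - d in the right row."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp):
        lo, hi = x - window, x + window + 1
        if lo - d < 0 or hi > len(left_row):
            break  # window would fall outside the image
        cost = sum(abs(left_row[i] - right_row[i - d]) for i in range(lo, hi))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

On a synthetic pair where a small intensity feature is shifted three columns to the left in the right image, the search recovers a disparity of 3; repeating this per pixel over each of the five region pairs yields the per-group disparity maps of step 105.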
106: Obtain depth information of the region corresponding to each group of divided images according to the disparity map and the calibration parameters.
In some embodiments, the three-dimensional coordinates of each point in the region corresponding to each group of divided images can be obtained using the following formulas:

z_i = fx · baseline / disparity
x_i = (u_i - cx) · z_i / fx
y_i = (v_i - cy) · z_i / fy

where x_i, y_i and z_i denote the three-dimensional coordinates of each point, u_i and v_i are the pixel coordinates of the point, baseline is the baseline length, disparity is the disparity data obtained from the disparity map, and cx, cy, fx and fy are the calibration parameters. According to the three-dimensional coordinates of each point in the region, the depth information of the region can be obtained.
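The reprojection of step 106 can be sketched directly from the standard pinhole-stereo relations with the symbols defined above (u and v are the pixel coordinates of the point); the numeric values in the usage note are invented for illustration, and a real implementation would apply this over every pixel of the disparity map.

```python
def point_from_disparity(u, v, disparity, baseline, fx, fy, cx, cy):
    """Back-project one pixel with known disparity to 3-D camera
    coordinates: depth z = fx * baseline / disparity, then recover
    x and y from the pinhole model."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = fx * baseline / disparity      # depth along the optical axis
    x = (u - cx) * z / fx              # lateral offset from principal point
    y = (v - cy) * z / fy              # vertical offset from principal point
    return x, y, z
```

For example, with fx = fy = 400 pixels, a principal point at (320, 240), a 0.2 m baseline and a disparity of 10 pixels, a point at the principal point back-projects to (0, 0, 8.0), i.e. 8 m in front of the camera.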
In the embodiment shown in FIG. 3b, the baseline distance between regions 1 and 6 and between regions 5 and 10 is D; the baseline distance between regions 2 and 7 and between regions 4 and 9 is W; and the baseline distance between regions 3 and 8 is √(W² + D²) (it may also be taken as W or D), where W is the horizontal distance between the first camera device 20 and the second camera device 30, and D is the vertical distance between the first camera device 20 and the second camera device 30.
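The per-pair baseline choices above can be collected in one small helper. Note the diagonal √(W² + D²) for the upper pair is one of the options the text allows (W or D being the alternatives), and the pair labels follow the region numbering of FIG. 3b.

```python
import math

def pair_baselines(w, d):
    """Effective baseline length for each binocular region pair of
    FIG. 3b: front/rear pairs use the vertical offset D, left/right
    pairs use the horizontal offset W, and the upper pair uses the
    diagonal distance (W or D are alternatives per the text)."""
    return {
        "front (1,6)": d,
        "rear (5,10)": d,
        "left (2,7)": w,
        "right (4,9)": w,
        "upper (3,8)": math.hypot(w, d),  # sqrt(W^2 + D^2)
    }
```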
In the embodiments of the present invention, the first target image of the target area captured by the first camera device is divided and corrected to obtain multiple first divided images, and the second target image of the target area captured by the second camera device is divided and corrected to obtain multiple second divided images respectively corresponding to the multiple first divided images; each first divided image and its corresponding second divided image form a group of divided images. Dividing the images before correcting them yields corrected images with relatively little loss of image quality, which improves the accuracy of binocular stereo matching as well as the accuracy of edge depth detection. Moreover, the multiple groups of divided images constitute multiple binocular systems, which can perceive depth information in multiple directions.
Correspondingly, as shown in FIG. 7, an embodiment of the present invention further provides a depth perception apparatus. The apparatus is used in the depth perception device 100 of FIG. 1, and the depth perception apparatus 700 includes:
an acquisition module 701, configured to acquire the first target image of the target area captured by the first camera device and the second target image of the target area captured by the second camera device;
a division and correction module 702, configured to:
perform image division and correction on the first target image to obtain multiple first divided images; and
perform image division and correction on the second target image to obtain multiple second divided images respectively corresponding to the multiple first divided images, where each first divided image and the second divided image corresponding to it form a group of divided images; in each group of divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image is at a preset angle to the center line;
a calibration module 703, configured to calibrate each group of divided images separately to obtain the calibration parameters corresponding to each group of divided images;
a binocular matching module 704, configured to perform binocular matching on each group of divided images to obtain the disparity map corresponding to each group of divided images; and
a depth information acquisition module 705, configured to obtain the depth information of the region corresponding to each group of divided images according to the disparity map and the calibration parameters.
In the embodiments of the present invention, the first target image of the target area captured by the first camera device is divided and corrected to obtain multiple first divided images, and the second target image of the target area captured by the second camera device is divided and corrected to obtain multiple second divided images respectively corresponding to the multiple first divided images; each first divided image and its corresponding second divided image form a group of divided images. Dividing the images before correcting them yields corrected images with relatively little loss of image quality, which improves the accuracy of binocular stereo matching as well as the accuracy of edge depth detection. Moreover, the multiple groups of divided images constitute multiple binocular systems, which can perceive depth information in multiple directions.
In some embodiments of the depth perception apparatus 700, the division and correction module 702 is specifically configured to:
perform image division and correction on the first target image using a piecewise algorithm to obtain the multiple first divided images; and
perform image division and correction on the second target image using the piecewise algorithm to obtain the multiple second divided images.
In some embodiments of the depth perception apparatus, the calibration module 703 is specifically configured to:
calibrate each group of divided images separately using the Zhang Zhengyou method or the Faugeras method to obtain the calibration parameters corresponding to each group of divided images.
In some embodiments of the depth perception apparatus, the binocular matching module 704 is specifically configured to:
perform binocular matching on each group of divided images using the BM algorithm or the SGBM algorithm to obtain the disparity map corresponding to each group of divided images.
In some embodiments of the depth perception apparatus, the depth information acquisition module 705 is specifically configured to:
obtain the three-dimensional coordinates of each point in the region corresponding to each group of divided images using the following formulas:

z_i = fx · baseline / disparity
x_i = (u_i - cx) · z_i / fx
y_i = (v_i - cy) · z_i / fy

where x_i, y_i and z_i denote the three-dimensional coordinates of each point, u_i and v_i are the pixel coordinates of the point, baseline is the baseline length, disparity is the disparity data obtained from the disparity map, and cx, cy, fx and fy are the calibration parameters; and
obtain the depth information of the region according to the three-dimensional coordinates of each point in the region.
In some embodiments of the depth perception apparatus 700, the line connecting the first camera device and the second camera device is at a preset angle to the horizontal direction.
In some embodiments of the depth perception apparatus 700, the preset angle is 45° or 135°.
In some embodiments of the depth perception apparatus 700, the line connecting the first camera device and the second camera device is parallel to the horizontal direction;
the division and correction module 702 is specifically configured to:
perform image division and correction on the first target image in a direction at a preset angle to the horizontal direction, and perform image division and correction on the second target image in a direction at a preset angle to the horizontal direction, the preset angle being other than 90°.
In some embodiments of the depth perception apparatus 700, the preset angle is 45° or 135°.
In some embodiments of the depth perception apparatus 700, both the first camera device and the second camera device are fisheye lenses.
It should be noted that the above apparatus can execute the methods provided by the embodiments of the present application, and has the functional modules and beneficial effects corresponding to executing those methods. For technical details not described in detail in the apparatus embodiments, refer to the methods provided by the embodiments of the present application.
FIG. 8 is a schematic diagram of the hardware structure of the controller 10 in the depth perception device 100 according to an embodiment of the present invention. As shown in FIG. 8, the controller 10 includes:
one or more processors 11 and a memory 12; in FIG. 8, one processor 11 is taken as an example.
The processor 11 and the memory 12 may be connected by a bus or in other ways; connection by a bus is taken as an example in FIG. 8.
As a non-volatile computer-readable storage medium, the memory 12 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the depth perception method in the embodiments of the present application (for example, the acquisition module 701, the division and correction module 702, the calibration module 703, the binocular matching module 704 and the depth information acquisition module 705 shown in FIG. 7). By running the non-volatile software programs, instructions and modules stored in the memory 12, the processor 11 executes the various functional applications and data processing of the depth perception device 100, that is, implements the depth perception method of the foregoing method embodiments.
The memory 12 may include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function, and the data storage area may store data created according to the use of the depth perception device, and the like. In addition, the memory 12 may include high-speed random access memory, and may further include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory 12 may optionally include memories remotely located relative to the processor 11, and these remote memories may be connected to the depth perception device through a network. Examples of such networks include but are not limited to the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, perform the depth perception method in any of the above method embodiments, for example, performing method steps 101 to 106 of FIG. 5 described above and implementing the functions of modules 701-705 in FIG. 7.
The above product can execute the methods provided by the embodiments of the present application, and has the functional modules and beneficial effects corresponding to executing those methods. For technical details not described in detail in this embodiment, refer to the methods provided by the embodiments of the present application.
An embodiment of the present application provides a non-volatile computer-readable storage medium storing computer-executable instructions which, when executed by one or more processors (for example, one processor 11 in FIG. 8), cause the one or more processors to perform the depth perception method in any of the above method embodiments, for example, performing method steps 101 to 106 of FIG. 5 described above and implementing the functions of modules 701-705 in FIG. 7.
When the depth perception device 100 of the embodiments of the present invention is applied to an unmanned aerial vehicle, the main body is the fuselage of the unmanned aerial vehicle, and the first camera device 20 and the second camera device 30 of the depth perception device 100 may be arranged on the fuselage; the controller 10 may be a separate controller, or the flight control chip of the unmanned aerial vehicle may be used for control.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
From the description of the above embodiments, those of ordinary skill in the art can clearly understand that each embodiment can be implemented by means of software plus a general-purpose hardware platform, or of course by hardware. Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Under the concept of the present invention, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and there are many other variations of the different aspects of the present invention as described above, which for brevity are not provided in detail. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or equivalently replace some of the technical features therein, and these modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (27)

  1. A depth perception method, the method being used for a binocular system to perceive the depth of a target area, the binocular system comprising a first camera device and a second camera device, characterized in that the method comprises:
    obtaining a first target image of the target area through the first camera device, and obtaining a second target image of the target area through the second camera device;
    performing image division and correction on the first target image to obtain multiple first divided images;
    performing image division and correction on the second target image to obtain multiple second divided images respectively corresponding to the multiple first divided images, wherein each first divided image and the second divided image corresponding to the first divided image form a group of divided images;
    in each group of divided images, the center line of the first divided image being parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image being at a preset angle to the center line;
    calibrating each group of divided images separately to obtain calibration parameters corresponding to each group of divided images;
    performing binocular matching on each group of divided images to obtain a disparity map corresponding to each group of divided images; and
    obtaining depth information of a region corresponding to each group of divided images according to the disparity map and the calibration parameters.
  2. The method according to claim 1, characterized in that the performing image division and correction on the first target image to obtain multiple first divided images comprises:
    performing image division and correction on the first target image using a piecewise algorithm to obtain the multiple first divided images.
  3. The method according to claim 1 or 2, characterized in that the performing image division and correction on the second target image to obtain multiple second divided images respectively corresponding to the multiple first divided images comprises:
    performing image division and correction on the second target image using the piecewise algorithm to obtain the multiple second divided images.
  4. The method according to any one of claims 1-3, characterized in that the calibrating each group of divided images separately to obtain calibration parameters corresponding to each group of divided images comprises:
    calibrating each group of divided images separately using the Zhang Zhengyou method or the Faugeras method to obtain the calibration parameters corresponding to each group of divided images.
  5. The method according to any one of claims 1-4, characterized in that the performing binocular matching on each group of divided images to obtain a disparity map corresponding to each group of divided images comprises:
    performing binocular matching on each group of divided images using the BM algorithm or the SGBM algorithm to obtain the disparity map corresponding to each group of divided images.
  6. The method according to any one of claims 1-5, characterized in that the obtaining depth information of the region corresponding to each group of divided images according to the disparity map and the calibration parameters comprises:
    obtaining the three-dimensional coordinates of each point in the region corresponding to each group of divided images using the following formulas:

    z_i = fx · baseline / disparity
    x_i = (u_i - cx) · z_i / fx
    y_i = (v_i - cy) · z_i / fy

    wherein x_i, y_i and z_i denote the three-dimensional coordinates of each point, u_i and v_i are the pixel coordinates of the point, baseline is the baseline length, disparity is the disparity data obtained from the disparity map, and cx, cy, fx and fy are the calibration parameters; and
    obtaining the depth information of the region according to the three-dimensional coordinates of each point in the region.
  7. The method according to any one of claims 1-6, characterized in that the line connecting the first camera device and the second camera device is at a preset angle to the horizontal direction.
  8. The method according to claim 7, characterized in that the preset angle is 45° or 135°.
  9. The method according to any one of claims 1-6, characterized in that the line connecting the first camera device and the second camera device is parallel to the horizontal direction;
    the performing image division and correction on the first target image comprises:
    performing image division and correction on the first target image in a direction at a preset angle to the horizontal direction;
    the performing image division and correction on the second target image comprises:
    performing image division and correction on the second target image in a direction at a preset angle to the horizontal direction, the preset angle being other than 90°.
  10. The method according to claim 9, characterized in that the preset angle is 45° or 135°.
  11. The method according to any one of claims 1-10, characterized in that both the first camera device and the second camera device are fisheye lenses.
  12. A depth perception apparatus, the apparatus being used for a binocular system to perceive the depth of a target area, the binocular system comprising a first camera device and a second camera device, characterized in that the apparatus comprises:
    an acquisition module, configured to acquire a first target image of the target area captured by the first camera device and a second target image of the target area captured by the second camera device;
    a division and correction module, configured to:
    perform image division and correction on the first target image to obtain multiple first divided images; and
    perform image division and correction on the second target image to obtain multiple second divided images respectively corresponding to the multiple first divided images, wherein each first divided image and the second divided image corresponding to the first divided image form a group of divided images; in each group of divided images, the center line of the first divided image is parallel to the center line of the second divided image, and the line connecting the first divided image and the second divided image is at a preset angle to the center line;
    a calibration module, configured to calibrate each group of divided images separately to obtain calibration parameters corresponding to each group of divided images;
    a binocular matching module, configured to perform binocular matching on each group of divided images to obtain a disparity map corresponding to each group of divided images; and
    a depth information acquisition module, configured to obtain depth information of a region corresponding to each group of divided images according to the disparity map and the calibration parameters.
  13. The apparatus according to claim 12, characterized in that the division and correction module is specifically configured to:
    perform image division and correction on the first target image using a piecewise algorithm to obtain the multiple first divided images.
  14. The apparatus according to claim 12 or 13, characterized in that the division and correction module is specifically configured to:
    perform image division and correction on the second target image using the piecewise algorithm to obtain the multiple second divided images.
  15. The apparatus according to any one of claims 12-14, characterized in that the calibration module is specifically configured to:
    calibrate each group of divided images separately using the Zhang Zhengyou method or the Faugeras method to obtain the calibration parameters corresponding to each group of divided images.
  16. The apparatus according to any one of claims 12-15, characterized in that the binocular matching module is specifically configured to:
    perform binocular matching on each group of divided images using the BM algorithm or the SGBM algorithm to obtain the disparity map corresponding to each group of divided images.
  17. The apparatus according to any one of claims 12-16, wherein the depth information acquisition module is specifically configured to:
    use the following formulas to obtain the three-dimensional coordinates of each point in the region corresponding to each group of the divided images:
    z_i = fx × baseline / disparity
    x_i = (u_i - cx) × z_i / fx
    y_i = (v_i - cy) × z_i / fy
    where x_i, y_i, z_i denote the three-dimensional coordinates of each point, (u_i, v_i) are the pixel coordinates of the point, baseline is the baseline length, disparity is the disparity value obtained from the disparity map, and cx, cy, fx, fy are the calibration parameters (the principal point coordinates and focal lengths);
    acquire the depth information of the region according to the three-dimensional coordinates of each point in the region.
  18. The apparatus according to any one of claims 12-17, wherein a line connecting the first camera device and the second camera device forms a preset angle with the horizontal direction.
  19. The apparatus according to claim 18, wherein the preset angle is 45° or 135°.
  20. The apparatus according to any one of claims 12-17, wherein the line connecting the first camera device and the second camera device is parallel to the horizontal direction; and
    the segmentation and correction module is specifically configured to:
    perform image segmentation and correction on the first target image in a direction at a preset angle with the horizontal direction, and perform image segmentation and correction on the second target image in a direction at a preset angle with the horizontal direction, the preset angle being other than 90°.
  21. The apparatus according to claim 20, wherein the preset angle is 45° or 135°.
  22. The apparatus according to any one of claims 12-21, wherein the first camera device and the second camera device are both fisheye lenses.
  23. A depth perception device, comprising:
    a main body;
    a binocular system provided on the main body and comprising a first camera device and a second camera device; and
    a controller provided in the main body, the controller comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, the memory storing instructions executable by the at least one processor, the instructions, when executed by the at least one processor, enabling the at least one processor to perform the method according to any one of claims 1-11.
  24. The device according to claim 23, wherein a line connecting the first camera device and the second camera device forms a preset angle with the horizontal direction.
  25. The device according to claim 24, wherein the preset angle is 45° or 135°.
  26. The device according to any one of claims 23-25, wherein the first camera device and the second camera device are both fisheye lenses.
  27. The device according to any one of claims 23-26, wherein the depth perception device is an unmanned aerial vehicle.
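The BM and SGBM algorithms named in claim 16 are standard stereo matchers; the patent does not detail them. As a rough, hypothetical illustration of the core idea only (not the patent's implementation, which would use the optimized BM/SGBM variants, e.g. as found in OpenCV), a minimal sum-of-absolute-differences block matcher over a rectified grayscale pair:

```python
def sad(left, right, r, cl, cr, half):
    # Sum of absolute differences between the window centered at
    # (r, cl) in the left image and (r, cr) in the right image.
    total = 0
    for dr in range(-half, half + 1):
        for dc in range(-half, half + 1):
            total += abs(left[r + dr][cl + dc] - right[r + dr][cr + dc])
    return total

def block_match(left, right, max_disp, half=1):
    """Naive SAD block matching on a rectified grayscale pair,
    given as lists of pixel rows. Returns a disparity map; border
    pixels are left at disparity 0."""
    rows, cols = len(left), len(left[0])
    disp = [[0] * cols for _ in range(rows)]
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            best_cost, best_d = None, 0
            # A left pixel at column c matches a right pixel at c - d.
            for d in range(0, min(max_disp + 1, c - half + 1)):
                cost = sad(left, right, r, c, c - d, half)
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[r][c] = best_d
    return disp
```

Real implementations add pre-filtering, a uniqueness check, and (for SGBM) semi-global smoothness costs on top of this per-pixel winner-take-all search.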
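The per-point triangulation of claim 17 can be sketched as follows. This is a sketch under assumptions: it uses the standard rectified pinhole-stereo model consistent with the parameters the claim lists (baseline, disparity, cx, cy, fx, fy), and the pixel coordinates (u, v) are introduced here for illustration rather than taken from the claim text:

```python
def point_from_disparity(u, v, disparity, baseline, fx, fy, cx, cy):
    """Standard pinhole triangulation for a rectified stereo pair.

    (u, v): pixel coordinates of the point in the left image.
    Returns (x, y, z) in the left camera frame; requires disparity > 0.
    """
    # Depth is inversely proportional to disparity.
    z = fx * baseline / disparity
    # Back-project through the principal point (cx, cy).
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```

For example, with fx = fy = 400 px, baseline = 0.1 m, and a disparity of 20 px, a point at the principal point triangulates to 2 m straight ahead; the region's depth information in claim 17 would then be derived from the z (or range) values of all such points.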
PCT/CN2019/123072 2018-12-04 2019-12-04 Depth perception method and apparatus, and depth perception device WO2020114433A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811473003.1 2018-12-04
CN201811473003.1A CN109658451B (en) 2018-12-04 2018-12-04 Depth sensing method and device and depth sensing equipment

Publications (1)

Publication Number Publication Date
WO2020114433A1 true WO2020114433A1 (en) 2020-06-11

Family

ID=66112775

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/123072 WO2020114433A1 (en) 2018-12-04 2019-12-04 Depth perception method and apparatus, and depth perception device

Country Status (2)

Country Link
CN (1) CN109658451B (en)
WO (1) WO2020114433A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658451B (en) * 2018-12-04 2021-07-30 深圳市道通智能航空技术股份有限公司 Depth sensing method and device and depth sensing equipment
CN110580724B (en) * 2019-08-28 2022-02-25 贝壳技术有限公司 Method and device for calibrating binocular camera set and storage medium
CN111986248B (en) * 2020-08-18 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Multi-vision sensing method and device and automatic driving automobile

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008075271A2 (en) * 2006-12-18 2008-06-26 Koninklijke Philips Electronics N.V. Calibrating a camera system
CN105277169A (en) * 2015-09-25 2016-01-27 安霸半导体技术(上海)有限公司 Image segmentation-based binocular range finding method
CN105787447A (en) * 2016-02-26 2016-07-20 深圳市道通智能航空技术有限公司 Method and system of unmanned plane omnibearing obstacle avoidance based on binocular vision
CN109658451A (en) * 2018-12-04 2019-04-19 深圳市道通智能航空技术有限公司 A kind of depth perception method, device and depth perception equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6756993B2 (en) * 2001-01-17 2004-06-29 The University Of North Carolina At Chapel Hill Methods and apparatus for rendering images using 3D warping techniques
CN202818442U (en) * 2012-05-25 2013-03-20 常州泰勒维克今创电子有限公司 All-digital panoramic camera
CN102750711B (en) * 2012-06-04 2015-07-29 清华大学 A kind of binocular video depth map calculating method based on Iamge Segmentation and estimation
CN102721370A (en) * 2012-06-18 2012-10-10 南昌航空大学 Real-time mountain landslide monitoring method based on computer vision
CN103198473B (en) * 2013-03-05 2016-02-24 腾讯科技(深圳)有限公司 A kind of degree of depth drawing generating method and device
CN106709948A (en) * 2016-12-21 2017-05-24 浙江大学 Quick binocular stereo matching method based on superpixel segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAO-QING LIU: "The Image Process and Location Technology Research Based on Binocular Vision", MASTER THESIS, no. 6, 15 June 2018 (2018-06-15), pages 1 - 76, XP009521581, ISSN: 1674-0246 *

Also Published As

Publication number Publication date
CN109658451B (en) 2021-07-30
CN109658451A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
US10586352B2 (en) Camera calibration
CN109920011B (en) External parameter calibration method, device and equipment for laser radar and binocular camera
CN109767474B (en) Multi-view camera calibration method and device and storage medium
CN108323190B (en) Obstacle avoidance method and device and unmanned aerial vehicle
EP3825954A1 (en) Photographing method and device and unmanned aerial vehicle
US10085011B2 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN106960454B (en) Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle
US11039121B2 (en) Calibration apparatus, chart for calibration, chart pattern generation apparatus, and calibration method
US20170127045A1 (en) Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof
US10789719B2 (en) Method and apparatus for detection of false alarm obstacle
US9094672B2 (en) Stereo picture generating device, and stereo picture generating method
WO2018210078A1 (en) Distance measurement method for unmanned aerial vehicle, and unmanned aerial vehicle
WO2020114433A1 (en) Depth perception method and apparatus, and depth perception device
WO2018120040A1 (en) Obstacle detection method and device
JP2018179980A (en) Camera calibration method, camera calibration program and camera calibration device
CN106570899B (en) Target object detection method and device
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN112837207B (en) Panoramic depth measurement method, four-eye fisheye camera and binocular fisheye camera
CN110458952B (en) Three-dimensional reconstruction method and device based on trinocular vision
CN112052788A (en) Environment sensing method and device based on binocular vision and unmanned aerial vehicle
WO2021195939A1 (en) Calibrating method for external parameters of binocular photographing device, movable platform and system
CN109325913A (en) Unmanned plane image split-joint method and device
CN109785225B (en) Method and device for correcting image
EP4050553A1 (en) Method and device for restoring image obtained from array camera
WO2015182771A1 (en) Image capturing device, image processing device, image processing method, and computer program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19894082

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19894082

Country of ref document: EP

Kind code of ref document: A1