CN109658451B - Depth sensing method and device and depth sensing equipment - Google Patents

Depth sensing method and device and depth sensing equipment

Info

Publication number
CN109658451B
CN109658451B (application CN201811473003.1A)
Authority
CN
China
Prior art keywords
segmentation
image
images
group
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811473003.1A
Other languages
Chinese (zh)
Other versions
CN109658451A (en)
Inventor
Zheng Xin (郑欣)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Autel Intelligent Aviation Technology Co Ltd
Original Assignee
Shenzhen Autel Intelligent Aviation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Autel Intelligent Aviation Technology Co Ltd filed Critical Shenzhen Autel Intelligent Aviation Technology Co Ltd
Priority to CN201811473003.1A priority Critical patent/CN109658451B/en
Publication of CN109658451A publication Critical patent/CN109658451A/en
Priority to PCT/CN2019/123072 priority patent/WO2020114433A1/en
Application granted granted Critical
Publication of CN109658451B publication Critical patent/CN109658451B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 7/00 Image analysis
                    • G06T 7/50 Depth or shape recovery
                        • G06T 7/55 Depth or shape recovery from multiple images
                            • G06T 7/593 Depth or shape recovery from multiple images from stereo images
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/11 Region-based segmentation
                    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
                        • G06T 7/85 Stereo camera calibration
                • G06T 3/00 Geometric image transformations in the plane of the image
                    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
                        • G06T 3/047 Fisheye or wide-angle transformations
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/80 Geometric correction
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10004 Still image; Photographic image
                            • G06T 2207/10012 Stereo images
                    • G06T 2207/20 Special algorithmic details
                        • G06T 2207/20092 Interactive image processing based on input by user
                            • G06T 2207/20104 Interactive definition of region of interest [ROI]
                        • G06T 2207/20228 Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The embodiment of the invention discloses a depth sensing method, a depth sensing device and depth sensing equipment. The method comprises the following steps: obtaining a first target image of the target area through the first camera device, and obtaining a second target image of the target area through the second camera device; performing image segmentation and correction on the first target image to obtain a plurality of first segmented images; performing image segmentation and correction on the second target image to acquire a plurality of second segmentation images respectively corresponding to the plurality of first segmentation images; performing binocular matching on each group of the segmentation images to obtain a disparity map corresponding to each group of the segmentation images; and acquiring the depth information of the region corresponding to each group of the segmented images according to the disparity map and the calibration parameters. The embodiment of the invention improves the edge depth detection precision. Moreover, the multiple groups of segmented images form multiple groups of binocular systems, and depth information in multiple directions can be perceived.

Description

Depth sensing method and device and depth sensing equipment
Technical Field
The embodiment of the invention relates to the technical field of computer vision, in particular to a depth perception method, a depth perception device and depth perception equipment.
Background
With the popularization of robots and unmanned aerial vehicles, obstacle perception sensors are used more and more widely. Binocular sensors have the advantages of low cost, wide applicability, long detection range and high efficiency, and are therefore widely used as obstacle perception sensors. Because of the wide viewing angle of the fisheye lens, research on binocular depth perception based on fisheye lenses is growing.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the related art:
When binocular fisheye lenses are used for obstacle depth perception, the periphery of the fisheye image is severely distorted, and the deformation at the image edge grows after correction, which makes stereo matching inaccurate. At present, although there are techniques that reduce the error of binocular stereo matching by using a special calibration model, for example, the patent document with publication number CN102005039B uses a Taylor series model to perform calibration and measure depth and represents the spherical image as a rectangular image in longitude and latitude, such methods can reduce the measurement error caused by distortion to a certain extent but still have a large error at the edge of the image.
Disclosure of Invention
The embodiment of the invention aims to provide a depth sensing method, a depth sensing device and depth sensing equipment with high edge depth detection precision.
In a first aspect, an embodiment of the present invention provides a depth perception method, where the method is used for a binocular system to perceive depth of a target area, where the binocular system includes a first camera and a second camera, and the method includes:
obtaining a first target image of the target area through the first camera device, and obtaining a second target image of the target area through the second camera device;
performing image segmentation and correction on the first target image to obtain a plurality of first segmented images;
performing image segmentation and correction on the second target image to acquire a plurality of second segmentation images respectively corresponding to the plurality of first segmentation images, wherein each of the first segmentation images and the second segmentation images corresponding to the first segmentation images are a set of segmentation images;
in each group of the segmentation images, the central line of the first segmentation image is parallel to the central line of the second segmentation image, and a connecting line of the first segmentation image and the second segmentation image forms a preset included angle with the central line;
calibrating each group of segmented images respectively to obtain calibration parameters corresponding to each group of segmented images;
performing binocular matching on each group of the segmentation images to obtain a disparity map corresponding to each group of the segmentation images;
and acquiring the depth information of the region corresponding to each group of the segmented images according to the disparity map and the calibration parameters.
In some embodiments, the image segmenting and correcting the first target image to obtain a plurality of first segmented images comprises:
and carrying out image segmentation and correction on the first target image by adopting a piecewise algorithm so as to obtain a plurality of first segmentation images.
In some embodiments, the image segmenting and correcting the second target image to obtain a plurality of second segmented images respectively corresponding to the plurality of first segmented images includes:
and carrying out image segmentation and correction on the second target image by adopting a piecewise algorithm so as to obtain a plurality of second segmentation images.
In some embodiments, the calibrating the each set of segmented images to obtain calibration parameters corresponding to each set of segmented images includes:
and calibrating each group of segmented images by using the Zhang Zhengyou method or the Faugeras method respectively to obtain the calibration parameters corresponding to each group of segmented images.
In some embodiments, the binocular matching each set of the segmented images to obtain the disparity map corresponding to each set of the segmented images includes:
and carrying out binocular matching on each group of the segmentation images by utilizing a BM algorithm or an SGBM algorithm so as to obtain the disparity map corresponding to each group of the segmentation images.
In some embodiments, the obtaining depth information of a region corresponding to each of the groups of segmented images according to the disparity map and the calibration parameter includes:
acquiring the three-dimensional coordinates of each point in the region corresponding to each group of the segmentation images by using the following formula:
zi = baseline × fx / disparity
xi = (px - cx) × zi / fx
yi = (py - cy) × zi / fy
wherein xi, yi and zi represent the three-dimensional coordinates of each point, baseline is the length of the base line, disparity is the disparity data obtained from the disparity map, cx, cy, fx and fy are the calibration parameters, and px and py are the pixel coordinates of the point in the disparity map;
and acquiring the depth information of the region according to the three-dimensional coordinates of each point in the region.
In some embodiments, a connection line between the first camera device and the second camera device forms a preset included angle with a horizontal direction.
In some embodiments, the predetermined included angle is 45 ° or 135 °.
In some embodiments, a line between the first camera and the second camera is parallel to a horizontal direction;
the image segmentation and correction of the first target image comprises:
performing image segmentation and correction on the first target image in a direction forming a preset included angle with the horizontal direction;
the image segmentation and correction of the second target image comprises:
and carrying out image segmentation and correction on the second target image in the direction which forms a preset included angle with the horizontal direction, wherein the preset included angle is not 90 degrees.
In some embodiments, the predetermined included angle is 45 ° or 135 °.
In some embodiments, the first camera device and the second camera device are both fisheye lenses.
In a second aspect, an embodiment of the present invention provides a depth perception device, where the depth perception device is used for a binocular system to perceive depth of a target area, the binocular system includes a first camera device and a second camera device, and the depth perception device includes:
an obtaining module, configured to obtain a first target image of the target area obtained by the first camera device and a second target image of the target area obtained by the second camera device;
a segmentation and correction module to:
performing image segmentation and correction on the first target image to obtain a plurality of first segmented images; and
performing image segmentation and correction on the second target image to acquire a plurality of second segmentation images respectively corresponding to the plurality of first segmentation images, wherein each of the first segmentation images and the second segmentation images corresponding to the first segmentation images are a set of segmentation images; in each group of the segmentation images, the central line of the first segmentation image is parallel to the central line of the second segmentation image, and a connecting line of the first segmentation image and the second segmentation image forms a preset included angle with the central line;
the calibration module is used for respectively calibrating each group of segmentation images so as to obtain calibration parameters corresponding to each group of segmentation images;
the binocular matching module is used for carrying out binocular matching on each group of the segmentation images so as to obtain a disparity map corresponding to each group of the segmentation images; and
and the depth information acquisition module is used for acquiring the depth information of the region corresponding to each group of the segmented images according to the disparity map and the calibration parameters.
In some embodiments, the segmentation and correction module is specifically configured to:
and carrying out image segmentation and correction on the first target image by adopting a piecewise algorithm so as to obtain a plurality of first segmentation images.
In some embodiments, the segmentation and correction module is specifically configured to:
and carrying out image segmentation and correction on the second target image by adopting a piecewise algorithm so as to obtain a plurality of second segmentation images.
In some embodiments, the calibration module is specifically configured to:
and calibrating each group of segmented images by using the Zhang Zhengyou method or the Faugeras method respectively to obtain the calibration parameters corresponding to each group of segmented images.
In some embodiments, the binocular matching module is specifically configured to:
and carrying out binocular matching on each group of the segmentation images by utilizing a BM algorithm or an SGBM algorithm so as to obtain the disparity map corresponding to each group of the segmentation images.
In some embodiments, the depth information obtaining module is specifically configured to:
acquiring the three-dimensional coordinates of each point in the region corresponding to each group of the segmentation images by using the following formula:
zi = baseline × fx / disparity
xi = (px - cx) × zi / fx
yi = (py - cy) × zi / fy
wherein xi, yi and zi represent the three-dimensional coordinates of each point, baseline is the length of the base line, disparity is the disparity data obtained from the disparity map, cx, cy, fx and fy are the calibration parameters, and px and py are the pixel coordinates of the point in the disparity map;
and acquiring the depth information of the region according to the three-dimensional coordinates of each point in the region.
In some embodiments, a connection line between the first camera device and the second camera device forms a preset included angle with a horizontal direction.
In some embodiments, the predetermined included angle is 45 ° or 135 °.
In some embodiments, a line between the first camera and the second camera is parallel to a horizontal direction;
the segmentation and correction module is specifically configured to:
performing image segmentation and correction on the first target image in a direction forming a preset included angle with the horizontal direction; and carrying out image segmentation and correction on the second target image in the direction which forms a preset included angle with the horizontal direction, wherein the preset included angle is not 90 degrees.
In some embodiments, the predetermined included angle is 45 ° or 135 °.
In some embodiments, the first camera device and the second camera device are both fisheye lenses.
In a third aspect, an embodiment of the present invention provides a depth sensing device, including:
a main body;
the binocular system is arranged on the main body and comprises a first camera device and a second camera device;
the controller is arranged on the main body; the controller includes:
at least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
In some embodiments, a connection line between the first camera device and the second camera device forms a preset included angle with a horizontal direction.
In some embodiments, the predetermined included angle is 45 ° or 135 °.
In some embodiments, the first camera device and the second camera device are both fisheye lenses.
In some embodiments, the depth-sensing device is a drone.
The depth perception method, the depth perception device and the depth perception equipment of the embodiment of the invention divide and correct a first target image of a target area shot by a first camera device to obtain a plurality of first divided images, divide and correct a second target image of the target area shot by a second camera device to obtain a plurality of second divided images respectively corresponding to the plurality of first divided images, and each first divided image and the second divided image corresponding to the first divided image form a group of divided images. The image is segmented and then corrected to obtain a corrected image with relatively small image quality loss, so that the binocular stereo matching precision is improved, and the edge depth detection precision is also improved. Moreover, the multiple groups of segmented images form multiple groups of binocular systems, and depth information in multiple directions can be perceived.
Drawings
One or more embodiments are illustrated by way of example in the figures of the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not drawn to scale unless otherwise specified.
FIG. 1 is a schematic structural diagram of an embodiment of a depth perception device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an application scenario of a depth perception device according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of the position of the camera in one embodiment of the depth perception device of the present invention;
FIG. 3b is a schematic diagram of image segmentation of a target image according to an embodiment of the present invention;
FIG. 3c is a schematic diagram of image segmentation of a target image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention;
FIG. 5 is a schematic flow chart diagram illustrating one embodiment of a depth perception method of the present invention;
FIG. 6 is a schematic diagram of image segmentation of a target image in an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of the depth perception device of the present invention;
fig. 8 is a schematic diagram of a hardware structure of a controller according to an embodiment of the depth sensing device of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The depth perception method and apparatus provided by the embodiment of the invention can be applied to the depth perception device 100 shown in fig. 1. The depth perception device 100 includes a main body (not shown in the drawings), a binocular system 101 for perceiving an image of a target area, and a controller 10. The binocular system 101 includes a first camera 20 and a second camera 30, and the first camera 20 and the second camera 30 are used to acquire a target image of a target area. The controller 10 is configured to process the target images acquired by the first imaging device 20 and the second imaging device 30 to acquire depth information. Specifically, the controller 10 performs image segmentation and rectification on a first target image acquired by the first imaging device 20 and a second target image acquired by the second imaging device 30 to obtain a plurality of first segmented images and a plurality of second segmented images respectively corresponding to the plurality of first segmented images, each of the first segmented images and the second segmented image corresponding thereto forming a set of segmented images. The target image is divided into a plurality of small images, and the plurality of small images are respectively corrected to obtain a corrected image with relatively small image quality loss. The images with small image quality loss are used for stereo matching and depth calculation, so that the binocular stereo matching precision is improved, and the edge depth detection precision is also improved. Moreover, the multiple groups of segmented images form multiple groups of binocular systems, and depth information in multiple directions can be perceived.
The depth perception device 100 may be used in various situations requiring depth perception, such as three-dimensional reconstruction, obstacle detection, and path planning, for example, the depth perception device 100 is used in an unmanned aerial vehicle, a robot, and the like. Fig. 2 shows an application scenario of the depth perception device 100 as a drone for obstacle detection and path planning.
The first camera device 20 and the second camera device 30 may be any suitable lenses. The depth perception method and device of the embodiments of the invention are particularly suitable for cases in which the first camera device 20 and the second camera device 30 are wide-angle lenses such as fisheye lenses, or panoramic lenses such as catadioptric (fold-back) or omnidirectional lenses. Images captured by such lenses are severely deformed and their edge depth detection precision is low; by adopting the depth perception method and device of the embodiments of the invention, edge-corrected images with relatively small image quality loss can be obtained, and the edge depth detection precision is greatly improved.
The first camera device 20 and the second camera device 30 may be arranged in any suitable manner. In some embodiments, the first camera device 20 and the second camera device 30 may be arranged in a staggered manner so that no dead zone (i.e., non-measurable region) occurs in the groups of binocular systems formed by the groups of segmented images. For example, as shown in fig. 3a, the first camera device 20 and the second camera device 30 are arranged diagonally, with the first camera device 20 at the upper diagonal point and the second camera device 30 at the lower diagonal point, and a connecting line between the two devices forms a preset included angle a with the horizontal direction, so that the first camera device 20 and the second camera device 30 do not block each other. The images obtained after segmenting and correcting the first target image and the second target image are shown in fig. 3b (segmentation into five images is taken as an example). In this case the images are segmented along the normal direction (i.e., direction a in fig. 3c). As can be seen from fig. 3b and 3c, the two segmented images in each binocular system (e.g., 1 and 6, 2 and 7, 3 and 8, etc.) are staggered relative to each other, and no dead zone occurs.
In other embodiments, the first camera 20 and the second camera 30 may be arranged horizontally in parallel, and the first camera 20 and the second camera 30 are located on the same horizontal line. In order to prevent the dead zone from occurring in each group of binocular systems, referring to fig. 4, when the first target image and the second target image are image-divided, the first target image and the second target image may be image-divided and corrected in a direction (e.g., a direction b in fig. 4) having a preset included angle a with the horizontal direction. As can be seen from fig. 4, the two segmented images in each group of binocular systems are mutually staggered, and no dead zone region occurs.
As can be seen from fig. 3c and fig. 4, in each set of binocular systems, a center line (for example, a line or b line) of the first divided image is parallel to a center line of the second divided image, and in order to prevent a dead zone region from occurring in each set of binocular systems, the first divided image and the second divided image need to be staggered from each other, that is, a connecting line of the first divided image and the second divided image forms a preset included angle a with the center line. The preset included angle may be any suitable angle other than 90 degrees, such as 45 degrees or 135 degrees.
Fig. 5 is a flowchart of a depth sensing method according to an embodiment of the present invention. The method may be executed by the depth sensing device 100 in fig. 1. As shown in fig. 5, the method includes:
101: a first target image of the target area is obtained by the first camera 20 and a second target image of the target area is obtained by the second camera 30.
The first camera device 20 and the second camera device 30 may be any suitable lenses, such as wide-angle lenses like fisheye lenses, or panoramic lenses like catadioptric (fold-back) or omnidirectional lenses.
102: image segmentation and correction are performed on the first target image to obtain a plurality of first segmented images.
103: performing image segmentation and correction on the second target image to acquire a plurality of second segmentation images respectively corresponding to the plurality of first segmentation images, wherein each of the first segmentation images and the second segmentation images corresponding to the first segmentation images are a set of segmentation images; in each group of the segmentation images, the central line of the first segmentation image is parallel to the central line of the second segmentation image, and a connecting line of the first segmentation image and the second segmentation image forms a preset included angle with the central line.
In some embodiments, the first target image and the second target image may be segmented and rectified by using a piecewise algorithm to obtain a plurality of first segmented images and a plurality of second segmented images.
Here, the number of first segmented images and second segmented images may be any suitable number, for example 2, 3, 4, 5 or more. Since the deformation of the middle region of the target image is small and the deformation of the peripheral region is large, in order to further improve the edge depth detection accuracy, the middle region may be treated as a single region, and the area around it may be divided into a plurality of peripheral regions. The number of peripheral regions may be 4, 6, 8, or another number.
Fig. 6 shows a scenario in which the first camera device 20 and the second camera device 30 are fisheye lenses and there are four peripheral regions; the left image is the image before correction and the right image is the image after correction. In the embodiments shown in fig. 3b, 3c and 4, the four peripheral regions are an upper region, a lower region, a left region and a right region (corresponding to the regions represented by the numbers 1, 5, 2 and 4 in fig. 6). In the embodiment shown in fig. 3b and 3c, the upper region, the left region, the middle region, the right region and the lower region represent images of the front, the left, the upper, the right and the rear directions, respectively, so that the binocular system can obtain depth information in the five directions of front, left, up, right and rear. In the embodiment shown in fig. 4, the upper region, the left region, the middle region, the right region and the lower region represent images in the left-front, right-front, upper, left-rear and right-rear directions, respectively, so that the binocular system can obtain depth information in the five directions of left-front, right-front, up, left-rear and right-rear.
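By way of a hedged illustration only (the patent does not disclose the internals of its piecewise algorithm), the idea of segmenting a fisheye image and correcting each segment separately can be sketched with OpenCV's fisheye module: each region is re-projected onto a virtual pinhole camera whose optical axis points toward that region, analogous to the five regions in fig. 6. The fisheye intrinsics K and D, the view angles, the output size and the focal length below are assumed placeholder values, not parameters from the patent.

```python
import cv2
import numpy as np

def rectify_subregion(fisheye_img, K, D, yaw_deg, pitch_deg, out_size=(400, 400), out_f=200.0):
    """Re-project one sub-region of a fisheye image onto a virtual pinhole camera.

    K, D      : fisheye intrinsics / distortion coefficients from a prior calibration (assumed known).
    yaw/pitch : direction of the virtual camera's optical axis, e.g. 0/0 for the middle
                (upward) region and non-zero angles for the peripheral regions.
    """
    w, h = out_size
    # Rotation that turns the fisheye optical axis toward the chosen sub-region.
    yaw, pitch = np.deg2rad([yaw_deg, pitch_deg])
    R = cv2.Rodrigues(np.array([0.0, yaw, 0.0]))[0] @ cv2.Rodrigues(np.array([pitch, 0.0, 0.0]))[0]
    # Virtual pinhole intrinsics of the rectified sub-image.
    P = np.array([[out_f, 0.0, w / 2.0],
                  [0.0, out_f, h / 2.0],
                  [0.0, 0.0, 1.0]])
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, R, P, (w, h), cv2.CV_16SC2)
    return cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)

# Example (illustrative angles): five virtual views per fisheye image, roughly matching
# regions 1-5 / 6-10 of the figures.
# views = {name: rectify_subregion(img, K, D, yaw, pitch)
#          for name, (yaw, pitch) in {"up": (0, 0), "front": (0, 60), "back": (0, -60),
#                                     "left": (-60, 0), "right": (60, 0)}.items()}
```

Segmenting before correcting, as the method requires, means each small corrected image covers only a narrow field of view, which is what keeps the interpolation loss at the image edges small.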
The embodiment shown in fig. 6 can realize depth perception in five directions, namely front, left, up, right and rear, with a perception angle of 180 degrees. In practical applications, two or more sets of the depth perception device 100 can be installed on an unmanned aerial vehicle or a robot to realize 360-degree depth perception. Compared with an omnidirectional sensing system using five groups of lenses, the embodiment of the invention uses fewer lenses and saves space. In addition, the calibration error caused by deformation of connecting components is reduced, the complexity of the calibration algorithm and the calibration time are reduced, and the calibration parameters are less prone to change, giving the advantage of zero delay.
104: and calibrating each group of segmented images respectively to obtain calibration parameters corresponding to each group of segmented images.
In some embodiments, each set of segmented images may be calibrated by using the Zhang Zhengyou method or the Faugeras method, respectively, to obtain the calibration parameters (including intrinsic parameters and extrinsic parameters) corresponding to each set of segmented images.
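As an illustrative sketch only, one group of segmented images could be calibrated with OpenCV, whose chessboard-based routines implement the Zhang Zhengyou approach; the chessboard dimensions, square size and variable names are assumptions and not values taken from the patent.

```python
import cv2
import numpy as np

def calibrate_pair(left_imgs, right_imgs, board=(9, 6), square=0.025):
    """Stereo-calibrate one group of segmented images from chessboard views."""
    # 3D chessboard corner positions in the board's own coordinate frame.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, l_pts, r_pts = [], [], []
    for l_img, r_img in zip(left_imgs, right_imgs):
        ok_l, c_l = cv2.findChessboardCorners(l_img, board)
        ok_r, c_r = cv2.findChessboardCorners(r_img, board)
        if ok_l and ok_r:
            obj_pts.append(objp)
            l_pts.append(c_l)
            r_pts.append(c_r)
    size = (left_imgs[0].shape[1], left_imgs[0].shape[0])
    # Intrinsics (fx, fy, cx, cy) of each virtual camera in the pair.
    _, K1, D1, _, _ = cv2.calibrateCamera(obj_pts, l_pts, size, None, None)
    _, K2, D2, _, _ = cv2.calibrateCamera(obj_pts, r_pts, size, None, None)
    # Extrinsics (R, T) between the two virtual cameras; the norm of T gives the baseline length.
    _, K1, D1, K2, D2, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, l_pts, r_pts, K1, D1, K2, D2, size, flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, D1, K2, D2, R, T
```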
105: and carrying out binocular matching on each group of the segmentation images to obtain a disparity map corresponding to each group of the segmentation images.
In some embodiments, a BM (Block Matching) algorithm or an SGBM (Semi-Global Block Matching) algorithm may be used to perform binocular matching on each set of the segmented images to obtain the disparity map corresponding to each set of the segmented images.
Specifically, take the example in which the first target image and the second target image are each divided into five segmented images as shown in fig. 6; the images obtained by segmenting and rectifying the first target image and the second target image are shown in fig. 3b. In the example shown in fig. 3b, the first target image is divided into segmented images 1-5 and the second target image is divided into segmented images 6-10. Images 1 and 6 constitute a front binocular system, 2 and 7 a left binocular system, 3 and 8 an upper binocular system, 4 and 9 a right binocular system, and 5 and 10 a rear binocular system. Feature points are first extracted from segmented images 1-10, and feature point matching is then performed on 1 and 6, 2 and 7, 3 and 8, 4 and 9, and 5 and 10, respectively. After the feature points are matched, the disparity of each feature point between the first target image and the second target image can be obtained from the matching result using an existing stereo matching algorithm, and the disparities of the feature points form the disparity map corresponding to each group of segmented images.
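For one such group (for example the pair 1 and 6), a minimal SGBM sketch with OpenCV might look as follows; the matcher parameters are common illustrative defaults, not values prescribed by the patent.

```python
import cv2
import numpy as np

def compute_disparity(left_rect, right_rect, num_disp=64, block_size=7):
    """Binocular matching of one group of rectified segmented images using SGBM."""
    left_gray = cv2.cvtColor(left_rect, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_rect, cv2.COLOR_BGR2GRAY)
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disp,        # must be a multiple of 16
        blockSize=block_size,
        P1=8 * block_size ** 2,         # smoothness penalties for small/large disparity changes
        P2=32 * block_size ** 2,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2)
    # OpenCV returns a fixed-point disparity scaled by 16.
    return sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

# One disparity map per group, e.g. disparity_front = compute_disparity(img1, img6).
```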
106: and acquiring the depth information of the region corresponding to each group of the segmented images according to the disparity map and the calibration parameters.
In some embodiments, the following formula may be used to obtain the three-dimensional coordinates of each point in the region corresponding to each set of the segmented images:
zi = baseline × fx / disparity
xi = (px - cx) × zi / fx
yi = (py - cy) × zi / fy
wherein xi, yi and zi represent the three-dimensional coordinates of each point, baseline is the length of the base line, disparity is the disparity data obtained from the disparity map, cx, cy, fx and fy are the calibration parameters, and px and py are the pixel coordinates of the point in the disparity map. According to the three-dimensional coordinates of each point in the region, the depth information of the region can be acquired.
In the embodiment shown in fig. 3b, the baseline distance between region 1 and region 6 and between region 5 and region 10 is D, and the baseline distance between region 2 and region 7 and between region 4 and region 9 is W. The baseline distance between region 3 and region 8 is the full separation of the two camera devices, √(W² + D²) (W or D may also be used). Here W is the lateral distance between the first camera device 20 and the second camera device 30, and D is the longitudinal distance between the first camera device 20 and the second camera device 30.
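Putting the formulas and baselines above into code, a minimal sketch (with assumed variable names) of converting one group's disparity map into per-point three-dimensional coordinates and a depth map:

```python
import numpy as np

def disparity_to_points(disparity, baseline, fx, fy, cx, cy):
    """Apply zi = baseline*fx/disparity, xi = (px-cx)*zi/fx, yi = (py-cy)*zi/fy per pixel."""
    h, w = disparity.shape
    px, py = np.meshgrid(np.arange(w), np.arange(h))    # pixel coordinates in the disparity map
    valid = disparity > 0                               # skip pixels with no reliable match
    z = np.zeros_like(disparity, dtype=np.float32)
    z[valid] = baseline * fx / disparity[valid]
    x = (px - cx) * z / fx
    y = (py - cy) * z / fy
    return np.dstack([x, y, z]), z                      # z is the depth map of the region

# Baseline per group for the diagonal layout of fig. 3a (W lateral, D longitudinal):
# front/rear pairs use D, left/right pairs use W, and the upward pair uses the full
# camera separation (here assumed to be sqrt(W**2 + D**2)).
```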
The embodiment of the invention divides and corrects a first target image of a target area shot by a first camera to obtain a plurality of first divided images, divides and corrects a second target image of the target area shot by a second camera to obtain a plurality of second divided images respectively corresponding to the plurality of first divided images, and each first divided image and the corresponding second divided image form a group of divided images. The image is segmented and then corrected to obtain a corrected image with relatively small image quality loss, so that the binocular stereo matching precision is improved, and the edge depth detection precision is also improved. Moreover, the multiple groups of segmented images form multiple groups of binocular systems, and depth information in multiple directions can be perceived.
Accordingly, as shown in fig. 7, an embodiment of the present invention further provides a depth sensing apparatus, which is used in the depth sensing device 100 in fig. 1, where the depth sensing apparatus 700 includes:
an acquiring module 701, configured to acquire a first target image of the target area acquired by the first imaging device and a second target image of the target area acquired by the second imaging device.
A segmentation and correction module 702 to:
performing image segmentation and correction on the first target image to obtain a plurality of first segmented images; and
performing image segmentation and correction on the second target image to acquire a plurality of second segmentation images respectively corresponding to the plurality of first segmentation images, wherein each of the first segmentation images and the second segmentation images corresponding to the first segmentation images are a set of segmentation images; in each group of the segmentation images, the central line of the first segmentation image is parallel to the central line of the second segmentation image, and a connecting line of the first segmentation image and the second segmentation image forms a preset included angle with the central line.
The calibration module 703 is configured to calibrate each group of segmented images respectively to obtain calibration parameters corresponding to each group of segmented images.
And the binocular matching module 704 is configured to perform binocular matching on each set of the segmented images to obtain a disparity map corresponding to each set of the segmented images.
A depth information obtaining module 705, configured to obtain depth information of a region corresponding to each group of the segmented images according to the disparity map and the calibration parameter.
The embodiment of the invention divides and corrects a first target image of a target area shot by a first camera to obtain a plurality of first divided images, divides and corrects a second target image of the target area shot by a second camera to obtain a plurality of second divided images respectively corresponding to the plurality of first divided images, and each first divided image and the corresponding second divided image form a group of divided images. The image is segmented and then corrected to obtain a corrected image with relatively small image quality loss, so that the binocular stereo matching precision is improved, and the edge depth detection precision is also improved. Moreover, the multiple groups of segmented images form multiple groups of binocular systems, and depth information in multiple directions can be perceived.
In some embodiments of the depth perception device 700, the segmentation and correction module 702 is specifically configured to:
and carrying out image segmentation and correction on the first target image by adopting a piecewise algorithm so as to obtain a plurality of first segmentation images; and
carrying out image segmentation and correction on the second target image by adopting a piecewise algorithm so as to obtain a plurality of second segmentation images.
In some embodiments of the depth sensing apparatus, the calibration module 703 is specifically configured to:
and calibrating each group of segmented images by using the Zhang Zhengyou method or the Faugeras method respectively to obtain the calibration parameters corresponding to each group of segmented images.
In some embodiments of the depth perception device, the binocular matching module 704 is specifically configured to:
and carrying out binocular matching on each group of the segmentation images by utilizing a BM algorithm or an SGBM algorithm so as to obtain the disparity map corresponding to each group of the segmentation images.
In some embodiments of the depth sensing apparatus, the depth information obtaining module 705 is specifically configured to:
acquiring the three-dimensional coordinates of each point in the region corresponding to each group of the segmentation images by using the following formula:
zi = baseline × fx / disparity
xi = (px - cx) × zi / fx
yi = (py - cy) × zi / fy
wherein xi, yi and zi represent the three-dimensional coordinates of each point, baseline is the length of the base line, disparity is the disparity data obtained from the disparity map, cx, cy, fx and fy are the calibration parameters, and px and py are the pixel coordinates of the point in the disparity map;
and acquiring the depth information of the region according to the three-dimensional coordinates of each point in the region.
In some embodiments of the depth sensing device 700, a connection line between the first camera and the second camera forms a predetermined angle with a horizontal direction.
In some embodiments of the depth perception device 700, the preset included angle is 45 ° or 135 °.
In some embodiments of the depth perception device 700, a line between the first camera and the second camera is parallel to a horizontal direction;
the segmentation and correction module 702 is specifically configured to:
performing image segmentation and correction on the first target image in a direction forming a preset included angle with the horizontal direction; and carrying out image segmentation and correction on the second target image in the direction which forms a preset included angle with the horizontal direction, wherein the preset included angle is not 90 degrees.
In some embodiments of the depth perception device 700, the preset included angle is 45 ° or 135 °.
In some embodiments of the depth perception device 700, the first camera and the second camera are both fisheye lenses.
It should be noted that the above-mentioned apparatus can execute the method provided by the embodiments of the present application, and has corresponding functional modules and beneficial effects for executing the method. For technical details which are not described in detail in the device embodiments, reference is made to the methods provided in the embodiments of the present application.
Fig. 8 is a schematic diagram of a hardware structure of the controller 10 in the depth sensing device 100 according to the embodiment of the present invention, and as shown in fig. 8, the controller 10 includes:
one or more processors 11 and a memory 12, with one processor 11 being an example in fig. 8.
The processor 11 and the memory 12 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
The memory 12, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the depth perception method in the embodiment of the present application (for example, the obtaining module 701, the segmentation and correction module 702, the calibration module 703, the binocular matching module 704, and the depth information obtaining module 705 shown in fig. 7). The processor 11 executes various functional applications and data processing of the depth perception device 100, namely, implements the depth perception method of the above-described method embodiment, by running non-volatile software programs, instructions and modules stored in the memory 12.
The memory 12 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the depth perception device, and the like. Further, the memory 12 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 12 optionally includes memory located remotely from the processor 11, and these remote memories may be connected to the depth aware device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 12 and, when executed by the one or more processors 11, perform the depth perception method in any of the above-described method embodiments, for example performing the above-described method steps 101 to 106 in fig. 5 and realizing the functions of modules 701 to 705 in fig. 7.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the methods provided in the embodiments of the present application.
Embodiments of the present application provide a non-transitory computer-readable storage medium storing computer-executable instructions, which are executed by one or more processors, such as the processor 11 in fig. 8, to enable the one or more processors to perform the depth perception method in any of the above method embodiments, for example performing the above-described method steps 101 to 106 in fig. 5 and realizing the functions of modules 701 to 705 in fig. 7.
When the depth sensing device 100 of the embodiment of the present invention is used for an unmanned aerial vehicle, the main body is the fuselage of the unmanned aerial vehicle. The first camera device 20 and the second camera device 30 of the depth sensing device 100 may be disposed on the fuselage, and the controller 10 may be an independent controller or may use the flight control chip of the unmanned aerial vehicle to perform control.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a general hardware platform, and may also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (21)

1. A depth perception method for a binocular system to perceive depth of a target region, the binocular system including a first camera and a second camera, the method comprising:
obtaining a first target image of the target area through the first camera device, and obtaining a second target image of the target area through the second camera device;
firstly, carrying out image segmentation on the first target image and then carrying out image correction to obtain a plurality of first segmented images;
performing image segmentation on the second target image and then performing image correction to obtain a plurality of second segmented images corresponding to the plurality of first segmented images respectively, wherein each first segmented image and the second segmented image corresponding to the first segmented image form a group of segmented images;
in each group of the segmentation images, the central line of the first segmentation image is parallel to the central line of the second segmentation image, and a connecting line of the first segmentation image and the second segmentation image forms a preset included angle with the central line;
calibrating each group of segmented images respectively to obtain calibration parameters corresponding to each group of segmented images;
performing binocular matching on each group of the segmentation images to obtain a disparity map corresponding to each group of the segmentation images;
acquiring depth information of a region corresponding to each group of the segmented images according to the disparity map and the calibration parameters;
a connecting line between the first camera device and the second camera device forms a first preset included angle with the horizontal direction, the first camera device and the second camera device are arranged in a diagonal mode, the first camera device is located at a diagonal point on the upper side, and the second camera device is located at a diagonal point on the lower side;
or, if a connection line between the first camera device and the second camera device is parallel to the horizontal direction, the image segmentation and correction of the first target image includes:
carrying out image segmentation and correction on the first target image in a direction forming a second preset included angle with the horizontal direction; and
the image segmentation and correction of the second target image comprises:
and carrying out image segmentation and correction on the second target image in the direction which forms a second preset included angle with the horizontal direction, wherein the second preset included angle is not 90 degrees.
2. The method of claim 1, wherein the image segmenting the first target image and then performing image correction to obtain a plurality of first segmented images comprises:
and carrying out image segmentation and correction on the first target image by adopting a piecewise algorithm so as to obtain a plurality of first segmentation images.
3. The method according to claim 1, wherein the image-segmenting and then image-correcting the second target image to obtain a plurality of second segmented images corresponding to the plurality of first segmented images respectively comprises:
and carrying out image segmentation and correction on the second target image by adopting a piecewise algorithm so as to obtain a plurality of second segmentation images.
4. The method according to claim 1, wherein the calibrating each of the segmented images to obtain calibration parameters corresponding to each of the segmented images comprises:
and calibrating each group of segmented images by using the Zhang Zhengyou method or the Faugeras method respectively to obtain the calibration parameters corresponding to each group of segmented images.
5. The method according to claim 1, wherein the performing binocular matching on each set of the segmented images to obtain the disparity map corresponding to each set of the segmented images comprises:
and carrying out binocular matching on each group of the segmentation images by utilizing a BM algorithm or an SGBM algorithm so as to obtain the disparity map corresponding to each group of the segmentation images.
6. The method according to claim 1, wherein the obtaining depth information of a region corresponding to each group of the segmented images according to the disparity map and the calibration parameters comprises:
acquiring the three-dimensional coordinates of each point in the region corresponding to each group of the segmentation images by using the following formula:
zi = baseline × fx / disparity
xi = (px - cx) × zi / fx
yi = (py - cy) × zi / fy
wherein xi, yi and zi represent the three-dimensional coordinates of each point, baseline is the length of the base line, disparity is the disparity data obtained from the disparity map, cx, cy, fx and fy are the calibration parameters, and px and py represent the pixel coordinates of each point in the disparity map;
and acquiring the depth information of the region according to the three-dimensional coordinates of each point in the region.
7. The method of claim 1, wherein the first predetermined included angle is 45 ° or 135 °.
8. The method of claim 1, wherein the second predetermined included angle is 45 ° or 135 °.
9. The method of claim 1, wherein the first camera and the second camera are both fisheye lenses.
10. A depth perception device for a binocular system to perceive depth of a target region, the binocular system including a first camera and a second camera, the device comprising:
an obtaining module, configured to obtain a first target image of the target area obtained by the first camera device and a second target image of the target area obtained by the second camera device;
a segmentation and correction module to:
firstly, carrying out image segmentation on the first target image and then carrying out image correction to obtain a plurality of first segmented images; and
performing image segmentation on the second target image and then performing image correction to obtain a plurality of second segmented images corresponding to the plurality of first segmented images respectively, wherein each first segmented image and the second segmented image corresponding to the first segmented image form a group of segmented images; in each group of the segmentation images, the central line of the first segmentation image is parallel to the central line of the second segmentation image, and a connecting line of the first segmentation image and the second segmentation image forms a preset included angle with the central line;
the calibration module is used for respectively calibrating each group of segmentation images so as to obtain calibration parameters corresponding to each group of segmentation images;
the binocular matching module is used for carrying out binocular matching on each group of the segmentation images so as to obtain a disparity map corresponding to each group of the segmentation images; and
the depth information acquisition module is used for acquiring the depth information of the area corresponding to each group of the segmentation images according to the disparity map and the calibration parameters;
a connecting line between the first camera device and the second camera device forms a first preset included angle with the horizontal direction, the first camera device and the second camera device are arranged in a diagonal mode, the first camera device is located at a diagonal point on the upper side, and the second camera device is located at a diagonal point on the lower side;
or, if a connection line between the first camera device and the second camera device is parallel to the horizontal direction, the image segmentation and correction of the first target image includes:
carrying out image segmentation and correction on the first target image in a direction forming a second preset included angle with the horizontal direction; and
the image segmentation and correction of the second target image comprises:
and carrying out image segmentation and correction on the second target image in the direction which forms a second preset included angle with the horizontal direction, wherein the second preset included angle is not 90 degrees.
11. The apparatus of claim 10, wherein the segmentation and correction module is specifically configured to:
and carrying out image segmentation and correction on the first target image by adopting a piecewise algorithm so as to obtain a plurality of first segmentation images.
12. The apparatus of claim 10, wherein the segmentation and correction module is specifically configured to:
and carrying out image segmentation and correction on the second target image by adopting a piecewise algorithm so as to obtain a plurality of second segmentation images.
13. The apparatus of claim 10, wherein the calibration module is specifically configured to:
and calibrating each group of segmented images by using the Zhang Zhengyou method or the Faugeras method respectively to obtain the calibration parameters corresponding to each group of segmented images.
14. The device of claim 10, wherein the binocular matching module is specifically configured to:
and carrying out binocular matching on each group of the segmentation images by utilizing a BM algorithm or an SGBM algorithm so as to obtain the disparity map corresponding to each group of the segmentation images.
15. The apparatus of claim 10, wherein the depth information obtaining module is specifically configured to:
acquiring the three-dimensional coordinates of each point in the region corresponding to each group of segmented images by using the following formulas:
zi = baseline · fx / disparity
xi = (px − cx) · zi / fx
yi = (py − cy) · zi / fy
wherein xi, yi and zi represent the three-dimensional coordinates of each point, baseline is the length of the base line, disparity is the parallax data obtained from the disparity map, cx, cy, fx and fy are the calibration parameters, and px and py represent the pixel coordinates of each point in the disparity map;
and acquiring the depth information of the region according to the three-dimensional coordinates of each point in the region.
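The sketch below applies the back-projection of claim 15 to a whole disparity map with NumPy, using the claim's variables (baseline, fx, fy, cx, cy). The vectorized layout and the NaN handling for invalid disparities are assumptions of this illustration; depth information for the region can then be read from the z channel of the result.

```python
import numpy as np

def disparity_to_points(disparity, baseline, fx, fy, cx, cy):
    """Convert a disparity map into per-pixel 3D coordinates (xi, yi, zi);
    pixels with non-positive disparity are marked invalid with NaN."""
    h, w = disparity.shape
    px, py = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = np.full((h, w), np.nan, dtype=np.float64)
    valid = disparity > 0
    z[valid] = baseline * fx / disparity[valid]        # zi = baseline * fx / disparity
    x = (px - cx) * z / fx                             # xi = (px - cx) * zi / fx
    y = (py - cy) * z / fy                             # yi = (py - cy) * zi / fy
    return np.dstack([x, y, z])                        # H x W x 3 array of points
```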
16. The device of claim 10, wherein the first preset included angle is 45° or 135°.
17. The device of claim 10, wherein the second preset included angle is 45° or 135°.
18. The apparatus of claim 10, wherein the first camera and the second camera are both fisheye lenses.
19. A depth perception device, comprising:
a main body;
a binocular system arranged on the main body, the binocular system comprising a first camera device and a second camera device; and
a controller arranged on the main body, the controller comprising:
at least one processor, and
a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any of claims 1-9.
20. The apparatus of claim 19, wherein the first camera and the second camera are both fisheye lenses.
21. The device of claim 19, wherein the depth-sensing device is a drone.
CN201811473003.1A 2018-12-04 2018-12-04 Depth sensing method and device and depth sensing equipment Active CN109658451B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811473003.1A CN109658451B (en) 2018-12-04 2018-12-04 Depth sensing method and device and depth sensing equipment
PCT/CN2019/123072 WO2020114433A1 (en) 2018-12-04 2019-12-04 Depth perception method and apparatus, and depth perception device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811473003.1A CN109658451B (en) 2018-12-04 2018-12-04 Depth sensing method and device and depth sensing equipment

Publications (2)

Publication Number Publication Date
CN109658451A CN109658451A (en) 2019-04-19
CN109658451B true CN109658451B (en) 2021-07-30

Family

ID=66112775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811473003.1A Active CN109658451B (en) 2018-12-04 2018-12-04 Depth sensing method and device and depth sensing equipment

Country Status (2)

Country Link
CN (1) CN109658451B (en)
WO (1) WO2020114433A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658451B (en) * 2018-12-04 2021-07-30 深圳市道通智能航空技术股份有限公司 Depth sensing method and device and depth sensing equipment
CN110580724B (en) * 2019-08-28 2022-02-25 贝壳技术有限公司 Method and device for calibrating binocular camera set and storage medium
CN111986248B (en) * 2020-08-18 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Multi-vision sensing method and device and automatic driving automobile

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102721370A (en) * 2012-06-18 2012-10-10 南昌航空大学 Real-time mountain landslide monitoring method based on computer vision
CN202818442U (en) * 2012-05-25 2013-03-20 常州泰勒维克今创电子有限公司 All-digital panoramic camera
CN105277169A (en) * 2015-09-25 2016-01-27 安霸半导体技术(上海)有限公司 Image segmentation-based binocular range finding method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6756993B2 (en) * 2001-01-17 2004-06-29 The University Of North Carolina At Chapel Hill Methods and apparatus for rendering images using 3D warping techniques
US8593524B2 (en) * 2006-12-18 2013-11-26 Koninklijke Philips N.V. Calibrating a camera system
CN102750711B (en) * 2012-06-04 2015-07-29 清华大学 A kind of binocular video depth map calculating method based on Iamge Segmentation and estimation
CN103198473B (en) * 2013-03-05 2016-02-24 腾讯科技(深圳)有限公司 A kind of degree of depth drawing generating method and device
CN105787447A (en) * 2016-02-26 2016-07-20 深圳市道通智能航空技术有限公司 Method and system of unmanned plane omnibearing obstacle avoidance based on binocular vision
CN106709948A (en) * 2016-12-21 2017-05-24 浙江大学 Quick binocular stereo matching method based on superpixel segmentation
CN109658451B (en) * 2018-12-04 2021-07-30 深圳市道通智能航空技术股份有限公司 Depth sensing method and device and depth sensing equipment

Also Published As

Publication number Publication date
WO2020114433A1 (en) 2020-06-11
CN109658451A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN106529495B (en) Obstacle detection method and device for aircraft
CN109920011B (en) External parameter calibration method, device and equipment for laser radar and binocular camera
CN106960454B (en) Depth of field obstacle avoidance method and equipment and unmanned aerial vehicle
CN109902637B (en) Lane line detection method, lane line detection device, computer device, and storage medium
US10594941B2 (en) Method and device of image processing and camera
CN108323190B (en) Obstacle avoidance method and device and unmanned aerial vehicle
CN107636679B (en) Obstacle detection method and device
US20170293796A1 (en) Flight device and flight control method
US20180322658A1 (en) Camera Calibration
CN107980138B (en) False alarm obstacle detection method and device
US11173841B2 (en) Multicamera system for autonamous driving vehicles
CN109658451B (en) Depth sensing method and device and depth sensing equipment
CN110493488B (en) Video image stabilization method, video image stabilization device and computer readable storage medium
CN112907675B (en) Calibration method, device, system, equipment and storage medium of image acquisition equipment
CN109520480B (en) Distance measurement method and distance measurement system based on binocular stereo vision
CN111383264B (en) Positioning method, positioning device, terminal and computer storage medium
CN110458952B (en) Three-dimensional reconstruction method and device based on trinocular vision
CN105488766A (en) Fish-eye lens image correcting method and device
KR101482645B1 (en) Distortion Center Correction Method Applying 2D Pattern to FOV Distortion Correction Model
CN111882655A (en) Method, apparatus, system, computer device and storage medium for three-dimensional reconstruction
WO2014054124A1 (en) Road surface markings detection device and road surface markings detection method
KR20180131743A (en) Method and Apparatus for Stereo Matching of Wide-Angle Images using SIFT Flow
US10726528B2 (en) Image processing apparatus and image processing method for image picked up by two cameras
CN111243021A (en) Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium
CN113959398B (en) Distance measurement method and device based on vision, drivable equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518055 Guangdong city of Shenzhen province Nanshan District Xili Street Xueyuan Road No. 1001 Chi Yuen Building 9 layer B1

Applicant after: Shenzhen daotong intelligent Aviation Technology Co.,Ltd.

Address before: 518055 Guangdong city of Shenzhen province Nanshan District Xili Street Xueyuan Road No. 1001 Chi Yuen Building 9 layer B1

Applicant before: AUTEL ROBOTICS Co.,Ltd.

GR01 Patent grant