WO2021147548A1 - Three-dimensional reconstruction method, detection method and system for small obstacles, robot, and medium - Google Patents

Three-dimensional reconstruction method, detection method and system for small obstacles, robot, and medium

Info

Publication number
WO2021147548A1
WO2021147548A1 (PCT/CN2020/135028)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
image
reconstruction
small
camera
Prior art date
Application number
PCT/CN2020/135028
Other languages
English (en)
French (fr)
Inventor
刘勇
朱俊安
黄寅
郭璁
Original Assignee
深圳市普渡科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市普渡科技有限公司
Publication of WO2021147548A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 Registration of image sequences
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Definitions

  • The invention relates to the technical field of robots, and in particular to a three-dimensional reconstruction method, a detection method, a system, a robot, and a medium for small obstacles.
  • Obstacle detection is an important part of autonomous robot navigation; it has been studied extensively in robotics and vision and applied in some consumer-grade products. If small obstacles cannot be detected accurately, the safety of the robot's movement is affected. In practical environments, however, because small obstacles come in many varieties and are small in size, detecting them remains a challenge.
  • Obstacle detection in robot scenarios generally obtains three-dimensional information about the target (such as three-dimensional position and contour) through range sensors or algorithms.
  • Because of their small size, small obstacles require more accurate three-dimensional information during detection, which places higher demands on the measurement accuracy and resolution of sensors and algorithms.
  • Active sensors such as radar or sonar have high measurement accuracy but low resolution; depth cameras based on infrared structured light can achieve higher resolution but are susceptible to sunlight interference: when the interference is strong the image contains holes, and robustness is insufficient when the imaged target is small.
  • For passive sensing such as binocular stereo matching or sequential-image stereo matching, indiscriminate dense reconstruction is computationally expensive and struggles to reconstruct small targets, especially when there is considerable background noise.
  • Detection schemes and algorithms are also required to be applicable to various types of small obstacles. If obstacles are detected directly, the detection targets need to be defined in advance, which limits robustness.
  • The purpose of the present invention is to provide a three-dimensional reconstruction method, detection method, system, robot, and medium for small obstacles that improve the robustness and accuracy of the scheme and algorithm, thereby improving the detection accuracy for various small obstacles.
  • the present invention provides a method for three-dimensional reconstruction of small obstacles.
  • the method is used to perform three-dimensional reconstruction of small obstacles on the ground.
  • the method includes:
  • a fusion scheme is used to perform three-dimensional reconstruction on the target image to obtain a complete dense point cloud.
  • the binocular camera includes a left camera and a right camera
  • the ground image acquired by the binocular camera includes a left image and a right image
  • the ground image acquired by the structured light depth camera is a structured light depth map.
  • the three-dimensional reconstruction of the target image by the fusion scheme to obtain a complete dense point cloud specifically includes:
  • Sensor calibration: including internal-parameter and distortion calibration and external-parameter calibration of the binocular camera, and external-parameter calibration between the binocular camera and the structured light depth camera;
  • Distortion and epipolar correction: performing distortion and epipolar correction on the left image and the right image;
  • Data alignment: using the external parameters of the structured light depth camera to align the structured light depth map to the coordinate system of the left image and the right image to obtain a binocular depth map;
  • Sparse stereo matching: performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and fusing this depth with the structured light depth map and the binocular depth map to reconstruct a robust depth map.
  • the operation of sparse stereo matching specifically includes:
  • the hole mask is extracted, sparse stereo matching is performed on the image within the hole mask, and the disparity is obtained.
  • the distortion and epipolar correction operations specifically include:
  • the matching points used for sparse stereo matching are constrained to be aligned on a horizontal straight line.
  • the operation of sparse stereo matching further includes:
  • the robust depth map is divided into blocks, the depth map in each block is converted into the point cloud, and the point cloud is fitted to the ground with a plane model. If the point cloud in a block does not satisfy the plane assumption, the point cloud in that block is removed; otherwise the point cloud in that block is kept;
  • the retained blocks are verified a second time by a deep neural network; blocks that pass the secondary verification are region-grown based on the plane normal and centroid, segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
  • the obstacle region is mapped back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
  • depth maps in blocks that conform to the plane model but are not on the ground can be excluded through the secondary verification, thereby improving the overall accuracy of small-obstacle detection.
  • the present invention also provides a method for detecting small obstacles, which includes the three-dimensional reconstruction method for small obstacles as described above.
  • the present invention also provides a detection system for small obstacles, characterized in that the detection system includes the above-mentioned three-dimensional reconstruction method for small obstacles.
  • it further includes a robot body; the binocular camera and the structured light depth camera are arranged on the robot body and are tilted toward the ground.
  • a robot includes a processor and a memory, and a computer program is stored in the memory, and the processor is configured to execute the computer program to implement the above-mentioned three-dimensional reconstruction method for small obstacles.
  • a computer storage medium stores a computer program, and when the computer program is executed, the three-dimensional reconstruction method for small obstacles described above is carried out.
  • the ground coverage of the images collected by the binocular camera and the structured light depth camera is increased, and the completeness of small-obstacle recognition is significantly improved.
  • according to the three-dimensional reconstruction method, detection method, and detection system for small obstacles provided by the present invention, dense reconstruction is performed on the dominant background of the ground image, and a "background subtraction" detection method is used to separate and detect the point cloud of the small obstacles, which improves the robustness of the method as a whole.
  • FIG. 1 shows a schematic flowchart of a method for 3D reconstruction of small obstacles according to an embodiment of the present invention
  • FIG. 2 shows a schematic flowchart of a depth map processing method of a three-dimensional reconstruction method for small obstacles according to an embodiment of the present invention
  • FIG. 3 shows a schematic flowchart of sparse stereo matching of the three-dimensional reconstruction method for small obstacles according to an embodiment of the present invention.
  • the embodiments of the present invention relate to a three-dimensional reconstruction method, a detection method and a detection system of small obstacles.
  • the method 200 for three-dimensional reconstruction of small obstacles involved in this embodiment is used to perform three-dimensional reconstruction of small obstacles on the ground.
  • the method specifically includes:
  • the binocular camera includes a left camera and a right camera
  • the ground image acquired by the binocular camera includes a left image and a right image
  • the ground image acquired by the structured light depth camera is a structured light depth map.
  • the three-dimensional reconstruction of the target image by the fusion scheme to obtain a complete dense point cloud specifically includes:
  • Sensor calibration: including internal-parameter and distortion calibration and external-parameter calibration of the binocular camera, and external-parameter calibration between the binocular camera and the structured light depth camera;
  • Sparse stereo matching: performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and fusing this depth with the structured light depth map and the binocular depth map to reconstruct a robust depth map.
  • the operation of the sparse stereo matching (2054) specifically includes:
  • the hole mask is extracted, sparse stereo matching is performed on the image within the hole mask, and the disparity is obtained.
  • the operation of the distortion and epipolar correction (2052) specifically includes:
  • the matching points used for sparse stereo matching are constrained to be aligned on a horizontal straight line.
  • the operation of the sparse stereo matching (2054) further includes:
  • depth maps in blocks that conform to the plane model but are not on the ground can be excluded through the secondary verification, thereby improving the overall accuracy of small-obstacle detection.
  • the embodiment of the present invention also relates to a method for detecting small obstacles.
  • the detection method includes the three-dimensional reconstruction method for small obstacles as described above; the details of the three-dimensional reconstruction method are not repeated here.
  • the embodiment of the present invention also relates to a detection system for small obstacles.
  • the detection system includes the three-dimensional reconstruction method for small obstacles as described above; the details of the three-dimensional reconstruction method are not repeated here.
  • the detection system for small obstacles further includes a robot body; the binocular camera and the structured light depth camera are arranged on the robot body and are tilted toward the ground.
  • the ground coverage of the images collected by the binocular camera and the structured light depth camera is increased, and the completeness of small-obstacle recognition is significantly improved.
  • an embodiment of the present invention further provides a robot, which includes a processor, a memory, a network interface, and a database connected through a system bus.
  • the robot's processor is used to provide calculation and control capabilities.
  • the robot's memory includes non-volatile storage media and internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium.
  • the robot's network interface is used to communicate with external terminals through a network connection.
  • the computer program is executed by the processor to realize a three-dimensional reconstruction method of small obstacles.
  • the embodiment of the present invention also provides a computer storage medium; the computer storage medium stores a computer program, and when the computer program is executed, the three-dimensional reconstruction method for small obstacles described above is carried out.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a three-dimensional reconstruction method, a detection method, and a detection system for small obstacles. The method includes: acquiring ground images separately with a binocular camera and a structured light depth camera; performing dense reconstruction on the dominant background of the ground images, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients; extracting a three-dimensional point cloud through visual processing techniques, and separating and detecting the point cloud of the small obstacles with a "background subtraction" detection method; mapping the point cloud of the small obstacles onto the image and performing image segmentation to obtain a target image; and performing three-dimensional reconstruction on the target image with a fusion scheme to obtain a complete dense point cloud. The three-dimensional reconstruction method, detection method, and detection system for small obstacles provided by the present invention enable accurate three-dimensional reconstruction, and the method therefore guarantees the accuracy of small-obstacle detection.

Description

Three-dimensional reconstruction method, detection method and system for small obstacles, robot, and medium
This application is based on, and claims priority from, Chinese invention patent application No. 202010064033.8, filed on January 20, 2020 and entitled "Three-dimensional reconstruction method, detection method, and detection system for small obstacles".
Technical Field
The present invention relates to the technical field of robots, and in particular to a three-dimensional reconstruction method, a detection method, a system, a robot, and a medium for small obstacles.
Background
Obstacle detection is an important part of autonomous robot navigation; it has been studied extensively in robotics and computer vision and applied in some consumer-grade products. If small obstacles cannot be detected accurately, the safety of the robot's movement is affected. In practical environments, however, because small obstacles come in many varieties and are small in size, detecting them remains a challenge.
Summary of the Invention
Obstacle detection in robot scenarios generally obtains three-dimensional information about the target (such as three-dimensional position and contour) through range sensors or algorithms. On the one hand, because small obstacles are small, more accurate three-dimensional information is needed during detection, which places higher demands on the measurement accuracy and resolution of sensors and algorithms. Among active sensors, radar or sonar offers high measurement accuracy but low resolution; depth cameras based on infrared structured light can achieve higher resolution but are easily disturbed by sunlight, so that the image contains holes when the interference is strong and robustness is insufficient when the imaged target is small. For passive sensing such as binocular stereo matching or sequential-image stereo matching, indiscriminate dense reconstruction is computationally expensive and struggles to reconstruct small targets, especially when there is considerable background noise. On the other hand, small obstacles come in many varieties, so the detection scheme and algorithm must be applicable to all kinds of small obstacles. Detecting obstacles directly requires the detection targets to be defined in advance, which limits robustness.
In view of this, the purpose of the present invention is to provide a three-dimensional reconstruction method, a detection method, a system, a robot, and a medium for small obstacles that improve the robustness and accuracy of the scheme and algorithm and thereby improve the detection accuracy for various small obstacles.
To achieve the above purpose, embodiments of the present invention provide the following technical solutions:
The present invention provides a three-dimensional reconstruction method for small obstacles. The method is used to perform three-dimensional reconstruction of small obstacles on the ground, and the method includes:
acquiring ground images separately with a binocular camera and a structured light depth camera;
performing dense reconstruction on the dominant background of the ground images, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients;
extracting a three-dimensional point cloud through visual processing techniques, and separating and detecting the point cloud of the small obstacles with a "background subtraction" detection method;
mapping the point cloud of the small obstacles onto the image, and performing image segmentation to obtain a target image;
performing three-dimensional reconstruction on the target image with a fusion scheme to obtain a complete dense point cloud.
In this case, performing dense reconstruction on the dominant background of the ground images and separating and detecting the point cloud of the small obstacles with the "background subtraction" detection method improve the robustness of the method as a whole; and by mapping the small obstacles onto the image and performing image segmentation, the small obstacles can be segmented completely so that accurate three-dimensional reconstruction can be achieved. The method therefore guarantees the accuracy of small-obstacle detection.
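By way of illustration only, the following minimal Python sketch shows the idea behind the "background subtraction" detection on a point cloud: assuming the dominant ground background has already been fitted with a plane model, points close to that plane are discarded as background and the remaining points are kept as small-obstacle candidates. The function name, the plane representation, and the distance threshold are illustrative assumptions and do not come from the disclosure.

```python
import numpy as np

def subtract_background(points: np.ndarray, plane: np.ndarray, dist_thresh: float = 0.02) -> np.ndarray:
    """Separate small-obstacle candidate points from the dominant ground background.

    points : (N, 3) array of 3D points in the camera frame.
    plane  : (4,) array [a, b, c, d] with a*x + b*y + c*z + d = 0 and unit normal (a, b, c).
    Returns the points whose distance above the ground plane exceeds dist_thresh.
    """
    normal, d = plane[:3], plane[3]
    # Signed distance of every point to the fitted ground plane.
    dist = points @ normal + d
    # Background subtraction: drop points lying on (or below) the ground plane.
    return points[dist > dist_thresh]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ground = rng.uniform([-1, -1, 0], [1, 1, 0.005], size=(1000, 3))          # flat ground
    obstacle = rng.uniform([0.2, 0.2, 0.03], [0.3, 0.3, 0.08], size=(50, 3))  # small object
    cloud = np.vstack([ground, obstacle])
    plane = np.array([0.0, 0.0, 1.0, 0.0])   # assumed ground plane z = 0
    candidates = subtract_background(cloud, plane)
    print(candidates.shape)                   # roughly (50, 3): the obstacle points remain
```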
The binocular camera includes a left camera and a right camera, the ground image acquired by the binocular camera includes a left image and a right image, and the ground image acquired by the structured light depth camera is a structured light depth map.
Performing three-dimensional reconstruction on the target image with the fusion scheme to obtain a complete dense point cloud specifically includes:
sensor calibration, including internal-parameter and distortion calibration and external-parameter calibration of the binocular camera, and external-parameter calibration between the binocular camera and the structured light depth camera;
distortion and epipolar correction, performing distortion and epipolar correction on the left image and the right image;
data alignment, using the external parameters of the structured light depth camera to align the structured light depth map to the coordinate system of the left image and the right image to obtain a binocular depth map;
sparse stereo matching, performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and fusing this depth with the structured light depth map and the binocular depth map to reconstruct a robust depth map.
In this way, stereo matching over the whole image is unnecessary; sparse stereo matching is performed only on the hole regions of the structured light depth map, which significantly reduces the overall computational cost of the three-dimensional reconstruction and improves the robustness of the three-dimensional reconstruction of small obstacles.
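As a rough illustration of the sensor calibration, distortion and epipolar correction, and data alignment steps, the following Python/OpenCV sketch rectifies the left and right images and reprojects the structured light depth map into the left-camera frame. It assumes calibration results (intrinsics, distortion coefficients, rotation and translation between the cameras) are already available, it omits the rectification rotation when registering the depth map for brevity, and all function and variable names are illustrative assumptions rather than the exact procedure of the disclosure.

```python
import cv2
import numpy as np

def rectify_stereo_pair(img_l, img_r, K_l, D_l, K_r, D_r, R_rl, t_rl):
    """Distortion and epipolar correction of the left/right images of the binocular camera."""
    size = (img_l.shape[1], img_l.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, D_l, K_r, D_r, size, R_rl, t_rl)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, size, cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    return rect_l, rect_r, P1, P2, Q

def align_depth_to_left(depth_d, K_d, R_dl, t_dl, K_l, size_l):
    """Data alignment: reproject the structured-light depth map into the left-camera frame,
    producing a depth map registered to the left image."""
    h, w = depth_d.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_d.reshape(-1)
    valid = z > 0
    # Back-project valid depth pixels to 3D in the depth-camera frame.
    pix = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)], axis=0)[:, valid]
    pts_d = np.linalg.inv(K_d) @ pix * z[valid]
    # Transform into the left-camera frame and project with the left intrinsics.
    pts_l = R_dl @ pts_d + t_dl.reshape(3, 1)
    proj = K_l @ pts_l
    u_l = np.round(proj[0] / proj[2]).astype(int)
    v_l = np.round(proj[1] / proj[2]).astype(int)
    depth_l = np.zeros((size_l[1], size_l[0]), dtype=np.float32)
    inside = (u_l >= 0) & (u_l < size_l[0]) & (v_l >= 0) & (v_l < size_l[1])
    depth_l[v_l[inside], u_l[inside]] = pts_l[2, inside]
    return depth_l
```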
The sparse stereo matching operation specifically includes:
extracting a hole mask, and performing sparse stereo matching on the image within the hole mask to obtain disparity.
The distortion and epipolar correction operation specifically includes:
constraining the matching points used for sparse stereo matching to be aligned on a horizontal straight line.
In this case, the time required for the subsequent stereo matching is significantly reduced and the accuracy is greatly improved.
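A minimal sketch of the hole-mask sparse matching, assuming rectified grayscale images (so matching points lie on the same horizontal line), a structured light depth map registered to the left image in which a zero value marks a hole, and a known focal length f (in pixels) and baseline b (in meters). The SAD matching cost, window size, and disparity range are illustrative assumptions, not values prescribed by the disclosure.

```python
import numpy as np

def sparse_fill_holes(rect_l, rect_r, depth_sl, f, b, max_disp=64, win=5):
    """Fill the holes of the structured-light depth map by sparse stereo matching.

    rect_l, rect_r : rectified left/right grayscale images (H, W).
    depth_sl       : structured-light depth map registered to the left image; 0 marks a hole.
    Returns a fused depth map: structured-light depth where valid, matched depth in the holes.
    """
    rect_l = rect_l.astype(np.float32)
    rect_r = rect_r.astype(np.float32)
    h, w = rect_l.shape
    half = win // 2
    hole_mask = depth_sl <= 0                       # extract the hole mask
    fused = depth_sl.astype(np.float32)
    ys, xs = np.nonzero(hole_mask)
    for y, x in zip(ys, xs):
        if y < half or y >= h - half or x < max_disp + half or x >= w - half:
            continue
        patch_l = rect_l[y - half:y + half + 1, x - half:x + half + 1]
        best_d, best_cost = 0, np.inf
        # Epipolar constraint: search along the same horizontal line only.
        for d in range(1, max_disp):
            patch_r = rect_r[y - half:y + half + 1, x - d - half:x - d + half + 1]
            cost = np.abs(patch_l - patch_r).sum()  # SAD matching cost
            if cost < best_cost:
                best_cost, best_d = cost, d
        if best_d > 0:
            fused[y, x] = f * b / best_d            # disparity converted to depth
    return fused
```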
The sparse stereo matching operation further includes:
dividing the robust depth map into blocks, converting the depth map within each block into the point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud within a block does not satisfy the plane assumption, removing the point cloud within that block, otherwise keeping the point cloud within that block;
performing a secondary verification on the retained blocks with a deep neural network, region-growing the blocks that pass the secondary verification based on the plane normal and the centroid, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
obtaining, for every block that fails the secondary verification, the distance from the point cloud within the block to the ground region to which it belongs, and if the distance is greater than a threshold, segmenting it out as a suspected obstacle;
mapping the point cloud belonging to the suspected obstacle onto the image as seed points for region segmentation, and growing the seed points to extract a complete obstacle region;
mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
In this case, depth maps in blocks that conform to the plane model but are not on the ground can be excluded through the secondary verification, which improves the overall accuracy of small-obstacle detection.
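The block-wise ground fitting and distance test can be sketched as follows, under stated assumptions: the robust depth map is tiled into square blocks, each block is fitted to a plane by least squares, blocks that violate the plane assumption are dropped, the ground plane is estimated from the blocks accepted by an externally supplied check that stands in for the deep-neural-network secondary verification, and points in the remaining planar blocks are flagged as suspected obstacles when they lie farther than a threshold from that ground plane. Collapsing the region growing into a single fitted ground plane, as well as all names and thresholds, are simplifying assumptions.

```python
import numpy as np

def depth_block_to_points(depth_blk, K, u0, v0):
    """Back-project one depth-map block whose top-left pixel is (u0, v0) into 3D points."""
    h, w = depth_blk.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_blk.reshape(-1)
    x = (u.reshape(-1) + u0 - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) + v0 - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]

def fit_plane(points):
    """Least-squares plane through the points; returns (unit normal n, offset d) with n.p + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return normal, -float(normal @ centroid)

def find_suspected_obstacles(depth, K, block=32, plane_tol=0.01, obst_thresh=0.03,
                             is_ground=lambda pts, normal: True):
    """Block-wise ground fitting on the robust depth map.

    is_ground(points, normal) stands in for the secondary (deep neural network) verification;
    by default every planar block is accepted as ground, so no obstacles are reported."""
    planar = []                                        # blocks satisfying the plane assumption
    h, w = depth.shape
    for v0 in range(0, h - block + 1, block):
        for u0 in range(0, w - block + 1, block):
            pts = depth_block_to_points(depth[v0:v0 + block, u0:u0 + block], K, u0, v0)
            if len(pts) < block:                       # too few valid depths: skip the block
                continue
            n, d = fit_plane(pts)
            if np.abs(pts @ n + d).mean() > plane_tol:
                continue                               # plane assumption violated: drop the block
            planar.append((pts, is_ground(pts, n)))
    ground_pts = [pts for pts, ok in planar if ok]
    if not ground_pts:
        return None, []
    gn, gd = fit_plane(np.vstack(ground_pts))          # ground plane from the verified blocks
    suspected = []
    for pts, ok in planar:
        if ok:
            continue
        far = np.abs(pts @ gn + gd) > obst_thresh      # distance to the ground it belongs to
        if np.any(far):
            suspected.append(pts[far])                 # suspected small-obstacle points
    return (gn, gd), suspected
```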
The present invention also provides a detection method for small obstacles, which includes the three-dimensional reconstruction method for small obstacles described above.
The present invention also provides a detection system for small obstacles, characterized in that the detection system includes the three-dimensional reconstruction method for small obstacles described above.
The detection system further includes a robot body; the binocular camera and the structured light depth camera are arranged on the robot body and are tilted toward the ground.
A robot includes a processor and a memory, the memory stores a computer program, and the processor is configured to execute the computer program to implement the three-dimensional reconstruction method for small obstacles described above.
A computer storage medium stores a computer program, and when the computer program is executed, the three-dimensional reconstruction method for small obstacles described above is carried out.
In this case, the ground coverage of the images collected by the binocular camera and the structured light depth camera is increased, and the completeness of small-obstacle recognition is significantly improved.
According to the three-dimensional reconstruction method, detection method, and detection system for small obstacles provided by the present invention, dense reconstruction is performed on the dominant background of the ground images, and the "background subtraction" detection method is used to separate and detect the point cloud of the small obstacles, which improves the robustness of the method as a whole; and by mapping the small obstacles onto the image and performing image segmentation, the small obstacles can be segmented completely so that accurate three-dimensional reconstruction can be achieved. The method therefore guarantees the accuracy of small-obstacle detection.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the three-dimensional reconstruction method for small obstacles according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of the depth map processing of the three-dimensional reconstruction method for small obstacles according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of the sparse stereo matching of the three-dimensional reconstruction method for small obstacles according to an embodiment of the present invention.
Detailed Description of the Embodiments
Preferred embodiments of the present invention are described in detail below with reference to the drawings. In the following description, the same components are given the same reference signs and repeated descriptions are omitted. The drawings are only schematic, and the relative dimensions and shapes of components may differ from the actual ones.
Embodiments of the present invention relate to a three-dimensional reconstruction method, a detection method, and a detection system for small obstacles.
As shown in FIG. 1, the three-dimensional reconstruction method 200 for small obstacles according to this embodiment is used to perform three-dimensional reconstruction of small obstacles on the ground, and specifically includes:
201: acquiring ground images separately with the binocular camera and the structured light depth camera;
202: performing dense reconstruction on the dominant background of the ground images, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients;
203: extracting a three-dimensional point cloud through visual processing techniques, and separating and detecting the point cloud of the small obstacles with the "background subtraction" detection method;
204: mapping the point cloud of the small obstacles onto the image, and performing image segmentation to obtain a target image;
205: performing three-dimensional reconstruction on the target image with a fusion scheme to obtain a complete dense point cloud.
In this case, performing dense reconstruction on the dominant background of the ground images and separating and detecting the point cloud of the small obstacles with the "background subtraction" detection method improve the robustness of the method as a whole; and by mapping the small obstacles onto the image and performing image segmentation, the small obstacles can be segmented completely so that accurate three-dimensional reconstruction can be achieved. The method therefore guarantees the accuracy of small-obstacle detection.
In this embodiment, the binocular camera includes a left camera and a right camera, the ground image acquired by the binocular camera includes a left image and a right image, and the ground image acquired by the structured light depth camera is a structured light depth map.
As shown in FIG. 2, in this embodiment, performing three-dimensional reconstruction on the target image with the fusion scheme to obtain a complete dense point cloud specifically includes:
2051: sensor calibration, including internal-parameter and distortion calibration and external-parameter calibration of the binocular camera, and external-parameter calibration between the binocular camera and the structured light depth camera;
2052: distortion and epipolar correction, performing distortion and epipolar correction on the left image and the right image;
2053: data alignment, using the external parameters of the structured light depth camera to align the structured light depth map to the coordinate system of the left image and the right image to obtain a binocular depth map;
2054: sparse stereo matching, performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and fusing this depth with the structured light depth map and the binocular depth map to reconstruct a robust depth map.
In this way, stereo matching over the whole image is unnecessary; sparse stereo matching is performed only on the hole regions of the structured light depth map, which significantly reduces the overall computational cost of the three-dimensional reconstruction and improves the robustness of the three-dimensional reconstruction of small obstacles.
In this embodiment, the sparse stereo matching operation (2054) specifically includes:
extracting a hole mask, and performing sparse stereo matching on the image within the hole mask to obtain disparity.
In this embodiment, the distortion and epipolar correction operation (2052) specifically includes:
constraining the matching points used for sparse stereo matching to be aligned on a horizontal straight line.
In this case, the time required for the subsequent stereo matching is significantly reduced and the accuracy is greatly improved.
As shown in FIG. 3, in this embodiment, the sparse stereo matching operation (2054) further includes:
2055: dividing the robust depth map into blocks, converting the depth map within each block into the point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud within a block does not satisfy the plane assumption, removing the point cloud within that block, otherwise keeping the point cloud within that block;
2056: performing a secondary verification on the retained blocks with a deep neural network, region-growing the blocks that pass the secondary verification based on the plane normal and the centroid, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
2057: obtaining, for every block that fails the secondary verification, the distance from the point cloud within the block to the ground region to which it belongs, and if the distance is greater than a threshold, segmenting it out as a suspected obstacle;
2058: mapping the point cloud belonging to the suspected obstacle onto the image as seed points for region segmentation, and growing the seed points to extract a complete obstacle region;
2059: mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
In this case, depth maps in blocks that conform to the plane model but are not on the ground can be excluded through the secondary verification, which improves the overall accuracy of small-obstacle detection.
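A minimal sketch of the seed-point region growing of steps 2058 and 2059, under illustrative assumptions: the suspected-obstacle points are projected into the left image as seed pixels, the region is grown over 4-connected neighbours with similar gray values, and the grown mask is mapped back to a complete point cloud through the robust depth map. The projection model, the similarity threshold, and the helper names are assumptions rather than the exact procedure of the disclosure.

```python
from collections import deque
import numpy as np

def project_to_pixels(points, K):
    """Project suspected-obstacle 3D points (N, 3) to integer pixel coordinates (u, v): the seeds."""
    uvw = K @ points.T
    return np.round(uvw[:2] / uvw[2]).astype(int).T

def grow_obstacle_region(gray, seeds, gray_tol=12.0):
    """Grow the obstacle region from seed pixels over 4-connected, similar-intensity neighbours."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque()
    for u, v in seeds:
        if 0 <= u < w and 0 <= v < h and not mask[v, u]:
            mask[v, u] = True
            queue.append((u, v, float(gray[v, u])))
    while queue:
        u, v, ref = queue.popleft()
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nu, nv = u + du, v + dv
            if 0 <= nu < w and 0 <= nv < h and not mask[nv, nu] \
                    and abs(float(gray[nv, nu]) - ref) < gray_tol:
                mask[nv, nu] = True
                queue.append((nu, nv, float(gray[nv, nu])))
    return mask

def region_to_point_cloud(mask, depth, K):
    """Map the complete obstacle region back to a point cloud via the robust depth map."""
    vs, us = np.nonzero(mask & (depth > 0))
    z = depth[vs, us]
    x = (us - K[0, 2]) * z / K[0, 0]
    y = (vs - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)
```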
Embodiments of the present invention also relate to a detection method for small obstacles. The detection method includes the three-dimensional reconstruction method for small obstacles described above; the details of the three-dimensional reconstruction method are not repeated here.
Embodiments of the present invention also relate to a detection system for small obstacles. The detection system includes the three-dimensional reconstruction method for small obstacles described above; the details of the three-dimensional reconstruction method are not repeated here.
In this embodiment, the detection system for small obstacles further includes a robot body; the binocular camera and the structured light depth camera are arranged on the robot body and are tilted toward the ground.
In this case, the ground coverage of the images collected by the binocular camera and the structured light depth camera is increased, and the completeness of small-obstacle recognition is significantly improved.
Optionally, an embodiment of the present invention also provides a robot. The robot includes a processor, a memory, a network interface, and a database connected through a system bus. The processor of the robot provides computing and control capabilities. The memory of the robot includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The network interface of the robot communicates with external terminals through a network connection. When the computer program is executed by the processor, a three-dimensional reconstruction method for small obstacles is implemented.
An embodiment of the present invention also provides a computer storage medium. The computer storage medium stores a computer program, and when the computer program is executed, the three-dimensional reconstruction method for small obstacles described above is carried out.
A person of ordinary skill in the art can understand that all or part of the procedures in the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it can include the procedures of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided by the present invention may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The embodiments described above do not limit the scope of protection of this technical solution. Any modification, equivalent replacement, or improvement made within the spirit and principles of the above embodiments shall fall within the scope of protection of this technical solution.

Claims (11)

  1. A three-dimensional reconstruction method for small obstacles, characterized in that the method is used to perform three-dimensional reconstruction of small obstacles on the ground, and the method includes:
    acquiring ground images separately with a binocular camera and a structured light depth camera;
    performing dense reconstruction on the dominant background of the ground images, and using the binocular camera to perform sparse feature reconstruction at image positions with large gradients;
    extracting a three-dimensional point cloud through visual processing techniques, and separating and detecting the point cloud of the small obstacles with a "background subtraction" detection method;
    mapping the point cloud of the small obstacles onto the image, and performing image segmentation to obtain a target image;
    performing three-dimensional reconstruction on the target image with a fusion scheme to obtain a complete dense point cloud.
  2. The three-dimensional reconstruction method for small obstacles according to claim 1, characterized in that the binocular camera includes a left camera and a right camera, the ground image acquired by the binocular camera includes a left image and a right image, and the ground image acquired by the structured light depth camera is a structured light depth map.
  3. The three-dimensional reconstruction method for small obstacles according to claim 2, characterized in that performing three-dimensional reconstruction on the target image with the fusion scheme to obtain a complete dense point cloud specifically includes:
    sensor calibration, including internal-parameter and distortion calibration and external-parameter calibration of the binocular camera, and external-parameter calibration between the binocular camera and the structured light depth camera;
    distortion and epipolar correction, performing distortion and epipolar correction on the left image and the right image;
    data alignment, using the external parameters of the structured light depth camera to align the structured light depth map to the coordinate system of the left image and the right image to obtain a binocular depth map;
    sparse stereo matching, performing sparse stereo matching on the hole regions of the structured light depth map to obtain disparity, converting the disparity into depth, and fusing this depth with the structured light depth map and the binocular depth map to reconstruct a robust depth map.
  4. The three-dimensional reconstruction method for small obstacles according to claim 3, characterized in that the sparse stereo matching operation specifically includes:
    extracting a hole mask, and performing sparse stereo matching on the image within the hole mask to obtain disparity.
  5. The three-dimensional reconstruction method for small obstacles according to claim 3, characterized in that the distortion and epipolar correction operation specifically includes:
    constraining the matching points used for sparse stereo matching to be aligned on a horizontal straight line.
  6. The three-dimensional reconstruction method for small obstacles according to claim 3, characterized in that the sparse stereo matching operation further includes:
    dividing the robust depth map into blocks, converting the depth map within each block into the point cloud, and fitting the point cloud to the ground with a plane model; if the point cloud within a block does not satisfy the plane assumption, removing the point cloud within that block, otherwise keeping the point cloud within that block;
    performing a secondary verification on the retained blocks with a deep neural network, region-growing the blocks that pass the secondary verification based on the plane normal and the centroid, and segmenting out the three-dimensional plane equation and boundary point cloud of a large ground region;
    obtaining, for every block that fails the secondary verification, the distance from the point cloud within the block to the ground region to which it belongs, and if the distance is greater than a threshold, segmenting it out as a suspected obstacle;
    mapping the point cloud belonging to the suspected obstacle onto the image as seed points for region segmentation, and growing the seed points to extract a complete obstacle region;
    mapping the obstacle region back to a complete point cloud, completing the three-dimensional detection of the small obstacle.
  7. A detection method for small obstacles, characterized in that the detection method includes the three-dimensional reconstruction method for small obstacles according to any one of claims 1 to 6.
  8. A detection system for small obstacles, characterized in that the detection system includes the three-dimensional reconstruction method for small obstacles according to any one of claims 1 to 6.
  9. The detection system for small obstacles according to claim 8, characterized in that it further includes a robot body, the binocular camera and the structured light depth camera are arranged on the robot body, and the binocular camera and the structured light depth camera are tilted toward the ground.
  10. A robot, characterized in that the robot includes a processor and a memory, the memory stores a computer program, and the processor is configured to execute the computer program to implement the three-dimensional reconstruction method for small obstacles according to any one of claims 1 to 6.
  11. A computer storage medium, characterized in that the computer storage medium stores a computer program, and when the computer program is executed, the three-dimensional reconstruction method for small obstacles according to any one of claims 1 to 6 is carried out.
PCT/CN2020/135028 2020-01-20 2020-12-09 Three-dimensional reconstruction method, detection method and system for small obstacles, robot, and medium WO2021147548A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010064033.8A CN111260773B (zh) 2020-01-20 2020-01-20 Three-dimensional reconstruction method, detection method, and detection system for small obstacles
CN202010064033.8 2020-01-20

Publications (1)

Publication Number Publication Date
WO2021147548A1 true WO2021147548A1 (zh) 2021-07-29

Family

ID=70950938

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135028 WO2021147548A1 (zh) 2020-01-20 2020-12-09 Three-dimensional reconstruction method, detection method and system for small obstacles, robot, and medium

Country Status (2)

Country Link
CN (1) CN111260773B (zh)
WO (1) WO2021147548A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119718A (zh) * 2021-11-29 2022-03-01 福州大学 Binocular-vision matching and localization method for green vegetation fusing color and edge features
CN115880448A (zh) * 2022-12-06 2023-03-31 温州鹿城佳涵网络技术服务工作室 Three-dimensional measurement method, apparatus, device, and storage medium based on binocular imaging
CN117132973A (zh) * 2023-10-27 2023-11-28 武汉大学 Method and system for reconstructing and enhancing visualization of extraterrestrial planetary surface environments
CN117291930A (zh) * 2023-08-25 2023-12-26 中建三局第三建设工程有限责任公司 Three-dimensional reconstruction method and system based on target-object segmentation in an image sequence

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260773B (zh) * 2020-01-20 2023-10-13 深圳市普渡科技有限公司 Three-dimensional reconstruction method, detection method, and detection system for small obstacles
CN111260715B (zh) * 2020-01-20 2023-09-08 深圳市普渡科技有限公司 Depth map processing method, and small-obstacle detection method and system
CN112327326A (zh) * 2020-10-15 2021-02-05 深圳华芯信息技术股份有限公司 Two-dimensional map generation method, system, and terminal carrying three-dimensional obstacle information
CN112766061A (zh) * 2020-12-30 2021-05-07 罗普特科技集团股份有限公司 Multimodal unsupervised pixel-level semantic annotation method and system for pedestrians
CN112701060B (zh) * 2021-03-24 2021-08-06 高视科技(苏州)有限公司 Method and apparatus for inspecting bonding wires of semiconductor chips
CN113034490B (zh) * 2021-04-16 2023-10-10 北京石油化工学院 Method for monitoring safe stacking distances in a chemical warehouse
CN113297958A (zh) * 2021-05-24 2021-08-24 驭势(上海)汽车科技有限公司 Automated annotation method, apparatus, electronic device, and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (zh) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular-vision obstacle detection method based on three-dimensional point cloud segmentation
CN104484648A (zh) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewpoint obstacle detection method for robots based on contour recognition
US20180018805A1 (en) * 2016-07-13 2018-01-18 Intel Corporation Three dimensional scene reconstruction based on contextual analysis
CN108269281A (zh) * 2016-12-30 2018-07-10 无锡顶视科技有限公司 Obstacle avoidance method based on binocular vision
GB2569609A (en) * 2017-12-21 2019-06-26 Canon Kk Method and device for digital 3D reconstruction
CN111260773A (zh) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Three-dimensional reconstruction method, detection method, and detection system for small obstacles

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104361577B (zh) * 2014-10-20 2015-08-19 湖南戍融智能科技有限公司 Foreground detection method based on fusion of depth images and visible-light images
US10217225B2 (en) * 2016-06-01 2019-02-26 International Business Machines Corporation Distributed processing for producing three-dimensional reconstructions
CN106796728A (zh) * 2016-11-16 2017-05-31 深圳市大疆创新科技有限公司 Method, apparatus, computer system, and mobile device for generating a three-dimensional point cloud
CN109186586B (zh) * 2018-08-23 2022-03-18 北京理工大学 Simultaneous localization and hybrid map construction method for dynamic parking environments
CN110032962B (zh) * 2019-04-03 2022-07-08 腾讯科技(深圳)有限公司 Object detection method, apparatus, network device, and storage medium
CN110176032B (zh) * 2019-04-28 2021-02-26 暗物智能科技(广州)有限公司 Three-dimensional reconstruction method and apparatus
CN110595392B (zh) * 2019-09-26 2021-03-02 桂林电子科技大学 Cross-line structured-light binocular-vision scanning system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955920A (zh) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular-vision obstacle detection method based on three-dimensional point cloud segmentation
CN104484648A (zh) * 2014-11-27 2015-04-01 浙江工业大学 Variable-viewpoint obstacle detection method for robots based on contour recognition
US20180018805A1 (en) * 2016-07-13 2018-01-18 Intel Corporation Three dimensional scene reconstruction based on contextual analysis
CN108269281A (zh) * 2016-12-30 2018-07-10 无锡顶视科技有限公司 Obstacle avoidance method based on binocular vision
GB2569609A (en) * 2017-12-21 2019-06-26 Canon Kk Method and device for digital 3D reconstruction
CN111260773A (zh) * 2020-01-20 2020-06-09 深圳市普渡科技有限公司 Three-dimensional reconstruction method, detection method, and detection system for small obstacles

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114119718A (zh) * 2021-11-29 2022-03-01 福州大学 Binocular-vision matching and localization method for green vegetation fusing color and edge features
CN115880448A (zh) * 2022-12-06 2023-03-31 温州鹿城佳涵网络技术服务工作室 Three-dimensional measurement method, apparatus, device, and storage medium based on binocular imaging
CN115880448B (zh) * 2022-12-06 2024-05-14 西安工大天成科技有限公司 Three-dimensional measurement method and apparatus based on binocular imaging
CN117291930A (zh) * 2023-08-25 2023-12-26 中建三局第三建设工程有限责任公司 Three-dimensional reconstruction method and system based on target-object segmentation in an image sequence
CN117132973A (zh) * 2023-10-27 2023-11-28 武汉大学 Method and system for reconstructing and enhancing visualization of extraterrestrial planetary surface environments
CN117132973B (zh) * 2023-10-27 2024-01-30 武汉大学 Method and system for reconstructing and enhancing visualization of extraterrestrial planetary surface environments

Also Published As

Publication number Publication date
CN111260773B (zh) 2023-10-13
CN111260773A (zh) 2020-06-09

Similar Documents

Publication Publication Date Title
WO2021147548A1 (zh) Three-dimensional reconstruction method, detection method and system for small obstacles, robot, and medium
WO2021147545A1 (zh) Depth map processing method, small-obstacle detection method and system, robot, and medium
US10866101B2 (en) Sensor calibration and time system for ground truth static scene sparse flow generation
US11010924B2 (en) Method and device for determining external parameter of stereoscopic camera
US8521418B2 (en) Generic surface feature extraction from a set of range data
WO2018119744A1 (zh) False-alarm obstacle detection method and apparatus
CN107560592B (zh) Precise distance measurement method for a target tracked in linkage by a photoelectric tracker
CN106650701B (zh) Obstacle detection method and apparatus for indoor shadowed environments based on binocular vision
WO2018227576A1 (zh) Ground form detection method and system, unmanned aerial vehicle landing method, and unmanned aerial vehicle
CN102982334B (zh) Sparse disparity acquisition method based on target edge features and gray-level similarity
CN107885224A (zh) Unmanned aerial vehicle obstacle avoidance method based on trinocular stereo vision
CN111107337B (zh) Depth information completion method and apparatus, monitoring system, and storage medium
CN109292099B (zh) Unmanned aerial vehicle landing determination method, apparatus, device, and storage medium
KR20210090384A (ko) 3D object detection method and apparatus using a camera and a LiDAR sensor
KR101714224B1 (ko) Apparatus and method for three-dimensional image reconstruction based on sensor fusion
CN113205604A (zh) Drivable-region detection method based on a camera and LiDAR
Concha et al. Real-time localization and dense mapping in underwater environments from a monocular sequence
CN114004894A (zh) Method for determining the spatial relationship between a LiDAR and a binocular camera based on three calibration boards
CN110992463B (zh) Three-dimensional reconstruction method and system for power transmission line sag based on trinocular vision
CN117197333A (zh) Multi-view-vision-based space target reconstruction and pose estimation method and system
Parmehr et al. Automatic registration of optical imagery with 3d lidar data using local combined mutual information
Paudel et al. 2D-3D camera fusion for visual odometry in outdoor environments
CN116429087A (zh) Visual SLAM method adapted to dynamic environments
He et al. Planar constraints for an improved uav-image-based dense point cloud generation
CN114170281A (zh) Three-dimensional point cloud data acquisition and processing method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20915146

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20915146

Country of ref document: EP

Kind code of ref document: A1