CN111079765B - Sparse point cloud densification and pavement removal method based on depth map

Sparse point cloud densification and pavement removal method based on depth map

Info

Publication number
CN111079765B
CN111079765B (Application CN201911289613.0A)
Authority
CN
China
Prior art keywords
depth map
point cloud
sparse
points
ddm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911289613.0A
Other languages
Chinese (zh)
Other versions
CN111079765A (en)
Inventor
杨春平
陈明
马沪敏
张诗涵
杨心怡
彭真明
彭凌冰
赵学功
李伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201911289613.0A priority Critical patent/CN111079765B/en
Publication of CN111079765A publication Critical patent/CN111079765A/en
Application granted granted Critical
Publication of CN111079765B publication Critical patent/CN111079765B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/273Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion removing elements interfering with the pattern to be recognised
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a depth-map-based sparse point cloud densification and pavement removal method, relating to the field of laser radar data processing and application. The method first projects the sparse point cloud using the sensor parameter matrix to obtain a sparse depth map, obtains a rough dense depth map by linear interpolation based on Delaunay triangulation (DT), and then applies a mask extracted from the sparse depth map to obtain a fine dense depth map. RANSAC ground fitting is carried out on the sparse point cloud in the spatial coordinate system and outliers are removed; projection then yields a ground-free sparse depth map, from which a mask is generated and applied to the fine dense depth map to obtain a road-surface-free fine dense depth map. Finally, the two depth maps are back-projected to obtain dense point clouds with and without the road surface. The method can densify the point cloud data of interest and remove the road surface without combining other data, is fast, and better preserves the validity of the point cloud data information.

Description

Sparse point cloud densification and pavement removal method based on depth map
Technical Field
The invention relates to the field of laser radar (lidar) data processing, applied to environment perception and target detection in autonomous driving.
Background
In the field of autonomous driving, lidar is widely used for its precise localization of target objects. Lidar is an active sensor: it receives laser pulses reflected by objects to detect them, provides accurate distance information at a relatively high frequency, and is robust to ambient light intensity and object color, compensating for the weaknesses of conventional cameras in these respects; it is therefore an important component for environment perception and target detection. Lidar has long been used for simultaneous localization and mapping (SLAM) of robots. In recent years, with the advent of multi-beam lidars, the perception capability of lidar has improved greatly, and obstacles can be perceived accurately in various weather conditions and complex traffic scenes. However, lidar data is scattered and sparse and is difficult to use directly in subsequent algorithms, so densifying the point cloud data is work of practical significance. In target detection, the target is often located on a road surface, and the road surface points in the point cloud data are regarded as interference that must be removed. Densifying the point cloud first and then removing the ground points increases the computational load and running time, while removing the ground points first and then densifying reduces the accuracy of the densified point cloud. How to densify a sparse point cloud and remove the road surface while ensuring accuracy and limiting computation is thus a challenging task.
Existing point cloud densification methods are mainly classified into image-based methods, point-cloud-based methods, and methods based on both images and point clouds. Rashidi, A. et al propose a method that generates a disparity map by binocular stereo matching, converts the disparity map into a dense depth map, and finally generates a dense point cloud. Similarly, Zhenfeng Shao et al propose a dense point cloud generation algorithm based on multi-view low-altitude remote sensing images. Kai-Han Lo et al propose combining color images and sparse depth maps with a joint trilateral filter to densify the depth maps. However, these algorithms derive their data mainly from images or combine images with the point cloud; the generated data is not robust to illumination and color changes, which makes it difficult to exploit the advantages of lidar in subsequent processing. Traditional point-cloud-based densification methods mainly sample and expand the original point cloud directly in the spatial coordinate system, and suffer from drawbacks such as a large computational load and poor real-time performance.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for quickly densifying a sparse point cloud and removing the road surface in an autonomous driving environment. The sparse point cloud is projected into a sparse depth map and densified by DT linear interpolation to obtain an initial dense depth map; in parallel, RANSAC ground removal and outlier filtering are applied to the sparse point cloud, which is then converted into a ground-free sparse depth map. Masks generated from the two sparse depth maps are used to refine the initial dense depth map, and finally back projection yields dense point clouds with and without the road surface. The method can quickly densify point clouds and filter out the road surface, well preserves the integrity of the point cloud data, and effectively addresses the problem that data acquired by existing lidars is too sparse; it can serve as an effective data preprocessing means for autonomous driving and as a basis for further 3D reconstruction.
In order to solve the above technical problems and achieve the above object, the present invention adopts the following technical solutions.
A sparse point cloud densification and pavement removal method based on a depth map is characterized by comprising the following steps:
step 1: inputting the original sparse point cloud C_points, converting the point cloud coordinates using the intrinsic-extrinsic parameter matrix T obtained from sensor calibration, and extracting a sparse depth map DM_origin; turning to step 2;
step 2: performing Delaunay triangulation linear interpolation on the sparse depth map DM_origin extracted in step 1 to obtain an initial rough dense depth map DDM_wild; turning to step 3;
step 3: generating a mask_detailed from the sparse depth map DM_origin extracted in step 1 and applying it to DDM_wild to obtain a fine dense depth map DDM_detailed; turning to step 4;
step 4: performing RANSAC ground fitting on the original sparse point cloud C_points of step 1 and removing the ground, then filtering outliers and extracting a ground-free sparse depth map DM_ng; turning to step 5;
step 5: generating a mask_ng from the ground-free sparse depth map DM_ng extracted in step 4 and applying it to DDM_detailed to obtain a ground-free fine dense depth map DDM_ng; turning to step 6;
step 6: back-projecting the fine dense depth map DDM_detailed obtained in step 3 and the ground-free fine dense depth map DDM_ng obtained in step 5 using the intrinsic-extrinsic parameter matrix T to obtain the dense point clouds DC_points and DC_points_ng.
In the above technical solution, the step 1 specifically includes the following steps:
step 1.1: inputting the original sparse point cloud C_points, whose data format is [X, Y, Z], i.e. the spatial coordinate data of the point cloud, and calibrating the sensor (refer to the prior art document "A toolbox for automatic calibration of range and camera sensors using a single shot") to obtain the intrinsic-extrinsic parameter matrix T;
step 1.2: carrying out coordinate projection conversion on the sparse point cloud C_points with the intrinsic-extrinsic parameter matrix T to obtain C_img (the point cloud data is thereby converted from the spatial coordinate system format to the image coordinate system format) with data format [x·X, y·X, X], and adjusting the format to obtain C'_img with data format [x, y, X], i.e. the coordinates x, y of C_points in image space together with the X coordinate (depth) in the spatial coordinate system; the transformation formula is as follows:
C_img = T · C_points
step 1.3: filtering out the points of C'_img outside the image boundary (h, w), rounding x and y of C'_img, sorting the points by depth X from small to large, and removing points with duplicate (x, y) coordinates to obtain the corrected sparse point cloud C''_img;
step 1.4: constructing an all-zero matrix I_0 of size (h, w) and filling its positions with the corresponding depth X according to the data [x, y, X] of the corrected sparse point cloud C''_img to obtain the sparse depth map DM_origin.
In the above technical solution, the specific steps of step 2 are as follows:
step 2.1: triangulating the sparse depth map DM_origin using Delaunay triangulation to produce an isolated triangular mesh set Δ = {δ_1, …, δ_n}, each triangle δ comprising three coordinates n_1, n_2 and n_3; here n_1, n_2 and n_3 are in fact the indices of the depth-map coordinates in the triangular mesh, indicating that the interpolation operation is to be performed inside the triangle formed by these three coordinates;
step 2.2: carrying out linear interpolation using the depths X at the three coordinates of the triangle δ to estimate the depths at the points inside the triangle, finally obtaining the rough dense depth map DDM_wild.
In the above technical solution, the step 3 specifically comprises the following steps:
step 3.1: constructing an all-zero matrix mask_0 of size (h, w);
step 3.2: for each column of the sparse depth map DM_origin obtained in step 1, searching from top to bottom for the coordinate index u of the first point whose depth X is not 0, and setting the u-th to the last element of that column of mask_0 to 1 to obtain mask'_0;
step 3.3: performing morphological horizontal dilation on mask'_0 with a horizontal rectangular structuring element to obtain mask_detailed;
step 3.4: multiplying mask_detailed with DDM_wild to obtain the fine dense depth map DDM_detailed.
In the above technical solution, the step 4 specifically includes the following steps:
step 4.1: inputting the original sparse point cloud C_points, performing ground fitting with the RANSAC algorithm, and removing the ground to obtain the ground-free sparse point cloud C_points_ng;
step 4.2: taking, by Euclidean distance, the m points around each point of the ground-free sparse point cloud C_points_ng and calculating the average distance d̄ between each point and its m surrounding points; calculating the deviation of the m points relative to d̄ and marking the points whose deviation exceeds the threshold ρ as outliers to be removed, obtaining C_points_ng_d;
step 4.3: generating the ground-free sparse depth map DM_ng from C_points_ng_d by the same steps as in step 1.
In the above technical solution, the step 5 specifically includes the following steps:
step 5.1: constructing an all-zero matrix mask_01 of size (h, w);
step 5.2: for each column of the ground-free sparse depth map DM_ng obtained in step 4, searching from top to bottom for the coordinate indices u and v of the first point whose depth X is not 0 and of the last non-zero point, and setting the u-th to v-th elements of that column of mask_01 to 1 to obtain mask'_01;
step 5.3: performing a morphological vertical opening on mask'_01 with an elliptical structuring element to obtain mask_ng;
step 5.4: multiplying mask_ng with DDM_detailed to obtain the ground-free fine dense depth map DDM_ng.
In the above technical solution, the specific steps of the back projection in step 6 are as follows:
step 6.1: for the fine dense depth map DDM_detailed obtained in step 3 and the ground-free fine dense depth map DDM_ng obtained in step 5, respectively finding the non-zero coordinates and the depths X at those coordinates, and constructing a point cloud C in the format [x, y, X] with shape n×3, n being the number of non-zero points;
step 6.2: multiplying C by the inverse of the sensor intrinsic-extrinsic parameter matrix T to obtain the dense point cloud DC.
Since the invention adopts the above technical solutions, it has the following beneficial effects:
the method can densify point cloud data and remove the road surface without combining other data; RANSAC ground fitting and removal is performed on the sparse point cloud and densification is performed on the depth map, so the method is fast and better preserves the validity of the point cloud data information.
Drawings
FIG. 1 is a design flow of a sparse point cloud densification and pavement removal method based on a depth map;
FIG. 2 is an input raw point cloud;
FIG. 3 is a dense point cloud and its corresponding depth map;
FIG. 4 is a dense point cloud with the road surface removed and its corresponding depth map;
Mask2 in FIG. 1 is mask_ng and Mask1 is mask_detailed.
Detailed Description
The present invention will be described in further detail with reference to test examples and specific embodiments. It should be understood that the scope of the above-described subject matter is not limited to the following examples, and any techniques implemented based on the disclosure of the present invention are within the scope of the present invention.
The invention provides a depth-map-based sparse point cloud densification and pavement removal method which can densify point cloud data and remove the road surface without combining other data, with high speed and good preservation of the validity of the point cloud data information. The overall algorithm design flow is shown in FIG. 1 and comprises the following steps:
step 1: inputting the original sparse point cloud C_points, converting the point cloud coordinates using the intrinsic-extrinsic parameter matrix T obtained from sensor calibration, and extracting a sparse depth map DM_origin; turning to step 2;
step 2: performing Delaunay triangulation linear interpolation on the sparse depth map DM_origin extracted in step 1 to obtain an initial rough dense depth map DDM_wild; turning to step 3;
step 3: generating a mask_detailed from the sparse depth map DM_origin extracted in step 1 and applying it to DDM_wild to obtain a fine dense depth map DDM_detailed; turning to step 4;
step 4: performing RANSAC ground fitting on the original sparse point cloud C_points of step 1 and removing the ground, then filtering outliers and extracting a ground-free sparse depth map DM_ng; turning to step 5;
step 5: generating a mask_ng from the ground-free sparse depth map DM_ng extracted in step 4 and applying it to DDM_detailed to obtain a ground-free fine dense depth map DDM_ng; turning to step 6;
step 6: back-projecting the fine dense depth map DDM_detailed obtained in step 3 and the ground-free fine dense depth map DDM_ng obtained in step 5 using the intrinsic-extrinsic parameter matrix T to obtain the dense point clouds DC_points and DC_points_ng.
In the above technical solution, the step 1 specifically includes the following steps:
step 1.1: inputting the original sparse point cloud C_points, whose data format is [X, Y, Z], i.e. the spatial coordinate data of the point cloud, and calibrating the sensor to obtain the intrinsic-extrinsic parameter matrix T;
step 1.2: carrying out coordinate projection conversion on the sparse point cloud C_points with the intrinsic-extrinsic parameter matrix T to obtain C_img with data format [x·X, y·X, X], and adjusting the format to obtain C'_img with data format [x, y, X], i.e. the coordinates x, y of C_points in image space together with the X coordinate (depth) in the spatial coordinate system; the transformation formula is as follows:
C_img = T · C_points
step 1.3: filtering out the points of C'_img outside the image boundary (h, w), rounding x and y of C'_img, sorting the points by depth X from small to large, and removing points with duplicate (x, y) coordinates to obtain the corrected sparse point cloud C''_img;
step 1.4: constructing an all-zero matrix I_0 of size (h, w) and filling its positions with the corresponding depth X according to the data [x, y, X] of the corrected sparse point cloud C''_img to obtain the sparse depth map DM_origin.
In the above technical solution, the specific steps of step 2 are as follows:
step 2.1: triangulating the sparse depth map DM_origin using Delaunay triangulation to produce an isolated triangular mesh set Δ = {δ_1, …, δ_n}, each triangle δ comprising three coordinates n_k (k = 1, 2, 3);
step 2.2: carrying out linear interpolation using the depths X at the three coordinates of the triangle δ to estimate the depths at the points inside the triangle, finally obtaining the rough dense depth map DDM_wild.
The specific steps of the linear interpolation are as follows: let point P be a point inside the triangle δ = P1P2P3; the values at P1, P2 and P3 are known, so the value p at point P can be expressed by equation (1.1):

p = (1 - u - v)P1 + uP2 + vP3    (1.1)

Relating the values to the coordinates of the points, with the coordinates (x1, y1), (x2, y2), (x3, y3) of P1, P2, P3 and (x, y) of P all known, the following system can be listed:

x = (1 - u - v)x1 + ux2 + vx3    (1.2)
y = (1 - u - v)y1 + uy2 + vy3

Solving (1.2) for the parameters u and v and substituting them into (1.1) gives the value at point P.
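As a small illustration of this solve (the helper name is hypothetical and not part of the patent; p, p1, p2, p3 are 2-D pixel coordinates as numpy arrays and d1, d2, d3 the known depths at the triangle vertices), equation (1.2) reduces to a 2 x 2 linear system:

    import numpy as np

    def interpolate_in_triangle(p, p1, p2, p3, d1, d2, d3):
        """Solve (1.2) for u, v and evaluate (1.1) at pixel p inside triangle P1P2P3."""
        # (1.2) rearranged: [p2 - p1 | p3 - p1] @ [u, v]^T = p - p1
        A = np.column_stack([p2 - p1, p3 - p1])
        u, v = np.linalg.solve(A, p - p1)
        return (1 - u - v) * d1 + u * d2 + v * d3      # equation (1.1)

For example, at the centroid of the triangle the solve gives u = v = 1/3, so the three vertex depths are simply averaged.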
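At the level of the whole depth map, steps 2.1 and 2.2 together amount to scattered linear interpolation over a Delaunay triangulation, which scipy provides directly; a minimal sketch (the function name and the zero fill value outside the convex hull are choices of this description, not of the patent):

    import numpy as np
    from scipy.interpolate import griddata

    def densify_dt(dm_origin):
        """DT linear interpolation of a sparse depth map, producing DDM_wild."""
        h, w = dm_origin.shape
        ys, xs = np.nonzero(dm_origin)             # pixels with a known depth X
        grid_y, grid_x = np.mgrid[0:h, 0:w]        # all pixel coordinates
        return griddata((ys, xs), dm_origin[ys, xs], (grid_y, grid_x),
                        method='linear', fill_value=0.0)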
In the above technical solution, the step 3 specifically comprises the following steps:
step 3.1: constructing an all-zero matrix mask_0 of size (h, w);
step 3.2: for each column of the sparse depth map DM_origin obtained in step 1, searching from top to bottom for the index u of the first point whose depth X is not 0 (the first non-zero coordinate of each column), and setting the u-th to the last element of that column of mask_0 to 1 to obtain mask'_0;
step 3.3: performing morphological horizontal dilation on mask'_0 with a rectangular structuring element of height 2 and width 10 to obtain mask_detailed;
step 3.4: multiplying mask_detailed with DDM_wild to obtain the fine dense depth map DDM_detailed.
In the above technical solution, the step 4 specifically includes the following steps:
step 4.1: inputting the original sparse point cloud C_points, performing ground fitting with the RANSAC algorithm, and removing the ground to obtain C_points_ng; in this example the RANSAC ground fitting is computed with functions from the PCL library, and the fitting parameter r is set to 0.1;
step 4.2: taking, by Euclidean distance, the nearest 50 points around each point of C_points_ng and calculating the average distance d̄ between each point and those 50 points; calculating the deviation of the 50 points relative to d̄ and marking the points whose deviation exceeds 0.1 as outliers to be removed, obtaining C_points_ng_d;
step 4.3: generating the ground-free sparse depth map DM_ng from C_points_ng_d by the same steps as in step 1.
In the above technical solution, the step 5 specifically includes the following steps:
step 5.1: constructing an all-zero matrix mask_01 of size (h, w);
step 5.2: for each column of the ground-free sparse depth map DM_ng obtained in step 4, searching from top to bottom for the coordinate indices u and v of the first point whose depth X is not 0 and of the last non-zero point, and setting the u-th to v-th elements of that column of mask_01 to 1 to obtain mask'_01;
step 5.3: performing a morphological vertical opening on mask'_01 with an elliptical structuring element inscribed in a rectangle of height 15 and width 2 to obtain mask_ng;
step 5.4: multiplying mask_ng with DDM_detailed to obtain the ground-free fine dense depth map DDM_ng.
In the above technical solution, the specific steps of the back projection in step 6 are as follows:
step 6.1: for the fine dense depth map DDM_detailed obtained in step 3 and the ground-free fine dense depth map DDM_ng obtained in step 5, respectively finding the non-zero coordinates and the depths X at those coordinates, and constructing a point cloud C in the format [x, y, X] with shape n×3, n being the number of non-zero points;
step 6.2: multiplying C by the inverse of the intrinsic-extrinsic parameter matrix T to obtain the dense point cloud DC.

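Finally, a sketch of the back projection of step 6, under the same square homogeneous T assumed in the projection sketch above; each [x, y, X] row is rescaled to [x·X, y·X, X, 1] to undo the normalization of step 1.2 before multiplying by the inverse of T:

    import numpy as np

    def back_project(ddm, T):
        """Back-project a dense depth map to a dense point cloud DC."""
        ys, xs = np.nonzero(ddm)                   # step 6.1: non-zero coordinates
        depth = ddm[ys, xs]                        # depth X at those coordinates
        homog = np.stack([xs * depth, ys * depth,
                          depth, np.ones_like(depth)], axis=1)
        return (np.linalg.inv(T) @ homog.T).T[:, :3]   # step 6.2: multiply by T^-1
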
Claims (7)

1. A sparse point cloud densification and pavement removal method based on a depth map is characterized by comprising the following steps:
step 1: inputting an original sparse point cloud C_points, converting the point cloud coordinates using an intrinsic-extrinsic parameter matrix T obtained from sensor calibration, and extracting a sparse depth map DM_origin; turning to step 2;
step 2: performing Delaunay triangulation linear interpolation on the sparse depth map DM_origin extracted in step 1 to obtain an initial rough dense depth map DDM_wild; turning to step 3;
step 3: generating a mask_detailed from the sparse depth map DM_origin extracted in step 1 and applying it to the initial rough dense depth map DDM_wild to obtain a fine dense depth map DDM_detailed; turning to step 4;
step 4: performing RANSAC ground fitting on the original sparse point cloud C_points of step 1 and removing the ground, then filtering outliers and extracting a ground-free sparse depth map DM_ng; turning to step 5;
step 5: generating a mask_ng from the ground-free sparse depth map DM_ng extracted in step 4 and applying it to the fine dense depth map DDM_detailed to obtain a ground-free fine dense depth map DDM_ng; turning to step 6;
step 6: back-projecting the fine dense depth map DDM_detailed obtained in step 3 and the ground-free fine dense depth map DDM_ng obtained in step 5 using the intrinsic-extrinsic parameter matrix T to obtain dense point clouds DC_points and DC_points_ng.
2. The depth-map-based sparse point cloud densification and pavement removal method of claim 1, wherein step 1 comprises the following specific steps:
step 1.1: inputting the original sparse point cloud C_points, whose data format is [X, Y, Z], i.e. the spatial coordinate data of the point cloud, and calibrating the sensor to obtain the intrinsic-extrinsic parameter matrix T;
step 1.2: carrying out coordinate projection conversion on the sparse point cloud C_points with the intrinsic-extrinsic parameter matrix T to obtain C_img, and adjusting the format to obtain C'_img with data format [x, y, X], i.e. the coordinates x, y of C_points in image space and the X coordinate in the spatial coordinate system; the transformation formula is as follows:
C_img = T · C_points
step 1.3: filtering out the points of C'_img outside the image boundary (h, w), rounding x and y of C'_img, sorting the points by depth X from small to large, and removing points with duplicate (x, y) coordinates to obtain the corrected sparse point cloud C''_img;
step 1.4: constructing an all-zero matrix I_0 of size (h, w) and filling its positions with the corresponding depth X according to the data [x, y, X] of the corrected sparse point cloud C''_img to obtain the sparse depth map DM_origin.
3. The depth-map-based sparse point cloud densification and pavement removal method of claim 1, wherein step 2 comprises the following specific steps:
step 2.1: triangulating the sparse depth map DM_origin using Delaunay triangulation to produce an isolated triangular mesh set Δ = {δ_1, …, δ_n}, each triangle δ comprising three coordinates n_1, n_2 and n_3; here n_1, n_2 and n_3 are in fact the indices of the depth-map coordinates in the triangular mesh, indicating that the interpolation operation is to be performed inside the triangle formed by these three coordinates;
step 2.2: carrying out linear interpolation using the depths X at the three coordinates of the triangle δ to estimate the depths at the points inside the triangle, finally obtaining the rough dense depth map DDM_wild.
4. The depth-map-based sparse point cloud densification and pavement removal method of claim 1, wherein step 3 comprises the following specific steps:
step 3.1: constructing an all-zero matrix mask_0 of size (h, w);
step 3.2: for each column of the sparse depth map DM_origin obtained in step 1, searching from top to bottom for the coordinate index u of the first point whose depth X is not 0, and setting the u-th to the last element of that column of mask_0 to 1 to obtain mask'_0;
step 3.3: performing morphological horizontal dilation on mask'_0 with a horizontal rectangular structuring element to obtain mask_detailed;
step 3.4: multiplying mask_detailed with DDM_wild to obtain the fine dense depth map DDM_detailed.
5. The depth-map-based sparse point cloud densification and pavement removal method of claim 2, wherein step 4 comprises the following specific steps:
step 4.1: inputting the original sparse point cloud C_points, performing ground fitting with the RANSAC algorithm, and removing the ground to obtain the ground-free sparse point cloud C_points_ng;
step 4.2: taking, by Euclidean distance, the m points around each point of the ground-free sparse point cloud C_points_ng and calculating the average distance d̄ between each point and its m surrounding points; calculating the deviation of the m points relative to d̄ and marking the points whose deviation exceeds the threshold ρ as outliers to be removed, obtaining C_points_ng_d;
step 4.3: generating the ground-free sparse depth map DM_ng from C_points_ng_d by the same steps as in step 1.
6. The depth-map-based sparse point cloud densification and pavement removal method of claim 1, wherein step 5 comprises the following specific steps:
step 5.1: constructing an all-zero matrix mask_01 of size (h, w);
step 5.2: for each column of the ground-free sparse depth map DM_ng obtained in step 4, searching from top to bottom for the coordinate indices u and v of the first point whose depth X is not 0 and of the last non-zero point, and setting the u-th to v-th elements of that column of mask_01 to 1 to obtain mask'_01;
step 5.3: performing a morphological vertical opening on mask'_01 with an elliptical structuring element to obtain mask_ng;
step 5.4: multiplying mask_ng with DDM_detailed to obtain the ground-free fine dense depth map DDM_ng.
7. The depth-map-based sparse point cloud densification and pavement removal method of claim 1, wherein the back projection in step 6 comprises the following specific steps:
step 6.1: for the fine dense depth map DDM_detailed obtained in step 3 and the ground-free fine dense depth map DDM_ng obtained in step 5, respectively finding the non-zero coordinates and the depths X at those coordinates, and constructing a point cloud C in the format [x, y, X] with shape n×3, n being the number of non-zero points;
step 6.2: multiplying C by the inverse of the sensor intrinsic-extrinsic parameter matrix T to obtain the dense point cloud DC.
CN201911289613.0A 2019-12-13 2019-12-13 Sparse point cloud densification and pavement removal method based on depth map Active CN111079765B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911289613.0A CN111079765B (en) 2019-12-13 2019-12-13 Sparse point cloud densification and pavement removal method based on depth map

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911289613.0A CN111079765B (en) 2019-12-13 2019-12-13 Sparse point cloud densification and pavement removal method based on depth map

Publications (2)

Publication Number Publication Date
CN111079765A CN111079765A (en) 2020-04-28
CN111079765B true CN111079765B (en) 2022-07-01

Family

ID=70314640

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911289613.0A Active CN111079765B (en) 2019-12-13 2019-12-13 Sparse point cloud densification and pavement removal method based on depth map

Country Status (1)

Country Link
CN (1) CN111079765B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860321B (en) * 2020-07-20 2023-12-22 浙江光珀智能科技有限公司 Obstacle recognition method and system
CN112927251B (en) * 2021-03-26 2022-10-14 中国科学院自动化研究所 Morphology-based scene dense depth map acquisition method, system and device
CN113240813B (en) * 2021-05-12 2023-05-16 北京三快在线科技有限公司 Three-dimensional point cloud information determining method and device
CN114545440A (en) * 2022-01-05 2022-05-27 浙江零跑科技股份有限公司 Method for generating depth map based on solid-state laser radar point cloud
CN114463409B (en) 2022-02-11 2023-09-26 北京百度网讯科技有限公司 Image depth information determining method and device, electronic equipment and medium
CN114627351B (en) * 2022-02-18 2023-05-16 电子科技大学 Fusion depth estimation method based on vision and millimeter wave radar

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346608A (en) * 2013-07-26 2015-02-11 株式会社理光 Sparse depth map densing method and device
CN105021124A (en) * 2015-04-16 2015-11-04 华南农业大学 Planar component three-dimensional position and normal vector calculation method based on depth map
CN104992441A (en) * 2015-07-08 2015-10-21 华中科技大学 Real human body three-dimensional modeling method specific to personalized virtual fitting
EP3349176A1 (en) * 2017-01-17 2018-07-18 Facebook, Inc. Three-dimensional scene reconstruction from set of two-dimensional images for consumption in virtual reality
CN109448041A (en) * 2018-10-29 2019-03-08 重庆金山医疗器械有限公司 A kind of capsule endoscope 3-dimensional reconstruction method and system
CN110379022A (en) * 2019-07-22 2019-10-25 西安因诺航空科技有限公司 Point cloud and grid method of partition in a kind of landform three-dimensional reconstruction system of taking photo by plane

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DeepPerimeter: Indoor Boundary Estimation from Posed Monocular Sequences; Ameya Phalak et al.; arXiv; 2019-07-01; pp. 1-11 *
Just-in-Time Reconstruction: Inpainting Sparse Maps using Single View Depth Predictors as Priors; Chamara Saroj Weerasekera et al.; 2018 IEEE International Conference on Robotics and Automation; 2018-09-13; pp. 4977-4984 *
Design and Implementation of a Monocular Multi-view 3D Reconstruction Algorithm; 成祺 (Cheng Qi); China Master's Theses Full-text Database, Information Science and Technology; 2019-09-15; pp. I138-767 *
Dynamic Target Detection and Tracking with Vehicle-mounted Multi-sensor Fusion; 曾湘峰 (Zeng Xiangfeng); China Master's Theses Full-text Database; 2018-04-15; pp. C035-97 *

Also Published As

Publication number Publication date
CN111079765A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN111079765B (en) Sparse point cloud densification and pavement removal method based on depth map
CN110264567B (en) Real-time three-dimensional modeling method based on mark points
CN110264416B (en) Sparse point cloud segmentation method and device
CN107316325B (en) Airborne laser point cloud and image registration fusion method based on image registration
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN111563921B (en) Underwater point cloud acquisition method based on binocular camera
CN112270713B (en) Calibration method and device, storage medium and electronic device
CN112001958B (en) Virtual point cloud three-dimensional target detection method based on supervised monocular depth estimation
CN109752701A (en) A kind of road edge detection method based on laser point cloud
CN108474658B (en) Ground form detection method and system, unmanned aerial vehicle landing method and unmanned aerial vehicle
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN106225774B (en) A kind of unmanned agriculture tractor road measuring device and method based on computer vision
CN113362247A (en) Semantic live-action three-dimensional reconstruction method and system of laser fusion multi-view camera
CN113205604A (en) Feasible region detection method based on camera and laser radar
CN110942477B (en) Method for depth map fusion by using binocular camera and laser radar
CN113689483B (en) Ranging method based on monocular camera and laser radar
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN112669280A (en) Unmanned aerial vehicle oblique aerial photography right-angle image control point target detection method based on LSD algorithm
Parmehr et al. Automatic registration of optical imagery with 3d lidar data using local combined mutual information
CN114519681A (en) Automatic calibration method and device, computer readable storage medium and terminal
CN114428259A (en) Automatic vehicle extraction method in laser point cloud of ground library based on map vehicle acquisition
CN117406234A (en) Target ranging and tracking method based on single-line laser radar and vision fusion
Shen et al. A 3D modeling method of indoor objects using Kinect sensor
CN114119718A (en) Binocular vision green vegetation matching and positioning method integrating color features and edge features
CN113674407A (en) Three-dimensional terrain reconstruction method and device based on binocular vision image and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant