CN112070770A - High-precision three-dimensional map and two-dimensional grid map synchronous construction method - Google Patents

High-precision three-dimensional map and two-dimensional grid map synchronous construction method

Info

Publication number
CN112070770A
CN112070770A · Application CN202010686222.9A
Authority
CN
China
Prior art keywords
point
point cloud
dimensional
coordinate system
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010686222.9A
Other languages
Chinese (zh)
Other versions
CN112070770B (en)
Inventor
黄杰
何果
曾德龙
董翔宇
杨瑞金
奚纯
朱俊
汪太平
魏南
李永熙
杨波
刘鑫
张飞
尚伟伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Overhaul Branch of State Grid Anhui Electric Power Co Ltd
Original Assignee
Overhaul Branch of State Grid Anhui Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Overhaul Branch of State Grid Anhui Electric Power Co Ltd filed Critical Overhaul Branch of State Grid Anhui Electric Power Co Ltd
Priority to CN202010686222.9A priority Critical patent/CN112070770B/en
Publication of CN112070770A publication Critical patent/CN112070770A/en
Application granted granted Critical
Publication of CN112070770B publication Critical patent/CN112070770B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds


Abstract

The invention relates to the field of robots and discloses a method for synchronously constructing a high-precision three-dimensional map and a two-dimensional grid map, comprising the following steps: sequentially storing the point cloud data scanned by a three-dimensional laser radar; segmenting ground points from non-ground points and clustering the non-ground points; extracting height, edge and plane features from the ground points and the clustered point cloud; building a three-dimensional point cloud map through point cloud registration of the edge and plane features; and extracting height features from the point cloud in the world coordinate system to build a two-dimensional grid map. The invention provides a real-time three-dimensional mapping method in which a two-dimensional grid map is built synchronously based on the height of the navigation robot, and the result can be applied directly to robot navigation and positioning scenarios.

Description

High-precision three-dimensional map and two-dimensional grid map synchronous construction method
Technical Field
The invention relates to the field of robots, in particular to a high-precision three-dimensional map and two-dimensional grid map synchronous construction method.
Background
As the technical level of robots continues to improve, robots are gradually finding important applications in many industries. The main task of a mobile robot is, in an unknown and complex environment, to perceive its surroundings with the sensors it carries, to build an environment map from the sensor samples of the environment, and thereby to realize self-localization and navigation. Building the environment map is therefore the prerequisite for autonomous navigation: the map helps the robot, together with its sensors, to localize itself accurately in real time and is also used for subsequent navigation.
Patent No. CN201710598881.5 uses a line laser radar together with a motor equipped with an encoder to create a high-precision three-dimensional map. However, this technique places very high requirements on the accuracy of the motor, which raises the threshold of the technique. In addition, the algorithm it uses places high demands on computer performance, so mapping efficiency is greatly reduced. Finally, the technique does not directly generate a two-dimensional grid map.
The grid map creation method proposed in patent No. CN201310363428.8 likewise builds the map on an X-Y two-dimensional plane coordinate system and does not take the actual three-dimensional height of the navigation robot into account, so after path planning the robot chassis or other parts are likely to collide during subsequent movement. Moreover, a single two-dimensional grid map contains a very limited amount of information and is not easy for a human to understand intuitively.
Disclosure of Invention
The invention aims to provide a method for synchronously constructing a high-precision three-dimensional map and a two-dimensional grid map. The three-dimensional map and the two-dimensional grid map constructed by the method carry rich information and can be used directly for subsequent navigation; segmentation and clustering are applied during three-dimensional point cloud processing, which improves the efficiency and precision of point cloud feature extraction and registration and reduces the computational complexity; and the three-dimensional height of the robot is considered more comprehensively when the two-dimensional grid map is generated.
To achieve this purpose, the invention provides a method for synchronously constructing a high-precision three-dimensional map and a two-dimensional grid map, which comprises the following steps:
establishing a coordinate system and a spatial grid, wherein the coordinate system comprises a world coordinate system and a laser radar coordinate system;
scanning in real time with a three-dimensional laser radar to obtain point cloud data P_t with three-dimensional coordinates and distance information, P_t = {p_1, p_2, …, p_i, …, p_n}, where p_i is the i-th point of the point cloud data P_t;
setting up a virtual image and projecting the point cloud data P_t scanned at the current moment into the corresponding virtual image in order;
segmenting and clustering the point cloud data P_t, and outputting the ground point cloud G_t and the clustered point cloud no_GS_t;
segmenting the ground point cloud G_t from the non-ground point cloud no_G_t, and marking the ground point cloud G_t, which represents the ground, with a designated label;
clustering the non-ground point cloud no_G_t, where point clouds representing the same object form a point cluster and are given a unique label as a class, discarding classes whose number of points is smaller than an influence threshold, and outputting the clustered point cloud no_GS_t;
extracting height, edge and plane features from G_t and no_GS_t;
performing point cloud registration using the edge and plane features, outputting the point cloud in the world coordinate system, and establishing a three-dimensional map;
extracting height features from the point cloud in the world coordinate system and establishing a two-dimensional grid map.
Preferably, the establishing a coordinate system and a spatial grid comprises:
establishing a laser radar coordinate system taking a three-dimensional laser radar as a center, establishing a world coordinate system at an initial position when the three-dimensional laser radar scans, wherein the directions of all axes are the same as the laser radar coordinate system;
and establishing a space grid based on the world coordinate system, and setting a small cube according to the required grid map resolution.
Preferably, segmenting the ground point cloud G_t from the non-ground point cloud no_G_t and marking the ground point cloud G_t, which represents the ground, with a designated label comprises:
judging whether the point represented by the pixel in row i, column j of the virtual image is a ground point according to formula (1),
angle = arctan( dz / sqrt(dx² + dy²) )   (1)
wherein Ptr[j + i × k] denotes the pixel in row i, column j of the virtual image, so that Ptr[j + i × k].x, Ptr[j + i × k].y and Ptr[j + i × k].z denote the three-dimensional coordinates of that point in the laser radar coordinate system, k denotes the total number of columns, and
dz = Ptr[j + (i+1) × k].z - Ptr[j + i × k].z,
dx = Ptr[j + (i+1) × k].x - Ptr[j + i × k].x,
dy = Ptr[j + (i+1) × k].y - Ptr[j + i × k].y;
judging whether angle is smaller than a ground segmentation threshold:
if yes, the point represented by this pixel and the point of the next row, i.e. the point represented by the pixel in row i+1, column j, are both regarded as ground points and given the designated label;
if not, the point is regarded as a non-ground point and is not labeled.
Preferably, the three-dimensional lidar has a horizontal measurement angle range of 0 ° to 360 °, a vertical measurement angle range of-15 ° to 15 °, a horizontal angular resolution of 0.2 °, and a vertical angular resolution of 2 °.
Preferably, clustering the non-ground point cloud no_G_t, where point clouds representing the same object form a point cluster and are given a unique label as a class, comprises:
traversing the points in no_G_t that have not yet been clustered, performing a breadth-first search to obtain the four neighboring points (up, down, left, right) of the current point in the image;
calculating the angle β of the two points A and B according to formula (2),
β = arctan( (d_min · sin α) / (d_max - d_min · cos α) )   (2)
wherein d_max = max{A.intensity, B.intensity},
d_min = min{A.intensity, B.intensity},
A.intensity and B.intensity denote the Euclidean distances from point A and point B to the three-dimensional laser radar sensor, α denotes the angle between the two laser beams, A is the current point and B is any one of the four neighboring points;
judging whether the angle β is larger than a preset clustering angle threshold;
when the angle β is judged to be larger than the clustering angle threshold, determining that the two points A and B represent the same object;
in the case where the two points A and B represent the same object, putting the two points A and B into the same point cluster to represent the same object and giving them a unique label.
Preferably, extracting the height, edge and plane features from G_t and no_GS_t comprises:
traversing all points in no_GS_t, using a K-nearest-neighbor search in G_t to find the ground point closest to the current point in the X-Y plane, and calculating the height difference dz between the current point and this nearest ground point in the Z direction;
judging whether the height difference lies within a preset height range [H_min, H_max];
when the height difference is judged to lie within the height range [H_min, H_max], giving the currently traversed point a height-feature label.
Preferably, extracting the height, edge and plane features from G_t and no_GS_t comprises:
traversing each point in the virtual image, taking the current point as the center point, extracting from the row of the center point the consecutive points adjacent to it on the left and right to form a point set S, and calculating each roughness r_ij in the virtual image according to formula (3),
r_ij = ‖ Σ_{m∈S, m≠mid} (OP_m - OP_mid) ‖ / ( N · ‖OP_mid‖ )   (3)
wherein N denotes the number of points in the point set S, OP_mid and OP_m are the coordinate vectors of the center point and of a point in the point set S respectively, ‖OP_mid‖ is the 2-norm of OP_mid, and ‖ Σ_{m∈S, m≠mid} (OP_m - OP_mid) ‖ is the 2-norm of Σ_{m∈S, m≠mid} (OP_m - OP_mid);
extracting edge and plane features: dividing the virtual image, which represents the range 0° to 360°, evenly into 6 sub-images each representing 60°, and sorting the points of each row of a sub-image by their roughness values r_ij;
extracting, in each row of a sub-image, the point with the smallest roughness that belongs to the ground point cloud G_t as a plane feature point, forming the set shown in formula (4),
{ P ∈ G_t | min{ P.r_ij, j ∈ [col_min, col_max] }, i ∈ [row_min, row_max] }   (4)
wherein P.r_ij denotes the roughness of point P, min{P.r_ij, j ∈ [col_min, col_max]} denotes the point with the smallest roughness in row i of the sub-image, col_min and col_max denote the minimum and maximum column of the sub-image, and row_min and row_max denote the minimum and maximum row of the sub-image;
extracting, in each row of a sub-image, the point with the greatest roughness that belongs to the clustered point cloud no_GS_t as an edge feature point, forming the set shown in formula (5),
{ P ∈ no_GS_t | max{ P.r_ij, j ∈ [col_min, col_max] }, i ∈ [row_min, row_max] }   (5)
wherein P.r_ij denotes the roughness of point P, max{P.r_ij, j ∈ [col_min, col_max]} denotes the point with the greatest roughness in row i of the sub-image, col_min and col_max denote the minimum and maximum column of the sub-image, and row_min and row_max denote the minimum and maximum row of the sub-image.
Preferably, performing point cloud registration using the edge and plane features, outputting the point cloud in the world coordinate system and establishing a three-dimensional map comprises:
performing point cloud registration with the ICP algorithm between the plane feature points extracted from the ground point cloud G_t at the current moment and the plane feature points extracted from the ground point cloud G_{t-1} at the previous moment, and between the edge feature points extracted from the clustered point cloud no_GS_t at the current moment and the edge feature points extracted from the clustered point cloud no_GS_{t-1} at the previous moment, to obtain the displacement transformation matrix T and the rotation transformation matrix R of the three-dimensional laser radar from the previous moment to the current moment;
taking the first scan as the starting point and using the displacement transformation matrix T and the rotation transformation matrix R of the three-dimensional laser radar between two adjacent scans, obtaining the displacement transformation matrix and the rotation transformation matrix of the laser radar relative to the world coordinate system at the current moment as shown in formula (6),
T_t^W = T_{t-1}^W + R_{t-1}^W · T_t ,   R_t^W = R_{t-1}^W · R_t   (6)
wherein T_t^W and R_t^W denote the displacement transformation matrix and the rotation transformation matrix relative to the world coordinate system at the current moment t, and T_t and R_t denote the displacement transformation matrix and the rotation transformation matrix of the three-dimensional laser radar between two consecutive scans, from moment t-1 to moment t;
according to formula (7), using T_t^W and R_t^W, converting the point cloud data P_t scanned at the current moment into the world coordinate system:
P_t^W = R_t^W · P_t^L + T_t^W   (7)
wherein P_t^L denotes the coordinates of the point cloud data P_t in the current laser radar coordinate system and P_t^W denotes the coordinates of the point cloud data P_t in the world coordinate system;
establishing a three-dimensional map: performing point cloud registration optimization once more between the point cloud data P_t^W in the world coordinate system and the point cloud Q_{t-1} of the three-dimensional map built before this scan, outputting the optimized point cloud and adding it to the point cloud Q_{t-1} in the world coordinate system built before this scan to obtain the three-dimensional map point cloud Q_t in the world coordinate system at the current moment, i.e. the three-dimensional map established at the current moment.
Preferably, extracting height features from the point cloud in the world coordinate system and establishing a two-dimensional grid map comprises:
extracting, according to the height features, the height features from the point cloud in the world coordinate system to obtain the feature point cloud based on the height of the navigation robot;
using the spatial grid to mark the small cubes that contain points of this feature point cloud;
projecting the spatial grid onto the X-Y two-dimensional grid plane, obtaining the local two-dimensional grid map at the current moment according to whether a grid cell of the X-Y two-dimensional grid contains a marked small cube in the Z direction, and accumulating the local two-dimensional grid maps of each moment to obtain the global two-dimensional grid map in the world coordinate system at the current moment.
Preferably, using the spatial grid to mark the small cubes that contain points of the feature point cloud comprises:
if the position corresponding to a small cube contains one or more points of the feature point cloud, marking that small cube as occupied; if the position corresponding to a small cube contains no point, marking that small cube as unoccupied;
and obtaining the local two-dimensional grid map at the current moment according to whether a grid cell of the X-Y two-dimensional grid contains a marked small cube in the Z direction comprises:
if one or more of the small cubes corresponding to a grid cell in the Z direction are marked as occupied, marking that grid cell as occupied; if all the small cubes corresponding to a grid cell in the Z direction are marked as unoccupied, marking that grid cell as unoccupied.
The method constructs the three-dimensional point cloud map and the two-dimensional grid map synchronously; it therefore combines the advantages of the three-dimensional point cloud map, such as its rich information content and ease of intuitive understanding by the user, with the advantages of the two-dimensional grid map, which can be applied directly to subsequent path planning.
In the map construction process the method applies segmentation and clustering and discards classes with few points, which improves the efficiency and precision of point cloud feature extraction and registration, reduces the computational complexity and gives better real-time performance.
The two-dimensional grid map finally generated by the invention takes the three-dimensional height of the navigation robot into account; it is therefore more reliable than an ordinary two-dimensional grid map and safer for subsequent planned actions.
Additional features and advantages of the invention will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method for synchronously constructing a high-precision three-dimensional map and a two-dimensional grid map according to an embodiment of the invention;
FIG. 2 illustrates a schematic diagram of point cloud clustering representing two objects according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating two-point clustering determination according to an embodiment of the present invention;
fig. 4 shows a laser radar coordinate system diagram of an embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
Fig. 1 shows a flowchart of a synchronous construction method of a high-precision three-dimensional map and a two-dimensional grid map according to an embodiment of the present invention, and with reference to fig. 1, the implementation steps of the synchronous construction method of the high-precision three-dimensional map and the two-dimensional grid map provided by the present invention are as follows:
and establishing a coordinate system and a space grid, wherein the coordinate system comprises a world coordinate system and a laser radar coordinate system.
A16-line three-dimensional laser radar is installed and fixed on a navigation robot, a laser radar coordinate system taking the laser radar as a center is established, as shown in figure 4, a right front direction is represented by an axis Y, a right direction is represented by an axis X, and right hand rules are followed. And establishing a world coordinate system at the initial position of the laser radar during scanning, wherein the directions of all axes are the same as the laser radar coordinate system. And establishing a space grid based on a world coordinate system, and setting each grid as a small cube with a certain side length according to the required grid map resolution. The sides of the minicubes are 0.1 meter in this example.
Scanning in real time with the three-dimensional laser radar yields a point cloud P_t with three-dimensional coordinates and distance information, which can be written as P_t = {p_1, p_2, …, p_i, …, p_n}, where p_i is the i-th point of the point cloud P_t and contains the three-dimensional coordinates of p_i in the laser radar coordinate system and the Euclidean distance from p_i to the laser radar sensor.
A virtual image is set up, and the point cloud data P_t scanned at the current moment is projected into the corresponding virtual image in order. The 16-line laser radar measures an angle range of 0° to 360° in the horizontal direction and 30° (+15° to -15°) in the vertical direction. With a horizontal angular resolution of 0.2° and a vertical angular resolution of 2°, the image is arranged with 16 rows and 360/0.2 = 1800 columns, so that each point p_i of P_t can be represented by a unique pixel of the image; the neighborhood relations are thereby defined directly and the computation is more efficient. The 16 laser beams of the laser radar are projected, from low to high, onto image rows 0 to 15, and the laser returns over 360° counter-clockwise, starting from the positive direction of the Y axis, are projected in order onto image columns 0 to 1799. A point cloud pointer Ptr is defined, and the virtual image stores the point cloud P_t in order through this pointer, i.e. Ptr[j + i × k] is the pixel in row i, column j of the image, where k denotes the total number of columns and k = 1800 in this embodiment. Then Ptr[j + i × 1800].x, Ptr[j + i × 1800].y and Ptr[j + i × 1800].z denote the three-dimensional coordinates of the point in the laser radar coordinate system, and Ptr[j + i × 1800].intensity denotes the Euclidean distance from the point to the laser radar sensor.
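As an illustration of this ordered projection (a minimal sketch, not the patented implementation: the helper name, the NumPy representation and the NaN padding for empty pixels are assumptions), the following Python code derives the row index from the vertical angle and the column index from the horizontal angle of each return and stores the point in a 16 × 1800 virtual image:

```python
import numpy as np

ROWS, COLS = 16, 1800            # 16 beams, 360 deg / 0.2 deg columns
VERT_MIN, VERT_RES = -15.0, 2.0  # vertical field of view and resolution, degrees
HORZ_RES = 0.2                   # horizontal resolution, degrees

def project_to_virtual_image(points):
    """points: (N, 3) array of x, y, z in the lidar frame.
    Returns a (ROWS * COLS, 4) array holding x, y, z and range ('intensity'),
    with NaN where a pixel received no return."""
    image = np.full((ROWS * COLS, 4), np.nan)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    vert_angle = np.degrees(np.arctan2(z, np.sqrt(x**2 + y**2)))
    # horizontal angle measured counter-clockwise starting from the +Y axis
    horz_angle = np.degrees(np.arctan2(-x, y)) % 360.0
    row = np.round((vert_angle - VERT_MIN) / VERT_RES).astype(int)
    col = np.floor(horz_angle / HORZ_RES).astype(int) % COLS
    valid = (row >= 0) & (row < ROWS)
    idx = col[valid] + row[valid] * COLS      # Ptr[j + i * k] style indexing
    image[idx, :3] = points[valid]
    image[idx, 3] = rng[valid]
    return image
```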
The point cloud data P_t is segmented and clustered, and the ground point cloud G_t and the clustered point cloud no_GS_t are output.
The ground point cloud G_t is segmented from the non-ground point cloud no_G_t, and the point cloud representing the ground is labeled. Whether the point represented by the pixel in row i, column j of the virtual image is a ground point is judged by formula (1):
angle = arctan( dz / sqrt(dx² + dy²) )   (1)
wherein dz = Ptr[j + (i+1) × 1800].z - Ptr[j + i × 1800].z,
dx = Ptr[j + (i+1) × 1800].x - Ptr[j + i × 1800].x,
dy = Ptr[j + (i+1) × 1800].y - Ptr[j + i × 1800].y.
When angle is smaller than the ground segmentation threshold, preferably 10°, the point represented by this pixel and the point of the next row, i.e. the point represented by the pixel in row i+1, column j, are both regarded as ground points; otherwise they are regarded as non-ground points. The ground points are marked with the designated label.
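A possible rendering of this ground test (a sketch only: it assumes formula (1) is the vertical angle between vertically adjacent returns, takes the absolute value of dz so the angle is non-negative, and reuses the virtual image from the previous sketch):

```python
import numpy as np

GROUND_ANGLE_THRESH = 10.0  # degrees, ground segmentation threshold

def label_ground_points(image, rows=16, cols=1800):
    """image: (rows * cols, 4) virtual image from project_to_virtual_image.
    Returns a boolean array, True where the pixel is labeled as a ground point."""
    ground = np.zeros(rows * cols, dtype=bool)
    for i in range(rows - 1):
        for j in range(cols):
            lower, upper = j + i * cols, j + (i + 1) * cols
            if np.isnan(image[lower, 0]) or np.isnan(image[upper, 0]):
                continue
            dx, dy, dz = image[upper, :3] - image[lower, :3]
            angle = np.degrees(np.arctan2(abs(dz), np.sqrt(dx**2 + dy**2)))
            if angle < GROUND_ANGLE_THRESH:
                ground[lower] = ground[upper] = True  # both points regarded as ground
    return ground
```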
The non-ground point cloud no_G_t is clustered: point clouds representing the same object form a point cluster and are given a unique label as a class, classes whose number of points is smaller than an influence threshold are discarded, and the clustered point cloud no_GS_t is output. The influence threshold depends on the current environment: if the robot works in a cluttered outdoor environment, small objects such as leaves may form trivial and unreliable features, since the same leaf may not be seen in two consecutive scans. In order to perform fast and reliable feature extraction on the segmented point cloud, classes whose number of points is smaller than the influence threshold, preferably 30 in this embodiment, are discarded.
The points of no_G_t that have not yet been clustered are traversed, and a breadth-first search is performed to obtain the four neighboring points (up, down, left, right) of the current point in the image; it is then judged whether the current point A and a neighboring point B represent the same object. As shown in fig. 2 and fig. 3, A and B denote two returned laser points and OA and OB denote the respective laser beams; the angle β of the two points A and B is calculated by formula (2):
β = arctan( (d_min · sin α) / (d_max - d_min · cos α) )   (2)
wherein d_max = ‖OA‖ = A.intensity and d_min = ‖OB‖ = B.intensity.
As described above, A.intensity and B.intensity denote the Euclidean distances from points A and B to the laser radar sensor, and α denotes the angle between the two laser beams, which is 0.2° when left-right neighbors are considered and 2° when up-down neighbors are considered.
When the angle β is greater than the clustering angle threshold, preferably 60°, A and B represent the same object. When the current point and the neighboring point represent the same object, they are put into the same point cluster, so that they represent the same object, and are given a unique label.
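The breadth-first clustering with the β criterion of formula (2) could be sketched as follows (an illustration under assumptions: the virtual image and ground mask come from the previous sketches, columns wrap around 360°, and the size filter uses the influence threshold of 30):

```python
import math
import numpy as np
from collections import deque

CLUSTER_ANGLE_THRESH = math.radians(60.0)  # clustering angle threshold
MIN_CLUSTER_SIZE = 30                      # influence threshold (points per class)

def cluster_non_ground(image, ground, rows=16, cols=1800,
                       horz_res=math.radians(0.2), vert_res=math.radians(2.0)):
    """image: (rows * cols, 4) virtual image; ground: boolean ground mask.
    Returns an int array of cluster labels (-1 for unlabeled or discarded pixels)."""
    labels = np.full(rows * cols, -1, dtype=int)
    valid = ~np.isnan(image[:, 3]) & ~ground
    clusters = []
    for seed in np.flatnonzero(valid):
        if labels[seed] != -1:
            continue
        members = [seed]
        labels[seed] = len(clusters)
        queue = deque([seed])
        while queue:                                  # breadth-first search
            cur = queue.popleft()
            i, j = divmod(cur, cols)
            d_a = image[cur, 3]
            neighbours = [((i - 1) * cols + j, vert_res), ((i + 1) * cols + j, vert_res),
                          (i * cols + (j - 1) % cols, horz_res), (i * cols + (j + 1) % cols, horz_res)]
            for nb, alpha in neighbours:
                if nb < 0 or nb >= rows * cols or not valid[nb] or labels[nb] != -1:
                    continue
                d_b = image[nb, 3]
                d_max, d_min = max(d_a, d_b), min(d_a, d_b)
                beta = math.atan2(d_min * math.sin(alpha),
                                  d_max - d_min * math.cos(alpha))  # formula (2)
                if beta > CLUSTER_ANGLE_THRESH:       # same object -> same cluster
                    labels[nb] = labels[seed]
                    members.append(nb)
                    queue.append(nb)
        clusters.append(members)
    # discard classes with fewer points than the influence threshold
    for members in clusters:
        if len(members) < MIN_CLUSTER_SIZE:
            labels[members] = -1
    return labels
```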
Height, edge and plane features are extracted from G_t and no_GS_t.
Height feature extraction: all points of no_GS_t are traversed, and a K-nearest-neighbor search in G_t is used to find the ground point closest to the current point in the X-Y plane, i.e. the plane distance d between the current point and a ground point is calculated as
d = sqrt(dx² + dy²)
wherein dx² is the squared distance between the ground point and the current point in the X direction and dy² is the squared distance between the ground point and the current point in the Y direction, and the nearest ground point is found according to the plane distance d.
The height difference dz between the current point and this ground point in the Z direction is then calculated. Taking the height of the navigation robot into account, the height range [H_min, H_max] in which a collision could occur is evaluated; if the inequality H_min ≤ dz ≤ H_max is satisfied, the currently traversed point of no_GS_t is given a height-feature label.
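A minimal sketch of this height-feature step (it assumes a KD-tree over the X-Y coordinates of the ground points, e.g. via scipy.spatial.cKDTree; any nearest-neighbor structure would do, and the bounds h_min and h_max are illustrative values, not the patent's):

```python
import numpy as np
from scipy.spatial import cKDTree

def height_feature_mask(ground_pts, object_pts, h_min=0.05, h_max=0.8):
    """ground_pts, object_pts: (N, 3) arrays in the lidar frame.
    Returns a boolean mask over object_pts: True where the point lies within the
    collision height range [h_min, h_max] above the nearest ground point in the X-Y plane."""
    tree = cKDTree(ground_pts[:, :2])            # nearest-neighbor search on X-Y only
    _, idx = tree.query(object_pts[:, :2], k=1)  # closest ground point per object point
    dz = object_pts[:, 2] - ground_pts[idx, 2]   # height difference in the Z direction
    return (dz >= h_min) & (dz <= h_max)
```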
Definition of the roughness r_ij: each point of the virtual image is traversed; taking the current point as the center point, the consecutive points adjacent to it on the left and right in its row are extracted to form a point set S, the roughness r_ij of the point is defined on the point set S, and the roughness values r_ij in the image are calculated by formula (3):
r_ij = ‖ Σ_{m∈S, m≠mid} (OP_m - OP_mid) ‖ / ( N · ‖OP_mid‖ )   (3)
wherein N denotes the number of points in the point set S, OP_mid and OP_m are the coordinate vectors of the center point and of a point in the point set S respectively, ‖OP_mid‖ is the 2-norm of OP_mid, and ‖ Σ_{m∈S, m≠mid} (OP_m - OP_mid) ‖ is the 2-norm of Σ_{m∈S, m≠mid} (OP_m - OP_mid).
Extracting edge and plane features: the virtual image representing the range 0° to 360° is divided evenly into 6 sub-images each representing 60°, and the points of each row of a sub-image are sorted by their roughness values r_ij.
In each row of a sub-image, the point with the smallest roughness that belongs to the ground point cloud G_t is extracted as a plane feature point, forming the set shown in formula (4),
{ P ∈ G_t | min{ P.r_ij, j ∈ [col_min, col_max] }, i ∈ [row_min, row_max] }   (4)
wherein P.r_ij denotes the roughness of point P, min{P.r_ij, j ∈ [col_min, col_max]} denotes the point with the smallest roughness in row i of the sub-image, col_min and col_max denote the minimum and maximum column of the sub-image, and row_min and row_max denote the minimum and maximum row of the sub-image.
In each row of a sub-image, the point with the greatest roughness that belongs to the clustered point cloud no_GS_t is extracted as an edge feature point, forming the set shown in formula (5),
{ P ∈ no_GS_t | max{ P.r_ij, j ∈ [col_min, col_max] }, i ∈ [row_min, row_max] }   (5)
wherein P.r_ij denotes the roughness of point P, max{P.r_ij, j ∈ [col_min, col_max]} denotes the point with the greatest roughness in row i of the sub-image, col_min and col_max denote the minimum and maximum column of the sub-image, and row_min and row_max denote the minimum and maximum row of the sub-image.
Specifically, the virtual image is divided into 6 sub-images of 16 rows and 300 columns each, and the points of each row are sorted by the roughness values of the sub-image.
The point with the smallest roughness in each row of a sub-image that belongs to the ground point cloud G_t is extracted as a plane feature point. Taking the first sub-image as an example, the plane feature points satisfy the set
{ P ∈ G_t | min{ P.r_ij, j ∈ [0, 299] }, i ∈ [0, 15] },
wherein P.r_ij denotes the roughness of point P and min{P.r_ij, j ∈ [0, 299]} denotes the point with the smallest roughness in row i of the sub-image.
The point with the greatest roughness in each row of a sub-image that belongs to the clustered point cloud no_GS_t is extracted as an edge feature point. Taking the first sub-image as an example, the edge feature points satisfy the set
{ P ∈ no_GS_t | max{ P.r_ij, j ∈ [0, 299] }, i ∈ [0, 15] },
wherein P.r_ij denotes the roughness of point P and max{P.r_ij, j ∈ [0, 299]} denotes the point with the greatest roughness in row i of the sub-image.
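The roughness of formula (3) and the per-sub-image feature selection of formulas (4) and (5) could look like the following sketch (it assumes the reconstructed form of formula (3) above, a window of five neighbors on each side, and rows without gaps):

```python
import numpy as np

def row_roughness(points, half_window=5):
    """points: (C, 3) array of one image row, ordered by column.
    Returns the roughness r of formula (3) for the interior columns (NaN at the borders)."""
    C = points.shape[0]
    r = np.full(C, np.nan)
    for j in range(half_window, C - half_window):
        window = points[j - half_window: j + half_window + 1]   # point set S
        diff_sum = (window - points[j]).sum(axis=0)             # sum of OP_m - OP_mid
        r[j] = np.linalg.norm(diff_sum) / (window.shape[0] * np.linalg.norm(points[j]))
    return r

def select_features(r, is_ground, is_clustered, n_subimages=6):
    """Per sub-image segment of one row, pick the smoothest ground point as a plane
    feature (formula (4)) and the roughest clustered point as an edge feature (formula (5))."""
    C = r.shape[0]
    seg = C // n_subimages
    plane_idx, edge_idx = [], []
    for s in range(n_subimages):
        cols = np.arange(s * seg, (s + 1) * seg)
        ground_cols = cols[is_ground[cols] & ~np.isnan(r[cols])]
        clustered_cols = cols[is_clustered[cols] & ~np.isnan(r[cols])]
        if ground_cols.size:
            plane_idx.append(ground_cols[np.argmin(r[ground_cols])])
        if clustered_cols.size:
            edge_idx.append(clustered_cols[np.argmax(r[clustered_cols])])
    return plane_idx, edge_idx
```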
A three-dimensional map is established using point cloud registration of the edge and plane features.
Using the same labels, the corresponding feature point sets of the current scan and of the scan at the previous moment are found as candidate sets, namely the plane feature points extracted from the ground point cloud G_t at the current moment and from the ground point cloud G_{t-1} at the previous moment, and the edge feature points extracted from the clustered point cloud no_GS_t at the current moment and from the clustered point cloud no_GS_{t-1} at the previous moment. Combining the respective labels, point cloud registration is performed on the feature points of the two moments with the ICP algorithm, and the displacement transformation matrix T and rotation transformation matrix R of the laser radar between the two consecutive scans are calculated.
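The core of one such registration update, once the feature correspondences are fixed, is the closed-form rigid alignment below (a standard SVD-based least-squares step, shown as a simplified stand-in for the inner iteration of the ICP algorithm, not the patent's full registration pipeline):

```python
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (N, 3) matched feature points (src from scan t, dst from scan t-1).
    Returns R (3x3) and T (3,) such that R @ src_i + T approximates dst_i in the least-squares sense."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of the centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = dst_c - R @ src_c
    return R, T
```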
Taking the first scan as the starting point and using the displacement transformation matrix T and rotation transformation matrix R of the laser radar between two adjacent scans, the displacement transformation matrix and rotation transformation matrix of the laser radar relative to the world coordinate system at the current moment can be obtained as shown in formula (6):
T_t^W = T_{t-1}^W + R_{t-1}^W · T_t ,   R_t^W = R_{t-1}^W · R_t   (6)
wherein T_t^W and R_t^W denote the displacement transformation matrix and the rotation transformation matrix relative to the world coordinate system at the current moment t, and T_t and R_t denote the displacement transformation matrix and the rotation transformation matrix of the three-dimensional laser radar between two consecutive scans, from moment t-1 to moment t.
According to formula (7), T_t^W and R_t^W are used to convert the point cloud P_t scanned at the current moment into the world coordinate system:
P_t^W = R_t^W · P_t^L + T_t^W   (7)
wherein P_t^L denotes the coordinates of P_t in the current laser radar coordinate system and P_t^W denotes the coordinates of P_t in the world coordinate system.
Establishing the three-dimensional map: point cloud registration optimization is performed once more between the point cloud data P_t^W in the world coordinate system and the point cloud Q_{t-1} of the three-dimensional map built before this scan, and the optimized point cloud is output and added to the point cloud Q_{t-1} in the world coordinate system built before this scan, giving the three-dimensional map point cloud Q_t in the world coordinate system at the current moment, i.e. the three-dimensional map. Here Q_{t-1} is the accumulated scan point cloud data from the initial moment to moment t-1, i.e. the three-dimensional point cloud map built at moment t-1, and Q_t is the accumulated scan point cloud data from the initial moment to moment t, i.e. the three-dimensional point cloud map built at moment t.
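Formulas (6) and (7) and the map accumulation could be sketched as follows (a simplified illustration: the second registration-optimization pass is omitted and the map Q_t is kept as a plain array):

```python
import numpy as np

class PoseAccumulator:
    """Chains the per-scan (R_t, T_t) increments into the world-frame pose (formula (6))
    and transforms each scan into the world frame (formula (7))."""
    def __init__(self):
        self.R_w = np.eye(3)      # R_0^W
        self.T_w = np.zeros(3)    # T_0^W

    def update(self, R_t, T_t):
        self.T_w = self.T_w + self.R_w @ T_t   # T_t^W = T_{t-1}^W + R_{t-1}^W T_t
        self.R_w = self.R_w @ R_t              # R_t^W = R_{t-1}^W R_t
        return self.R_w, self.T_w

    def to_world(self, points_lidar):
        """points_lidar: (N, 3) scan in the lidar frame -> (N, 3) in the world frame."""
        return points_lidar @ self.R_w.T + self.T_w

# usage sketch: accumulate the map point cloud Q_t scan by scan
# pose = PoseAccumulator()
# Q = np.empty((0, 3))
# for R_t, T_t, scan in scans:   # 'scans' is assumed to yield per-scan increments and points
#     pose.update(R_t, T_t)
#     Q = np.vstack([Q, pose.to_world(scan)])
```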
Height features are extracted from the optimized point cloud in the world coordinate system and the two-dimensional grid map is established.
According to the height-feature labels obtained in the previous steps, height features are extracted from the optimized point cloud in the world coordinate system to obtain the feature point cloud based on the height of the navigation robot.
Using the initially established spatial grid, a small cube is marked as occupied if one or more points of the feature point cloud lie at the position corresponding to that small cube; conversely, if no point lies at the position corresponding to a small cube, it is marked as unoccupied, so that every small cube of the spatial grid can be marked.
The three-dimensional spatial grid in the world coordinate system is projected onto the X-Y two-dimensional grid plane; if one or more of the small cubes corresponding to a grid cell of the X-Y two-dimensional grid in the Z direction are marked as occupied, that grid cell is marked as occupied; conversely, if all the small cubes corresponding to the grid cell in the Z direction are marked as unoccupied, the grid cell is marked as unoccupied. In this way the local two-dimensional grid map at the current moment is obtained, and the local two-dimensional grid maps of each moment are accumulated to obtain the global two-dimensional grid map in the world coordinate system at the current moment.
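The voxel marking and the Z-direction projection could be written as the following sketch (the grid origin and extent are assumed parameters, the 0.1 m voxel size is taken from this embodiment, and the accumulation of local maps is rendered as a logical OR):

```python
import numpy as np

VOXEL = 0.1  # m, side length of each small cube in this embodiment

def local_grid_map(height_feature_pts, origin, shape):
    """height_feature_pts: (N, 3) world-frame points carrying the height-feature label.
    origin: (3,) world coordinates of the grid corner; shape: (nx, ny, nz) voxel counts.
    Returns a 2-D boolean occupancy grid of shape (nx, ny)."""
    voxels = np.zeros(shape, dtype=bool)
    idx = np.floor((height_feature_pts - origin) / VOXEL).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(shape)), axis=1)
    ix, iy, iz = idx[inside].T
    voxels[ix, iy, iz] = True     # mark small cubes containing feature points as occupied
    return voxels.any(axis=2)     # a cell is occupied if any cube in its Z column is occupied

def accumulate_global_map(global_map, local_map):
    """Accumulate the local map of each moment into the global map."""
    return global_map | local_map
```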
The three-dimensional map and the two-dimensional grid map have rich information content and can be directly used for subsequent navigation. Segmentation clustering is applied in the three-dimensional point cloud processing process, so that the efficiency and the precision of point cloud feature extraction and registration are improved, and the computational complexity is reduced. The three-dimensional height of the robot is more comprehensively considered in the generation of the two-dimensional grid map.
The preferred embodiments of the present invention have been described in detail with reference to the accompanying drawings, however, the present invention is not limited to the specific details of the above embodiments, and various simple modifications can be made to the technical solution of the present invention within the technical idea of the present invention, and these simple modifications are within the protective scope of the present invention. It should be noted that the various technical features described in the above embodiments can be combined in any suitable manner without contradiction, and the invention is not described in any way for the possible combinations in order to avoid unnecessary repetition.
In addition, any combination of the various embodiments of the present invention is also possible, and the same should be considered as the disclosure of the present invention as long as it does not depart from the spirit of the present invention.

Claims (10)

1. A method for synchronously constructing a high-precision three-dimensional map and a two-dimensional grid map, characterized by comprising the following steps:
establishing a coordinate system and a spatial grid, wherein the coordinate system comprises a world coordinate system and a laser radar coordinate system;
scanning in real time with a three-dimensional laser radar to obtain point cloud data P_t with three-dimensional coordinates and distance information, P_t = {p_1, p_2, …, p_i, …, p_n}, wherein p_i is the i-th point of the point cloud data P_t;
setting up a virtual image and projecting the point cloud data P_t scanned at the current moment into the corresponding virtual image in order;
segmenting and clustering the point cloud data P_t, and outputting a ground point cloud G_t and a clustered point cloud no_GS_t;
segmenting the ground point cloud G_t from the non-ground point cloud no_G_t, and marking the ground point cloud G_t, which represents the ground, with a designated label;
clustering the non-ground point cloud no_G_t, wherein point clouds representing the same object form a point cluster and are given a unique label as a class, discarding classes whose number of points is smaller than an influence threshold, and outputting the clustered point cloud no_GS_t;
extracting height, edge and plane features from G_t and no_GS_t;
performing point cloud registration using the edge and plane features, outputting the point cloud in the world coordinate system, and establishing a three-dimensional map;
extracting height features from the point cloud in the world coordinate system and establishing a two-dimensional grid map.
2. The method according to claim 1, wherein the establishing a coordinate system and a spatial grid comprises:
establishing a laser radar coordinate system taking a three-dimensional laser radar as a center, establishing a world coordinate system at an initial position when the three-dimensional laser radar scans, wherein the directions of all axes are the same as the laser radar coordinate system;
and establishing a space grid based on the world coordinate system, and setting a small cube according to the required grid map resolution.
3. The method according to claim 1, wherein segmenting the ground point cloud G_t from the non-ground point cloud no_G_t and marking the ground point cloud G_t, which represents the ground, with a designated label comprises:
judging whether the point represented by the pixel in row i, column j of the virtual image is a ground point according to formula (1),
angle = arctan( dz / sqrt(dx² + dy²) )   (1)
wherein Ptr[j + i × k] denotes the pixel in row i, column j of the virtual image, so that Ptr[j + i × k].x, Ptr[j + i × k].y and Ptr[j + i × k].z denote the three-dimensional coordinates of that point in the laser radar coordinate system, k denotes the total number of columns, and
dz = Ptr[j + (i+1) × k].z - Ptr[j + i × k].z,
dx = Ptr[j + (i+1) × k].x - Ptr[j + i × k].x,
dy = Ptr[j + (i+1) × k].y - Ptr[j + i × k].y;
judging whether angle is smaller than a ground segmentation threshold:
if yes, the point represented by this pixel and the point of the next row, i.e. the point represented by the pixel in row i+1, column j, are both regarded as ground points and given the designated label;
if not, the point is regarded as a non-ground point and is not labeled.
4. The method according to claim 1, wherein the three-dimensional laser radar has a horizontal measurement angle range of 0° to 360°, a vertical measurement angle range of -15° to 15°, a horizontal angular resolution of 0.2° and a vertical angular resolution of 2°.
5. The method according to claim 1, wherein clustering the non-ground point cloud no_G_t, wherein point clouds representing the same object form a point cluster and are given a unique label as a class, comprises:
traversing the points in no_G_t that have not yet been clustered, performing a breadth-first search to obtain the four neighboring points (up, down, left, right) of the current point in the image;
calculating the angle β of the two points A and B according to formula (2),
β = arctan( (d_min · sin α) / (d_max - d_min · cos α) )   (2)
wherein d_max = max{A.intensity, B.intensity},
d_min = min{A.intensity, B.intensity},
A.intensity and B.intensity denote the Euclidean distances from point A and point B to the three-dimensional laser radar sensor, α denotes the angle between the two laser beams, A is the current point and B is any one of the four neighboring points;
judging whether the angle β is larger than a preset clustering angle threshold;
when the angle β is judged to be larger than the clustering angle threshold, determining that the two points A and B represent the same object;
in the case where the two points A and B represent the same object, putting the two points A and B into the same point cluster to represent the same object and giving them a unique label.
6. The method according to claim 1, wherein extracting the height, edge and plane features from G_t and no_GS_t comprises:
traversing all points in no_GS_t, using a K-nearest-neighbor search in G_t to find the ground point closest to the current point in the X-Y plane, and calculating the height difference dz between the current point and this nearest ground point in the Z direction;
judging whether the height difference lies within a preset height range [H_min, H_max];
when the height difference is judged to lie within the height range [H_min, H_max], giving the currently traversed point a height-feature label.
7. The method according to claim 1, wherein extracting the height, edge and plane features from G_t and no_GS_t comprises:
traversing each point in the virtual image, taking the current point as the center point, extracting from the row of the center point the consecutive points adjacent to it on the left and right to form a point set S, and calculating each roughness r_ij in the virtual image according to formula (3),
r_ij = ‖ Σ_{m∈S, m≠mid} (OP_m - OP_mid) ‖ / ( N · ‖OP_mid‖ )   (3)
wherein N denotes the number of points in the point set S, OP_mid and OP_m are the coordinate vectors of the center point and of a point in the point set S respectively, ‖OP_mid‖ is the 2-norm of OP_mid, and ‖ Σ_{m∈S, m≠mid} (OP_m - OP_mid) ‖ is the 2-norm of Σ_{m∈S, m≠mid} (OP_m - OP_mid);
extracting edge and plane features: dividing the virtual image, which represents the range 0° to 360°, evenly into 6 sub-images each representing 60°, and sorting the points of each row of a sub-image by their roughness values r_ij;
extracting, in each row of a sub-image, the point with the smallest roughness that belongs to the ground point cloud G_t as a plane feature point, forming the set shown in formula (4),
{ P ∈ G_t | min{ P.r_ij, j ∈ [col_min, col_max] }, i ∈ [row_min, row_max] }   (4)
wherein P.r_ij denotes the roughness of point P, min{P.r_ij, j ∈ [col_min, col_max]} denotes the point with the smallest roughness in row i of the sub-image, col_min and col_max denote the minimum and maximum column of the sub-image, and row_min and row_max denote the minimum and maximum row of the sub-image;
extracting, in each row of a sub-image, the point with the greatest roughness that belongs to the clustered point cloud no_GS_t as an edge feature point, forming the set shown in formula (5),
{ P ∈ no_GS_t | max{ P.r_ij, j ∈ [col_min, col_max] }, i ∈ [row_min, row_max] }   (5)
wherein P.r_ij denotes the roughness of point P, max{P.r_ij, j ∈ [col_min, col_max]} denotes the point with the greatest roughness in row i of the sub-image, col_min and col_max denote the minimum and maximum column of the sub-image, and row_min and row_max denote the minimum and maximum row of the sub-image.
8. The method according to claim 1, wherein performing point cloud registration using the edge and plane features, outputting the point cloud in the world coordinate system and establishing a three-dimensional map comprises:
performing point cloud registration with the ICP algorithm between the plane feature points extracted from the ground point cloud G_t at the current moment and the plane feature points extracted from the ground point cloud G_{t-1} at the previous moment, and between the edge feature points extracted from the clustered point cloud no_GS_t at the current moment and the edge feature points extracted from the clustered point cloud no_GS_{t-1} at the previous moment, to obtain the displacement transformation matrix T and the rotation transformation matrix R of the three-dimensional laser radar from the previous moment to the current moment;
taking the first scan as the starting point and using the displacement transformation matrix T and the rotation transformation matrix R of the three-dimensional laser radar between two adjacent scans, obtaining the displacement transformation matrix and the rotation transformation matrix of the laser radar relative to the world coordinate system at the current moment as shown in formula (6),
T_t^W = T_{t-1}^W + R_{t-1}^W · T_t ,   R_t^W = R_{t-1}^W · R_t   (6)
wherein T_t^W and R_t^W denote the displacement transformation matrix and the rotation transformation matrix relative to the world coordinate system at the current moment t, and T_t and R_t denote the displacement transformation matrix and the rotation transformation matrix of the three-dimensional laser radar between two consecutive scans, from moment t-1 to moment t;
according to formula (7), using T_t^W and R_t^W, converting the point cloud data P_t scanned at the current moment into the world coordinate system:
P_t^W = R_t^W · P_t^L + T_t^W   (7)
wherein P_t^L denotes the coordinates of the point cloud data P_t in the current laser radar coordinate system and P_t^W denotes the coordinates of the point cloud data P_t in the world coordinate system;
establishing a three-dimensional map: performing point cloud registration optimization once more between the point cloud data P_t^W in the world coordinate system and the point cloud Q_{t-1} of the three-dimensional map built before this scan, outputting the optimized point cloud and adding it to the point cloud Q_{t-1} in the world coordinate system built before this scan to obtain the three-dimensional map point cloud Q_t in the world coordinate system at the current moment, i.e. the three-dimensional map established at the current moment.
9. The method according to claim 1, wherein extracting height features from the point cloud in the world coordinate system and establishing a two-dimensional grid map comprises:
extracting, according to the height features, the height features from the point cloud in the world coordinate system to obtain the feature point cloud based on the height of the navigation robot;
using the spatial grid to mark the small cubes that contain points of the feature point cloud;
projecting the spatial grid onto the X-Y two-dimensional grid plane, obtaining the local two-dimensional grid map at the current moment according to whether a grid cell of the X-Y two-dimensional grid contains a marked small cube in the Z direction, and accumulating the local two-dimensional grid maps of each moment to obtain the global two-dimensional grid map in the world coordinate system at the current moment.
10. The method according to claim 9, wherein using the spatial grid to mark the small cubes that contain points of the feature point cloud comprises:
if the position corresponding to a small cube contains one or more points of the feature point cloud, marking that small cube as occupied; if the position corresponding to a small cube contains no point, marking that small cube as unoccupied;
and obtaining the local two-dimensional grid map at the current moment according to whether a grid cell of the X-Y two-dimensional grid contains a marked small cube in the Z direction comprises:
if one or more of the small cubes corresponding to a grid cell in the Z direction are marked as occupied, marking that grid cell as occupied; if all the small cubes corresponding to a grid cell in the Z direction are marked as unoccupied, marking that grid cell as unoccupied.
CN202010686222.9A 2020-07-16 2020-07-16 High-precision three-dimensional map and two-dimensional grid map synchronous construction method Active CN112070770B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010686222.9A CN112070770B (en) 2020-07-16 2020-07-16 High-precision three-dimensional map and two-dimensional grid map synchronous construction method


Publications (2)

Publication Number Publication Date
CN112070770A true CN112070770A (en) 2020-12-11
CN112070770B CN112070770B (en) 2022-11-01

Family

ID=73657387


Country Status (1)

Country Link
CN (1) CN112070770B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140350839A1 (en) * 2013-05-23 2014-11-27 Irobot Corporation Simultaneous Localization And Mapping For A Mobile Robot
CN106204705A (en) * 2016-07-05 2016-12-07 长安大学 A kind of 3D point cloud segmentation method based on multi-line laser radar
US20180232947A1 (en) * 2017-02-11 2018-08-16 Vayavision, Ltd. Method and system for generating multidimensional maps of a scene using a plurality of sensors of various types
CN108932736A (en) * 2018-05-30 2018-12-04 南昌大学 Two-dimensional laser radar Processing Method of Point-clouds and dynamic robot pose calibration method
CN109443369A (en) * 2018-08-20 2019-03-08 北京主线科技有限公司 The method for constructing sound state grating map using laser radar and visual sensor
CN110221603A (en) * 2019-05-13 2019-09-10 浙江大学 A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Yi et al., "SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
ZHAO Kai et al., "Ground segmentation method for 3D LiDAR point clouds of urban environments based on mean elevation maps", Journal of Military Transportation University *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114693862A (en) * 2020-12-29 2022-07-01 北京万集科技股份有限公司 Three-dimensional point cloud data model reconstruction method, target re-identification method and device
CN114817426A (en) * 2021-01-28 2022-07-29 中强光电股份有限公司 Map construction device and method
CN112767490A (en) * 2021-01-29 2021-05-07 福州大学 Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN112767490B (en) * 2021-01-29 2022-06-21 福州大学 Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN113393423A (en) * 2021-05-18 2021-09-14 深圳拓邦股份有限公司 Cliff detection method and device based on point cloud and mobile robot
CN113640822A (en) * 2021-07-07 2021-11-12 华南理工大学 High-precision map construction method based on non-map element filtering
CN113640822B (en) * 2021-07-07 2023-08-18 华南理工大学 High-precision map construction method based on non-map element filtering
CN113848944A (en) * 2021-10-18 2021-12-28 云鲸智能科技(东莞)有限公司 Map construction method and device, robot and storage medium
CN114750147A (en) * 2022-03-10 2022-07-15 深圳甲壳虫智能有限公司 Robot space pose determining method and device and robot
CN114750147B (en) * 2022-03-10 2023-11-24 深圳甲壳虫智能有限公司 Space pose determining method and device of robot and robot
CN116226298A (en) * 2023-05-08 2023-06-06 上海维智卓新信息科技有限公司 Automatic assessment method for map quality
CN117274527A (en) * 2023-08-24 2023-12-22 东方电气集团科学技术研究院有限公司 Method for constructing three-dimensional visualization model data set of generator equipment
CN117274527B (en) * 2023-08-24 2024-06-11 东方电气集团科学技术研究院有限公司 Method for constructing three-dimensional visualization model data set of generator equipment
CN117146829A (en) * 2023-10-30 2023-12-01 江苏云幕智造科技有限公司 Multi-pose humanoid robot environment navigation method based on binocular and three-dimensional point cloud

Also Published As

Publication number Publication date
CN112070770B (en) 2022-11-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 397, Tongcheng South Road, Baohe District, Hefei City, Anhui Province 230061

Applicant after: Super high voltage branch of State Grid Anhui Electric Power Co.,Ltd.

Address before: No. 397, Tongcheng South Road, Baohe District, Hefei City, Anhui Province 230061

Applicant before: STATE GRID ANHUI POWER SUPPLY COMPANY OVERHAUL BRANCH

GR01 Patent grant