CN113838051B - Robot closed-loop detection method based on three-dimensional point cloud - Google Patents


Info

Publication number
CN113838051B
CN113838051B (application CN202111410898.6A)
Authority
CN
China
Prior art keywords
point cloud
dimensional
point
frame
closed
Prior art date
Legal status
Active
Application number
CN202111410898.6A
Other languages
Chinese (zh)
Other versions
CN113838051A (en)
Inventor
范彦福
周光召
连文康
顾建军
Current Assignee
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date
Filing date
Publication date
Application filed by Zhejiang Lab filed Critical Zhejiang Lab
Priority to CN202111410898.6A priority Critical patent/CN113838051B/en
Publication of CN113838051A publication Critical patent/CN113838051A/en
Application granted granted Critical
Publication of CN113838051B publication Critical patent/CN113838051B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T7/0002 Inspection of images, e.g. flaw detection (under G06T7/00 Image analysis)
    • G06T3/067 Reshaping or unfolding 3D tree structures onto 2D planes (under G06T3/06 Topological mapping of higher dimensional structures onto lower dimensional surfaces)
    • G06T2207/10028 Range image; Depth image; 3D point clouds (under G06T2207/10 Image acquisition modality)


Abstract

The invention discloses a robot closed-loop detection method based on three-dimensional point clouds, which gives a robot the ability to recognize a scene it has visited before. Because three-dimensional point clouds are large in volume and densely distributed, the method first downsamples the point cloud with a voxel filter, reducing the number of points while keeping the original distribution characteristics unchanged and thereby improving the algorithm's runtime efficiency. The downsampled cloud is then processed with the PCA algorithm to determine the principal-axis direction, which overcomes the effect of the robot's rotation (heading) uncertainty on closed-loop detection. The PCA-processed cloud is next reduced in dimension: it is converted from the Cartesian coordinate system to a polar coordinate system and rasterized into a two-dimensional image. Finally, the Kdtree data structure accelerates the search for candidate frames, and, to reduce the influence of discretization and noise on the two-dimensional image, local column shifts are performed along the principal-axis direction. The method improves closed-loop detection efficiency while maintaining detection accuracy.

Description

Robot closed-loop detection method based on three-dimensional point cloud
Technical Field
The invention belongs to the field of autonomous navigation of robots, and particularly relates to a closed-loop detection method for three-dimensional point cloud of a robot.
Background
Closed-loop detection, also called loop detection, refers to a robot's ability to recognize a scene it has visited before so that the map can be closed; that is, during mapping and localization the robot matches the current frame's scan against the map already built. A successful loop detection reduces the robot's accumulated pose error, whereas a false detection severely degrades the subsequent back-end optimization. Compared with visual closed-loop detection, three-dimensional point-cloud closed-loop detection is more stable and is unaffected by factors such as season, time of day, and illumination. However, point clouds lack texture and color information and contain many points, so laser-based closed-loop detection has its own difficulties. Moreover, even when the robot returns to the same scene, the uncertainty of its heading rotation means that the coordinates of the observed three-dimensional point cloud in the laser sensor's frame differ between visits, which further complicates closed-loop detection.
Disclosure of Invention
The invention aims to provide a robot closed-loop detection method based on three-dimensional point clouds that addresses the shortcomings of the prior art. In the invention, the original point cloud is downsampled by voxel filtering, reducing the amount of point cloud to be processed while ensuring that the distribution characteristics remain unchanged. The sampled point cloud is then processed with the PCA algorithm to determine the principal-axis direction, and the cloud is transformed according to that direction, ensuring that the point-cloud distribution of the same scene is the same across visits. The dimension-reduced and rasterized point clouds are organized with the Kdtree data structure to accelerate the candidate-frame search. To avoid the influence of noise and discretization on the dimension-reduced data, local column shifts are performed along the principal-axis direction, and finally similarity detection is performed between the shifted range image and the candidate frames.
The purpose of the invention is realized by the following technical scheme: a robot closed loop detection method based on three-dimensional point cloud specifically comprises the following steps:
(1) acquiring the original input point cloud and downsampling it by voxel filtering to obtain the downsampled point cloud; the voxel filter determines a three-dimensional voxel grid from the extent of the input cloud and, within each voxel of the grid, selects one point to represent all the points of that voxel;
(2) processing the downsampled point cloud from step (1) with the PCA algorithm to determine the principal-axis direction, and transforming the downsampled cloud according to that direction to obtain the transformed point-cloud data;
(3) projecting the transformed point cloud, converting the projected points from the Cartesian coordinate system to a polar coordinate system, and rasterizing the coordinate-transformed points to obtain the point-cloud grid occupancy; generating the dimension-reduced two-dimensional point-cloud image and the ringKey vector from the grid occupancy;
(4) judging whether the accumulated number of frames exceeds a threshold; if not, repeating steps (1) to (3) until it does;
(5) generating a Kdtree data structure from the ringKey vectors of all point-cloud frames whose frame index differs from the current frame by more than the threshold;
(6) using the current frame's ringKey vector to search the Kdtree for candidate frames with a nearest-neighbour search algorithm;
(7) applying local column shifts to the current frame's two-dimensional image along the principal-axis direction, comparing the similarity of the shifted frame with the unshifted candidate frames, and, if the distance similarity is smaller than the closed-loop detection threshold, considering a closed loop to be formed.
Further, the step (2) includes the sub-steps of:
(2.1) processing the down-sampled point cloud obtained in step (1) with the PCA algorithm to determine the principal-axis direction;
(2.2) taking the down-sampled point cloud output by step (1) as the input sample set $X = \{x_1, x_2, \ldots, x_m\}$, where the down-sampled cloud has size $m \times 3$, $m$ is the number of points, and $i$ denotes the $i$-th point; centering all input samples:

$$x_i \leftarrow x_i - \frac{1}{m}\sum_{j=1}^{m} x_j$$

(2.3) calculating the covariance matrix $C$ of the input sample set:

$$C = \frac{1}{m} X^{T} X$$

(2.4) performing eigenvalue decomposition of the covariance matrix $C$:

$$C = U \Lambda U^{T}, \qquad \lambda_1 \ge \lambda_2 \ge \lambda_3$$

(2.5) extracting the eigenvectors $(w_1, w_2, w_3)$ corresponding to the three largest eigenvalues to form the new transformation matrix $W$:

$$W = (w_1, w_2, w_3)$$

(2.6) multiplying the down-sampled point cloud by the transformation matrix $W$ to obtain the new transformed three-dimensional point-cloud data $Z$:

$$Z = X W$$
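The substeps above follow the standard PCA recipe, so they can be sketched with NumPy as below. This is a minimal illustration of the technique, not the patent's actual implementation:

```python
import numpy as np

def align_to_principal_axes(points):
    """points: (m, 3) array; returns the PCA-aligned (m, 3) cloud Z = XW."""
    centred = points - points.mean(axis=0)      # step (2.2): centre the samples
    cov = centred.T @ centred / len(points)     # step (2.3): covariance matrix C
    eigvals, eigvecs = np.linalg.eigh(cov)      # step (2.4): eigendecomposition
    order = np.argsort(eigvals)[::-1]           # largest eigenvalue first
    w = eigvecs[:, order]                       # step (2.5): W = (w1, w2, w3)
    return centred @ w                          # step (2.6): transformed cloud
```

After alignment the first output axis carries the largest variance of the cloud, so two observations of the same scene end up in (essentially) the same frame regardless of sensor heading.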
further, the step (3) includes the sub-steps of:
(3.1) performing a two-dimensional projection of the transformed three-dimensional point cloud to obtain two-dimensional coordinate points;
(3.2) converting the Cartesian coordinates of each projected point from step (3.1) into polar coordinates, and rasterizing the coordinate-transformed point cloud to obtain the grid occupancy;
let a point in the cloud have Cartesian coordinates $(x, y)$ and polar coordinates $(\rho, \theta)$, and let each bin after rasterization have size $\Delta\rho \times \Delta\theta$; then:

$$\rho = \sqrt{x^2 + y^2}, \qquad \theta = \arctan\frac{y}{x}$$

and the coordinates of the point after rasterization are:

$$\left( \left\lfloor \frac{\rho}{\Delta\rho} \right\rfloor,\ \left\lfloor \frac{\theta}{\Delta\theta} \right\rfloor \right)$$

(3.3) assigning a value to each grid bin according to the point-cloud grid occupancy, generating the dimension-reduced two-dimensional point-cloud image;
(3.4) obtaining the ringKey vector from the dimension-reduced two-dimensional image.
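Steps (3.1)-(3.3) can be sketched as follows. This is a minimal NumPy illustration; the 80 m effective range and 40-ring by 120-sector grid are the sizes given later in the detailed description, and each bin stores the maximum z value of the points falling into it (0 if empty), as the description specifies:

```python
import numpy as np

def polar_rasterize(points, max_range=80.0, n_rings=40, n_sectors=120):
    """points: (m, 3) PCA-aligned cloud; returns an (n_rings, n_sectors) image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)            # radial coordinate
    theta = np.arctan2(y, x) + np.pi      # angular coordinate shifted to [0, 2*pi]
    keep = rho < max_range                # discard points beyond the range
    ring = (rho[keep] / (max_range / n_rings)).astype(int)
    sector = (theta[keep] / (2 * np.pi / n_sectors)).astype(int) % n_sectors
    image = np.zeros((n_rings, n_sectors))
    np.maximum.at(image, (ring, sector), z[keep])  # per-bin maximum of z
    return image
```

Note that initializing empty bins to 0 means a bin containing only negative-z points also reads 0; the description does not specify this corner case, so the sketch keeps the simplest choice.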
Further, the step (7) comprises the sub-steps of:
(7.1) the similarity detection is realized by averaging the per-column similarity detections; each column of each frame's image is a vector, and the similarity of corresponding columns of different images is measured by the cosine of the angle between the two vectors:

$$d(I^{q}, I^{c}) = \frac{1}{N} \sum_{j=1}^{N} \left( 1 - \frac{c_j^{q} \cdot c_j^{c}}{\lVert c_j^{q} \rVert \, \lVert c_j^{c} \rVert} \right)$$

where $d(I^{q}, I^{c})$ is the distance between the two frame images, $I^{q}$ is the current-frame image, $I^{c}$ is the candidate-frame image, $N$ is the number of image columns, $c_j^{q}$ is the $j$-th column of the current frame, $c_j^{c}$ is the $j$-th column of the candidate frame, and the superscripts $q$ and $c$ denote the current frame and the candidate frame respectively.
(7.2) selecting the minimum of the distance measures over the local column shifts as the similarity measure of the two point-cloud frames; if the minimum is smaller than the closed-loop detection threshold, the two frames are considered to form a closed loop:

$$D = \min_{s} d(I^{q}_{s}, I^{c})$$

where $D$ is the final distance between the current-frame two-dimensional image and the candidate-frame two-dimensional image, i.e. the distance-similarity measure between the current-frame and candidate-frame point clouds, and $I^{q}_{s}$ is the current-frame image after a local shift of $s$ columns.
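The two distance measures above can be sketched with NumPy as follows. The shift range `max_shift` is an illustrative assumption (the patent only says the shift is local); columns with zero norm are treated as maximally dissimilar:

```python
import numpy as np

def image_distance(img_q, img_c):
    """Step (7.1): mean (1 - cosine) over corresponding columns of two images."""
    num = (img_q * img_c).sum(axis=0)
    den = np.linalg.norm(img_q, axis=0) * np.linalg.norm(img_c, axis=0)
    cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return 1.0 - cos.mean()

def min_shift_distance(img_q, img_c, max_shift=3):
    """Step (7.2): minimum distance over local column shifts of the current frame."""
    return min(image_distance(np.roll(img_q, s, axis=1), img_c)
               for s in range(-max_shift, max_shift + 1))
```

Two identical images give distance 0, and an image shifted by up to `max_shift` columns is recovered by the minimum over shifts.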
The invention has the following beneficial effects:
(1) The input original point cloud is downsampled by voxel filtering, which reduces the number of points as much as possible without changing the point-cloud distribution characteristics, saving computing resources and improving the algorithm's runtime efficiency.
(2) The principal-axis direction is determined from the point-cloud distribution characteristics via the PCA algorithm, and the downsampled cloud is transformed accordingly, which overcomes the effect of changes in the robot's heading on closed-loop detection.
(3) The transformed point cloud is converted from the Cartesian coordinate system to a polar coordinate system and rasterized there, with each grid taking the maximum z value among the coordinates of all points falling into it. This generates a two-dimensional image for each point-cloud frame and makes subsequent real-time similarity detection possible.
(4) A ringKey vector is extracted from the approximate point-cloud distribution at different ranges in each frame's two-dimensional image; the ringKeys of historical frames sufficiently far from the current frame are compared to find candidate frames at small distance.
(5) To reduce the influence of sensor noise and rasterization on closed-loop detection, the current frame's two-dimensional image undergoes local column shifts along the principal-axis direction and is then compared with the candidate frames' two-dimensional images to find candidates that satisfy the closed-loop condition, i.e. form a closed-loop relation, improving the accuracy of the closed-loop detection algorithm.
Drawings
FIG. 1 is a general flowchart of a robot closed-loop detection method based on three-dimensional point cloud;
fig. 2 is a distribution comparison diagram before and after filtering of point cloud voxels, wherein (a) in fig. 2 is a point cloud amount schematic diagram before filtering, and (B) in fig. 2 is a point cloud amount schematic diagram after filtering;
FIG. 3 is a graph of laser observation data for the same scene at different times; fig. 3 (a) is a laser observation data map of a first frame point cloud, and fig. 3 (B) is a laser observation data map of a second frame point cloud;
fig. 4 is a point cloud distribution diagram after PCA processing is performed on laser observation data of the same scene at different times, wherein (a) in fig. 4 is a first frame point cloud schematic diagram after PCA processing, and (B) in fig. 4 is a second frame point cloud schematic diagram after PCA processing;
fig. 5 (a) is a point cloud laser observation data map processed by the PCA algorithm at a certain time, and fig. 5 (B) is a corresponding rasterized point cloud dimension reduction two-dimensional image;
fig. 6 is a comparison graph of results of the ScanContext closed-loop detection algorithm and the method of the present invention, wherein (a) in fig. 6 is a graph of the ScanContext closed-loop detection algorithm PR, and (B) in fig. 6 is a graph of the closed-loop detection algorithm PR of the present invention.
Detailed Description
Fig. 1 shows a flowchart of a robot closed-loop detection method based on three-dimensional point cloud, which includes the following steps:
(1) First, the original input point cloud is acquired and downsampled by voxel filtering to speed up point-cloud processing and improve the algorithm's runtime efficiency. Voxel filtering determines a three-dimensional voxel grid from the extent of the input cloud, and within each voxel of the grid one point is selected to represent all points in that voxel. Provided each voxel is small, this reduces the number of points while preserving the point-cloud distribution characteristics. As shown in fig. 2, for the point cloud observed by the three-dimensional lidar at a given moment, fig. 2 (A) and fig. 2 (B) show the distribution and number of points before and after filtering respectively: voxel filtering does not change the distribution of the points, and the reduced point count improves the efficiency of the subsequent algorithm.
Each voxel in the three-dimensional voxel grid measures 0.3m x 0.3m, and all points within a voxel are approximated by the centre of gravity of the point set in that voxel. To better compare the point cloud before and after filtering, fig. 2 shows the dimension-reduced three-dimensional point cloud, i.e. the z value of every point coordinate is set to 0 m.
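The voxel filter described above (bucket the points into 0.3 m voxels, replace each voxel by the centre of gravity of its points) can be sketched as below; a minimal NumPy illustration under those assumptions, not the patent's implementation:

```python
import numpy as np

def voxel_downsample(points, voxel=0.3):
    """points: (m, 3) array; returns one centre-of-gravity point per occupied voxel."""
    keys = np.floor(points / voxel).astype(int)   # integer voxel index of each point
    sums = {}
    for key, p in zip(map(tuple, keys), points):
        total, count = sums.get(key, (np.zeros(3), 0))
        sums[key] = (total + p, count + 1)        # accumulate per-voxel sum and count
    return np.array([total / count for total, count in sums.values()])
```

Because only one representative survives per voxel, the output size is bounded by the number of occupied voxels regardless of how dense the input cloud is.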
(2) The downsampled point cloud from step (1) is processed with the PCA algorithm to determine the principal-axis direction, and the downsampled cloud is transformed according to that direction to obtain the transformed point-cloud data. This comprises the following substeps:
(2.1) to speed up three-dimensional point-cloud closed-loop detection, the downsampled point cloud from step (1) is processed with the PCA algorithm to determine the principal-axis direction;
the pca (principal Component analysis) algorithm, which is a principal Component analysis method, is one of the most widely used data dimension reduction algorithms. The principal idea of PCA is to map dimensional features onto k dimensions, which are completely new orthogonal features, also called principal components, and to reconstruct k-dimensional features on the basis of the original n-dimensional features. The task of PCA is to sequentially find a set of mutually orthogonal axes from the original space, the selection of new axes being strongly dependent on the data itself. The first new coordinate axis is selected to be the direction with the largest square difference in the original data, the second new coordinate axis is selected to be the plane which is orthogonal to the first coordinate axis and enables the square difference to be the largest, and the third axis is the plane which is orthogonal to the 1 st axis and the 2 nd axis and enables the square difference to be the largest. By analogy, n such coordinate axes can be obtained. The method uses the PCA algorithm to determine the main axis of the point cloud mainly according to the point cloud distribution characteristics and independently from the observation attitude of the sensor. And under the condition that the new point cloud main shaft is determined, converting the original point cloud coordinates. Therefore, the final coordinate value of the point cloud is only related to the point cloud distribution characteristics and is not related to the observation posture of the sensor, and the purpose of detecting the rotation invariance of the three-dimensional point cloud in a closed loop mode is achieved.
(2.2) taking the down-sampled point cloud output by step (1) as the input sample set $X = \{x_1, x_2, \ldots, x_m\}$, where the down-sampled cloud has size $m \times 3$ and $m$ is the number of points; centering (de-meaning) all input samples:

$$x_i \leftarrow x_i - \frac{1}{m}\sum_{j=1}^{m} x_j$$

(2.3) calculating the covariance matrix $C$ of the input sample set:

$$C = \frac{1}{m} X^{T} X$$

(2.4) performing eigenvalue decomposition of the covariance matrix $C$:

$$C = U \Lambda U^{T}, \qquad \lambda_1 \ge \lambda_2 \ge \lambda_3$$

(2.5) extracting the eigenvectors $(w_1, w_2, w_3)$ corresponding to the three largest eigenvalues to form the new transformation matrix $W$:

$$W = (w_1, w_2, w_3)$$

(2.6) multiplying the down-sampled point cloud by the transformation matrix $W$ to obtain the new transformed three-dimensional point-cloud data $Z$:

$$Z = X W$$
FIG. 3 shows observations of the same scene under different robot poses: the robot headings of the two observations are opposite, and the robot poses at the two moments differ. Fig. 4 shows the point clouds obtained by applying the PCA algorithm to each observation, determining the principal-axis direction, and transforming each cloud accordingly. The transformed clouds are independent of the robot's observation heading and depend only on the point-cloud distribution characteristics of the surrounding environment; since the two observed frames come from the same scene at different moments, their distribution characteristics are essentially the same, and the point-cloud coordinates obtained after the PCA transformation are essentially the same for both frames.
(3) The transformed three-dimensional point cloud is projected, the projected points are converted from the Cartesian coordinate system to a polar coordinate system, and the coordinate-transformed points are rasterized.
(3.1) performing a two-dimensional projection of the transformed three-dimensional point cloud to obtain two-dimensional coordinate points;
(3.2) converting the Cartesian coordinates of each projected point from step (3.1) into polar coordinates, and rasterizing the coordinate-transformed point cloud according to the polar coordinates of each point.
Let a point in the cloud have Cartesian coordinates $(x, y)$ and polar coordinates $(\rho, \theta)$, where $\rho$ is the radial coordinate and $\theta$ the angular coordinate, and let each bin after rasterization have size $\Delta\rho \times \Delta\theta$; then:

$$\rho = \sqrt{x^2 + y^2}, \qquad \theta = \arctan\frac{y}{x}$$

and the coordinates of the point after rasterization are:

$$\left( \left\lfloor \frac{\rho}{\Delta\rho} \right\rfloor,\ \left\lfloor \frac{\theta}{\Delta\theta} \right\rfloor \right)$$

This yields the point-cloud grid occupancy: a grid bin is occupied if at least one point falls into it, and unoccupied otherwise.
(3.3) After the current frame's point cloud has been rasterized, each bin is assigned a value according to the grid occupancy so as to generate the dimension-reduced two-dimensional image: each bin takes the maximum z value among the coordinates of all points falling into it, and 0 if no point falls into it.
And (3.4) obtaining a vector ringKey according to the point cloud dimension reduction two-dimensional image.
As shown in fig. 5, (A) is the point-cloud laser observation data at a given moment after processing by the PCA algorithm, and (B) is the corresponding rasterized, dimension-reduced two-dimensional image: one frame of point cloud is coordinate-transformed, rasterized, and finally reduced to a two-dimensional image. The effective range is 80 m and the rasterized two-dimensional image has size 120 x 40, i.e. $\Delta\theta = 360°/120 = 3°$ and $\Delta\rho = 80\,\mathrm{m}/40 = 2\,\mathrm{m}$.
(4) The accumulated number of frames is obtained from the frame count of all three-dimensional point clouds, a threshold is set by the user, and whether the accumulated frame count exceeds the threshold is judged; if not, steps (1) to (3) are repeated until it does.
(5) A Kdtree data structure is generated from the ringKey vectors of all point-cloud frames whose frame index differs from the current frame by more than the threshold.
(6) The current frame uses its ringKey vector to search the Kdtree for its nearest-neighbour candidate frames.
To speed up closed-loop detection, historical frames that clearly cannot form a closed loop are discarded, leaving frames that satisfy a basic condition, called candidate frames, for further similarity detection. Candidate frames are screened mainly with the ringKey vector and the Kdtree data structure. When the two-dimensional image is generated from the three-dimensional point cloud, the number of non-empty bins in each row of the image is counted, i.e. the number of bins containing laser points. In this way each two-dimensional image is abstracted into a k-dimensional vector, where k is the number of rows of the image; this vector is the ringKey. As the number of point-cloud frames grows, the accumulating ringKeys dynamically generate a Kdtree that accelerates the candidate search. Candidate frames are detected by using the current frame's ringKey vector to find the nearest historical frames in the Kdtree with the Knn nearest-neighbour search algorithm.
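The candidate screening above can be sketched as follows: the ringKey of an image is its per-row count of non-empty bins, and the candidates are the historical frames with the nearest ringKeys. A brute-force nearest-neighbour search stands in here for the method's Kdtree/Knn search; this is an illustrative NumPy sketch, not the patent's implementation:

```python
import numpy as np

def ring_key(image):
    """k-dimensional descriptor: occupied-bin count per ring (row) of the image."""
    return np.count_nonzero(image, axis=1).astype(float)

def find_candidates(query_key, history_keys, k=3):
    """Indices of the k historical ringKeys closest to the query key.

    In the method itself, recent frames are already excluded by the
    frame-index threshold before this search runs.
    """
    dists = np.linalg.norm(np.asarray(history_keys) - query_key, axis=1)
    return np.argsort(dists)[:k]
```

Because the ringKey sums over each ring, it is unaffected by column (rotation) shifts of the image, which is what makes it a safe coarse filter before the column-wise similarity check.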
(7) Local column shifts are applied to the current frame's two-dimensional image along the principal-axis direction, the shifted frame is compared for similarity with the unshifted candidate frames, and, if the distance similarity is smaller than the closed-loop detection threshold, a closed loop is considered to be formed.
Once the two-dimensional image of each frame of three-dimensional point cloud has been obtained, closed-loop detection based on three-dimensional point clouds reduces to similarity detection between two-dimensional images. Closed-loop detection can thus be summarized as detecting the similarity between the current frame and all historical frames at a certain distance from it.
At this point, the three-dimensional point cloud behind each two-dimensional image has been transformed along the principal-axis direction determined by the PCA algorithm, so in the same scene the point-cloud distribution characteristics are essentially the same regardless of the robot's heading, and the two-dimensional images are highly similar. However, even in the same environment the laser sensor's observations are affected by noise to some degree, and the point cloud is affected by discretization in the polar coordinate system, so two-dimensional images acquired from the same environment at different moments may be slightly translated along the column direction. The final closed-loop detection is therefore a similarity detection between the candidate-frame image and the current-frame image under local column shifts about the principal axis.
The method specifically comprises the following substeps:
(7.1) Similarity detection between images is realized by averaging the per-column similarity detections. Each column of an image is a vector, and the similarity of corresponding columns of different images is measured by the cosine of the angle between the two vectors:

$$d(I^{q}, I^{c}) = \frac{1}{N} \sum_{j=1}^{N} \left( 1 - \frac{c_j^{q} \cdot c_j^{c}}{\lVert c_j^{q} \rVert \, \lVert c_j^{c} \rVert} \right)$$

where $d(I^{q}, I^{c})$ is the distance between the two frame images, $I^{q}$ is the current-frame image, $I^{c}$ is the candidate-frame image, $N$ is the number of image columns, $c_j^{q}$ is the $j$-th column of the current frame, $c_j^{c}$ is the $j$-th column of the candidate frame, and the superscripts $q$ and $c$ denote the current frame and the candidate frame respectively.
(7.2) Because the three-dimensional point clouds generating the two-dimensional images have been transformed by the PCA algorithm, only the candidate-frame image and the locally column-shifted current-frame image need to be compared for similarity. The minimum of the distance measures is then selected as the similarity measure of the two point-cloud frames; if the minimum is smaller than the closed-loop detection threshold, the two frames are considered to form a closed loop:

$$D = \min_{s} d(I^{q}_{s}, I^{c})$$

where $D$ is the final distance between the current-frame two-dimensional image and the candidate-frame two-dimensional image, i.e. the distance-similarity measure between the current-frame and candidate-frame point clouds, and $I^{q}_{s}$ is the current-frame image after a local shift of $s$ columns.
The closed-loop detection algorithm provided by the invention was tested on sequence 00 of the KITTI dataset and compared with the ScanContext closed-loop detection algorithm, with the parameters shared by the two algorithms configured identically. Fig. 6 (A) is the PR curve of the ScanContext closed-loop detection result; the whole algorithm ran in 1388.38 s. Fig. 6 (B) is the PR curve of the proposed closed-loop detection algorithm, with a running time of 1057.35 s. The comparison shows that the proposed algorithm improves runtime efficiency by about 23% while achieving the same effectiveness as the currently best-performing closed-loop detection algorithm.
In conclusion, the invention down-samples the input original point cloud through voxel filtering, reducing the number of points as much as possible while preserving the distribution characteristics of the point cloud, which saves computing resources and improves the running efficiency of the algorithm. The invention determines the principal-axis direction from the point cloud distribution characteristics through the PCA algorithm and transforms the down-sampled point cloud accordingly, thereby overcoming the influence of changes in the robot's heading on closed-loop detection. The invention converts the transformed point cloud from a Cartesian coordinate system to a polar coordinate system and rasterizes the point cloud in the polar coordinate system; each bin value is set to the maximum z coordinate among all points falling into that bin. A two-dimensional image of each point cloud frame is thus generated, making subsequent real-time similarity detection possible. The method extracts a ringKey vector from the approximate point cloud distribution at different distances in each frame's two-dimensional image, compares the ringKey of the current frame with those of historical frames sufficiently far from it, and finds candidate frames close to the current frame; the search efficiency can be greatly improved by a Kdtree and a nearest-neighbour search algorithm. To reduce the influence of sensor noise and rasterization on closed-loop detection, the method performs local column shifts of the current-frame two-dimensional image along the principal-axis direction and then compares it with the two-dimensional image of each candidate frame to find candidate frames satisfying the closed-loop condition, i.e. forming a closed-loop relation, thereby improving the accuracy of the closed-loop detection algorithm.
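The ringKey candidate search summarized above can be sketched as follows, using SciPy's `cKDTree` as a stand-in for the Kdtree; the function name, the key layout, and the `exclude_recent` parameter are assumptions of this sketch, not part of the disclosure:

```python
import numpy as np
from scipy.spatial import cKDTree

def find_candidate(ring_keys, query_key, exclude_recent=50):
    """Search the ringKeys of all sufficiently old frames for the nearest
    neighbour of the current frame's ringKey; return its frame index."""
    if len(ring_keys) <= exclude_recent:
        return None                       # not enough accumulated frames yet
    old = np.asarray(ring_keys[:-exclude_recent])  # frames far from current
    tree = cKDTree(old)                   # Kdtree over historical ringKeys
    _, idx = tree.query(query_key, k=1)   # nearest-neighbour search
    return int(idx)
```

The returned index identifies the candidate frame whose two-dimensional image is then compared column-wise with the current frame.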

Claims (4)

1. A robot closed loop detection method based on three-dimensional point cloud is characterized by comprising the following steps:
(1) acquiring an original input point cloud, and down-sampling the original point cloud by voxel filtering to obtain a down-sampled point cloud; the voxel filtering determines a three-dimensional voxel grid according to the range of the input original point cloud and selects one point within each voxel of the three-dimensional voxel grid as the representative of all points in that voxel;
(2) processing the down-sampled point cloud obtained in step (1) with a PCA algorithm to determine the principal-axis direction, and transforming the down-sampled point cloud according to the principal-axis direction to obtain transformed point cloud data;
(3) projecting the transformed point cloud, converting the projected point cloud from a Cartesian coordinate system to a polar coordinate system, and rasterizing the coordinate-converted point cloud to obtain the point cloud grid occupancy; generating a point cloud dimension-reduced two-dimensional image and a ringKey vector according to the point cloud grid occupancy;
(4) judging whether the accumulated frame number is greater than a threshold; if the accumulated frame number is less than the threshold, repeating steps (1) to (3) until the accumulated frame number is greater than the threshold;
(5) generating a data structure Kdtree from the ringKey vectors of all point cloud frames whose frame-number distance from the current frame is greater than the threshold;
(6) searching for a candidate frame of the current frame in the data structure Kdtree by a nearest-neighbour search algorithm, using the ringKey vector of the current frame;
(7) performing local column shifts of the current-frame two-dimensional image along the principal-axis direction, comparing the similarity of the shifted frame with the unshifted candidate frame, and determining that a closed loop is formed if the distance similarity is smaller than the closed-loop detection threshold.
2. The method for detecting the closed loop of the robot based on the three-dimensional point cloud of claim 1, wherein the step (2) comprises the following sub-steps:
(2.1) processing the down-sampled point cloud obtained in step (1) with a PCA algorithm to determine the principal-axis direction;
(2.2) taking the down-sampled point cloud output in step (1) as the input sample set X = (x_1, x_2, ..., x_m)^T, wherein the down-sampled point cloud has size m x 3, m is the number of points, and i denotes the i-th point; all input samples are centered:

x_i ← x_i − (1/m) Σ_{j=1}^{m} x_j
(2.3) calculating the covariance matrix Cov(X) of the centered input sample set:

Cov(X) = (1/m) X^T X
(2.4) performing eigenvalue decomposition on the covariance matrix Cov(X):

Cov(X) = Q Λ Q^{-1}

(2.5) extracting the eigenvectors (Q^{(1)}, Q^{(2)}, Q^{(3)}) corresponding to the three largest eigenvalues to form a new transformation matrix W:

W = [Q^{(1)}, Q^{(2)}, Q^{(3)}]
(2.6) multiplying the down-sampled point cloud by a transformation matrix W to obtain new transformed three-dimensional point cloud data X' which is:
X′=XW。
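As a non-authoritative numerical sketch of claim 2 (steps (2.2)-(2.6)), assuming NumPy; `pca_align` is an illustrative name, not part of the disclosure:

```python
import numpy as np

def pca_align(points):
    """Align an m x 3 point cloud with its principal axes:
    centering (2.2), covariance (2.3), eigendecomposition (2.4),
    eigenvectors of the three largest eigenvalues as W (2.5), X' = XW (2.6)."""
    X = points - points.mean(axis=0)       # (2.2) centering
    cov = (X.T @ X) / X.shape[0]           # (2.3) 3x3 covariance matrix
    eigvals, Q = np.linalg.eigh(cov)       # (2.4) Cov(X) = Q Lambda Q^-1
    W = Q[:, np.argsort(eigvals)[::-1]]    # (2.5) largest eigenvalue first
    return X @ W                           # (2.6) transformed cloud X'
```

After the transform, the first coordinate axis carries the largest variance of the cloud, so a change of robot heading no longer changes the descriptor.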
3. the method for detecting the closed loop of the robot based on the three-dimensional point cloud of claim 1, wherein the step (3) comprises the following sub-steps:
(3.1) performing two-dimensional projection on the transformed three-dimensional point cloud to obtain two-dimensional coordinate points;
(3.2) converting the Cartesian coordinate values of each coordinate point obtained by the two-dimensional projection in step (3.1) into polar coordinate values, and rasterizing the coordinate-transformed point cloud to obtain the grid occupancy;
letting a point p in the point cloud have Cartesian coordinates (x, y, z) and polar coordinates p_r = [r, θ], and letting each bin after rasterization have size [gap, angle], then:

r = √(x² + y²)

θ = arctan(y/x)

and the coordinates of the point after rasterization are:

p_s = [r/gap, θ/angle]
(3.3) assigning a value to each grid bin according to the point cloud grid occupancy to generate the point cloud dimension-reduced two-dimensional image;
(3.4) obtaining the ringKey vector from the point cloud dimension-reduced two-dimensional image.
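An illustrative sketch of claim 3, assuming NumPy; the grid resolution, maximum range, and the max-z bin value (per the description) are assumed parameters of the sketch:

```python
import numpy as np

def polar_descriptor(points, num_rings=20, num_sectors=60, max_range=80.0):
    """(3.1)-(3.3): project to 2-D, convert to polar coordinates, rasterize,
    and set each bin to the maximum z of the points falling into it."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                    # r = sqrt(x^2 + y^2)
    theta = np.arctan2(y, x) + np.pi      # shift theta into [0, 2*pi)
    gap = max_range / num_rings           # radial bin size
    angle = 2.0 * np.pi / num_sectors     # angular bin size
    img = np.zeros((num_rings, num_sectors))
    keep = r < max_range
    rings = (r[keep] / gap).astype(int)
    sectors = np.minimum((theta[keep] / angle).astype(int), num_sectors - 1)
    for i, j, h in zip(rings, sectors, z[keep]):
        img[i, j] = max(img[i, j], h)     # bin value = max z in the bin
    return img

def ring_key(img):
    """(3.4): ringKey vector, one occupancy ratio per range ring (row)."""
    return (img > 0).mean(axis=1)
```

Each row of the image is one range ring, so the ringKey summarizes how much of each ring is occupied, which is what the Kdtree search compares.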
4. The method for closed-loop detection of a robot based on three-dimensional point cloud according to claim 1, wherein the step (7) comprises the following sub-steps:
(7.1) realizing similarity detection by averaging the similarity detections of the individual columns; each column of each frame image is a vector, and the similarity of corresponding columns of different images is measured by the cosine of the angle between the two column vectors:

d(I_q, I_c) = (1/N_s) Σ_{j=1}^{N_s} (1 − (I_q^j · I_c^j) / (‖I_q^j‖ ‖I_c^j‖))

wherein: d(I_q, I_c) is the distance between the two images, I_q is the current-frame image, I_c is the candidate-frame image, N_s is the number of columns of the image, I_q^j is the j-th column of the current frame, I_c^j is the j-th column of the candidate frame, q is the current frame, and c is the candidate frame;
(7.2) selecting the minimum of the distance measures as the measure of the similarity of the two point-cloud frames, and if the minimum is smaller than the closed-loop detection threshold, considering that the two frames form a closed loop:

D(I_q, I_c) = min_n d(I_q^n, I_c)

wherein D(I_q, I_c) is the final distance between the current-frame two-dimensional image and the candidate-frame two-dimensional image, i.e. the distance similarity between the current-frame point cloud and the candidate-frame point cloud.
CN202111410898.6A 2021-11-25 2021-11-25 Robot closed-loop detection method based on three-dimensional point cloud Active CN113838051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111410898.6A CN113838051B (en) 2021-11-25 2021-11-25 Robot closed-loop detection method based on three-dimensional point cloud

Publications (2)

Publication Number Publication Date
CN113838051A CN113838051A (en) 2021-12-24
CN113838051B true CN113838051B (en) 2022-04-01

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242873A (en) * 2018-08-22 2019-01-18 浙江大学 A method of 360 degree of real-time three-dimensionals are carried out to object based on consumer level color depth camera and are rebuild
US10634793B1 (en) * 2018-12-24 2020-04-28 Automotive Research & Testing Center Lidar detection device of detecting close-distance obstacle and method thereof
CN111929699A (en) * 2020-07-21 2020-11-13 北京建筑大学 Laser radar inertial navigation odometer considering dynamic obstacles and mapping method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pose-Graph SLAM Using Forward-Looking Sonar; Jie Li; IEEE Robotics and Automation Letters; 20180227; pp. 2330-2337 *
Research on Simultaneous Localization and Mapping of Underwater Robots Based on Random Finite Sets; Fan Yanfu; China Master's Theses Full-text Database (Engineering Science and Technology II); 20210415; pp. 1-92 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant