CN115773747A - High-precision map generation method, device, equipment and storage medium - Google Patents

High-precision map generation method, device, equipment and storage medium Download PDF

Info

Publication number
CN115773747A
CN115773747A (application CN202211566846.2A)
Authority
CN
China
Prior art keywords
point cloud
frame
current
laser point
estimation value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211566846.2A
Other languages
Chinese (zh)
Inventor
沈嘉成
巫远
黄永明
夏井新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Network Communication and Security Zijinshan Laboratory
Original Assignee
Network Communication and Security Zijinshan Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Network Communication and Security Zijinshan Laboratory filed Critical Network Communication and Security Zijinshan Laboratory
Priority to CN202211566846.2A priority Critical patent/CN115773747A/en
Publication of CN115773747A publication Critical patent/CN115773747A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a high-precision map generation method, device, equipment and storage medium. A first pose estimate is computed from the IMU data between the most recent key frame and the current laser point cloud frame, together with the back-end-optimized pose of the previous laser point cloud frame. Using the first pose estimate and the IMU data recorded while the current laser point cloud was scanned, the current laser point cloud is de-distorted and feature points are extracted. A current descriptor is obtained from the de-distorted current laser point cloud frame and used to judge whether a loop is detected; if a loop is detected, a second pose estimate is computed. A third pose estimate is obtained from the first pose estimate and the feature points. The second and third pose estimates are input into a factor graph to obtain the back-end-optimized pose estimate of the current frame. Finally, the de-distorted current laser point cloud frame is stitched into the historical point cloud map according to the back-end-optimized pose estimate to generate a static point cloud map, improving map precision.

Description

High-precision map generation method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of computer vision image processing, in particular to a high-precision map generation method, device, equipment and storage medium.
Background
With the development of artificial intelligence technology, mapping and localization by fusing LiDAR, camera and GPS multi-sensor data has become an important basis for the automatic navigation of intelligent unmanned vehicles. Common navigation maps are generally road-level maps, with limited map information and meter-level precision. A high-precision map can be accurate to the 10-20 cm level, contains map information such as traffic signs, lane center lines and zebra crossings, and can provide a more comprehensive data basis for automatic driving.
Existing LiDAR-based high-precision mapping suffers from map drift when long, self-similar scenes such as tunnels are encountered, has poor robustness, and is easily affected by the errors of a single sensor. In a typical high-precision mapping method, a device carrying a monocular/binocular/depth camera or a 2D/3D LiDAR collects data, adjacent data frames are matched, and an ICP (Iterative Closest Point) or NDT (Normal Distributions Transform) matching algorithm is usually used to compute the trajectory traversed by the device (the trajectory includes the device's three-dimensional position and orientation, together referred to as its pose; the device is the platform integrating the sensors, camera or radar, equivalent to the autonomous vehicle body). The trajectory obtained by matching adjacent frames is generally referred to as the front-end odometry. Point cloud data obtained by the camera or LiDAR are stitched via the front-end odometry to obtain a local point cloud map. Because front-end odometry is computed from adjacent frame data, errors are inevitable and accumulate continuously as the device advances. To prevent map offset caused by excessive accumulated error, joint optimization of all point cloud data in the local map is required; IMU (Inertial Measurement Unit) data (acceleration and angular velocity), RTK (Real-Time Kinematic) data (a real-time dynamic measurement technique whose sensor outputs longitude/latitude/altitude at 1-2 cm precision), loop closure detections and the like may also be added to assist the optimization. The joint optimization part usually runs as the back-end optimization thread.
Finally, the corresponding point cloud data are stitched along the optimized trajectory to obtain the complete map; this process is called the map processing thread. Furthermore, the loop closure detection mentioned above is a common means of suppressing accumulated error. Specifically, when the device passes through a place it has already visited during data collection, loop detection judges the similarity between the current frame's point cloud data and the historical map data; if the similarity exceeds a threshold, the device is judged to have returned to a historical trajectory point. An adjacent-frame matching strategy is then started, and the pose transformation from the current frame to the historical map is computed via ICP or another matching algorithm to eliminate accumulated error. A high-precision mapping method therefore generally includes threads such as front-end odometry, back-end optimization, loop detection and map processing.
In front-end odometry, a pose estimate is obtained by matching adjacent frame data. If the ICP algorithm is adopted, all points between adjacent frames are matched, which easily causes mismatches, reduces pose estimation precision (in turn degrading map precision), and incurs a large amount of computation. If the NDT algorithm is adopted and a voxel map is stored, loop detection cannot be performed, which greatly degrades long-distance mapping precision. Meanwhile, most algorithms directly adopt ICP as the similar-frame matching strategy in the loop detection thread, giving low loop detection precision and poor robustness; for example, no loop can be closed in scenes where the LiDAR scanning angles differ substantially.
Disclosure of Invention
The invention discloses a high-precision map generation method, device, equipment and storage medium, solving the problems that existing map generation methods have low precision and poor robustness, and that inaccurate pose estimation further degrades map generation precision.
In a first aspect of the embodiments of the present invention, a high-precision map generation method is provided, including:
acquiring current laser point cloud frame data and IMU data;
if the current laser point cloud frame data is the first frame data acquired by the laser radar, taking the current laser point cloud frame data as the first key frame, generating a point cloud map from the first key frame, and re-executing these steps; otherwise, estimating a first pose estimate of the current laser point cloud frame from the IMU data between the latest key frame and the current laser point cloud frame and the back-end-optimized pose of the previous laser point cloud frame;
according to the first pose estimation value and IMU data corresponding to the time for scanning the current laser point cloud, carrying out distortion removal on the current laser point cloud to obtain a current laser point cloud frame after distortion removal, and extracting feature points from the current laser point cloud frame;
obtaining a current descriptor from the de-distorted current laser point cloud frame, judging from the current descriptor whether a loop is detected, and if a loop is detected, estimating a second pose estimate of the current laser point cloud frame; otherwise, if no loop is detected, setting the second pose estimate to null;
obtaining a third pose estimate from the first pose estimate and the feature points; inputting the second pose estimate and the third pose estimate into a factor graph to obtain the back-end-optimized pose estimate of the current frame;
judging whether the current laser point cloud frame can serve as a key frame; if so, stitching the de-distorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimate of the current frame, removing dynamic objects, and generating a static point cloud map; if not, re-executing the above steps until map generation stops.
In a second aspect of an embodiment of the present invention, there is provided a high-precision map generating apparatus, including:
the acquisition module is used for acquiring current laser point cloud frame data and IMU data;
the IMU preprocessing module, used for taking the current laser point cloud frame data as the first key frame if it is the first frame data acquired by the laser radar, sending the first key frame to the map generation module and returning to the acquisition module; otherwise, estimating a first pose estimate of the current laser point cloud frame from the IMU data between the latest key frame and the current laser point cloud frame and the back-end-optimized pose of the previous laser point cloud frame fed back by the back-end optimization module;
the front-end odometry module, used for de-distorting the current laser point cloud according to the first pose estimate and the IMU data corresponding to the scanning time of the current laser point cloud, obtaining the de-distorted current laser point cloud frame, and extracting feature points from it;
the loop detection module, used for obtaining a current descriptor from the de-distorted current laser point cloud frame, judging from the current descriptor whether a loop is detected, and if a loop is detected, estimating a second pose estimate of the current laser point cloud frame; otherwise, if no loop is detected, setting the second pose estimate to null;
the back-end optimization module, used for obtaining a third pose estimate from the first pose estimate and the feature points, and inputting the second pose estimate and the third pose estimate into a factor graph to obtain the back-end-optimized pose estimate of the current frame;
the map generation module, used for generating a point cloud map from the first key frame if the current frame is the first frame; otherwise judging whether the current frame can serve as a key frame, and if so, stitching the de-distorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimate of the current frame and removing dynamic objects to generate a static point cloud map; if not, returning to the acquisition module until map generation stops.
In a third aspect of the embodiments of the present invention, there is provided a map generating apparatus, including: a memory, a processor, and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the high-precision map generation method according to any one of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides a readable storage medium, in which a computer program is stored, where the computer program is used to implement the high-precision map generation method according to any one of the embodiments of the present invention when the computer program is executed by a processor.
Beneficial effects: a first pose estimate of the current laser point cloud frame is obtained by estimation from the IMU data between the current laser point cloud frame and the key frame closest to it in time, together with the back-end-optimized pose of the previous laser point cloud frame; according to the first pose estimate and the IMU data corresponding to the radar scanning time of the current laser point cloud, the current laser point cloud is de-distorted to obtain the de-distorted current laser point cloud frame, and feature points are extracted from it. Because the first pose is estimated using the back-end-optimized pose, the first pose estimate is more accurate, and so is the de-distorted current laser point cloud frame;
a current descriptor is obtained from the de-distorted current laser point cloud frame; whether a loop is detected is judged from the current descriptor; if a loop is detected, a second pose estimate of the current laser point cloud frame is obtained by estimation, otherwise the second pose estimate is set to null. By constructing a descriptor to detect loops, instead of detecting loops from all points in the frame, the method reduces the false detection rate, improves loop detection precision, has good robustness, and improves the precision of the subsequently generated point cloud map.
A third pose estimate is obtained from the first pose estimate and the feature points; the second and third pose estimates are input into a factor graph to obtain the back-end-optimized pose of the current laser point cloud frame; whether the current laser point cloud frame can serve as a key frame is then judged, and if so, the de-distorted current laser point cloud is stitched into the historical point cloud map according to the back-end-optimized pose, with dynamic objects removed, to generate a static point cloud map.
According to the method, the IMU data and the back-end-optimized pose are received and used jointly for the first pose estimation of the current laser point cloud, so the resulting first pose estimate is more accurate; the motion-distorted current laser point cloud is then corrected and feature points are extracted, so the corrected pose and the extracted feature points are more accurate; and through the descriptor, mismatches are reduced and pose estimation precision is improved, further improving map generation precision with a small amount of computation.
The high-precision map generation device, equipment and storage medium provided by the invention also have the beneficial effects.
Drawings
Fig. 1 is a schematic diagram of a high-precision map generation method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a high-precision map generating device according to an embodiment of the present invention.
Detailed Description
The high-precision map generation method, apparatus, device and storage medium of the present invention will be further described and explained with reference to the accompanying drawings and embodiments.
The key frame is a laser point cloud frame which is used for generating a point cloud map after being screened; the point cloud map is generated from a number of keyframes.
As shown in fig. 1, the high-precision map generating method provided in this embodiment includes the following steps:
Step one, acquiring laser point cloud data collected by the laser radar as the current laser point cloud frame (hereinafter the current frame) and IMU data collected by the IMU sensor;
furthermore, current longitude/latitude/altitude data collected by the RTK sensor can also be acquired;
specifically, the IMU data acquired by the IMU sensor includes angular velocity and acceleration of the device; the device is a device integrated with sensors and radars, and can also be understood as an autonomous vehicle body.
The laser points acquired in each full revolution of the LiDAR scan are called one frame of laser point cloud data; because the acquisition frequency of laser point cloud data is lower than that of IMU data, multiple groups of IMU data fall between two laser point cloud frames.
After the laser radar and the IMU sensor start collecting, the data acquired each time are stored separately. All IMU data are stored in the data processing queue Opt_Queue.
Step two, if the current laser point cloud frame data is first frame data acquired by a laser radar, the current laser point cloud frame data is used as a first key frame, a point cloud map is generated through the first key frame, and the steps are executed again; otherwise, estimating to obtain a first pose estimation value of the current laser point cloud frame according to IMU data between the latest key frame and the current laser point cloud frame and the pose of the last laser point cloud frame after back-end optimization;
Namely: if the laser point cloud data is not the first frame collected by the laser radar, a first pose estimate of the current laser point cloud frame is obtained by estimation from the IMU data between the (K-1)-th key frame and the current laser point cloud frame together with the back-end-optimized pose of the previous laser point cloud frame; otherwise, the laser point cloud data is stored as the first key frame, its optimized pose is set to the identity matrix, a point cloud map is generated from the first key frame (by the subsequent map generation module), and the process returns to step one until map generation stops.
This step is performed in the IMU pre-processing module.
A key frame is a laser point cloud frame used for generating the high-precision map: a point cloud frame with a large number of feature points (i.e., the angular and surface feature points mentioned below), selected from all collected laser point cloud frames according to a set distance. The key frames are numbered from 1 to K-1; the method judges whether the current laser point cloud frame can serve as the K-th key frame, and if so, stitches it into the historical point cloud map to generate a new point cloud map;
the pose, i.e., the pose of the autonomous vehicle body, may also be referred to as the equipment pose for short.
And setting the optimized pose as an identity matrix to show that the pose of the equipment is not transformed.
Specifically, estimating the first pose estimate of the current laser point cloud frame from the IMU data between the (K-1)-th key frame and the current laser point cloud frame and the back-end-optimized pose includes:
performing joint nonlinear optimization on the IMU data between the (K-1)-th key frame and the current laser point cloud frame timestamp together with the optimized pose passed in by the back-end optimization module, the goal being the maximum-probability first pose estimate of the current laser point cloud frame; the optimization algorithm adopts the Gauss-Newton method;
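As an illustration of this step, the sketch below forward-integrates buffered IMU samples from the last back-end-optimized pose to predict the first pose estimate. It is a deliberately simplified planar (yaw-only) model with hypothetical names (`predict_first_pose`, `imu_samples`); the patent's actual implementation performs joint nonlinear optimization over the full 3D pose with the Gauss-Newton method.

```python
import numpy as np

def predict_first_pose(last_opt_pos, last_opt_yaw, imu_samples, dt):
    """Propagate the last back-end-optimized pose through buffered IMU
    samples to predict the first pose estimate of the current frame.
    2D yaw-only illustration; each sample is (gyro_z, body-frame accel)."""
    pos = np.asarray(last_opt_pos, dtype=float).copy()
    yaw = float(last_opt_yaw)
    vel = np.zeros(2)
    for gyro_z, acc in imu_samples:
        yaw += gyro_z * dt                       # integrate angular rate
        c, s = np.cos(yaw), np.sin(yaw)
        a_world = np.array([c * acc[0] - s * acc[1],
                            s * acc[0] + c * acc[1]])  # rotate accel to world
        vel += a_world * dt                      # integrate acceleration
        pos += vel * dt                          # integrate velocity
    return pos, yaw
```

Pure rotation at 0.1 rad/s for 1 s of samples, for instance, yields a yaw of 0.1 rad with no translation.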
and thirdly, according to the first pose estimation value and IMU data corresponding to the scanning time of the current laser point cloud, carrying out distortion removal on the laser point cloud of the current frame to obtain the laser point cloud data of the current frame after distortion removal, and extracting characteristic points from the laser point cloud data of the current frame.
This step is performed in the front-end odometer module.
The method comprises the following steps of carrying out distortion removal on laser point cloud of a current frame to obtain laser point cloud data of the current frame after distortion removal, and extracting feature points from the laser point cloud data, wherein the method comprises the following steps:
(1) Calculating the time of each point in the current laser point cloud frame scanned by the laser radar;
specifically, IMU data also exist within the time range of one LiDAR scan revolution; the timestamp of each LiDAR point cloud frame is the scan start time, and the LiDAR sends an end time when each scan finishes, so the time at which each point was scanned can be computed.
(2) Calculating the relative rotation of each point from the IMU data corresponding to the current laser point cloud scanning time and the scanning time of each point; calculating the relative displacement of each point from the pose change between the (K-1)-th key frame and the current frame; and de-distorting the current frame laser point cloud according to the relative rotation and relative displacement to obtain the de-distorted current frame laser point cloud data;
the relative rotation and relative displacement are applied to the original point cloud coordinates of each laser point, correcting points whose observations were distorted by device motion to the correct pose;
where relative rotation = angular velocity in the IMU data corresponding to the current scan × (time at which the point was scanned − scan start time);
relative displacement = velocity × (time at which the point was scanned − scan start time), with velocity = (three-dimensional position in the first pose estimate − three-dimensional position in the back-end-optimized pose of the (K-1)-th key frame) / time interval between the current frame and the (K-1)-th key frame.
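The de-distortion formulas above can be sketched as follows. This is a simplified planar illustration with hypothetical names: it applies the per-point relative rotation (angular rate × elapsed scan time) and relative displacement (velocity × elapsed scan time) described in the patent, restricted to yaw-only rotation in 2D.

```python
import numpy as np

def undistort_frame(points, point_times, scan_start, omega_z, velocity):
    """De-skew one LiDAR frame following the patent's per-point model:
    rotation  = angular rate * (point time - scan start)
    shift     = velocity     * (point time - scan start)"""
    pts = np.asarray(points, dtype=float)
    out = np.empty_like(pts)
    for i, (p, t) in enumerate(zip(pts, point_times)):
        dt = t - scan_start
        ang = omega_z * dt
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])          # yaw-only rotation
        out[i] = R @ p + np.asarray(velocity) * dt
    return out
```

With zero angular rate and velocity (1, 0), a point scanned 0.5 s into the sweep is shifted by (0.5, 0).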
(3) Calculating the curvature of each point of the de-distorted current frame laser point cloud data, extracting points whose curvature exceeds an angle threshold as angular feature points and points whose curvature is below a surface threshold as surface feature points, finally obtaining an angular feature point set and a surface feature point set, collectively called the feature point set;
the smaller the curvature is, the higher the probability that the current point belongs to a plane (wall surface or ground surface), and the larger the curvature is, the higher the probability that the current point belongs to the edge of an object is.
The curvature of each point is calculated, for example, as the sum of depth differences between the current point and several neighboring points (e.g., 10 points). The angle threshold is greater than the surface threshold, and both are set empirically;
further, when extracting feature points, points are picked uniformly over regions divided by angle, preventing the feature points from clustering excessively.
For example, the point cloud data of 360 degrees, which is scanned for one circle, is divided into 6 regions, the angular feature points and the surface feature points in each region are uniformly distributed as much as possible, and if the angular feature points in a certain region are too many, a part of the angular feature points with smaller curvature are removed; if the number of surface feature points in a certain region is too large, a part of surface feature points with large curvature are removed. This prevents the feature points from being excessively concentrated in some areas.
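The curvature-based extraction of step (3) can be illustrated roughly as below. The neighbor count, the squaring of the summed depth differences, and the thresholds are illustrative assumptions, not the patent's exact formulation (which only specifies a sum of depth differences and empirical thresholds).

```python
def extract_features(depths, angle_thr, surf_thr, k=5):
    """Classify scan points by curvature: curvature is the squared sum of
    depth differences to k neighbors on each side. Points above angle_thr
    become angular (corner) features, below surf_thr surface features."""
    corners, surfaces = [], []
    for i in range(k, len(depths) - k):
        diff = sum(depths[i + j] - depths[i] for j in range(-k, k + 1))
        curv = diff * diff
        if curv > angle_thr:
            corners.append(i)
        elif curv < surf_thr:
            surfaces.append(i)
    return corners, surfaces
```

A depth spike in an otherwise flat scan line is classified as a corner, while its flat neighbors become surface points.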
In addition, ground points are separated from the current frame laser point cloud data after distortion removal, and all ground points are marked on the current frame laser point cloud data after distortion removal.
Step four, obtaining a current descriptor from the de-distorted current laser point cloud frame, judging from the current descriptor whether a loop is detected, and if a loop is detected, estimating a second pose estimate of the current laser point cloud frame; otherwise, if no loop is detected, setting the second pose estimate to null;
specifically, the method comprises the following steps:
obtaining a current descriptor from the de-distorted current frame laser point cloud, judging from the current descriptor whether a loop is detected; if a loop is detected, selecting points in the historical point cloud map whose distance is smaller than a set distance threshold as matching point pairs, and estimating a second pose estimate; otherwise, setting the second pose estimate to null and executing the next step;
through loop detection, the initial pose estimate is refined into the second pose estimate, so that the subsequently generated map is more accurate.
This step is performed in the loop detection module. The historical point cloud map is the latest point cloud map currently stored; relative to the map being generated by this method, it is the historical map.
Specifically, obtaining the descriptor according to the current frame laser point cloud frame after distortion removal comprises the following steps:
dividing the de-distorted current frame laser point cloud data into multiple sector regions according to the radar scanning angle and depth, recording the maximum reflection intensity of all points in each region, and storing the maxima of all regions into an array whose length equals the number of regions; for example, dividing into 120 sector regions yields an array of length 120, and this array of reflection intensity values is referred to as the descriptor.
Each key frame for generating the history map also corresponds to a descriptor.
That is, the descriptor is: an array formed by dividing the frame into multiple sector regions according to radar scanning angle and depth and recording the maximum reflection intensity of all laser points in each region.
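A minimal sketch of the sector descriptor described above, assuming angle-only binning (the patent additionally bins by depth) and hypothetical names:

```python
import numpy as np

def build_descriptor(angles_deg, intensities, n_sectors=120):
    """Split one 360-degree sweep into n_sectors angular bins and keep
    the maximum reflection intensity per bin; the array is the descriptor."""
    desc = np.zeros(n_sectors)
    width = 360.0 / n_sectors
    for ang, inten in zip(angles_deg, intensities):
        idx = int(ang % 360.0 // width)          # sector index of this point
        desc[idx] = max(desc[idx], inten)        # keep per-sector maximum
    return desc
```

With 120 sectors each bin covers 3 degrees, so points at 1 and 2 degrees share bin 0 and only the larger intensity survives.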
Specifically, estimating the second pose estimate includes:
(1) Projecting the de-distorted current frame laser point cloud into the coordinate system of the historical point cloud map via the first pose estimate, and judging whether the coincidence degree between the projected current frame laser point cloud and the historical point cloud map exceeds a coincidence threshold; if so, a loop closure is preliminarily detected and step (2) below is executed; otherwise, no loop is detected, indicating that a scene in the historical point cloud map has not been revisited.
For example, if 80% of the laser points in the current frame fall within the range of the historical point cloud map, a loop is considered detected, indicating that a scene in the historical point cloud map may have been revisited; in this case the coincidence threshold is set to 80%. Checking the coincidence degree first improves both the precision and the efficiency of loop detection.
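The coincidence check of step (1) can be sketched as below. The brute-force nearest-neighbor search and the `radius` parameter are illustrative assumptions; a real system would use a k-d tree or voxel lookup over the historical map.

```python
import numpy as np

def overlap_ratio(frame_pts, map_pts, radius=0.5):
    """Fraction of projected frame points lying within `radius` of some
    historical map point; a ratio above e.g. 0.8 triggers descriptor
    matching. Brute force for clarity."""
    hits = 0
    for p in frame_pts:
        d = np.linalg.norm(map_pts - p, axis=1)  # distances to all map points
        if d.min() <= radius:
            hits += 1
    return hits / len(frame_pts)
```

One near point and one far point out of two yields a ratio of 0.5, below an 80% threshold.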
(2) Comparing the current descriptor against the descriptor corresponding to each key frame in the historical point cloud map to obtain the similarity for each key frame; if some similarity exceeds the loop threshold, a loop is judged, the points in the corresponding key frame whose distance to the de-distorted current frame laser points is smaller than a set distance threshold are selected to form matching point pairs, and the second pose estimate of the current laser point cloud frame is computed with an inter-frame matching algorithm, namely the ICP (Iterative Closest Point) algorithm; otherwise, no loop is detected.
It should be noted that estimating the second pose estimate may include only the process of step (2) above. In the loop detection part, the prior art performs loop matching directly via the ICP algorithm, with a high false detection rate. The invention constructs a descriptor from the point cloud's reflection intensity data and compares the current descriptor with the descriptor of each key frame in the historical point cloud map to judge whether a loop has occurred, which improves loop detection precision and reduces the false detection rate; the pose is further refined by the detected loop, further improving the precision of subsequent point cloud map generation.
Before step (2) is executed, the coincidence check of step (1) may be performed, further improving the precision and efficiency of loop detection.
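The descriptor comparison of step (2) can be sketched as follows. The patent does not specify the similarity measure, only that it is compared against a loop threshold; cosine similarity is used here purely as an assumed stand-in, and the names are hypothetical.

```python
import numpy as np

def detect_loop(cur_desc, keyframe_descs, loop_thr=0.9):
    """Compare the current descriptor against every historical keyframe
    descriptor; return (matched keyframe index, similarity) if the best
    similarity exceeds loop_thr, else (None, best similarity)."""
    best_idx, best_sim = None, 0.0
    for i, d in enumerate(keyframe_descs):
        denom = np.linalg.norm(cur_desc) * np.linalg.norm(d)
        sim = float(cur_desc @ d / denom) if denom > 0 else 0.0
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return (best_idx, best_sim) if best_sim > loop_thr else (None, best_sim)
```

A matched keyframe would then be handed to ICP to compute the second pose estimate.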
Step five, obtaining a third pose estimate from the first pose estimate and the feature points, and inputting the second pose estimate and the third pose estimate into a factor graph to obtain the back-end-optimized pose estimate of the current frame.
This step is performed in the backend optimization module.
The obtaining of the third pose estimation value specifically includes:
(1) Stitching a local map from several key frames closest to the current frame. Specifically: during stitching, the pose values corresponding to the key frames closest to the current frame are the accurate pose values optimized by the back-end optimization module.
Specifically, the method comprises the following steps: if no loop is found in the fourth step, extracting a plurality of key frames closest to the current frame within a set distance or set time to splice into a local map, and if a loop is found, extracting a plurality of key frames closest to the current frame within the set distance or set time and extracting historical key frames corresponding to the loop in the closed-loop container to splice into the local map, so that the map generation precision can be improved; the closed loop container stores the corresponding historical key frames for which a loop back was detected.
(2) Apply voxel filtering to the local map for down-sampling, i.e. reduce the number of points in the local map to cut computation time; then transform the feature points of the current frame into the local map coordinate system using the first pose estimation value, search for each feature point's nearest points in the local map, and compute the third pose estimation value of the current laser point cloud frame through the factor graph.
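The voxel-filtering step can be sketched as follows: points are quantized to a 3-D grid and each occupied voxel is replaced by the centroid of its points, which is the standard voxel-grid down-sampling the text refers to. The voxel size here is an illustrative choice, not a value from the patent.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.4):
    """Voxel-grid down-sampling: quantize points to a 3-D grid and keep
    one centroid per occupied voxel, reducing the point count before
    matching. voxel_size is an illustrative value."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Map each point to its voxel id, then average points sharing a voxel.
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, points)
    return sums / counts[:, None]
```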
Specifically: for a corner feature point, several nearest points (e.g. 2 or 3) are searched in the local map to form a line segment, and the minimum distance from the point to the line is calculated as the point-line residual required for optimization; for a surface feature point, several nearest points (e.g. 5) are searched, a plane is constructed from them, and the minimum distance from the point to the plane is calculated as the point-plane residual required for optimization. When all residuals (for all corner and surface feature points) have been computed, they are input into a nonlinear optimizer, namely an existing factor graph built with the GTSAM library, which outputs the third pose estimation value of the current laser point cloud frame.
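The two residuals can be sketched directly from their geometric definitions. The plane fit below uses a least-squares normal (smallest singular vector), a common choice that the patent does not mandate.

```python
import numpy as np

def point_line_residual(p, line_pts):
    """Distance from a corner feature p to the line through its nearest
    neighbours (here simply the line through the first two points)."""
    a, b = line_pts[0], line_pts[1]
    d = b - a
    return np.linalg.norm(np.cross(p - a, d)) / np.linalg.norm(d)

def point_plane_residual(p, plane_pts):
    """Distance from a surface feature p to the least-squares plane
    through its nearest neighbours (normal = smallest singular vector)."""
    c = plane_pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(plane_pts - c)
    n = Vt[-1]                        # plane normal
    return abs(np.dot(p - c, n))
```

Each residual then becomes one term in the nonlinear least-squares problem the factor graph solves.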
Further, a speed check is applied to the third pose estimation value: if the ratio of the moving speed from the K-1th key frame to the current frame, to the moving speed from the K-2th key frame to the K-1th key frame, exceeds a preset speed ratio (e.g. a factor of 2), the current frame is judged to be degenerate, i.e. exhibiting the trajectory drift that can occur in long, self-similar scenes such as tunnels. The computed third pose estimation value is then discarded and recomputed under a constant-velocity assumption: by default, the pose transformation from the K-2th to the K-1th key frame is used as the pose transformation from the K-1th key frame to the current frame. Here the moving speed from the K-1th key frame to the current frame is the pose transformation from the K-1th key frame to the current frame divided by the time difference between the current frame and the K-1th key frame.
Recomputing the third pose estimation value under the constant-velocity assumption is specifically:
recomputed third pose estimate = pose of the K-1th key frame + (speed from the K-2th to the K-1th key frame) × (time difference between the current frame and the K-1th key frame).
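The degeneracy check and constant-velocity fallback above can be sketched for the translation component (a simplification: the patent applies the same idea to the full pose transformation; the ratio threshold of 2.0 follows the example in the text):

```python
import numpy as np

def degeneracy_fallback(pose_k2, t_k2, pose_k1, t_k1, pose_cur, t_cur, ratio=2.0):
    """Speed check on the third pose estimate (translation-only sketch).
    If the speed from the K-1th key frame to the current frame exceeds
    `ratio` times the previous inter-keyframe speed, discard the estimate
    and extrapolate at constant velocity from the K-1th key frame."""
    v_prev = (pose_k1 - pose_k2) / (t_k1 - t_k2)   # speed K-2 -> K-1
    v_cur = (pose_cur - pose_k1) / (t_cur - t_k1)  # speed K-1 -> current
    if np.linalg.norm(v_cur) > ratio * np.linalg.norm(v_prev):
        # degenerate frame: recompute at uniform speed
        return pose_k1 + v_prev * (t_cur - t_k1)
    return pose_cur
```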
Finally, the second pose estimation value and the third pose estimation value are jointly optimized in a factor graph via the GTSAM library; the pose obtained from the factor graph after this back-end optimization, called the back-end-optimized pose estimation value of the current laser point cloud frame, is used as the final pose of the current frame and stored as the map trajectory. The factor graph formulation has the advantage of few parameters and low computational cost.
Further, to make the back-end-optimized pose more accurate, a pose transformation matrix from the K-1th key frame to the current frame (i.e. a pose transformation obtained from GPS information) can be computed from the current longitude/latitude/altitude data collected by the RTK sensor in step one together with the longitude/latitude/altitude data of the K-1th key frame; this transformation matrix is then input into the factor graph together with the second and third pose estimation values to obtain a more accurate back-end-optimized pose.
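The patent performs this fusion with a GTSAM factor graph. As a stand-in illustration of the idea (not the GTSAM API), the sketch below fuses independent translation estimates by information weighting, which is what the joint optimization reduces to when the loop-closure, scan-matching, and RTK factors all constrain the same frame; the noise sigmas are illustrative assumptions.

```python
import numpy as np

def fuse_estimates(estimates, sigmas):
    """Information-weighted fusion of independent pose (translation)
    estimates -- a single-variable analogue of factor-graph optimization.
    sigmas are per-estimate standard deviations (illustrative; the patent
    does not specify noise models)."""
    w = 1.0 / np.square(sigmas)                    # information weights
    w = w / w.sum()
    return (w[:, None] * np.asarray(estimates)).sum(axis=0)
```

A more confident estimate (smaller sigma) pulls the fused pose toward itself, mirroring how a tighter noise model on, say, the RTK factor dominates the optimized pose.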
Step six, judge whether the current laser point cloud frame qualifies as a key frame; if it does, stitch the undistorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimation value of the current frame, remove dynamic objects, and generate the static point cloud map; if it does not, re-execute the above steps until map generation stops.
The method for judging whether the current frame is a key frame is: if the distance between the current frame and the K-1th key frame exceeds a key frame threshold (e.g. one meter), and the number of feature points in the current frame exceeds a set feature point threshold, the current frame is taken as a key frame. The distance is computed from the difference between the three-dimensional position in the back-end-optimized pose of the current frame and the optimized three-dimensional position of the K-1th key frame.
By removing dynamic objects such as vehicles and pedestrians, a static high-precision point cloud map is obtained. An existing method is used for dynamic object removal.
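The key-frame test above is a simple conjunction, sketched here; the one-meter distance threshold follows the example in the text, while the feature-count threshold of 100 is an illustrative assumption.

```python
import numpy as np

def is_key_frame(cur_pos, last_key_pos, n_features,
                 dist_threshold=1.0, feat_threshold=100):
    """Key-frame test from the text: the distance to the last key frame
    must exceed a threshold (one meter in the example) AND the frame must
    carry enough feature points (feat_threshold=100 is illustrative)."""
    dist = np.linalg.norm(np.asarray(cur_pos) - np.asarray(last_key_pos))
    return dist > dist_threshold and n_features > feat_threshold
```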
According to the high-precision map generation method provided by the embodiment of the invention, a first pose estimation value of the current laser point cloud frame is estimated from the IMU data between the key frame temporally closest to the current laser point cloud frame and the current frame, together with the back-end-optimized pose of the previous laser point cloud frame; according to the first pose estimation value and the IMU data corresponding to the radar's scanning time for the current laser point cloud, the current laser point cloud is undistorted and feature points are extracted from the resulting frame. Because the first pose is estimated using the back-end-optimized pose, the estimate is more accurate, and so is the undistorted current laser point cloud frame.
A current descriptor is obtained from the undistorted current laser point cloud frame; whether a loop is detected is judged from this descriptor, and if so, a second pose estimation value of the current laser point cloud frame is estimated; otherwise the second pose estimation value is set to null. The invention detects loops by constructing a descriptor rather than matching all points in the frame, which reduces the false detection rate, improves loop detection accuracy and robustness, and thereby improves the precision of the point cloud map.
A third pose estimation value is obtained from the first pose estimation value and the feature points; the second and third pose estimation values are input into a factor graph to obtain the back-end-optimized pose of the current laser point cloud frame. Whether the current laser point cloud frame qualifies as a key frame is then judged; if so, the undistorted current laser point cloud is stitched into the historical point cloud map according to the back-end-optimized pose, and dynamic objects are removed to generate the static point cloud map.
The method thus uses not only IMU data but also the back-end-optimized pose for the first pose estimation of the current laser point cloud, making the first pose estimation value more accurate; the motion-distorted current laser point cloud is then corrected and feature points are extracted, so the corrected pose and extracted feature points are more accurate; and the descriptor reduces mismatches and improves pose estimation accuracy, further improving map generation accuracy with little computation.
As shown in fig. 2, an embodiment of the present invention further provides a high-precision map generating apparatus, including:
the acquisition module is used for acquiring current laser point cloud frame data and IMU data;
the IMU preprocessing module, configured to, if the current laser point cloud frame data is the first frame collected by the laser radar, take it as the first key frame, send the first key frame to the map generation module, and return to the acquisition module; otherwise, estimate a first pose estimation value of the current laser point cloud frame from the IMU data between the latest key frame and the current laser point cloud frame and the back-end-optimized pose of the previous laser point cloud frame fed back by the back-end optimization module;
the front-end odometry module, configured to undistort the current laser point cloud according to the first pose estimation value and the IMU data corresponding to the scanning time of the current laser point cloud, obtaining the undistorted current laser point cloud frame, and to extract feature points from it;
the loop detection module, configured to obtain a current descriptor from the undistorted current laser point cloud frame, judge from the current descriptor whether a loop is detected, and, if so, estimate a second pose estimation value of the current laser point cloud frame; otherwise, set the second pose estimation value to null;
the back-end optimization module, configured to obtain a third pose estimation value from the first pose estimation value and the feature points, and to input the second and third pose estimation values into a factor graph to obtain the back-end-optimized pose estimation value of the current frame;
the map generation module, configured to generate the point cloud map from the first key frame if the current frame is the first frame; otherwise, to judge whether the current frame qualifies as a key frame, and, if so, to stitch the undistorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimation value of the current frame and generate the static point cloud map after removing dynamic objects; if not, to return to the acquisition module until map generation stops.
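The data flow between the five modules can be sketched as an orchestration loop. All method names and the dictionary of modules below are hypothetical wiring for illustration, not from the patent.

```python
def run_pipeline(frames, modules):
    """Data flow between the modules described above: each laser frame
    passes IMU preprocessing, front-end odometry, loop detection, and
    back-end optimization before (possibly) entering the map."""
    poses = []
    for i, frame in enumerate(frames):
        if i == 0:
            modules["map"].init_with(frame)          # first frame = first key frame
            continue
        p1 = modules["imu"].predict(frame)           # first pose estimate
        feat = modules["odom"].undistort_and_extract(frame, p1)
        p2 = modules["loop"].detect(feat)            # None when no loop detected
        p3 = modules["backend"].match(p1, feat)      # scan-to-local-map estimate
        pose = modules["backend"].optimize(p2, p3)   # factor-graph fusion
        poses.append(pose)
        modules["map"].maybe_add_keyframe(frame, pose)
    return poses
```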
The specific steps performed in each module are the same as those of the map generation method above and are not repeated here.
According to the high-precision map generation apparatus provided by the embodiment of the invention, the IMU preprocessing module estimates a first pose estimation value of the current laser point cloud frame from the IMU data between the key frame temporally closest to the current laser point cloud frame and the current frame, together with the back-end-optimized pose of the previous laser point cloud frame; the front-end odometry module undistorts the current laser point cloud according to the first pose estimation value and the IMU data corresponding to the radar's scanning time for the laser point cloud, obtaining the undistorted current laser point cloud frame, and extracts feature points from it. Because the first pose is estimated using the back-end-optimized pose, the estimate is more accurate, and so is the undistorted current laser point cloud frame. The loop detection module obtains a current descriptor from the undistorted current laser point cloud frame and judges from it whether a loop is detected; if so, it estimates a second pose estimation value of the current laser point cloud frame, otherwise it sets the second pose estimation value to null. The invention detects loops by constructing a descriptor rather than matching all points in the frame, which reduces the false detection rate, improves loop detection accuracy and robustness, and thereby improves the precision of the point cloud map.
The back-end optimization module obtains a third pose estimation value from the first pose estimation value and the feature points, and inputs the second and third pose estimation values into a factor graph to obtain the back-end-optimized pose of the current laser point cloud frame. The map generation module judges whether the current laser point cloud frame qualifies as a key frame; if so, the undistorted current laser point cloud is stitched into the historical point cloud map according to the back-end-optimized pose, and the static point cloud map is generated after dynamic objects are removed.
The apparatus thus uses both IMU data and the back-end-optimized pose for the first pose estimation of the current laser point cloud, making the first pose estimation value more accurate; it then corrects the motion-distorted current laser point cloud and extracts feature points, so the corrected pose and extracted feature points are more accurate; and the descriptor reduces mismatches and improves pose estimation accuracy, further improving map generation accuracy with little computation.
An embodiment of the present invention provides a map generation device, including: a memory, a processor, and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the high accuracy map generation method of any of the above embodiments.
The embodiment of the invention provides a readable storage medium, wherein a computer program is stored in the readable storage medium, and the computer program is used for realizing the high-precision map generation method in any one of the above embodiments when being executed by a processor.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (14)

1. A high-precision map generation method is characterized by comprising the following steps:
acquiring current laser point cloud frame data and IMU data;
if the current laser point cloud frame data is first frame data acquired by a laser radar, taking the current laser point cloud frame data as a first key frame, generating a point cloud map through the first key frame, and executing the steps again; otherwise, estimating to obtain a first pose estimation value of the current laser point cloud frame according to IMU data between the latest key frame and the current laser point cloud frame and the pose of the last laser point cloud frame after back-end optimization;
according to the first pose estimation value and IMU data corresponding to the time for scanning the current laser point cloud, carrying out distortion removal on the current laser point cloud to obtain a current laser point cloud frame after distortion removal, and extracting feature points from the current laser point cloud frame;
obtaining a current descriptor according to the undistorted current laser point cloud frame, judging whether a loop is detected according to the current descriptor, and, if a loop is detected, estimating a second pose estimation value of the current laser point cloud frame; otherwise, if no loop is detected, setting the second pose estimation value to null;
obtaining a third pose estimation value according to the first pose estimation value and the feature points; inputting the second pose estimation value and the third pose estimation value into a factor graph to obtain the back-end-optimized pose estimation value of the current frame;
judging whether the current laser point cloud frame can be used as a key frame, and if so, splicing the undistorted current laser point cloud frame into a historical point cloud map according to the back-end-optimized pose estimation value of the current frame, removing dynamic objects, and generating a static point cloud map; if not, re-executing the above steps until map generation is stopped.
2. The method as claimed in claim 1, wherein obtaining the current descriptor from the undistorted current laser point cloud frame comprises:
dividing points in the current laser point cloud frame after distortion removal into a plurality of areas according to the radar scanning angle and depth;
saving the maximum reflection intensity of the laser points in each area into an array, which is taken as the current descriptor.
3. The method as claimed in claim 1, wherein the estimating of the second pose estimation value of the current laser point cloud frame includes:
respectively comparing the current descriptor with the descriptor corresponding to each key frame in the historical point cloud map to obtain the similarity corresponding to each key frame;
and if the similarity is greater than a loop threshold, judging to detect a loop, selecting points with the distance from the points in the undistorted current laser cloud frame to be less than a set distance threshold from the corresponding key frame to form a matching point pair, and estimating according to a matching algorithm to obtain a second pose estimation value of the current laser point cloud frame.
4. A high accuracy map generation method according to claim 1, further comprising, before detecting the loop, preliminarily detecting the loop, including:
and projecting the undistorted current laser point cloud frame to a coordinate system of a historical point cloud map through the first pose estimation value, judging whether the coincidence degree of the projected current laser point cloud and the historical point cloud map is greater than a coincidence degree threshold value, if so, indicating that loop is preliminarily detected, otherwise, judging that no loop is detected.
5. The method as claimed in claim 1, wherein the step of performing the distortion removal on the current laser point cloud to obtain a current laser point cloud frame after the distortion removal comprises:
calculating the time of each point in the current laser point cloud frame scanned by the laser radar;
calculating the relative rotation amount of each point according to the IMU data corresponding to the scanning period of the current laser point cloud and the scanning time of each point;
calculating the relative displacement of each point according to the pose change from the nearest key frame to the current laser point cloud frame;
and according to the relative rotation amount and the relative displacement amount, carrying out distortion removal on points in the current laser point cloud frame to obtain the current laser point cloud frame after distortion removal.
6. The method as claimed in claim 1, wherein extracting feature points from the undistorted current laser point cloud frame comprises:
and calculating the curvature of each point of the current laser point cloud frame after distortion removal, taking the point with the curvature exceeding a preset angle threshold value as an angle characteristic point, and taking the point with the curvature smaller than a preset surface threshold value as a surface characteristic.
7. The high-precision map generation method according to claim 1, wherein the obtaining a third pose estimation value according to the first pose estimation value and the feature points comprises:
splicing the several key frames closest to the current laser point cloud frame into a local map;
converting the feature points into the local map coordinate system through the first pose estimation value, calculating the point-line residuals of the corner feature points and the point-plane residuals of the surface feature points, and inputting the point-line residuals and point-plane residuals into a factor graph to obtain a third pose estimation value of the current laser point cloud frame.
8. The high-precision map generation method according to claim 7, wherein:
for a corner feature point, several points closest to the corner feature point are searched in the local map to form a line segment, and the minimum distance from the corner feature point to the line segment is calculated as the point-line residual;
for a surface feature point, several points closest to the surface feature point are searched in the local map to construct a plane, and the minimum distance from the surface feature point to the plane is calculated as the point-plane residual.
9. The high-precision map generation method according to claim 1, further comprising, after obtaining the third pose estimation value:
performing a speed check on the third pose estimation value: if the ratio of the moving speed from the latest key frame to the current frame, to the moving speed from the second-latest key frame to the latest key frame, is greater than a preset speed ratio, judging that the current frame is degenerate and recalculating the third pose estimation value under a constant-velocity assumption.
10. The method as claimed in claim 1, wherein the determining whether the current laser point cloud frame can be used as a key frame comprises:
if the distance between the current laser point cloud frame and the nearest key frame is larger than the key frame threshold value, judging that the current laser point cloud frame is a key frame;
or, if the distance between the current laser point cloud frame and the nearest key frame is greater than the key frame threshold value, and the number of the feature points in the current frame is greater than the set feature point threshold value, determining that the current laser point cloud frame is the key frame.
11. A high-precision map generation method according to claim 1, further comprising:
and acquiring current longitude and latitude height data, calculating a pose transformation matrix from the latest key frame to the current frame by combining the current longitude and latitude height data with the longitude and latitude height data of the latest key frame, and inputting the pose transformation matrix, the second pose estimation value and the third pose estimation value into a factor graph to obtain a pose estimation value optimized at the rear end of the current frame.
12. A high-precision map generation apparatus, comprising:
the acquisition module is used for acquiring current laser point cloud frame data and IMU data;
the IMU preprocessing module, configured to take the current laser point cloud frame data as the first key frame if it is the first frame collected by the laser radar, send the first key frame to the map generation module, and return to the acquisition module; otherwise, estimate a first pose estimation value of the current laser point cloud frame according to the IMU data between the latest key frame and the current laser point cloud frame and the back-end-optimized pose of the previous laser point cloud frame fed back by the back-end optimization module;
the front-end odometry module, configured to undistort the current laser point cloud according to the first pose estimation value and the IMU data corresponding to the scanning time of the current laser point cloud, obtain the undistorted current laser point cloud frame, and extract feature points from it;
the loop detection module, configured to obtain a current descriptor according to the undistorted current laser point cloud frame, judge whether a loop is detected according to the current descriptor, and estimate a second pose estimation value of the current laser point cloud frame if a loop is detected; otherwise, set the second pose estimation value to null;
the back-end optimization module, configured to obtain a third pose estimation value according to the first pose estimation value and the feature points, and input the second pose estimation value and the third pose estimation value into a factor graph to obtain the back-end-optimized pose estimation value of the current frame;
the map generation module, configured to generate a point cloud map from the first key frame if the current frame is the first frame; otherwise, judge whether the current frame can be used as a key frame, and if so, splice the undistorted current laser point cloud frame into the historical point cloud map according to the back-end-optimized pose estimation value of the current frame and generate the static point cloud map after removing dynamic objects; if not, return to the acquisition module until map generation is stopped.
13. A map generation apparatus, characterized by comprising: a memory, a processor, and a computer program, the computer program being stored in the memory, the processor running the computer program to perform the high-precision map generation method of any one of claims 1 to 11.
14. A readable storage medium, wherein a computer program is stored in the readable storage medium, and when executed by a processor, the computer program implements the high-precision map generation method according to any one of claims 1 to 11.
CN202211566846.2A 2022-12-07 2022-12-07 High-precision map generation method, device, equipment and storage medium Pending CN115773747A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211566846.2A CN115773747A (en) 2022-12-07 2022-12-07 High-precision map generation method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115773747A true CN115773747A (en) 2023-03-10


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117974926A (en) * 2024-03-28 2024-05-03 苏州艾吉威机器人有限公司 Positioning and mapping method and system based on mechanical rotation type three-dimensional laser



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination