CN116608873A - Multi-sensor fusion positioning mapping method for automatic driving vehicle - Google Patents

Multi-sensor fusion positioning mapping method for automatic driving vehicle

Info

Publication number
CN116608873A
Authority
CN
China
Prior art keywords
imu
vehicle
point cloud
integration
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310415468.6A
Other languages
Chinese (zh)
Inventor
熊模英
兰凤崇
陈吉清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202310415468.6A priority Critical patent/CN116608873A/en
Publication of CN116608873A publication Critical patent/CN116608873A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30Map- or contour-matching
    • G01C21/32Structuring or formatting of map data
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3841Data obtained from two or more sources, e.g. probe vehicles
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a multi-sensor fusion positioning and mapping method for an autonomous vehicle, which comprises the following steps: after the raw data of the lidar, the IMU and the wheel speed odometer are acquired, the lidar scans are first accumulated into a scan surface, while the IMU performs odometry integration and pre-integration between consecutive laser keyframes and the wheel speed odometer performs vehicle speed pre-integration; after the raw data of the three sensors are processed, the IMU pre-integration and the vehicle speed pre-integration serve as optimization factors for back-end factor graph optimization; the laser odometer part removes motion distortion through the odometry integration of the IMU and detects whether a loop closure occurs; residual calculation during point cloud registration and data fusion between the historical keyframes in the local map and the current frame is performed by an iEKF method to obtain the front-end odometer pose, and finally the factors are inserted into the factor graph optimization model for a fused positioning solution.

Description

Multi-sensor fusion positioning mapping method for automatic driving vehicle
Technical Field
The invention belongs to the technical field of data fusion of a lidar, an inertial measurement unit and a wheel speed encoder, and particularly relates to a multi-sensor fusion positioning and mapping method for an autonomous vehicle.
Background
In feature-poor environments such as tunnels, mines and underground pipelines, traditional SLAM algorithms face great challenges. Because typical feature points are lacking in these environments, it is difficult to achieve high-precision positioning and mapping with conventional feature-point-based SLAM algorithms. In addition, low-illumination environments also make measurement difficult and degrade SLAM performance. Existing solutions to the vehicle positioning problem in such scenes in the autonomous driving field are either methods deployed through infrastructure, which consume large amounts of manpower and material resources, or solutions that adopt expensive high-precision sensors, yet their positioning accuracy still cannot meet the requirements of vehicle positioning and mapping in such scenes.
Because lidar measurement is not affected by illumination or weather conditions and can obtain accurate distance information about the surrounding environment, the accuracy of lidar-based positioning algorithms is guaranteed in principle, so positioning and mapping technology with the lidar as the core sensor has become the scheme adopted by most autonomous driving companies. However, a positioning and mapping scheme using the lidar alone cannot solve problems such as environmental degradation and low sampling frequency, so the robustness, accuracy and output frequency of its positioning cannot meet the positioning requirements of autonomous driving under various complex working conditions (CN 202210285283.3).
At present, the IMU is the auxiliary sensor most commonly used in lidar fusion schemes. On the one hand, the IMU provides high-frequency triaxial acceleration and triaxial angular velocity, which compensates for the low acquisition frequency of the lidar; on the other hand, IMU measurements are not affected by external conditions such as illumination, texture or weather, which improves the robustness of a lidar-IMU fusion SLAM algorithm. The fusion of lidar and IMU can be loosely coupled or tightly coupled. Lidar-based loosely coupled methods typically process the measurements of the lidar and the IMU separately and then perform a weighted fusion of the results. Because the loosely coupled method processes the lidar and IMU data separately and the data do not update each other, its computational efficiency is higher but its theoretical accuracy is lower than that of a tightly coupled algorithm. Therefore, the prior art lacks an economical and accurate positioning method suitable for autonomous vehicle driving in featureless tunnel scenes.
Disclosure of Invention
In view of the technical problems of existing positioning and mapping methods for autonomous vehicles in GPS-denied, feature-poor scenes, the invention provides an economical multi-sensor fusion positioning method that meets the navigation requirements of an autonomous vehicle. It relates to a simultaneous localization and mapping (SLAM) algorithm for low-illumination, featureless tunnel environments based on the fusion of a lidar, an inertial measurement unit (IMU) and a wheel encoder, and can achieve accurate and robust positioning of a ground vehicle in GPS-denied, low-illumination, featureless tunnel environments.
the invention is realized at least by one of the following technical schemes.
A multi-sensor fusion positioning mapping method for an automatic driving vehicle comprises the following steps:
step 1, calibrating a vehicle-mounted sensor;
step 2, collecting laser radar point cloud data of an actual scene, pose data of an IMU and wheel speed data of a wheel speed meter in a vehicle motion process received by a calibrated sensor, and storing the data in a buffer;
step 3, initializing the gravity direction of the IMU by using the attitude angle measured by the integrated navigation system to eliminate the influence of gravity at different geographic positions and reduce the attitude estimation error;
step 4, an IMU pre-integration model and a wheel pre-integration model are built to obtain an IMU pre-integration and a vehicle speed pre-integration;
step 5, laser point cloud preprocessing: voxel filtering is carried out on each frame of point cloud, and distortion is removed from the point cloud by combining with the odometer information of the IMU;
step 6, registering point clouds and establishing a local map to obtain the pose of the front-end odometer;
step 7, when the pose change obtained by the point cloud registration in the step 6 exceeds a threshold value, extracting a key frame, and taking the last frame of one laser scanning as a key frame;
step 8, according to the three-dimensional point cloud obtained by the laser radar, representing the spatial structural characteristics of the point cloud by using a two-dimensional characteristic graph, and constructing a loop detection factor when the loop is detected;
and 9, constructing a factor graph optimization model, and placing loop detection factors into the factor graph optimization model for fusion positioning.
Further, step 4 includes constructing an IMU pre-integration factor between two lidar keyframes, the pre-integration terms being $\Delta R_{imu}$, $\Delta v_{imu}$ and $\Delta p_{imu}$, where $\Delta R_{imu}$ is the IMU rotation pre-integration, $\Delta v_{imu}$ is the IMU velocity pre-integration and $\Delta p_{imu}$ is the IMU position pre-integration;
the longitudinal velocity, lateral velocity and yaw rate of the vehicle are solved using a two-degree-of-freedom vehicle model to construct a wheel pre-integration factor, which comprises the vehicle speed pre-integration terms $\Delta R_{vehicle}$ and $\Delta p_{vehicle}$, where $\Delta R_{vehicle}$ is the vehicle rotation pre-integration and $\Delta p_{vehicle}$ is the vehicle position pre-integration.
Further, in step 4, the IMU pre-integration and the vehicle speed pre-integration are obtained as follows:

IMU pre-integration:

vehicle speed pre-integration:

where $\Delta\tilde{R}_{ij}$ is the pre-integrated rotation measurement and $\delta\phi_{ij}$ is the rotation noise; $\Delta\tilde{v}_{ij}$ is the pre-integrated velocity measurement and $\delta v_{ij}$ is the velocity noise; $\Delta\tilde{p}_{ij}$ is the pre-integrated position measurement and $\delta p_{ij}$ is the position noise; $R_i$, $R_j$ are the rotation matrices from times i and j to the world coordinate system; $\Delta R_{ij}$ is the rotation between times i and j; $v_i$, $v_j$ are the velocities at times i and j; $\Delta t_{ij}$ is the time difference between i and j; $\Delta p_{ij}$ is the position change between times i and j; $p_i$ is the position at time i and $p_j$ is the position at time j.
Further, the laser point cloud preprocessing process is specifically as follows: the original laser points are filtered with a voxel filtering algorithm, the 3D point cloud is divided into a spatial voxel grid, and within each voxel the other data points are replaced by the centroid of the voxel, so as to improve the accuracy of point cloud registration.
Further, the point cloud registration and local map establishment include: for the extracted keyframes, searching the set of historical keyframes around the current point cloud frame using a map organised as an ikd-tree structure, and then constructing a local map maintained by the ikd-tree data structure, together with the pose transformation relation, by combining the global pose information of each historical keyframe.
Further, in the process of constructing the ikd-tree structure, the whole map is first divided into a plurality of voxels, and an ikd-tree of the point cloud within each voxel is built, each ikd-tree corresponding to one voxel;
residual calculation during point cloud registration and data fusion is performed with an iEKF to compute the pose transformation relation between the current point cloud frame and the local map: first, the feature points in the local map are extracted and a feature point map is built according to their relative positions; after a new point cloud frame is received, the current point cloud frame is matched against the feature point map with a point cloud registration algorithm to obtain an initial pose estimate, and the pose estimate is then further optimized based on the feature points to obtain a more accurate pose estimate.
Further, in step 7, key frames are extracted by checking pose changes and time intervals;
pose change: when the displacement or rotation between adjacent laser frames exceeds a preset threshold, taking the current frame as a key frame;
time interval: and setting a minimum time interval to ensure that at least a time difference exists between adjacent key frames so as to improve the operation efficiency.
Further, the spatial structural features of the point cloud are represented by a two-dimensional feature map through a descriptor extraction method based on the Scan-Context descriptor.
In step 8, the IMU pre-integration factor, the laser odometer factor, the wheel pre-integration factor and the loop closure detection factor are modeled as a maximum a posteriori probability problem; each state node corresponds to one lidar keyframe, the IMU pre-integration and the vehicle speed pre-integration each provide constraints between two keyframes, and the loop closure factor provides constraints between the keyframes of the two loop closure points, thereby forming the whole multi-sensor fusion objective function and factor graph.
Further, the optimization objective function of the factor graph optimization model is:

$$\min_{X} \sum_{(i,j)} \left( \left\| e_{ij}^{I_{pre}} \right\|^{2} + \left\| e_{ij}^{OD} \right\|^{2} + \left\| e_{ij}^{L} \right\|^{2} + \left\| e_{ij}^{LO} \right\|^{2} \right)$$

where $e_{ij}^{I_{pre}}$ is the IMU pre-integration factor residual term, $e_{ij}^{OD}$ is the laser odometer factor residual term, $e_{ij}^{L}$ is the loop closure factor residual term, $e_{ij}^{LO}$ is the vehicle speed pre-integration factor residual term, and $e_{ij}$ denotes a residual; $I_{pre}$ denotes the IMU pre-integration, i and j denote different times, OD (Odometry) denotes the laser odometer, L denotes the loop closure, and LO denotes the vehicle speed odometer.
Compared with the prior art, the invention has the beneficial effects that:
the invention combines the laser radar, the wheel encoder and the IMU data to improve the robustness and reliability of the algorithm in the characteristic missing environment. Even under the condition that the laser radar is influenced by environmental noise, higher positioning and mapping accuracy can be maintained. The method not only realizes high-precision positioning and mapping under a common multi-feature scene, but also ensures the positioning precision of the vehicle even if GPS signals are not available in a tunnel scene without edge features. The invention adopts the 16-line laser radar with low cost, the IMU with low precision and the wheel speed meter, and realizes the high-precision positioning of the vehicle in the tunnel scene on the premise of ensuring the cost advantage. The method is very helpful for the expansion and supplement of the automatic driving technology on the application scene.
Drawings
FIG. 1 is a flow chart of a multi-sensor fusion positioning mapping method for an autonomous vehicle according to an embodiment of the present invention;
FIG. 2 is an Allan variance chart of the calibration result of an N200IMU according to the embodiment of the present invention;
FIG. 3 is a schematic view of voxel filtering according to an embodiment of the present invention;
FIG. 4 is a schematic view of pose estimation according to an embodiment of the present invention;
fig. 5 is a point cloud map built in an underground parking garage according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
the embodiment discloses a multi-sensor fusion positioning mapping method for an automatic driving vehicle, which is used for improving the autonomous navigation performance of a ground vehicle in a low-illumination and featureless tunnel environment, and the algorithm flow is shown in fig. 1, and specifically comprises the following steps:
step 1, calibrating a sensor;
firstly, data acquisition is carried out, and before the data acquisition is carried out, the vehicle-mounted sensor is required to be calibrated, wherein the calibration comprises IMU internal parameter calibration and multi-sensor combined calibration. The vehicle body coordinate system locates the IMU sensor coordinate system, the IMU internal reference calibration is carried out, the internal reference calibration is mainly aimed at two error sources of white Gaussian noise (white noise) and bias (bias, random walk), the internal reference calibration method uses an imu_uteils tool to calibrate, after calibration, whether the calibration result is accurate is judged by drawing an alan variance chart (shown in figure 2), the measurement value and the predicted value are well fitted, the calibration result is accurate, and the external parameters of the IMU and the laser radar are well calibrated in an online combined calibration mode.
Step 2, data acquisition
The collected data comprise the lidar point cloud data of the actual scene, the pose data of the IMU and the wheel speed data of the wheel speed odometer, acquired in real time while the vehicle is operating. The collected data are stored in a buffer designed by the algorithm.
In this embodiment, the raw data of the IMU needs to be acquired in a stationary state. During data acquisition, the IMU is installed on a stable platform, so that the influence of an external environment is avoided. The data acquisition time period is typically greater than 120 minutes to ensure that sufficient data is available for calibration. Usually an increased acquisition time will make the calibration result somewhat more accurate.
In this embodiment, a lidar mounted on the vehicle and an IMU (inertial measurement unit) mounted directly below the lidar acquire real-time information about the vehicle. The lidar is a VLP-16 lidar, the IMU is a ten-axis IMU of model N200, and the wheel speed odometer is the factory-installed wheel speed odometer of the vehicle. The lidar acquires point cloud data; the IMU acquires the heading information of the vehicle, including the heading angle and the angular rate of the heading angle; and the vehicle wheel speed encoder acquires the speed information of the vehicle.
Step 3, positioning initialization, namely initializing the gravity direction of the IMU, and initializing the gravity direction of the IMU by utilizing an attitude angle measured by the integrated navigation system so as to eliminate the influence of gravity at different geographic positions and reduce the attitude estimation error;
and 4, constructing an IMU pre-integration model and a wheel pre-integration model, which are taken as two optimization factors for the algorithm rear end based on factor graph optimization.
An IMU pre-integration factor is constructed between two lidar keyframes, the pre-integration terms being $\Delta R_{imu}$, $\Delta v_{imu}$ and $\Delta p_{imu}$, where $\Delta R_{imu}$ is the IMU rotation pre-integration, $\Delta v_{imu}$ is the IMU velocity pre-integration and $\Delta p_{imu}$ is the IMU position pre-integration.
The longitudinal velocity, lateral velocity and yaw rate of the vehicle are solved using a two-degree-of-freedom vehicle model, and the vehicle speed pre-integration terms $\Delta R_{vehicle}$ and $\Delta p_{vehicle}$ are then constructed, where $\Delta R_{vehicle}$ is the vehicle rotation pre-integration and $\Delta p_{vehicle}$ is the vehicle position pre-integration.
As a preferred embodiment, the inertial measurement unit runs at 100 Hz with a period of 0.01 s, while the scanning frequency of the lidar is 10 Hz, so a large number of IMU measurements exist between two laser keyframes, and integrating all IMU measurements in real time would require a large amount of computation.
Therefore, the IMU pre-integration model acquires the motion information between two keyframes by constructing IMU pre-integration between consecutive laser frames:

The longitudinal velocity, lateral velocity and yaw rate of the vehicle are solved using a two-degree-of-freedom vehicle model, and the vehicle speed pre-integration model is then constructed:

where $\Delta\tilde{R}_{ij}$ is the pre-integrated rotation measurement and $\delta\phi_{ij}$ is the rotation noise; $\Delta\tilde{v}_{ij}$ is the pre-integrated velocity measurement and $\delta v_{ij}$ is the velocity noise; $\Delta\tilde{p}_{ij}$ is the pre-integrated position measurement and $\delta p_{ij}$ is the position noise; $R_i$, $R_j$ are the rotation matrices from the IMU to the world coordinate system at index values i and j respectively; $\Delta R_{ij}$ is the rotation between times i and j; $v_i$, $v_j$ are the velocities at times i and j; $\Delta t_{ij}$ is the time difference between i and j; $\Delta p_{ij}$ is the position change between times i and j; $p_i$ is the position at time i and $p_j$ is the position at time j.
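For reference, the standard on-manifold IMU pre-integration measurement model consistent with the symbol definitions above can be written as follows (a sketch of the usual form, with $g$ denoting gravity and $\mathrm{Exp}(\cdot)$ the SO(3) exponential map; the exact expressions used in this embodiment, in particular the vehicle speed pre-integration derived from the two-degree-of-freedom model, may differ):

```latex
\Delta R_{ij} \triangleq R_i^{\top} R_j , \qquad
\Delta\tilde{R}_{ij} \approx \Delta R_{ij}\,\mathrm{Exp}\!\left(\delta\phi_{ij}\right)

\Delta v_{ij} \triangleq R_i^{\top}\left(v_j - v_i - g\,\Delta t_{ij}\right) , \qquad
\Delta\tilde{v}_{ij} \approx \Delta v_{ij} + \delta v_{ij}

\Delta p_{ij} \triangleq R_i^{\top}\left(p_j - p_i - v_i\,\Delta t_{ij} - \tfrac{1}{2}\,g\,\Delta t_{ij}^{2}\right) , \qquad
\Delta\tilde{p}_{ij} \approx \Delta p_{ij} + \delta p_{ij}
```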
And 5, preprocessing the laser point cloud.
In this embodiment, the raw point cloud data collected by the lidar generally contain a relatively large amount of data, and noise points and outliers inevitably occur in the point cloud. Both increase the computational complexity and reduce the accuracy of point cloud registration. Therefore, filtering the point cloud reduces the number of laser points and thus improves computational efficiency. At the same time, to ensure the accuracy of point cloud registration, the filtered point cloud should preserve the geometric features of the original point cloud as much as possible. This is done for every input raw lidar point cloud.
As a preferred embodiment, in this embodiment the raw point cloud data are accumulated at a frequency of 50 Hz into one lidar scan, and voxel filtering is then performed on each input frame of point cloud, as shown in fig. 3, which is the filtering diagram of this embodiment. A voxel is a small cube in three-dimensional space; the original laser points are filtered by dividing the 3D point cloud into a spatial voxel grid, and within each voxel the other data points are replaced by the centroid point. In this way, the number of points is reduced and the computational complexity is lowered, while the geometric features of the point cloud are preserved, which guarantees the accuracy of the subsequent point cloud registration. Then, combined with the odometer information of the IMU, the pose information of the multiple IMU samples within consecutive laser frames is transformed into the coordinate system at the same instant, and the motion distortion produced by the lidar motion is removed from the point cloud.
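A minimal Python sketch of this centroid-per-voxel downsampling follows, assuming an N-by-3 array of points; the function name and the 0.2 m voxel size are illustrative and not values specified by the embodiment.

```python
import numpy as np

def voxel_filter(points, voxel_size=0.2):
    """Downsample an (N, 3) point cloud: keep one centroid per occupied voxel.

    points     : (N, 3) array of x, y, z coordinates
    voxel_size : edge length of the cubic voxel in metres (illustrative value)
    """
    points = np.asarray(points, dtype=float)
    # integer voxel index of every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # group points that fall into the same voxel
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    # sum the coordinates per voxel, then divide by the count to get the centroid
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# e.g. filtered = voxel_filter(raw_scan_xyz, voxel_size=0.2)
```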
And 6, carrying out point cloud registration and establishing a local map.
The point cloud registration and local map establishment comprise: for the extracted keyframes, searching the set of historical keyframes around the current point cloud frame using a map organised as an ikd-tree (incremental k-dimensional tree) structure, and then constructing a local map maintained by the ikd-tree data structure, together with the pose transformation relation, by combining the global pose information of each historical keyframe. When constructing the ikd-tree, the whole map is first divided into a plurality of voxels (these are not the voxels of the voxel filtering described above, but a data structure storing the point cloud map), and the point cloud data within each voxel are called the "point cloud within the voxel". An ikd-tree structure is then built for each intra-voxel point cloud for subsequent querying. In this way the whole map is divided into a plurality of ikd-trees, each corresponding to one voxel. Residual calculation during point cloud registration and data fusion is performed with an iEKF (iterative extended Kalman filter) to compute the pose transformation relation between the current point cloud frame and the local map. Specifically, the feature points in the local map are first extracted and a feature point map is built according to their relative positions. After a new point cloud frame is received, the current point cloud frame is matched against the feature point map with a point cloud registration algorithm to obtain an initial pose estimate between them. The pose is then further optimized with a feature-point-based optimization method to obtain a more accurate pose estimate. Point cloud registration and local map building in this embodiment can thus be regarded as a pose estimation process.
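As an illustration of the keyframe lookup in this step, the following Python sketch gathers the historical keyframes near the current position into one local map; it uses a SciPy KD-tree as a stand-in for the ikd-tree, and the function name, the 30 m search radius and the data layout are assumptions rather than details of the embodiment.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_local_map(keyframe_poses, keyframe_clouds, current_position, radius=30.0):
    """Gather historical keyframes near the current position into one local map.

    keyframe_poses   : list of 4x4 pose matrices T_world_keyframe
    keyframe_clouds  : list of (N_i, 3) point clouds, each in its keyframe's own frame
    current_position : (3,) predicted position of the current scan
    radius           : search radius in metres (illustrative)
    """
    positions = np.array([T[:3, 3] for T in keyframe_poses])
    tree = cKDTree(positions)                          # stand-in for the ikd-tree
    neighbour_ids = tree.query_ball_point(current_position, r=radius)
    local_points = []
    for k in neighbour_ids:
        T = keyframe_poses[k]
        pts = keyframe_clouds[k]
        # transform keyframe points into the world frame with the stored global pose
        local_points.append(pts @ T[:3, :3].T + T[:3, 3])
    return np.vstack(local_points) if local_points else np.empty((0, 3))
```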
As shown in fig. 4, this process includes, within one laser scan, forward propagation of the IMU odometry and backward propagation through the IMU integration results.

Forward propagation:

$x_{i+1} = x_i + \Delta t\, f(x_i, u_i, w_i)$

where $x_i$ is the state quantity of the IMU, i is the time index, $\Delta t$ is the sampling interval of the IMU, $u_i$ is the input and $w_i$ is the noise.

Backward propagation:

$x_{j-1} = x_j + \left(-\Delta t\, f(x_j, u_j, 0)\right)$

where $x_j$ is the state quantity of the IMU, j is the time index, $\Delta t$ is the sampling interval of the IMU and $u_j$ is the input.
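The two propagation steps can be sketched in Python as follows; the toy state layout and the kinematic model `f` are illustrative assumptions, since the embodiment does not spell out the full state vector here.

```python
import numpy as np

# toy state x = [px, py, pz, vx, vy, vz]; input u = acceleration with gravity removed (illustrative)
def f(x, u, w):
    return np.concatenate([x[3:6], u + w])

def forward_propagate(x, u, w, dt):
    """x_{i+1} = x_i + dt * f(x_i, u_i, w_i): predict the IMU state to the next IMU stamp."""
    return x + dt * f(x, u, w)

def backward_propagate(x, u, dt):
    """x_{j-1} = x_j + (-dt) * f(x_j, u_j, 0): run the model backwards with zero noise to
    recover the state at each laser point's sampling time inside one scan (de-distortion)."""
    return x + (-dt) * f(x, u, np.zeros_like(u))
```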
And 7, extracting keyframes to obtain the laser odometer factors formed by the keyframe poses. The strategy for extracting historical keyframes mainly checks pose changes and time intervals. In a specific implementation, keyframes are selected as follows:
pose change: when the displacement or rotation between adjacent laser frames exceeds a predetermined threshold, the current frame is taken as a key frame. The purpose of this is to ensure that there is enough scene change between key frames so that the subsequent optimization process is more stable and efficient.
Time interval: to limit the number of key frames, a minimum time interval may be set to ensure that there is at least a certain time difference between adjacent key frames. This helps to reduce the amount of computation and improve the efficiency of the algorithm.
By selecting the key frames through the strategy, the algorithm provided by the invention can reduce the computational complexity and the memory requirement while keeping higher precision.
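A minimal Python sketch of such a keyframe test follows, assuming 4x4 homogeneous pose matrices; the thresholds (1 m, 15 degrees, 0.5 s) and the function name are illustrative values, not those specified by the invention.

```python
import numpy as np

def is_keyframe(T_prev_key, T_current, t_prev_key, t_current,
                trans_thresh=1.0, rot_thresh=np.deg2rad(15.0), min_dt=0.5):
    """Decide whether the current laser frame becomes a new keyframe.

    T_prev_key, T_current : 4x4 poses of the last keyframe and the current frame
    t_prev_key, t_current : their timestamps in seconds
    """
    if t_current - t_prev_key < min_dt:            # minimum time interval between keyframes
        return False
    dT = np.linalg.inv(T_prev_key) @ T_current     # relative motion since the last keyframe
    trans = np.linalg.norm(dT[:3, 3])
    # rotation angle from the trace of the relative rotation matrix
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    return trans > trans_thresh or angle > rot_thresh
```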
And 8, loop closure detection. From the three-dimensional point cloud obtained by the lidar, the spatial structural features of the point cloud are represented by a simple two-dimensional feature map through a descriptor extraction method based on the Scan-Context descriptor, which improves the efficiency and accuracy of loop closure detection; a loop closure detection factor is constructed when a loop is detected. In this embodiment, the Scan-Context obtained from the current scan is matched against the Scan-Context database stored in the global map to determine whether the current scan closes a loop.
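For illustration, a Scan-Context-style descriptor and a yaw-invariant distance can be sketched in Python as follows; the ring/sector counts, maximum range and function names are assumptions, and the published Scan-Context method includes refinements (such as a ring-key pre-search) that are omitted here.

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Build a Scan-Context-style 2-D descriptor: bin points by ring (range)
    and sector (azimuth), keeping the maximum height z in each bin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)
    valid = r < max_range
    ring = np.minimum((r[valid] / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta[valid] / (2.0 * np.pi) * num_sectors).astype(int), num_sectors - 1)
    desc = np.zeros((num_rings, num_sectors))
    np.maximum.at(desc, (ring, sector), z[valid])
    return desc

def scan_context_distance(d1, d2):
    """Column-wise cosine distance, minimised over column shifts (yaw invariance)."""
    best = np.inf
    for shift in range(d2.shape[1]):
        d2s = np.roll(d2, shift, axis=1)
        num = np.sum(d1 * d2s, axis=0)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(d2s, axis=0) + 1e-9
        best = min(best, np.mean(1.0 - num / den))
    return best
```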
And 9, back-end optimization. In the back-end factor graph optimization of this embodiment, the IMU pre-integration factor and the wheel pre-integration factor from step 4, the loop closure detection factor from step 8, and the laser odometer factor formed by the keyframe poses obtained in step 7 are placed into the factor graph optimization model to construct the optimization objective function; the optimized pose is finally obtained:

$$\min_{X} \sum_{(i,j)} \left( \left\| e_{ij}^{I_{pre}} \right\|^{2} + \left\| e_{ij}^{OD} \right\|^{2} + \left\| e_{ij}^{L} \right\|^{2} + \left\| e_{ij}^{LO} \right\|^{2} \right)$$

where $e_{ij}^{I_{pre}}$ is the IMU pre-integration factor residual term, $e_{ij}^{OD}$ is the laser odometer factor residual term, $e_{ij}^{L}$ is the loop closure factor residual term, $e_{ij}^{LO}$ is the vehicle speed pre-integration factor residual term, and $e_{ij}$ denotes a residual; $I_{pre}$ denotes the IMU pre-integration, i and j denote different times, OD (Odometry) denotes the laser odometer, L denotes the loop closure, and LO denotes the vehicle speed odometer.
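To illustrate how these four kinds of factors enter one nonlinear least-squares problem, the following simplified Python sketch assembles relative-pose factors over 2-D poses (x, y, yaw) and solves them with SciPy; in practice a factor-graph library over full 6-DOF poses would be used, and all names, weights and the 2-D simplification are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def relative_residual(pose_i, pose_j, meas):
    """Residual of one between-factor: predicted relative pose minus measured (dx, dy, dyaw)."""
    dx, dy = pose_j[:2] - pose_i[:2]
    c, s = np.cos(pose_i[2]), np.sin(pose_i[2])
    pred = np.array([c * dx + s * dy,                     # relative translation in frame i
                     -s * dx + c * dy,
                     np.arctan2(np.sin(pose_j[2] - pose_i[2]), np.cos(pose_j[2] - pose_i[2]))])
    err = pred - meas
    err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))   # wrap the angle error
    return err

def objective(flat_poses, factors, num_poses):
    poses = flat_poses.reshape(num_poses, 3)
    res = [poses[0]]                        # prior on the first pose anchors the graph
    for i, j, meas, weight in factors:      # IMU, laser-odometry, wheel and loop factors alike
        res.append(weight * relative_residual(poses[i], poses[j], meas))
    return np.concatenate(res)

# factors = [(i, j, measured_relative_pose, sqrt_information_weight), ...]
# sol = least_squares(objective, initial_poses.ravel(), args=(factors, num_poses))
```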
The multi-sensor fusion positioning and mapping method for the autonomous vehicle uses three sensors, the lidar, the inertial measurement unit and the wheel speed odometer, and fuses the data of the three sensors through the extended Kalman filter, thereby achieving centimetre-level positioning accuracy. After the raw data of the lidar, the IMU and the wheel speed odometer are acquired, the lidar scans are first accumulated into a scan surface, while the IMU performs odometry integration and pre-integration between consecutive laser keyframes and the wheel speed odometer performs vehicle speed pre-integration. After the raw data of the three sensors are processed, the IMU pre-integration and the vehicle speed pre-integration serve as optimization factors for back-end factor graph optimization. The laser odometer part removes motion distortion through the odometry integration of the IMU, and judges whether a loop closure exists by measuring the similarity between the global Scan-Context database and the current Scan-Context. Residual calculation during point cloud registration and data fusion between the historical keyframes in the local map and the current frame is performed with the iterative extended Kalman filtering method to obtain the front-end odometer pose. Finally, the four factors are inserted into the factor graph optimization model for a fused positioning solution.
The invention not only achieves high-precision positioning and mapping in ordinary feature-rich scenes, but also guarantees the positioning accuracy of the vehicle in tunnel scenes with no edge features, even without GPS signals. The invention uses a low-cost 16-line lidar, a low-precision IMU and a wheel speed odometer, achieves high-precision positioning of the vehicle in tunnel scenes while preserving the cost advantage, and can improve the autonomous navigation performance of a ground vehicle in GPS-denied, low-illumination, featureless tunnel environments.
A system for realizing a multi-sensor fusion positioning mapping method of an automatic driving vehicle comprises a hardware sensor module 1, a synchronous controller module 2 and a calculation module 3.
The sensor module 1 comprises three sensors of a vehicle-mounted laser radar, an IMU and a wheel speed odometer;
the synchronization controller module 2 is used to achieve spatio-temporal synchronization of the data collected by the different sensors;
the calculation module 3 realizes the high-precision positioning function of the automatic driving vehicle under the scene without GPS signals through the automatic driving vehicle positioning and mapping method based on the sensor data acquired in the hardware sensor module 1, and improves the robustness and reliability of the algorithm under the characteristic missing environment through combining laser radar, a wheel encoder and IMU data. Even under the condition that the laser radar is influenced by environmental noise, higher positioning and mapping accuracy can be maintained.
In the embodiment, the vehicle adopts a sensor fusion scheme based on 16-line laser radar, an economical IMU and a wheel speed meter to realize the vehicle positioning under the scene without GPS signals and edge characteristics; the multi-sensor fusion positioning mapping method for the automatic driving vehicle realizes high-precision positioning mapping of the automatic driving vehicle under the condition of no GPS signal, as shown in fig. 5.
The preferred embodiments of the invention disclosed above are intended only to assist in the explanation of the invention. The preferred embodiments are not exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best understand and utilize the invention. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (10)

1. The multi-sensor fusion positioning mapping method for the automatic driving vehicle is characterized by comprising the following steps of:
step 1, calibrating a vehicle-mounted sensor;
step 2, collecting laser radar point cloud data of an actual scene, pose data of an IMU and wheel speed data of a wheel speed meter in a vehicle motion process received by a calibrated sensor, and storing the data in a buffer;
step 3, initializing the gravity direction of the IMU by using the attitude angle measured by the integrated navigation system to eliminate the influence of gravity at different geographic positions and reduce the attitude estimation error;
step 4, an IMU pre-integration model and a wheel pre-integration model are built to obtain an IMU pre-integration and a vehicle speed pre-integration;
step 5, laser point cloud preprocessing: voxel filtering is carried out on each frame of point cloud, and distortion is removed from the point cloud by combining with the odometer information of the IMU;
step 6, registering point clouds and establishing a local map to obtain the pose of the front-end odometer;
step 7, when the pose change obtained by the point cloud registration in the step 6 exceeds a threshold value, extracting a key frame, and taking the last frame of one laser scanning as a key frame;
step 8, according to the three-dimensional point cloud obtained by the laser radar, representing the spatial structural characteristics of the point cloud by using a two-dimensional characteristic graph, and constructing a loop detection factor when the loop is detected;
and 9, constructing a factor graph optimization model, and placing loop detection factors into the factor graph optimization model for fusion positioning.
2. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: step 4 includes constructing an IMU pre-integration factor between two lidar keyframes, the pre-integration terms being $\Delta R_{imu}$, $\Delta v_{imu}$ and $\Delta p_{imu}$, where $\Delta R_{imu}$ is the IMU rotation pre-integration, $\Delta v_{imu}$ is the IMU velocity pre-integration and $\Delta p_{imu}$ is the IMU position pre-integration;
the longitudinal velocity, lateral velocity and yaw rate of the vehicle are solved using a two-degree-of-freedom vehicle model to construct a wheel pre-integration factor, which comprises the vehicle speed pre-integration terms $\Delta R_{vehicle}$ and $\Delta p_{vehicle}$, where $\Delta R_{vehicle}$ is the vehicle rotation pre-integration and $\Delta p_{vehicle}$ is the vehicle position pre-integration.
3. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: in step 4, the IMU pre-integration and the vehicle speed pre-integration are obtained as follows:

IMU pre-integration:

vehicle speed pre-integration:

where $\Delta\tilde{R}_{ij}$ is the pre-integrated rotation measurement and $\delta\phi_{ij}$ is the rotation noise; $\Delta\tilde{v}_{ij}$ is the pre-integrated velocity measurement and $\delta v_{ij}$ is the velocity noise; $\Delta\tilde{p}_{ij}$ is the pre-integrated position measurement and $\delta p_{ij}$ is the position noise; $R_i$, $R_j$ are the rotation matrices from times i and j to the world coordinate system; $\Delta R_{ij}$ is the rotation between times i and j; $v_i$, $v_j$ are the velocities at times i and j; $\Delta t_{ij}$ is the time difference between i and j; $\Delta p_{ij}$ is the position change between times i and j; $p_i$ is the position at time i and $p_j$ is the position at time j.
4. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: the laser point cloud preprocessing process is specifically as follows: the original laser points are filtered with a voxel filtering algorithm, the 3D point cloud is divided into a spatial voxel grid, and within each voxel the other data points are replaced by the centroid of the voxel, so as to improve the accuracy of point cloud registration.
5. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: the point cloud registration and local map establishment comprise: for the extracted keyframes, searching the set of historical keyframes around the current point cloud frame using a map organised as an ikd-tree structure, and then constructing a local map maintained by the ikd-tree data structure, together with the pose transformation relation, by combining the global pose information of each historical keyframe.
6. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 5, wherein: in the process of constructing the ikd-tree structure, the whole map is first divided into a plurality of voxels, and an ikd-tree of the point cloud within each voxel is built, each ikd-tree corresponding to one voxel;
residual calculation during point cloud registration and data fusion is performed with an iEKF to compute the pose transformation relation between the current point cloud frame and the local map: first, the feature points in the local map are extracted and a feature point map is built according to their relative positions; after a new point cloud frame is received, the current point cloud frame is matched against the feature point map with a point cloud registration algorithm to obtain an initial pose estimate, and the pose estimate is then further optimized based on the feature points to obtain a more accurate pose estimate.
7. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: in step 7, extracting key frames by checking pose changes and time intervals;
pose change: when the displacement or rotation between adjacent laser frames exceeds a preset threshold, taking the current frame as a key frame;
time interval: and setting a minimum time interval to ensure that at least a time difference exists between adjacent key frames so as to improve the operation efficiency.
8. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: and the space structure characteristics of the point cloud are represented by a two-dimensional characteristic diagram through a descriptor extraction method based on the Scan-Context descriptor.
9. The multi-sensor fusion localization mapping method of an autonomous vehicle according to any one of claims 1 to 8, wherein: in step 8, the IMU pre-integration factor, the laser odometer factor, the wheel pre-integration factor and the loop closure detection factor are modeled as a maximum a posteriori probability problem; each state node corresponds to one lidar keyframe, the IMU pre-integration and the vehicle speed pre-integration each provide constraints between two keyframes, and the loop closure factor provides constraints between the keyframes of the two loop closure points, so as to form the whole multi-sensor fusion objective function and factor graph.
10. The multi-sensor fusion localization mapping method of an autonomous vehicle of claim 1, wherein: the optimization objective function of the factor graph optimization model is:

$$\min_{X} \sum_{(i,j)} \left( \left\| e_{ij}^{I_{pre}} \right\|^{2} + \left\| e_{ij}^{OD} \right\|^{2} + \left\| e_{ij}^{L} \right\|^{2} + \left\| e_{ij}^{LO} \right\|^{2} \right)$$

where $e_{ij}^{I_{pre}}$ is the IMU pre-integration factor residual term, $e_{ij}^{OD}$ is the laser odometer factor residual term, $e_{ij}^{L}$ is the loop closure factor residual term, $e_{ij}^{LO}$ is the vehicle speed pre-integration factor residual term, and $e_{ij}$ denotes a residual; $I_{pre}$ denotes the IMU pre-integration, i and j denote different times, OD (Odometry) denotes the laser odometer, L denotes the loop closure, and LO denotes the vehicle speed odometer.
CN202310415468.6A 2023-04-18 2023-04-18 Multi-sensor fusion positioning mapping method for automatic driving vehicle Pending CN116608873A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310415468.6A CN116608873A (en) 2023-04-18 2023-04-18 Multi-sensor fusion positioning mapping method for automatic driving vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310415468.6A CN116608873A (en) 2023-04-18 2023-04-18 Multi-sensor fusion positioning mapping method for automatic driving vehicle

Publications (1)

Publication Number Publication Date
CN116608873A true CN116608873A (en) 2023-08-18

Family

ID=87684422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310415468.6A Pending CN116608873A (en) 2023-04-18 2023-04-18 Multi-sensor fusion positioning mapping method for automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN116608873A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117278941A (en) * 2023-09-15 2023-12-22 广东省机场管理集团有限公司工程建设指挥部 Vehicle driving auxiliary positioning method and device based on 5G network and data fusion
CN117761663A (en) * 2023-10-31 2024-03-26 新石器慧通(北京)科技有限公司 External parameter calibration method and device and automatic driving vehicle

Similar Documents

Publication Publication Date Title
CN111457902B (en) Water area measuring method and system based on laser SLAM positioning
CN114526745B (en) Drawing construction method and system for tightly coupled laser radar and inertial odometer
US20190302801A1 (en) Georeferenced trajectory estimation system
CN111578957B (en) Intelligent pure vehicle tracking method based on three-dimensional point cloud map positioning
CN116608873A (en) Multi-sensor fusion positioning mapping method for automatic driving vehicle
CN111551958A (en) Mining area unmanned high-precision map manufacturing method
CN110411462A (en) A kind of GNSS/ inertia/lane line constraint/odometer multi-source fusion method
CN105674993A (en) Binocular camera-based high-precision visual sense positioning map generation system and method
CN112987065B (en) Multi-sensor-integrated handheld SLAM device and control method thereof
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
Sujiwo et al. Monocular vision-based localization using ORB-SLAM with LIDAR-aided mapping in real-world robot challenge
CN115407357A (en) Low-beam laser radar-IMU-RTK positioning mapping algorithm based on large scene
Wen et al. TM 3 Loc: Tightly-coupled monocular map matching for high precision vehicle localization
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN115451948A (en) Agricultural unmanned vehicle positioning odometer method and system based on multi-sensor fusion
CN116359905A (en) Pose map SLAM (selective level mapping) calculation method and system based on 4D millimeter wave radar
CN115127543A (en) Method and system for eliminating abnormal edges in laser mapping optimization
CN113959437A (en) Measuring method and system for mobile measuring equipment
Gao et al. Vido: A robust and consistent monocular visual-inertial-depth odometry
CN113155126A (en) Multi-machine cooperative target high-precision positioning system and method based on visual navigation
CN112432653A (en) Monocular vision inertial odometer method based on point-line characteristics
CN116380079A (en) Underwater SLAM method for fusing front-view sonar and ORB-SLAM3
CN115355904A (en) Slam method for Lidar-IMU fusion of ground mobile robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination