CN115290090A - SLAM map construction method based on multi-sensor information fusion - Google Patents

SLAM map construction method based on multi-sensor information fusion

Info

Publication number
CN115290090A
Authority
CN
China
Prior art keywords
information
robot
laser
pose
filtering
Prior art date
Legal status
Pending
Application number
CN202210810601.3A
Other languages
Chinese (zh)
Inventor
张永芳
常欢
陈岩松
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN202210810601.3A
Publication of CN115290090A
Status: Pending


Classifications

    • G01C21/20: Instruments for performing navigational calculations
    • G01C1/00: Measuring angles
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/1656: Dead reckoning by integrating acceleration or speed (inertial navigation), combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01C21/18: Stabilised platforms, e.g. by gyroscope
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S17/89: Lidar systems specially adapted for mapping or imaging
    • G01S17/93: Lidar systems specially adapted for anti-collision purposes

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a SLAM map construction method based on multi-sensor information fusion. Data are collected with a two-dimensional laser radar, a depth camera, an IMU (inertial measurement unit) and a photoelectric encoder; the pose information collected by the IMU and the photoelectric encoder is filtered and fused to obtain fused pose information; the laser information and the depth image information are fused to obtain fused laser information; and finally the fused pose information and fused laser information are transmitted to a SLAM algorithm to complete map construction. The invention overcomes the limitation that a single sensor cannot provide accurate pose estimation or build a map consistent with the robot's surroundings in a complex environment. The SLAM method fusing the information of the two-dimensional laser radar, depth camera, IMU and photoelectric encoder therefore improves the positioning accuracy and mapping quality of the robot, and better supports applications such as autonomous path planning, autonomous positioning and autonomous obstacle avoidance in complex environments.

Description

SLAM map construction method based on multi-sensor information fusion
Technical Field
The invention belongs to the technical field of SLAM, and particularly relates to a SLAM map construction method based on multi-sensor information fusion.
Background
Simultaneous localization and mapping (SLAM) can effectively solve the problems of localization, navigation and mapping that an autonomous mobile robot faces when moving in an unknown environment. The robot senses the surrounding environment and its own state through onboard sensors, and from this information estimates its position and attitude and constructs a map consistent with the surroundings. Once the map is constructed, the robot can use it to meet application requirements such as autonomous path planning, autonomous positioning and autonomous obstacle avoidance, thereby achieving autonomous movement.
However, as robots are more widely deployed, their working environments become increasingly complex. A single sensor cannot provide reliable pose estimation in such environments or build a map consistent with the surroundings, which limits the use of robots in complex environments.
Disclosure of Invention
The invention aims to provide a SLAM map construction method based on multi-sensor information fusion, which solves the problem that a single sensor cannot provide reliable pose estimation for a robot in a complex environment or construct a map consistent with the surroundings, a limitation that restricts the use of robots in complex environments.
The technical scheme adopted by the invention is as follows.
the SLAM map construction method based on multi-sensor information fusion is implemented according to the following steps:
the method comprises the following steps that 1, a two-dimensional laser radar, a depth camera, an IMU and a photoelectric encoder are mounted on a robot, the IMU comprises a gyroscope and an accelerometer, a traveling wheel of the robot is connected with the photoelectric encoder, and the speed, the acceleration, the angular speed, the laser information and the depth image information of the robot are collected;
step 2, calculating the three-dimensional coordinate system change of the robot according to the acquired acceleration and angular velocity, resolving the robot attitude angle by using quaternion, performing complementary filtering on the attitude angle, and performing sliding filtering on the acquired velocity;
fusing the acceleration, the angular velocity, the attitude angle after filtering and the velocity after sliding filtering by using extended Kalman filtering so as to obtain pose information of the robot;
step 3, carrying out information fusion on the laser information and the depth image information so as to obtain fused laser information;
step 4, transmitting the latest updated pose information and the fused laser information into a SLAM algorithm, and constructing a map.
The specific process of complementary filtering of the attitude angle is as follows:

ψ = (τ/(τ+dt))·ψ_g + (dt/(τ+dt))·ψ_a

where ψ_a is the attitude angle obtained from the accelerometer, ψ_g is the attitude angle obtained from the gyroscope, ψ is the filtered attitude angle, dt/(τ+dt) is the low-pass filter coefficient, τ/(τ+dt) is the high-pass filter coefficient, τ is the filter time constant, 1/τ is the filter cut-off frequency, and dt is the filter sampling time.

The gyroscope attitude angle at the n-th filtering step is:

ψ_g(n) = ψ(n-1) + ω_g(n)·dt

where ω_g(n) is the angular velocity measured by the gyroscope at step n, and the attitude angle after the n-th filtering step is:

ψ(n) = (τ/(τ+dt))·(ψ(n-1) + ω_g(n)·dt) + (dt/(τ+dt))·ψ_a(n)
in the step 2, the specific process of fusing the acceleration, the angular velocity, the attitude angle after filtering and the velocity after sliding filtering by using the extended Kalman filtering is as follows:
a, performing prior estimation on the pose of the robot at the k moment through the speed measured by the photoelectric encoder after filtering at the k-1 moment, wherein a prior estimation formula is as follows:
Figure BDA0003740651490000031
Figure BDA0003740651490000032
represents an a priori estimate of the pose of the robot at time k,
Figure BDA0003740651490000033
a posteriori estimate of the pose of the robot at time k-1, u k-1 Representing the speed measured by the encoder after filtering at the k-1 moment;
step B, the prior estimation error covariance matrix at time k is calculated:

P_{k/k-1} = A·P_{k-1}·A^T + Q_{k-1}

where P_{k/k-1} is the prior estimation error covariance matrix at time k, A is the gain (state-transition) matrix of the robot pose, P_{k-1} is the posterior estimation error covariance matrix at time k-1, and Q_{k-1} is the process excitation noise covariance matrix at time k-1;
step C, the Kalman gain K_k at time k is calculated:

K_k = P_{k/k-1}·H^T·(H·P_{k/k-1}·H^T + R_k)^(-1)

where H is the gain matrix mapping the current robot pose to the observation, and R_k is the observation noise covariance matrix at time k;
step D, the posterior estimate of the robot pose at time k is updated from the robot's acceleration, angular velocity and filtered attitude angle at time k:

X̂_k = X̂_{k/k-1} + K_k·(Z_k - H·X̂_{k/k-1})

where X̂_k is the posterior estimate of the robot pose at time k, X̂_{k/k-1} is the prior estimate obtained in step A, and Z_k is the observation consisting of the acceleration, the angular velocity and the filtered attitude angle at time k;
step E, the error covariance matrix is updated and the procedure returns to step A; the update formula is:

P_{k/k} = (I - K_k·H)·P_{k/k-1}

The posterior estimate of the robot pose at time k is the pose information of the robot.
The specific process of fusing the laser information and the depth image information in step 3 is as follows: the depth image information is converted into pseudo laser information, and the pseudo laser information is fused with the laser information. First, the coordinate system of the obtained pseudo laser information is transformed into the coordinate system of the laser information. The two-dimensional laser radar scans the angular range {θ_laser | 0° ≤ θ_laser ≤ 360°}. An angle threshold θ_th is set, and for each laser angle the minimum distance r_θ = min{r_laser, r_depth} over the pseudo laser points satisfying |θ_laser - θ_depth| ≤ θ_th is taken as the distance corresponding to that angle, thereby fusing the two-dimensional laser radar information with the pseudo laser information and obtaining the fused laser information.
The specific process of converting the depth image information into pseudo laser information is as follows: the depth image information comprises multiple frames of depth images; each depth image is divided into n columns of pixels, and in each column the pixel closest to the camera origin is taken as the pixel converted into that column's pseudo laser data. The pseudo laser data obtained from all the pixels of the m-th column is expressed as a range r_m, and the pseudo laser point of the m-th column is expressed as the polar pair (r_m, θ_m). The depth image information is thus converted into a two-dimensional laser point cloud r_depth = {(r_m, θ_m)}, i.e., into pseudo laser information.
The invention has the beneficial effects that:
According to the SLAM map construction method based on multi-sensor information fusion, complementary filtering and sliding filtering effectively suppress the errors that arise when measuring the robot's acceleration, angular velocity and velocity, and fusing the filtered pose information with an extended Kalman filter further improves the positioning accuracy of the robot. Fusing the laser information with the depth image information overcomes the limitation that a two-dimensional laser radar can only sense environmental information in its scanning plane, improving the robot's perception of the environment. The SLAM method based on multi-sensor information fusion therefore improves the positioning accuracy and mapping quality of the robot, and better supports applications such as autonomous path planning, autonomous positioning and autonomous obstacle avoidance in complex environments.
Drawings
FIG. 1 is a flow chart of a SLAM map construction method based on multi-sensor information fusion according to the present invention;
FIG. 2 is a schematic diagram of the conversion of depth image information to pseudo laser information as employed in the present invention;
FIG. 3 is a schematic diagram of pseudo laser information and laser information fusion employed in the present invention;
FIG. 4 (A) is a ground truth diagram of a physical simulation environment;
FIG. 4 (B) is a two-dimensional grid map produced by laser SLAM based on Gmapping;
FIG. 4 (C) is a two-dimensional grid map produced by the method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a SLAM map construction method based on multi-sensor information fusion, which is implemented according to the following steps as shown in figure 1:
the method comprises the following steps that 1, a two-dimensional laser radar, a depth camera and an IMU (inertial measurement unit) are installed on a robot, the IMU comprises a gyroscope and an accelerometer, a photoelectric encoder is connected to a walking wheel of the robot, and the speed, the acceleration, the angular velocity, laser information and depth image information of the robot are collected;
step 2, calculating the three-dimensional coordinate system change of the robot according to the acquired acceleration and angular velocity, resolving the robot attitude angle by using quaternion, performing complementary filtering on the attitude angle, and performing sliding filtering on the acquired velocity;
the specific process of complementary filtering of the attitude angle is as follows:
Figure BDA0003740651490000051
in the formula, /) a For the attitude angle, psi, of the accelerometer g The attitude angle of the gyroscope is represented, psi is the filtered attitude angle, dt/(tau + dt) is low-pass filtering, tau/(tau + dt) is high-pass filtering, tau is a filter time constant, 1/tau is a filter cut-off frequency, and dt is filter sampling time.
Wherein, the gyroscope attitude angle during the nth filtering is as follows:
Figure BDA0003740651490000061
the attitude angle after the nth filtering is:
Figure BDA0003740651490000062
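For illustration, the complementary filter recursion above can be realised as the following minimal Python sketch. The class name, the treatment of a single scalar attitude angle and the sample parameter values are assumptions made for illustration; the patent only fixes the filter equation itself.

```python
class ComplementaryFilter:
    """Fuse a gyroscope rate with an accelerometer-derived angle.

    The high-pass weight tau/(tau+dt) is applied to the integrated gyro angle
    and the low-pass weight dt/(tau+dt) to the accelerometer angle, matching
    psi(n) = a*(psi(n-1) + omega*dt) + (1-a)*psi_a(n) with a = tau/(tau+dt).
    """

    def __init__(self, tau: float, dt: float, initial_angle: float = 0.0):
        self.dt = dt
        self.alpha = tau / (tau + dt)   # high-pass coefficient a
        self.angle = initial_angle      # psi(n-1), the previous fused angle

    def update(self, gyro_rate: float, accel_angle: float) -> float:
        # Gyro attitude at step n: previous fused angle plus the integrated rate.
        gyro_angle = self.angle + gyro_rate * self.dt
        # Complementary fusion: high-pass on the gyro, low-pass on the accelerometer.
        self.angle = self.alpha * gyro_angle + (1.0 - self.alpha) * accel_angle
        return self.angle


if __name__ == "__main__":
    f = ComplementaryFilter(tau=0.5, dt=0.01)
    print(f.update(gyro_rate=0.2, accel_angle=0.05))  # one fused attitude sample
```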
The specific process of fusing the acceleration, the angular velocity, the filtered attitude angle and the sliding-filtered velocity using the extended Kalman filter is as follows:

step A, a prior estimate of the robot pose at time k is computed from the filtered velocity measured by the photoelectric encoder at time k-1, and the prior estimation formula is:

X̂_{k/k-1} = A·X̂_{k-1} + B·u_{k-1}

where X̂_{k/k-1} is the prior estimate of the robot pose at time k, X̂_{k-1} is the posterior estimate of the robot pose at time k-1, u_{k-1} is the filtered velocity measured by the encoder at time k-1, and B is the control-input matrix that maps the velocity into the pose state;
step B, the prior estimation error covariance matrix at time k is calculated:

P_{k/k-1} = A·P_{k-1}·A^T + Q_{k-1}

where P_{k/k-1} is the prior estimation error covariance matrix at time k, A is the gain (state-transition) matrix of the robot pose, P_{k-1} is the posterior estimation error covariance matrix at time k-1, and Q_{k-1} is the process excitation noise covariance matrix at time k-1;
step C, the Kalman gain K_k at time k is calculated:

K_k = P_{k/k-1}·H^T·(H·P_{k/k-1}·H^T + R_k)^(-1)

where H is the gain matrix mapping the current robot pose to the observation, and R_k is the observation noise covariance matrix at time k;
step D, the posterior estimate of the robot pose at time k is updated from the robot's acceleration, angular velocity and filtered attitude angle at time k:

X̂_k = X̂_{k/k-1} + K_k·(Z_k - H·X̂_{k/k-1})

where X̂_k is the posterior estimate of the robot pose at time k, X̂_{k/k-1} is the prior estimate obtained in step A, and Z_k is the observation consisting of the acceleration, the angular velocity and the filtered attitude angle at time k;
step E, the error covariance matrix is updated and the procedure returns to step A; the update formula is:

P_{k/k} = (I - K_k·H)·P_{k/k-1}

The posterior estimate of the robot pose at time k is the pose information of the robot.
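Steps A to E above amount to one predict/update cycle of a pose filter. The following Python sketch shows such a cycle with NumPy; the three-state pose [x, y, θ], the differential-drive motion model, and the choice of observing only the complementary-filtered attitude angle (rather than the full Z_k of acceleration, angular velocity and attitude angle) are simplifying assumptions for illustration, not the patent's exact matrices.

```python
import numpy as np

class PoseEKF:
    """Sketch of the fusion: predict from encoder velocity, correct from the IMU attitude."""

    def __init__(self, dt: float):
        self.dt = dt
        self.x = np.zeros(3)                  # pose state [x, y, theta]
        self.P = np.eye(3) * 1e-3             # posterior error covariance
        self.Q = np.eye(3) * 1e-4             # process excitation noise covariance
        self.H = np.array([[0.0, 0.0, 1.0]])  # observation matrix (observe theta only)
        self.R = np.array([[1e-2]])           # observation noise covariance

    def predict(self, v: float, w: float) -> None:
        """Steps A-B: prior pose and prior covariance from the filtered encoder velocity."""
        x, y, th = self.x
        self.x = np.array([x + v * self.dt * np.cos(th),
                           y + v * self.dt * np.sin(th),
                           th + w * self.dt])
        A = np.array([[1.0, 0.0, -v * self.dt * np.sin(th)],
                      [0.0, 1.0,  v * self.dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])      # Jacobian of the motion model
        self.P = A @ self.P @ A.T + self.Q

    def update(self, z: np.ndarray) -> np.ndarray:
        """Steps C-E: Kalman gain, posterior pose, covariance update."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(3) - K @ self.H) @ self.P
        return self.x


if __name__ == "__main__":
    ekf = PoseEKF(dt=0.02)
    ekf.predict(v=0.3, w=0.1)             # sliding-filtered encoder velocity, gyro rate
    print(ekf.update(np.array([0.002])))  # complementary-filtered attitude observation
```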
Step 3, carrying out information fusion on the laser information and the depth image information so as to obtain fused laser information; the specific process is as follows:
In the first part, the depth image information is converted into pseudo laser information. The depth image information comprises multiple frames of depth images; each depth image is divided into n columns of pixels, and in each column the pixel closest to the camera origin is taken as the pixel converted into that column's pseudo laser data. The pseudo laser data obtained from all the pixels of the m-th column is expressed as a range r_m, and the pseudo laser point of the m-th column is expressed as the polar pair (r_m, θ_m). The depth image information is thus converted into a two-dimensional laser point cloud r_depth = {(r_m, θ_m)}, i.e., into pseudo laser information.
In the second part, the pseudo laser information is fused with the laser information. First, the coordinate system of the pseudo laser information is transformed into the coordinate system of the laser information. The two-dimensional laser radar scans the angular range {θ_laser | 0° ≤ θ_laser ≤ 360°}. An angle threshold θ_th is set, and for each laser angle the minimum distance r_θ = min{r_laser, r_depth} over the pseudo laser points satisfying |θ_laser - θ_depth| ≤ θ_th is taken as the distance corresponding to that angle, thereby fusing the two-dimensional laser radar information with the pseudo laser information and obtaining the fused laser information.
step 4, transmitting the latest updated pose information and the fused laser information into the SLAM algorithm, and constructing the map.
The invention collects the robot's velocity, acceleration and angular-velocity information in real time through the photoelectric encoder and the IMU, but the directly collected pose information contains substantial errors. Complementary filtering effectively addresses the fact that the accelerometer in the IMU is sensitive to acceleration disturbances, so that an inclination angle computed from an instantaneous reading carries a large error, as well as the growing errors caused by the gyroscope's integration drift and temperature drift over time. Sliding filtering removes velocity fluctuations caused by noise interference. Finally, the filtered pose information of the robot is fused with an extended Kalman filter, further improving positioning accuracy. Fusing the laser information with the depth image information effectively addresses the two-dimensional laser radar's limited perception in complex environments: for example, when the robot faces an obstacle lower than the laser scanning plane, the laser cannot reach the obstacle, the radar cannot perceive it, and the robot cannot build a map consistent with the surroundings; fusing the laser information and the depth image information therefore improves the robot's perception of its surroundings. The fused pose information and laser information are transmitted to the SLAM algorithm, which estimates the pose with a particle filter and builds a local grid map, and a complete two-dimensional grid map is constructed by continuously updating the map.
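For the sliding filtering of the encoder velocity mentioned above, a plain moving-average window is one common realisation; the window length and function names below are illustrative assumptions, since the patent does not fix the filter's exact form.

```python
from collections import deque

def make_sliding_filter(window: int = 5):
    """Return a stateful moving-average filter for scalar velocity samples.

    A short window smooths pulse-counting jitter from the photoelectric encoder
    while keeping the estimate responsive to genuine speed changes.
    """
    buf = deque(maxlen=window)

    def filt(v: float) -> float:
        buf.append(v)
        return sum(buf) / len(buf)

    return filt


if __name__ == "__main__":
    smooth = make_sliding_filter(window=5)
    for raw in [0.50, 0.52, 0.80, 0.51, 0.49]:  # spike at the third sample
        print(round(smooth(raw), 3))
```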
According to the SLAM map construction method based on multi-sensor information fusion, acceleration and angular velocity of a robot are collected through an Inertial Measurement Unit (IMU).
Accurate positioning in a complex indoor environment is the basis of the robot's normal operation. However, GPS (global positioning system) signals are weak inside buildings and positioning may even fail, whereas the IMU, as a device that measures an object's three-axis attitude angles and acceleration, can provide accurate acceleration and angular velocity for the robot over short periods, making it an effective means of positioning the robot in a complex environment.
In the SLAM map construction method based on multi-sensor information fusion, the IMU comprises an accelerometer and a gyroscope; acceleration information is acquired by the accelerometer, and angular velocity information is collected by the gyroscope.
The IMU is composed of three single-axis accelerometers and three single-axis gyroscopes: the accelerometers detect the robot's acceleration and the gyroscopes detect its angular velocity. After the signals are processed, the attitude of the object can be calculated, and the robot's velocity and displacement are obtained by integration.
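A minimal sketch of the quaternion-based attitude solution referred to above: the body angular rates are integrated into a unit quaternion, which is then converted to roll, pitch and yaw. The first-order integration scheme and all names are assumptions for illustration.

```python
import math

def integrate_quaternion(q, gyro, dt):
    """First-order integration of body rates (rad/s) into the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    gx, gy, gz = gyro
    dq = (0.5 * (-x * gx - y * gy - z * gz),
          0.5 * ( w * gx + y * gz - z * gy),
          0.5 * ( w * gy - x * gz + z * gx),
          0.5 * ( w * gz + x * gy - y * gx))
    q = tuple(a + b * dt for a, b in zip(q, dq))
    norm = math.sqrt(sum(c * c for c in q))       # re-normalise to unit length
    return tuple(c / norm for c in q)

def quaternion_to_euler(q):
    """Convert a unit quaternion to (roll, pitch, yaw) in radians."""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

if __name__ == "__main__":
    q = (1.0, 0.0, 0.0, 0.0)
    q = integrate_quaternion(q, gyro=(0.0, 0.0, 0.5), dt=0.01)  # yawing at 0.5 rad/s
    print(quaternion_to_euler(q))
```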
The speed of the robot is acquired through the photoelectric encoders. The speed of each wheel is calculated from the pulse signals collected by the photoelectric encoders mounted on the robot's wheels, the overall moving speed of the robot is then obtained from the differential motion model, and the displacement of the robot is obtained by further calculation.
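The differential model described above can be sketched as follows; the encoder resolution, wheel radius and track width are placeholder values, not parameters taken from the patent.

```python
import math

def wheel_speed(pulses: int, dt: float, ticks_per_rev: int = 1024,
                wheel_radius: float = 0.035) -> float:
    """Linear speed (m/s) of one wheel from encoder pulses counted over dt seconds."""
    revolutions = pulses / ticks_per_rev
    return 2.0 * math.pi * wheel_radius * revolutions / dt

def differential_velocity(v_left: float, v_right: float, track_width: float = 0.23):
    """Robot body velocity (v, omega) from left/right wheel speeds under the differential model."""
    v = (v_right + v_left) / 2.0
    omega = (v_right - v_left) / track_width
    return v, omega

if __name__ == "__main__":
    vl = wheel_speed(pulses=180, dt=0.1)
    vr = wheel_speed(pulses=200, dt=0.1)
    print(differential_velocity(vl, vr))  # overall moving speed and turn rate
```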
The depth image information is converted into pseudo laser information, and the laser information and the pseudo laser information are fused to obtain the fused laser information. Converting the depth image information into pseudo laser information and fusing it with the laser information compensates for the two-dimensional laser radar's limited perception and improves the SLAM mapping result.
The depth image information comprises multiple frames of depth images. Each depth image is divided into n columns of pixels, and in each column the pixel closest to the camera origin is taken as the pixel converted into that column's pseudo laser data. Referring to fig. 2, the pseudo laser data obtained from all the pixels of the m-th column is expressed as a range r_m, and the pseudo laser point of the m-th column can be represented as the polar pair (r_m, θ_m). The depth image information can thus be converted into a two-dimensional laser point cloud r_depth = {(r_m, θ_m)}, i.e., into pseudo laser information.
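The column-wise conversion of fig. 2 can be sketched as follows: for each image column m the closest valid depth pixel is projected through a pinhole model to a polar pair (r_m, θ_m). The camera intrinsics fx and cx, the NumPy array layout and the handling of missing depth values are assumptions for illustration; the patent only states the closest-pixel-per-column rule.

```python
import numpy as np

def depth_to_pseudo_scan(depth: np.ndarray, fx: float, cx: float):
    """Convert an H x W depth image (metres) into a pseudo laser scan.

    For each column m, the pixel nearest the camera origin is kept, projected
    onto the horizontal plane and returned as a polar pair (r_m, theta_m).
    """
    ranges, angles = [], []
    for m in range(depth.shape[1]):
        col = depth[:, m]
        valid = col[col > 0.0]                   # ignore missing depth readings
        if valid.size == 0:
            continue
        z = float(valid.min())                   # closest point in this column
        x = (m - cx) * z / fx                    # lateral offset via the pinhole model
        ranges.append(float(np.hypot(x, z)))     # r_m
        angles.append(float(np.arctan2(x, z)))   # theta_m
    return np.array(ranges), np.array(angles)    # r_depth = {(r_m, theta_m)}


if __name__ == "__main__":
    img = np.full((4, 6), 2.0)
    img[2, 3] = 1.2                              # obstacle below the lidar plane
    print(depth_to_pseudo_scan(img, fx=525.0, cx=3.0))
```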
The coordinate system of the pseudo laser information is then transformed into the laser coordinate system, and at each laser radar scanning angle the minimum distance r_θ = min{r_laser, r_depth} is taken as the distance corresponding to that angle.
Referring to fig. 3, the coordinate system of the obtained pseudo laser information is transformed into the coordinate system of the laser information. Because the laser information is updated faster than the depth image information, a strict one-to-one correspondence between the two cannot be guaranteed. Therefore, over the scanning range {θ_laser | 0° ≤ θ_laser ≤ 360°} of the two-dimensional laser radar an angle threshold θ_th is set, and the minimum distance r_θ = min{r_laser, r_depth} over the pseudo laser points satisfying |θ_laser - θ_depth| ≤ θ_th is taken as the distance corresponding to that angle, realizing the fusion of the two-dimensional laser radar information and the pseudo laser information.
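The angle-threshold fusion of fig. 3 can be sketched as below: for every laser beam angle, the pseudo laser returns within ±θ_th compete with the laser return and the smaller range is kept. The degree-based angles, the wrap-around handling and the function name are assumptions made for illustration.

```python
import numpy as np

def fuse_scans(laser_r, laser_theta, depth_r, depth_theta, theta_th: float = 0.5):
    """Merge a 2-D laser scan with a pseudo laser scan (all angles in degrees).

    For each laser angle, any pseudo laser return within +/- theta_th degrees is
    compared with the laser return and the minimum range wins, so that obstacles
    below the laser scanning plane appear in the fused scan.
    """
    fused = np.array(laser_r, dtype=float)
    depth_r = np.asarray(depth_r, dtype=float)
    depth_theta = np.asarray(depth_theta, dtype=float)
    for i, th in enumerate(laser_theta):
        # Angular difference wrapped so that 359 degrees and 1 degree are neighbours.
        diff = np.abs((depth_theta - th + 180.0) % 360.0 - 180.0)
        near = depth_r[diff <= theta_th]
        if near.size:
            fused[i] = min(fused[i], float(near.min()))   # r_theta = min(r_laser, r_depth)
    return fused


if __name__ == "__main__":
    print(fuse_scans(laser_r=[3.0, 3.0, 3.0], laser_theta=[0.0, 1.0, 2.0],
                     depth_r=[1.4], depth_theta=[1.2]))   # -> [3.0, 1.4, 3.0]
```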
FIG. 4 (A) shows the ground-truth map of the physical simulation environment. FIG. 4 (B) is the two-dimensional grid map obtained by passing the measured laser information and encoder information directly to the Gmapping laser SLAM, building a motion model, and performing pose estimation and map construction with a particle filter. FIG. 4 (C) is the two-dimensional grid map obtained by fusing the collected laser information and depth image information into the fused laser information, applying extended Kalman filtering to the complementary-filtered and sliding-filtered IMU and encoder information to obtain the fused pose information, and then passing the fused laser information and fused pose information to the Gmapping laser SLAM, building a motion model, and performing pose estimation and map construction with a particle filter.
Obstacles lower than the laser scanning plane of the two-dimensional laser radar exist in the simulation environment, so the original method cannot perceive them and therefore cannot map them, as shown in FIG. 4 (B), which may in turn cause the robot's path planning and autonomous obstacle avoidance to fail. The present method solves the problem that the robot cannot perceive obstacles below the laser scanning plane and improves the robot's positioning accuracy through multi-sensor information fusion, so it can construct a more accurate and complete two-dimensional grid map, as shown in FIG. 4 (C), providing a more reliable basis for subsequent path planning and autonomous obstacle avoidance.
In the method, data are collected with the two-dimensional laser radar, the depth camera, the IMU and the photoelectric encoder; the pose information collected by the IMU and the photoelectric encoder is processed to obtain the fused pose information; the laser information and the depth image information are fused to obtain the fused laser information; and finally the fused pose information and laser information are transmitted to the SLAM algorithm to complete the map construction. The invention overcomes the limitation that a single sensor cannot provide accurate pose estimation or build a map consistent with the robot's surroundings in a complex environment, so the SLAM method fusing the information of the two-dimensional laser radar, depth camera, IMU and photoelectric encoder improves the robot's positioning accuracy and mapping quality, and better supports applications such as autonomous path planning, autonomous positioning and autonomous obstacle avoidance in complex environments.

Claims (5)

1. The SLAM map construction method based on multi-sensor information fusion is characterized by being implemented according to the following steps:
step 1, a two-dimensional laser radar, a depth camera, an IMU and a photoelectric encoder are mounted on the robot; the IMU comprises a gyroscope and an accelerometer, the photoelectric encoder is connected to a traveling wheel of the robot, and the speed, the acceleration, the angular speed, the laser information and the depth image information of the robot are collected;
step 2, calculating the three-dimensional coordinate system change of the robot according to the acquired acceleration and angular velocity, resolving the robot attitude angle by using quaternion, performing complementary filtering on the attitude angle, and performing sliding filtering on the acquired velocity;
fusing the acceleration, the angular velocity, the attitude angle after filtering and the velocity after sliding filtering by using extended Kalman filtering so as to obtain pose information of the robot;
step 3, carrying out information fusion on the laser information and the depth image information so as to obtain fused laser information;
step 4, transmitting the latest updated pose information and the fused laser information into a SLAM algorithm, and constructing a map.
2. The SLAM map construction method based on multi-sensor information fusion as claimed in claim 1, wherein the specific process of complementary filtering of the attitude angle is:

ψ = (τ/(τ+dt))·ψ_g + (dt/(τ+dt))·ψ_a

where ψ_a is the attitude angle obtained from the accelerometer, ψ_g is the attitude angle obtained from the gyroscope, ψ is the filtered attitude angle, dt/(τ+dt) is the low-pass filter coefficient, τ/(τ+dt) is the high-pass filter coefficient, τ is the filter time constant, 1/τ is the filter cut-off frequency, and dt is the filter sampling time;

the gyroscope attitude angle at the n-th filtering step is:

ψ_g(n) = ψ(n-1) + ω_g(n)·dt

where ω_g(n) is the angular velocity measured by the gyroscope at step n; and the attitude angle after the n-th filtering step is:

ψ(n) = (τ/(τ+dt))·(ψ(n-1) + ω_g(n)·dt) + (dt/(τ+dt))·ψ_a(n)
3. The SLAM map construction method based on multi-sensor information fusion as claimed in claim 1, wherein the specific process of fusing the acceleration, the angular velocity, the filtered attitude angle and the sliding-filtered velocity using the extended Kalman filter in step 2 is as follows:

step A, a prior estimate of the robot pose at time k is computed from the filtered velocity measured by the photoelectric encoder at time k-1, and the prior estimation formula is:

X̂_{k/k-1} = A·X̂_{k-1} + B·u_{k-1}

where X̂_{k/k-1} is the prior estimate of the robot pose at time k, X̂_{k-1} is the posterior estimate of the robot pose at time k-1, u_{k-1} is the filtered velocity measured by the encoder at time k-1, and B is the control-input matrix that maps the velocity into the pose state;

step B, the prior estimation error covariance matrix at time k is calculated:

P_{k/k-1} = A·P_{k-1}·A^T + Q_{k-1}

where P_{k/k-1} is the prior estimation error covariance matrix at time k, A is the gain (state-transition) matrix of the robot pose, P_{k-1} is the posterior estimation error covariance matrix at time k-1, and Q_{k-1} is the process excitation noise covariance matrix at time k-1;

step C, the Kalman gain K_k at time k is calculated:

K_k = P_{k/k-1}·H^T·(H·P_{k/k-1}·H^T + R_k)^(-1)

where H is the gain matrix mapping the current robot pose to the observation, and R_k is the observation noise covariance matrix at time k;

step D, the posterior estimate of the robot pose at time k is updated from the robot's acceleration, angular velocity and filtered attitude angle at time k:

X̂_k = X̂_{k/k-1} + K_k·(Z_k - H·X̂_{k/k-1})

where X̂_k is the posterior estimate of the robot pose at time k, X̂_{k/k-1} is the prior estimate obtained in step A, and Z_k is the observation consisting of the acceleration, the angular velocity and the filtered attitude angle at time k;

step E, the error covariance matrix is updated and the procedure returns to step A, the update formula being:

P_{k/k} = (I - K_k·H)·P_{k/k-1}

and the posterior estimate of the robot pose at time k is the pose information of the robot.
4. The SLAM map construction method based on multi-sensor information fusion as claimed in claim 1, wherein the specific process of fusing the laser information and the depth image information in step 3 is as follows: the depth image information is converted into pseudo laser information and fused with the laser information; first, the coordinate system of the obtained pseudo laser information is transformed into the coordinate system of the laser information; the two-dimensional laser radar scans the angular range {θ_laser | 0° ≤ θ_laser ≤ 360°}; an angle threshold θ_th is set, and for each laser angle the minimum distance r_θ = min{r_laser, r_depth} over the pseudo laser points satisfying |θ_laser - θ_depth| ≤ θ_th is taken as the distance corresponding to that angle, thereby realizing the fusion of the two-dimensional laser radar information and the pseudo laser information and obtaining the fused laser information.
5. The SLAM map construction method based on multi-sensor information fusion as claimed in claim 1, wherein the specific process of converting the depth image information into pseudo laser information is as follows: the depth image information comprises multiple frames of depth images; each depth image is divided into n columns of pixels, and in each column the pixel closest to the camera origin is taken as the pixel converted into that column's pseudo laser data; the pseudo laser data obtained from all the pixels of the m-th column is expressed as a range r_m, and the pseudo laser point of the m-th column is represented as the polar pair (r_m, θ_m); the depth image information is thus converted into a two-dimensional laser point cloud r_depth = {(r_m, θ_m)}, i.e., into pseudo laser information.
CN202210810601.3A 2022-07-11 2022-07-11 SLAM map construction method based on multi-sensor information fusion Pending CN115290090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210810601.3A CN115290090A (en) 2022-07-11 2022-07-11 SLAM map construction method based on multi-sensor information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210810601.3A CN115290090A (en) 2022-07-11 2022-07-11 SLAM map construction method based on multi-sensor information fusion

Publications (1)

Publication Number Publication Date
CN115290090A true CN115290090A (en) 2022-11-04

Family

ID=83821306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210810601.3A Pending CN115290090A (en) 2022-07-11 2022-07-11 SLAM map construction method based on multi-sensor information fusion

Country Status (1)

Country Link
CN (1) CN115290090A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117405118A (en) * 2023-10-16 2024-01-16 西安工业大学 Multi-sensor fusion mapping method, system, equipment and storage medium



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination