CN117168441B - Multi-sensor fusion SLAM positioning and reconstructing method and system - Google Patents
- Publication number
- CN117168441B CN117168441B CN202311445001.2A CN202311445001A CN117168441B CN 117168441 B CN117168441 B CN 117168441B CN 202311445001 A CN202311445001 A CN 202311445001A CN 117168441 B CN117168441 B CN 117168441B
- Authority
- CN
- China
- Prior art keywords
- matrix
- laser
- laser radar
- imu
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y02T 10/40 — Engine management systems
Landscapes
- Navigation (AREA)
Abstract
The application discloses a multi-sensor fusion SLAM positioning and reconstruction method that combines the complementary strengths of laser and vision sensors. The system can still operate normally when visual data are unavailable, making it more flexible than tightly coupled laser-vision designs. Compared with current mainstream multi-sensor SLAM fusion schemes, it achieves higher positioning accuracy and faster running speed, and thus better overall performance.
Description
Technical Field
The invention relates to the technical field of intelligent robot perception, and in particular to a multi-sensor fusion SLAM positioning and reconstruction method and system.
Background
Simultaneous Localization and Mapping (SLAM) means that a body carrying specific sensors estimates its own motion while simultaneously building a map of the environment, without any environmental prior. In an autonomous navigation system, SLAM enables a robot to move and localize itself in an unknown environment while building an environment map. In augmented reality, SLAM can track the user's motion with cameras and other sensors and add virtual objects to the scene by superimposing them on the real world; in rescue and inspection tasks, SLAM provides navigation and positioning for unmanned aerial vehicles. In general, SLAM technology has broad application prospects and important research significance, offering an effective solution for autonomous navigation, augmented reality, unmanned aerial vehicles, security, and other fields.
By sensor type, SLAM can be classified into visual SLAM and laser SLAM: laser SLAM acquires three-dimensional point clouds of the environment with a laser sensor, while visual SLAM acquires environment images with a camera. By comparison, laser SLAM obtains more accurate data, whereas visual SLAM places higher demands on the environment, requiring good illumination and contrast; the laser sensor therefore has a clear advantage over the visual sensor in acquiring three-dimensional information about the environment.
Because the laser radar obtains the coordinates of dense three-dimensional points, the SLAM scene-reconstruction function can be realized directly by projecting those coordinate points. However, lidar has the following drawbacks: 1) in rainy scenes, laser returns reflected by raindrops produce false point clouds, which affects subsequent data processing and scene reconstruction; 2) lidar cannot sense transparent or reflective objects such as mirrors and glass, which limits robot perception and navigation; and 3) lidar cannot capture the color of the environment, so the reconstructed three-dimensional model is visually unappealing.
Disclosure of Invention
In view of the foregoing drawbacks or shortcomings in the art, it is desirable to provide a multi-sensor fused SLAM localization and reconstruction method and system.
In a first aspect, an embodiment of the present application provides a method for positioning and reconstructing a multi-sensor fused SLAM, where the method includes:
s1: preprocessing the visible-light camera data, synchronizing the preprocessed data into the laser point cloud through the camera-lidar extrinsic parameters, and coloring the laser points obtained by each lidar scan;
s2: calculating the relative motion of the IMU between two lidar scans, and obtaining prior information for the robot pose estimate through a Kalman filter;
s3: updating the IMU prior information through the Kalman filter to obtain the final LIO posterior state quantity, i.e., the relative motion between two lidar scan frames, and obtaining the robot's motion trajectory by accumulating the relative motions;
s4: fusing the GPS data with the motion trajectory obtained in S3 through a Kalman filter;
s5: performing loop detection on the result of step S4;
s6: superimposing the posterior state on the laser point cloud and, combined with the GPS information, giving each laser scan absolute position information to obtain a three-dimensional reconstruction map;
s7: sending the three-dimensional reconstruction map to the terminal.
In one embodiment, the step S1 includes: preprocessing the visible-light camera data through EnlightenGAN, synchronizing the preprocessed data into the laser point cloud through the camera-lidar extrinsic parameters, and coloring the laser points obtained by each lidar scan.
In one embodiment, the step S2 includes:
calculating the relative motion quantity of the IMU between two scans of the laser radar;
eliminating inherent distortion of the laser radar through the IMU;
and tightly coupling the lidar and IMU information through a Kalman filter, jointly estimating the robot's pose information from the lidar and the IMU.
In one embodiment, in step S3, updating the IMU prior information through the Kalman filter to obtain the final LIO posterior state quantity includes:
s31: constructing the prior covariance matrix

$$\hat{P}_i = F_x \bar{P}_{i-1} F_x^T + F_w Q_i F_w^T$$

where $F_x$ encodes the laser point error, $\bar{P}_{i-1}$ is the posterior covariance matrix at time $i-1$, $F_w$ is the IMU noise coefficient matrix, and $Q_i$ is the noise variance matrix;
s32: defining the Kalman gain matrix as ${}^{g}K$:

$${}^{g}K = \left(H^T R^{-1} H + J^T \hat{P}_i^{-1} J\right)^{-1} H^T R^{-1}$$

where $R$ is the observation noise matrix, $H$ is the observation noise coefficient matrix, and $J$ is the matrix used to constrain the state quantity being optimized, iterating along with it; the superscript $-1$ denotes the matrix inverse and the superscript $T$ the matrix transpose, so $R^{-1}$ is the inverse of the observation noise matrix, $J^T$ the transpose of $J$, and likewise for $H^T$;
the kth iteration may be expressed as the following formula:
delta theta in matrix Ik Is the variation between the rotation component of the IMU gesture and the prior gesture at the kth iteration, delta theta Lk Is the change relation between IMU and laser radar rotation external parameter in iteration, 0 3×3 Is an all-zero matrix of 3 rows and 3 columns, 0 3×15 Is an all-zero matrix of 3 rows and 15 columns, I 3×3 Is an identity matrix of 3 rows and 3 columns, I 15×15 Is a unit matrix of 15 rows and 15 columns, (delta theta) Ik ) -T Is delta theta Ik Is inverted after transposition of (delta theta) Lk ) -T And the same is done;
s33: the posterior state at the $k$-th iteration is:

$$\hat{x}_i^{k+1} = \hat{x}_i^{k} \boxplus \left(-\,{}^{g}K Z_k - \left(I - {}^{g}K H\right) J^{-1} \left(\hat{x}_i^{k} \boxminus \hat{x}_i\right)\right)$$

where $\hat{x}_i^{k+1}$ denotes the posterior state after the $k$-th iteration at time $i$ and $\hat{x}_i^{k}$ the posterior state before it. The meaning of the operator $\boxplus$ depends on the domain of the state quantity: it denotes multiplication on the Lie group and addition in the Lie algebra; the operator $\boxminus$ is the inverse of $\boxplus$. ${}^{g}K$ is the Kalman gain matrix, $Z_k$ the state residual matrix of the $k$-th iteration, $I$ the identity matrix, $H$ the observation noise coefficient matrix, and $\hat{x}_i^{k} \boxminus \hat{x}_i$ the change between the posterior state quantity before the current iteration and the prior state quantity at time $i$ before iterative optimization;
s34: the posterior covariance of the $k$-th iteration can be expressed as:

$$\bar{P}_i = \left(I - {}^{g}K H\right) J^{-1} \hat{P}_i J^{-T}$$

where $I$ is the identity matrix, ${}^{g}K$ the Kalman gain matrix, $H$ the observation noise coefficient matrix, $\hat{P}_i$ the prior covariance matrix at time $i$, and the Jacobian $J$ the matrix used to constrain the iterating state quantity; the superscript $T$ denotes the matrix transpose. Once the exit condition or the specified number of iterations is reached, the iteration stops and the final posterior state is obtained.
In one embodiment, before step S4, the method further comprises: converting the GPS coordinates to be fused.
In one embodiment, the step S5 includes: performing loop-closure detection on the result of step S4 with ScanContext, and performing global optimization on the loop region after a loop is detected.
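The ScanContext descriptor used here for loop detection can be sketched as follows: each scan is summarized as a ring-by-sector polar grid storing the maximum point height per bin, and loop candidates are found by comparing these matrices between scans. This is an illustrative minimal sketch, not the patent's implementation; the bin counts and maximum range are assumed values.

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Minimal ScanContext-style descriptor: for each (ring, sector) polar
    bin, keep the maximum point height z. Bin resolution is illustrative."""
    desc = np.zeros((num_rings, num_sectors))
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                                  # planar range
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)         # azimuth in [0, 2pi)
    ring = np.minimum((r / max_range * num_rings).astype(int), num_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)
    for i in range(points.shape[0]):
        desc[ring[i], sector[i]] = max(desc[ring[i], sector[i]], z[i])
    return desc
```

Two descriptors can then be compared column-wise (e.g., by cosine distance) to score loop candidates.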
In one embodiment, the step S6 includes: projecting the motion trajectory obtained in step S4 into the world coordinate system;

the projected coordinate is ${}^{W}P_i = (x, y, z)$, where $P_i$ denotes the $i$-th three-dimensional point; the upper-left superscript $W$ indicates that the point is expressed in the world (global) coordinate system, with $L$, $I$, and $C$ denoting the lidar, IMU, and camera coordinate systems respectively; $x$, $y$, and $z$ are the values on the three coordinate axes.
In one embodiment, the step S7 includes: and transmitting the three-dimensional reconstruction map to the terminal through the Internet.
In a second aspect, embodiments of the present application provide a multi-sensor fused SLAM localization and reconstruction system, the system comprising:
the preprocessing module is used for preprocessing the visible-light camera data, synchronizing the preprocessed data into the laser point cloud through the camera-lidar extrinsic parameters, and coloring the laser points obtained by each lidar scan;
the computing module is used for computing the relative motion quantity of the IMU between two scans of the laser radar, and acquiring prior information of robot attitude estimation through a Kalman filter;
the updating module is used for updating the prior information of the IMU through a Kalman filter to obtain the LIO final posterior state quantity, namely the relative motion information between two laser radar scanning frames, and obtaining the motion trail of the robot through superposition of the relative motion information;
the fusion module is used for fusing the GPS data with the motion trajectory obtained by the updating module through a Kalman filter;
the detection module is used for performing loop detection on the fused result;
the superposition module is used for superposing the posterior state on the laser point cloud, and giving absolute position information to each laser scanning combined with GPS information to obtain a three-dimensional reconstruction map;
and the sending module is used for sending the three-dimensional reconstruction map to the terminal.
The beneficial effects of this application include:
the SLAM positioning and reconstructing method with the multi-sensor fusion can complement the advantages of the laser vision sensor, and meanwhile, the system can still normally operate under the condition of lack of vision data, and has more flexible advantages compared with the coupled laser vision design; compared with the current mainstream SLAM multisensor fusion scheme, the SLAM multisensor fusion system has higher positioning accuracy and faster running speed, and has better comprehensive performance.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 shows a schematic flow chart of a SLAM positioning and reconstruction method for multi-sensor fusion provided in an embodiment of the present application;
FIG. 2 illustrates an exemplary block diagram of a SLAM localization and reconstruction system 200 with multi-sensor fusion according to one embodiment of the present application;
FIG. 3 illustrates yet another flow diagram provided in accordance with an embodiment of the present application;
FIG. 4 shows a low-light image enhancement schematic of the image preprocessing step of the present application;
FIG. 5 shows a flow chart of the mobile terminal interconnection principle of the present application;
FIG. 6 shows an absolute track error plot of the present application on an M2DGR dataset;
FIG. 7 shows a schematic representation of the final reconstruction effect of the present application;
fig. 8 shows a schematic diagram of a computer system suitable for use in implementing the terminal device of the embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting of the application. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Referring to fig. 1, fig. 1 shows a method for positioning and reconstructing a multi-sensor fusion SLAM according to an embodiment of the present application, where the method includes:
step 110: preprocessing the visible-light camera data, synchronizing the preprocessed data into the laser point cloud through the camera-lidar extrinsic parameters, and coloring the laser points obtained by each lidar scan;
step 120: calculating the relative motion of the IMU between two lidar scans, and obtaining prior information for the robot pose estimate through a Kalman filter;
step 130: updating the IMU prior information through the Kalman filter to obtain the final LIO posterior state quantity, i.e., the relative motion between two lidar scan frames, and obtaining the robot's motion trajectory by accumulating the relative motions;
step 140: fusing the GPS data with the motion trajectory obtained in step 130 through a Kalman filter;
step 150: performing loop detection on the result of step 140;
step 160: superimposing the posterior state on the laser point cloud and, combined with the GPS information, giving each laser scan absolute position information to obtain a three-dimensional reconstruction map;
step 170: sending the three-dimensional reconstruction map to the terminal.
As shown in fig. 3, the invention is mainly divided into three modules. Module A is the laser-inertial odometry (LIO) part: a Kalman filter fuses the IMU and lidar features and yields the posterior state of the laser-inertial odometer. Module B is the GPS module: the WGS84 coordinates from the GPS are converted through UTM coordinates into an east-north-up coordinate system and then fused with the laser-inertial odometer, giving it higher positioning accuracy and better map-construction quality. Module C is the visual feature association module: after the visible-light camera and the lidar are timestamp-synchronized, the camera frame corresponding to each lidar scan is extracted; after preprocessing, feature extraction, and similar operations, the frame is associated with the posterior output of the laser-inertial odometer, the error of projecting the LIO posterior state into the visual module is computed, and BA (bundle adjustment) optimization is performed, finally yielding the posterior state of the multi-sensor fusion system. The final posterior state is sent to the loop detection module to await loop generation and optimization, the three-dimensional RGB scene is reconstructed, and the final reconstruction and positioning results are displayed on the mobile phone in real time.
The method comprises the following specific implementation steps:
step 110:
preprocessing the visible-light camera data, synchronizing the preprocessed visual information into the laser point cloud through the camera-lidar extrinsic parameters, and coloring the laser point cloud. The preprocessing uses EnlightenGAN to enhance the image, improving the detail in its bright and dark regions. EnlightenGAN is an image enhancement technique based on a generative adversarial network (GAN) that can convert a dim image into a brighter, sharper image while maintaining its naturalness and realism, and is an important advance over traditional image enhancement techniques.
EnlightenGAN is an open-source low-light image enhancement algorithm implemented with an unsupervised generative adversarial network. Its main idea is to learn the mapping between the input image and the target image through the adversarial network, thereby enhancing the image. Specifically, it comprises two key parts: a generator and a discriminator. The generator receives the dim image as input and produces a brighter, sharper image that matches the target image as closely as possible. The discriminator evaluates the similarity between the generated image and the target image and attempts to distinguish them.
Using EnlightenGAN can greatly improve the lighting quality of camera images; the improvement achieved by the EnlightenGAN preprocessing is shown in fig. 4.
To achieve RGB reconstruction of the world coordinate points, only the color of each laser point is needed. To obtain it, the laser point is projected onto the visual pixel plane and the corresponding RGB channels are looked up there. Denote a three-dimensional point in the camera coordinate system as ${}^{C}P_i = (x, y, z)$, and the laser-to-camera extrinsic parameters as ${}^{CL}R$ and ${}^{CL}t$, representing rotation and translation respectively. A laser-frame three-dimensional point ${}^{L}P_i$ can then be projected to the camera-frame point ${}^{C}P_i$ by:

$${}^{C}P_i = {}^{CL}R^{-1}\left({}^{L}P_i - {}^{CL}t\right)$$
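The laser-to-camera transform and color lookup can be sketched as follows. The pinhole intrinsic matrix `K`, the image layout, and all variable names are assumptions for illustration, not the patent's code.

```python
import numpy as np

def colorize_point(p_laser, R_cl, t_cl, K, image):
    """Transform a laser-frame point into the camera frame with the
    extrinsics (R_cl, t_cl), project it with the intrinsic matrix K,
    and sample the RGB value at the resulting pixel."""
    p_cam = np.linalg.inv(R_cl) @ (p_laser - t_cl)   # ^C P = R^-1 (^L P - t)
    if p_cam[2] <= 0:
        return None                                  # point behind the camera
    uv = K @ (p_cam / p_cam[2])                      # pinhole projection
    u, v = int(round(uv[0])), int(round(uv[1]))
    h, w = image.shape[:2]
    if 0 <= v < h and 0 <= u < w:
        return image[v, u]                           # RGB of the laser point
    return None                                      # outside the image
```

Points that fall outside the image or behind the camera simply remain uncolored.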
Step 120:
estimating the prior pose of the IMU. Using the IMU as the inertial payload and fusing it with the lidar compensates for the lidar's uncertainty in pose estimation, eliminates laser point-cloud distortion to some extent, and helps the lidar cope with complex scenes. The method uses the Kalman filter from FAST-LIO2, developed by the MARS laboratory of the University of Hong Kong, to tightly couple the lidar and IMU information, estimating the robot's pose from the combined lidar and IMU data and eliminating the lidar's inherent distortion. The Kalman filter is a recursive filter based on a probabilistic model that estimates the system state by combining measured data with model predictions; it suits real-time scenarios such as real-time control, its mathematical model is relatively simple and easy to implement, and it fits resource-limited environments such as embedded systems.
For IMU state estimation, denote the state quantity as

$$x = \left[\, {}^{I}R,\; {}^{I}t,\; {}^{IL}R,\; {}^{IL}t,\; v^{I},\; b_{\omega},\; b_{a},\; {}^{I}g \,\right]$$

the input of the IMU sensor as $u = [\,\omega,\; a\,]$, representing angular velocity and acceleration respectively, and the noise of the IMU sensor as $w = [\, n_{\omega},\; n_{a},\; n_{b\omega},\; n_{ba} \,]$, representing the angular velocity noise, acceleration noise, angular velocity bias noise, and acceleration bias noise respectively. The state quantities $R$ and $t$ represent pose information, i.e., rotation and translation; the upper-left superscript $I$ marks parameters in the inertial frame and $IL$ the extrinsic parameters from the laser frame to the inertial frame. $v^{I}$ denotes the velocity of the inertial sensor, $b_{\omega}$ and $b_{a}$ the angular velocity bias and acceleration bias of the inertial sensor respectively, and ${}^{I}g$ the gravitational acceleration vector in the inertial frame.
Since the robot state is an accumulated quantity, the state quantity computed directly from the IMU is the change from the previous time to the current time; the state at the current time is obtained by superimposing that change on the state at the previous time:

$$x_i = x_{i-1} \boxplus \Delta x_i$$

where the meaning of the operator $\boxplus$ depends on the domain in which the state quantity is represented: multiplication on the Lie group and addition in the Lie algebra. The inertial-frame pose at the initial moment serves as the origin of the world frame, and subsequent state quantities are accumulated onto this initial pose when converting to world coordinates. $x_i$ denotes the state quantity at time $i$ and $x_{i-1}$ the state quantity at the immediately preceding time.
When estimating the state quantity with the Kalman filter, first assume that the noise at time $i$ is $w_i = 0$; the Kalman prior state estimate is then:

$$\hat{x}_i = \bar{x}_{i-1} \boxplus f\left(\bar{x}_{i-1}, u_i, 0\right) \Delta t$$

where $\hat{x}$ denotes the prior state quantity, $\bar{x}$ the posterior state quantity, $u_i$ the input of the IMU sensor at time $i$, and $0$ the ideal case of ignoring noise; noise is introduced specifically for optimization later. The state transition function $f$ can be expressed as follows:
where $\Delta t$ is the time difference between two adjacent instants, $0_{3\times1}$ is the three-dimensional all-zero vector, and $f(x, u, w) \in \mathbb{R}^{24}$; each row of the 24-dimensional result constrains the corresponding component of the state quantity $x$. The meaning of the remaining symbols has been described above and is not repeated here.
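A single zero-noise prediction step of this kind can be sketched as below, under the simplifying assumption of a plain rotation/position/velocity state rather than the full 24-dimensional state; all names are illustrative.

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-10:
        return np.eye(3)
    a = phi / angle
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def imu_predict(R, p, v, b_w, b_a, g, w_m, a_m, dt):
    """One zero-noise prediction step x_i = x_{i-1} [boxplus] f(x, u, 0)*dt
    with IMU input u = (w_m, a_m): the rotation integrates the bias-corrected
    angular rate, velocity integrates the world-frame acceleration plus
    gravity, and position integrates velocity."""
    R_new = R @ exp_so3((w_m - b_w) * dt)          # attitude propagation
    a_world = R @ (a_m - b_a) + g                  # remove bias, add gravity
    p_new = p + v * dt + 0.5 * a_world * dt ** 2   # position propagation
    v_new = v + a_world * dt                       # velocity propagation
    return R_new, p_new, v_new
```

The boxplus composition appears here as matrix multiplication for the rotation block and vector addition for the position and velocity blocks.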
According to the lidar scanning principle, the laser point cloud is inevitably distorted during motion. The high update rate of the IMU can eliminate this motion distortion to some extent: over each small time interval, the IMU acceleration corrects the point-cloud displacement and the angular velocity corrects the orientation. The correction of the $i$-th laser point $P_i$ to $P_i'$ can be expressed as:

$$P_i' = R_i P_i + t_i$$

where $R_i$ and $t_i$ are the relative rotation and translation of the IMU at the $i$-th scan point. The meaning of the remaining symbols has been described above; quantities marked with a hat denote prior information and quantities marked with a bar denote posterior information.
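The distortion-correction step can be sketched as follows, assuming constant angular velocity and acceleration across one scan (an illustrative simplification); each point receives the per-point correction $P_i' = R_i P_i + t_i$.

```python
import numpy as np

def exp_so3(phi):
    """Rodrigues formula: rotation vector -> rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-10:
        return np.eye(3)
    a = phi / angle
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def deskew_scan(points, stamps, omega, accel, v0):
    """Undistort one lidar scan: for each point, integrate the IMU rates
    over the small interval since scan start and apply P'_i = R_i P_i + t_i.
    Constant omega/accel over the scan is an illustrative simplification."""
    out = np.empty_like(points)
    for i, (p, dt) in enumerate(zip(points, stamps)):
        R_i = exp_so3(omega * dt)                   # orientation correction
        t_i = v0 * dt + 0.5 * accel * dt ** 2       # displacement correction
        out[i] = R_i @ p + t_i
    return out
```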
Step 130:
updating the lidar posterior pose: the Kalman filter updates the IMU prior information to obtain the final LIO posterior state quantity, i.e., the relative motion between two lidar scan frames. The previous step assumed that no noise exists, but in reality noise is present, so the lidar must observe the environment in the current state in order to optimize the prior state into the posterior state.
First, the prior covariance matrix $\hat{P}_i$ needs to be constructed:

$$\hat{P}_i = F_x \bar{P}_{i-1} F_x^T + F_w Q_i F_w^T$$

where $F_x$ encodes the laser point error, chiefly the error between the scene laser points at the current moment computed from the prior state and those actually observed by the lidar; $F_w$ is the IMU noise coefficient matrix; $Q_i$ is the noise variance matrix, usually a preset value; $\bar{P}_{i-1}$ is the posterior covariance matrix at time $i-1$; and the superscript $T$ denotes the transpose operation.
To avoid confusion with the camera intrinsic matrix $K$ described above, the Kalman gain matrix is written here as ${}^{g}K$. In the prior-optimization stage, the Kalman gain ${}^{g}K$ is obtained first:

$${}^{g}K = \left(H^T R^{-1} H + J^T \hat{P}_i^{-1} J\right)^{-1} H^T R^{-1}$$

where $R$ is the observation noise matrix, $H$ the observation noise coefficient matrix, and $J$ the matrix used to constrain the state quantity being optimized, iterating along with it; the superscript $T$ denotes the matrix transpose and the superscript $-1$ the matrix inverse. At the $k$-th iteration $J$ can be expressed as:

$$J_k = \begin{bmatrix} (\delta\theta_{Ik})^{-T} & 0_{3\times3} & 0_{3\times3} & 0_{3\times15} \\ 0_{3\times3} & I_{3\times3} & 0_{3\times3} & 0_{3\times15} \\ 0_{3\times3} & 0_{3\times3} & (\delta\theta_{Lk})^{-T} & 0_{3\times15} \\ 0_{15\times3} & 0_{15\times3} & 0_{15\times3} & I_{15\times15} \end{bmatrix}$$

where $\delta\theta_{Ik}$ is the change between the rotation component of the IMU pose and the prior pose at the $k$-th iteration, $\delta\theta_{Lk}$ is the change in the IMU-lidar rotation extrinsic during the iteration, $0_{3\times3}$ is an all-zero matrix of 3 rows and 3 columns, $0_{3\times15}$ an all-zero matrix of 3 rows and 15 columns, $I_{3\times3}$ an identity matrix of 3 rows and 3 columns, $I_{15\times15}$ an identity matrix of 15 rows and 15 columns, and $(\delta\theta_{Ik})^{-T}$ denotes the inverse of the transpose of $\delta\theta_{Ik}$, likewise $(\delta\theta_{Lk})^{-T}$.
The posterior state at the $k$-th iteration is:

$$\hat{x}_i^{k+1} = \hat{x}_i^{k} \boxplus \left(-\,{}^{g}K Z_k - \left(I - {}^{g}K H\right) J^{-1} \left(\hat{x}_i^{k} \boxminus \hat{x}_i\right)\right)$$

where $\hat{x}_i^{k+1}$ denotes the posterior state after the $k$-th iteration at time $i$ and $\hat{x}_i^{k}$ the posterior state before it. The meaning of the operator $\boxplus$ depends on the domain of the state quantity: it denotes multiplication on the Lie group and addition in the Lie algebra; the operator $\boxminus$ is the inverse of $\boxplus$. ${}^{g}K$ is the Kalman gain matrix, $Z_k$ the state residual matrix of the $k$-th iteration, $I$ the identity matrix, $H$ the observation noise coefficient matrix, and $\hat{x}_i^{k} \boxminus \hat{x}_i$ the change between the posterior state quantity before the current iteration and the prior state quantity at time $i$ before iterative optimization. The posterior covariance of the $k$-th iteration can be expressed as:

$$\bar{P}_i = \left(I - {}^{g}K H\right) J^{-1} \hat{P}_i J^{-T}$$

where $I$ is the identity matrix, ${}^{g}K$ the Kalman gain matrix, $H$ the observation noise coefficient matrix, $\hat{P}_i$ the prior covariance matrix at time $i$, and the Jacobian $J$ the matrix used to constrain the iterating state quantity; the superscript $T$ denotes the matrix transpose. Once the exit condition or the specified number of iterations is reached, the iteration stops and the final posterior state is obtained.
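The iterated gain-and-update cycle above can be sketched in a purely Euclidean state, where $\boxplus$/$\boxminus$ reduce to ordinary addition/subtraction and $J = I$. This is a simplification for illustration, not the on-manifold implementation; all names are assumptions.

```python
import numpy as np

def iekf_update(x_prior, P_prior, z, h, H, R, iters=5):
    """Minimal iterated Kalman update in a vector (Euclidean) state, so
    boxplus/boxminus reduce to +/- and J = I. Sketch only."""
    x = x_prior.copy()
    n = x.shape[0]
    for _ in range(iters):
        # gain: K = (H^T R^-1 H + P^-1)^-1 H^T R^-1
        K = np.linalg.inv(H.T @ np.linalg.inv(R) @ H
                          + np.linalg.inv(P_prior)) @ H.T @ np.linalg.inv(R)
        residual = z - h(x)                 # observation residual Z_k
        dx = x - x_prior                    # boxminus of current vs prior
        x = x + (K @ residual - (np.eye(n) - K @ H) @ dx)
    P_post = (np.eye(n) - K @ H) @ P_prior  # posterior covariance
    return x, P_post
```

For a linear observation model the first iteration reproduces the standard Kalman update and subsequent iterations leave the estimate at its fixed point.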
Step 140:
Compared with indoor scenes, outdoor scenes have better GPS signals. By introducing GPS, the method ensures that the trajectory does not accumulate serious drift when the robot performs long-term outdoor tasks. This plays a different role from loop detection: loop detection and its optimization take effect only after the system has produced a loop closure.
Unlike the east-north-up frame of the IMU, GPS uses the WGS84 coordinate system, i.e. a longitude-latitude-altitude system, so coordinate conversion is also required before fusion. Here the method uses the UTM coordinate system, the Universal Transverse Mercator grid system, which divides the unrolled surface of the earth into grids, each with its own independent east-north coordinates. Fusing the GPS sensor data through the Kalman filter ensures that the operating efficiency of the system is maximized.
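To illustrate the coordinate conversion step, the following is a minimal local east/north conversion. It is an equirectangular approximation standing in for a true UTM projection (a real system would obtain UTM coordinates from a geodesy library); the function and variable names are chosen here for illustration only.

```python
import math

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius, metres

def lla_to_local_en(lat, lon, lat0, lon0):
    """Convert latitude/longitude (degrees) to local east/north metres
    relative to an origin (lat0, lon0). Equirectangular approximation,
    adequate only near the origin; a stand-in for a full UTM projection."""
    east = math.radians(lon - lon0) * EARTH_RADIUS * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS
    return east, north
```

One degree of latitude maps to roughly 111.3 km, matching the WGS84 equatorial scale.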
After GPS observation is introduced, the Kalman gain is as follows:
where K_k denotes the Kalman gain matrix, H the observation noise coefficient matrix, P̄ the prior covariance matrix, and R the observation noise matrix; the upper-right superscript -1 denotes matrix inversion and the upper-right superscript T matrix transposition.
Because the altitude information of GPS has a large error (drift of more than ten metres over a short time in actual measurement), the altitude channel of GPS is discarded. The residual r_k of the kth frame is calculated from the east and north distances (t_x, t_y) estimated under the UTM system by the GPS and the IMU prior:
the relative distance difference is the displacement between the GPS coordinates at two adjacent moments, namely the displacement of the ith moment relative to the i-1 moment, and the IMU relative distance difference is the same. Wherein,representing the IMU east distance, < >>Representing the north-to-south distance of the IMU, gps t x indicating the distance to the east of the GPS, gps t y indicating the GPS north distance.
The general form of posterior state is:
Substituting the residual and the Kalman gain yields the posterior state of the final GPS fusion, where P̂ is the posterior covariance matrix, I the three-dimensional identity matrix, K_k the Kalman gain matrix, H the observation noise coefficient matrix, P̄ the prior covariance matrix, x̂ the posterior state, x̄ the prior state, and r_i the residual.
Step 150:
The purpose of loop detection is to determine whether a previously visited position has been reached. Owing to accumulated error, the odometry estimate often fails to line up when the robot returns to a previous location. With loop detection, when the robot reaches a position it passed through at some earlier moment, this information is fed back to the back end, which pulls the current odometry pose back toward the earlier position, eliminating the accumulated error to a certain extent;

Accumulated error is difficult for a SLAM system to avoid during long runs; even with the GPS constraint it can only be bounded within a certain range, not eliminated, and closed-loop detection is a good scheme for removing it. When the system returns to a place it has visited before but the current trajectory fails to coincide with the earlier one because of accumulated error, closed-loop detection resolves the discrepancy well. The method adopts ScanContext for closed-loop detection and, after a loop is detected, performs global optimization over the loop region. The inputs to the loop module mainly comprise the posterior state, the laser scans and GPS; GPS is used to limit the loop search range and to participate in global optimization.
Each input laser scan is downsampled and encoded to generate an intensity matrix I. The similarity between the intensity matrix produced by the current encoding and previously generated intensity matrices is scored by the following formula:
where I_q denotes a candidate intensity matrix, I_c the current intensity matrix, and the summation compares the corresponding column vectors of the two matrices. A loop is considered to occur when the score is below a threshold, set to 0.2 by default. If I_q and I_c have a loop relationship, the relative motion (R, t) of the corresponding lidar poses is calculated using ICP; the motion relationship can be expressed as:
^L P_c = R ^L P_q + t
where ^L P_q is the three-dimensional point in the lidar coordinate system before loop optimization, ^L P_c the three-dimensional point in the lidar coordinate system after loop optimization, R the rotation matrix, and t the translation vector.
The nodes and pose constraints of the two frames forming the loop relationship are added to a factor graph, and the final pose is solved using GTSAM.
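The column-wise similarity scoring between intensity matrices described above can be sketched as follows. This is a simplified version of the ScanContext distance: the rotation-alignment search over column shifts used by full ScanContext is omitted, and the function name is illustrative.

```python
import numpy as np

def scan_context_distance(I_q, I_c):
    """Mean column-wise cosine distance between a candidate intensity
    matrix I_q and the current intensity matrix I_c. A score below a
    threshold (0.2 by default in the text) indicates a loop candidate;
    0.0 means the matrices' columns are perfectly aligned."""
    num = np.einsum('ij,ij->j', I_q, I_c)                      # column dot products
    den = np.linalg.norm(I_q, axis=0) * np.linalg.norm(I_c, axis=0)
    valid = den > 0                                            # skip empty columns
    cos_sim = num[valid] / den[valid]
    return 1.0 - cos_sim.mean()
```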
The ICP calculation process can be described as:
1) Compute the centroid positions p, p′ of the two sets of laser points, then the de-centroided coordinates of each point: q_i = p_i - p, q′_i = p′_i - p′
2) Construct the optimization problem and solve for the rotation matrix R*:
3) Compute t from the R obtained in step 2): t* = p - R* p′
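Steps 1) to 3) admit the standard closed-form SVD solution. The sketch below assumes the two point sets are already put into correspondence (the correspondence search of a full ICP loop is omitted); it maps the primed set Q onto the unprimed set P, matching the convention t* = p - R* p′ above.

```python
import numpy as np

def icp_point_to_point(P, Q):
    """One closed-form alignment step: find R, t with P ≈ Q @ R.T + t.
    P, Q are (N, 3) arrays of corresponding points (steps 1-3 in the text)."""
    p, q = P.mean(axis=0), Q.mean(axis=0)           # 1) centroids
    Pc, Qc = P - p, Q - q                           # de-centroided coordinates
    W = Pc.T @ Qc                                   # 2) cross-covariance matrix
    U, _, Vt = np.linalg.svd(W)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflections
    R = U @ D @ Vt                                  # optimal rotation R*
    t = p - R @ q                                   # 3) translation t* = p - R* p′
    return R, t
```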
Step S6:
three-dimensional reconstruction: the posterior state is superimposed on the laser point cloud, and each laser scan, combined with GPS information, is endowed with absolute position information;
For the laser points ^L P_i obtained by each scan of the lidar, the posterior state obtained in step S4 is used to project them into the world coordinate system; a projected point is denoted ^W P_i = (x, y, z), with the projection relationship shown in the following formula:
P_i denotes the ith three-dimensional coordinate point; the upper-left superscript W indicates that the point lies in the world (global) coordinate system, with L, I and C likewise denoting the lidar, IMU and camera coordinate systems; x, y and z are the values along the three coordinate axes.
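The projection relationship can be sketched as follows, assuming the posterior state supplies a world-frame rotation R_WL and translation t_WL for the lidar (these names are chosen here for illustration):

```python
import numpy as np

def lidar_to_world(points_L, R_WL, t_WL):
    """Project lidar-frame points into the world frame: ^W P = R_WL ^L P + t_WL.
    points_L is an (N, 3) array; R_WL (3x3) and t_WL (3,) come from the
    posterior state (pose of the lidar in the world frame)."""
    return points_L @ R_WL.T + t_WL
```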
Step S7:
mobile-terminal interconnection: SLAM is integrated with an Android-side APP through a network connection, so that the positioning and mapping results can be displayed on the mobile phone in real time.
The flow of intercommunication between the SLAM end and the mobile end of the present invention is shown in FIG. 5. The mobile end initiates a TCP connection request to the SLAM end, and the connection between the two is completed through the TCP three-way handshake. The SLAM end then continuously sends the posterior state quantity and the world-frame point cloud data to the mobile end for display.
For the mobile-end positioning function, the received posterior state quantity is used to call the Baidu Map API and display the real-time position on a world map. For the mapping function, the mobile end renders the received point cloud data to the screen in real time through the OpenGL graphics rendering library. To prevent anomalies during data transmission, the algorithm introduces a packet header and check bits to double-check the reliability of the received data, ensuring that the received data are error-free.
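The packet-header-plus-check-bits scheme can be sketched as follows. The field layout, the magic value and the use of CRC-32 are assumptions for illustration, not the patent's actual wire format.

```python
import struct
import zlib

MAGIC = 0x534C414D  # assumed header marker ('SLAM'), not specified in the text

def pack_frame(payload: bytes) -> bytes:
    """Frame one message as [magic][length][crc32][payload] (big-endian).
    The receiver can verify both the header and the checksum."""
    return struct.pack(">III", MAGIC, len(payload), zlib.crc32(payload)) + payload

def unpack_frame(frame: bytes) -> bytes:
    """Check the header and CRC; raise on any corruption (the 'double check')."""
    magic, length, crc = struct.unpack(">III", frame[:12])
    payload = frame[12:12 + length]
    if magic != MAGIC or zlib.crc32(payload) != crc:
        raise ValueError("corrupted frame")
    return payload
```

A frame whose payload is altered in transit fails the CRC check and is rejected rather than displayed.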
The vision sensor provides high-resolution, high-precision image information but is easily affected by factors such as illumination. The laser sensor provides high-precision three-dimensional point cloud information but has lower resolution and readily fails on objects with high transparency or reflectivity. Fusing the information of the two sensors makes full use of the advantages of both. The invention fuses the camera, lidar, IMU and GPS to realize high-precision robot positioning and reconstruction, with faster running speed, higher positioning precision and better reconstruction quality.
The method was evaluated comprehensively on the M2DGR dataset, a large-scale visual-laser dataset created by a Shanghai Jiao Tong University research team and collected by a ground robot, comprising a 32-line 3D lidar, GNSS, RTK/GNSS-IMU and multiple cameras of various types. The dataset contains 36 sequences totalling 3.73 hours; the scenes include outdoor, indoor and indoor-outdoor alternation, with environmental conditions covering night and day. The comparison with mainstream SLAM methods of the same type is shown in Table 1 below, and the trajectory error map in FIG. 6.
TABLE 1
As can be seen from Table 1 and FIG. 6, compared with other similar and popular SLAM methods, the accuracy of the present method is more stable, the error is controlled at an extremely low level, and the method is more competitive in precision. An X in the table indicates that the sequence repeatedly failed to initialize or successfully tracked fewer than 60% of the frames. In FIG. 6 the x and y axes represent the east and north directions of the geodetic coordinate system respectively; the dashed line is the ground-truth trajectory and the solid line the trajectory predicted by the algorithm. The trajectory of the method essentially coincides with the ground truth, with no obvious accumulated error over the whole track.
Engineering applications of SLAM are extremely demanding in real time. For example, in driver assistance, SLAM allows the car to perceive its surroundings in real time and to map, in three dimensions, the spatial relations between obstacles and the car at the current and historical moments, so as to display them to the driver or trigger an emergency plan. If SLAM latency is high, the car's judgment of its current surroundings may be affected, losing part of the application value. Table 2 below shows the time consumption of the present method and other methods running on an Intel i7 platform on a private dataset.
TABLE 2
In Table 2, the total LIO time of R3Live is about 150 milliseconds, the VIO time about 20 milliseconds, and the average total system time about 242 milliseconds. The average time of the present method is 74 milliseconds. Table 2 shows that the average time consumption of the method is only about one third that of R3LIVE and VLOAM; compared with similar algorithms, the method has stronger overall performance. The final reconstruction effect of the method is shown schematically in FIG. 7.
It should be noted that although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order or that all illustrated operations be performed in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
Further, referring to fig. 2, fig. 2 illustrates a multi-sensor fused SLAM localization and reconstruction system according to one embodiment of the present application, the system comprising:
the preprocessing module 210 is configured to preprocess the visible light camera data, synchronize the preprocessed visible light camera data into the laser point cloud through the camera-lidar extrinsic parameters, and color the laser points obtained by each scan of the lidar;
the calculation module 220 is configured to calculate a relative motion amount of the IMU between two scans of the lidar, and obtain prior information of robot pose estimation through a kalman filter;
the updating module 230 is configured to update the IMU priori information through a kalman filter to obtain an LIO final posterior state quantity, that is, relative motion information between two laser radar scanning frames, and obtain a motion track of the robot by overlapping the relative motion information;
the fusion module 240 is configured to fuse the GPS with the motion trajectory obtained in S3 through a Kalman filter;
the detection module 250 is configured to perform loop detection on the result of step S4;
the superposition module 260 is configured to superimpose the posterior state on the laser point cloud, and obtain a three-dimensional reconstruction map after giving absolute position information to each laser scanning combined with GPS information;
and the sending module 270 is used for sending the three-dimensional reconstruction map to the terminal.
It should be understood that the elements or modules depicted in system 200 correspond to the various steps in the method described with reference to fig. 1. Thus, the operations and features described above with respect to the method are equally applicable to the system 200 and the units contained therein and are not described in detail herein. The system 200 may be implemented in advance in a browser or other security application of the electronic device, or may be loaded into the browser or security application of the electronic device by means of downloading, etc. The corresponding elements in system 200 may interact with elements in an electronic device to implement aspects of embodiments of the present application.
Referring now to FIG. 8, a schematic diagram of a computer system 300 suitable for use in implementing a terminal device or server of an embodiment of the present application is shown.
As shown in fig. 8, the computer system 300 includes a Central Processing Unit (CPU) 301 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage section 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data required for the operation of the system 300 are also stored. The CPU 301, ROM 302, and RAM 303 are connected to each other through a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
The following components are connected to the I/O interface 305: an input section 306 including a keyboard, a mouse, and the like; an output portion 307 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 308 including a hard disk or the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the internet. The drive 310 is also connected to the I/O interface 305 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 310 as needed, so that a computer program read therefrom is installed into the storage section 308 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to fig. 1 may be implemented as a computer software program or provide related processing services in the form of an HTTP interface. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 309, and/or installed from the removable medium 311.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software, or may be implemented by hardware. The described units or modules may also be provided in a processor, for example, as: a processor includes a first sub-region generation unit, a second sub-region generation unit, and a display region generation unit. The names of these units or modules do not constitute a limitation of the unit or module itself in some cases, and for example, the display area generating unit may also be described as "a unit for generating a display area of text from the first sub-area and the second sub-area".
As another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the apparatus of the foregoing embodiment, or a stand-alone computer-readable storage medium not assembled into a device. The computer-readable storage medium stores one or more programs used by one or more processors to perform the multi-sensor fusion SLAM localization and reconstruction method described herein.
The foregoing description is only of the preferred embodiments of the present application and is presented as a description of the principles of the technology being utilized. It will be appreciated by persons skilled in the art that the scope of the application referred to in this application is not limited to the specific combinations of features described above, but it is intended to cover other embodiments in which any combination of features described above or their equivalents is possible without departing from the spirit of the application. Such as the above-described features and technical features having similar functions (but not limited to) disclosed in the present application are replaced with each other.
Claims (7)
1. A multi-sensor fusion SLAM localization and reconstruction method, comprising:
s1: preprocessing visible light camera data, synchronizing the preprocessed visible light camera data into laser point clouds through a camera and external parameters of a laser radar, and coloring laser points obtained by each scanning of the laser radar;
s2: calculating the relative motion quantity of the IMU between two scans of the laser radar, and obtaining prior information of robot attitude estimation through a Kalman filter;
s3: updating IMU prior information through a Kalman filter to obtain LIO final posterior state quantity, namely, relative motion information between two laser radar scanning frames, and obtaining a motion track of the robot through superposition of the relative motion information;
s4: fusing the GPS and the motion trail obtained in the S3 through a Kalman filter;
s5: performing loop detection on the step S4;
s6: superposing the posterior state on the laser point cloud, and giving absolute position information to each laser scanning combined with GPS information to obtain a three-dimensional reconstruction map;
s7: transmitting the three-dimensional reconstruction map to a terminal;
the step S1 includes:
preprocessing visible light camera data through EnlightnGAN, synchronizing the preprocessed visible light camera data into laser point clouds through a camera and laser radar external parameters, and coloring laser points obtained by each scanning of the laser radar;
the step S2 includes:
calculating the relative motion quantity of the IMU between two scans of the laser radar;
eliminating inherent distortion of the laser radar through the IMU;
and coupling the laser radar and IMU information through a Kalman filter, and jointly estimating the attitude information of the robot through the information of the laser radar and the IMU.
2. The multi-sensor fusion SLAM locating and reconstructing method of claim 1, wherein in step S3, updating IMU a priori information by a kalman filter to obtain an LIO final posterior state quantity comprises:
s31: constructing the prior covariance matrix, where F_x indicates the laser point error, P̂_{i-1} is the posterior covariance matrix at time i-1, F_w is the IMU noise coefficient matrix, Q_i is the noise variance matrix, and F^T denotes the transpose of the matrix;
s32: defining the Kalman gain matrix K_k, where R is the observation noise matrix, H is the observation noise coefficient matrix, H^T denotes the transpose of the observation noise coefficient matrix, R^{-1} the inverse of the observation noise matrix, and J is the matrix used to constrain the state quantity to be optimized and iterate with it; the upper-right superscript -1 denotes inversion of the corresponding matrix and the upper-right superscript T its transposition;
the kth iteration may be expressed as the following formula:
In the matrix, Δθ_Ik is the variation between the rotation component of the IMU pose and the prior pose at the kth iteration, Δθ_Lk is the change in the IMU-laser radar rotation extrinsic during the iteration, 0_{3×3} is the 3-row, 3-column all-zero matrix, 0_{3×15} is the 3-row, 15-column all-zero matrix, I_{3×3} is the 3×3 identity matrix, I_{15×15} is the 15×15 identity matrix, (Δθ_Ik)^{-T} is the inverse of the transpose of Δθ_Ik, and (Δθ_Lk)^{-T} is the inverse of the transpose of Δθ_Lk;
s33: the posterior state of the kth iteration is as follows:
where x̂_i^k denotes the posterior state of the kth iteration at time i and x̂_i^{k-1} the posterior state before the kth iteration at time i; the operator ⊞ has different meanings depending on the domain of the state quantity, denoting multiplication on the Lie-group components and addition on the Lie-algebra components, and the operator ⊟ is the inverse of ⊞; K_k denotes the Kalman gain matrix, Z_k the state-quantity residual matrix of the kth iteration, I the 3×3 identity matrix, and H the observation noise coefficient matrix; the final term represents the variation between the posterior state quantity before iterative optimization at time i and the prior state quantity before iterative optimization at time i;
s34: the posterior covariance of the kth iteration can be expressed as:
where I denotes the 3×3 identity matrix, K_k the Kalman gain matrix, H the observation noise coefficient matrix, and P̄_i the prior covariance matrix at time i; the Jacobian J is the matrix used to constrain the state quantity to be optimized and iterate with it, and the upper-right superscript T denotes matrix transposition; once the exit condition or the specified number of iterations is reached, the iteration terminates and the final posterior state is obtained.
3. The multi-sensor fusion SLAM localization and reconstruction method of claim 1, further comprising, prior to step S4:
and converting the fused GPS coordinates.
4. The multi-sensor fusion SLAM localization and reconstruction method of claim 1, wherein step S5 comprises:
and (4) performing closed loop detection on the step S4 by adopting ScanContext, and performing global optimization on a loop region after loop detection.
5. The multi-sensor fusion SLAM localization and reconstruction method of claim 1, wherein step S6 comprises:
projecting the motion trail obtained in the step S4 to a world coordinate system;
the projected coordinate is denoted ^W P_i = (x, y, z); P_i denotes the ith three-dimensional coordinate point; the upper-left superscript W indicates that the coordinate point lies in the world or global coordinate system, with L, I and C correspondingly denoting the laser radar, IMU and camera coordinate systems; x, y and z respectively denote the values along the three coordinate axes.
6. The multi-sensor fusion SLAM localization and reconstruction method of claim 1, wherein step S7 comprises:
and transmitting the three-dimensional reconstruction map to the terminal through the Internet.
7. A multi-sensor fused SLAM localization and reconstruction system, comprising:
the preprocessing module is used for preprocessing the visible light camera data, synchronizing the preprocessed visible light camera data into the laser point cloud through the camera-laser radar extrinsic parameters, and coloring the laser points obtained by each scan of the laser radar;
the computing module is used for computing the relative motion quantity of the IMU between two scans of the laser radar, and acquiring prior information of robot attitude estimation through a Kalman filter;
the updating module is used for updating the prior information of the IMU through a Kalman filter to obtain the LIO final posterior state quantity, namely the relative motion information between two laser radar scanning frames, and obtaining the motion trail of the robot through superposition of the relative motion information;
the fusion module is used for fusing the GPS and the motion trail obtained in the S3 through a Kalman filter;
the detection module is used for carrying out loop detection on the step S4;
the superposition module is used for superposing the posterior state on the laser point cloud, and giving absolute position information to each laser scanning combined with GPS information to obtain a three-dimensional reconstruction map;
the sending module is used for sending the three-dimensional reconstruction graph to the terminal;
the preprocessing module comprises:
preprocessing visible light camera data through EnlightnGAN, synchronizing the preprocessed visible light camera data into laser point clouds through a camera and laser radar external parameters, and coloring laser points obtained by each scanning of the laser radar;
the computing module includes:
calculating the relative motion quantity of the IMU between two scans of the laser radar;
eliminating inherent distortion of the laser radar through the IMU;
and coupling the laser radar and IMU information through a Kalman filter, and jointly estimating the attitude information of the robot through the information of the laser radar and the IMU.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311445001.2A CN117168441B (en) | 2023-11-02 | 2023-11-02 | Multi-sensor fusion SLAM positioning and reconstructing method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311445001.2A CN117168441B (en) | 2023-11-02 | 2023-11-02 | Multi-sensor fusion SLAM positioning and reconstructing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117168441A CN117168441A (en) | 2023-12-05 |
CN117168441B true CN117168441B (en) | 2024-02-20 |
Family
ID=88947201
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311445001.2A Active CN117168441B (en) | 2023-11-02 | 2023-11-02 | Multi-sensor fusion SLAM positioning and reconstructing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117168441B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN111045017A (en) * | 2019-12-20 | 2020-04-21 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
CN112987065A (en) * | 2021-02-04 | 2021-06-18 | 东南大学 | Handheld SLAM device integrating multiple sensors and control method thereof |
CN113091771A (en) * | 2021-04-13 | 2021-07-09 | 清华大学 | Laser radar-camera-inertial navigation combined calibration method and system |
CN113706626A (en) * | 2021-07-30 | 2021-11-26 | 西安交通大学 | Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction |
CN113837277A (en) * | 2021-09-24 | 2021-12-24 | 东南大学 | Multisource fusion SLAM system based on visual point-line feature optimization |
WO2022077296A1 (en) * | 2020-10-14 | 2022-04-21 | 深圳市大疆创新科技有限公司 | Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium |
- 2023-11-02: application CN202311445001.2A filed; patent CN117168441B granted, status Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | It is a kind of to synchronize positioning for vision-inertia-laser fusion and build drawing method |
CN111045017A (en) * | 2019-12-20 | 2020-04-21 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
WO2022077296A1 (en) * | 2020-10-14 | 2022-04-21 | 深圳市大疆创新科技有限公司 | Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium |
CN112987065A (en) * | 2021-02-04 | 2021-06-18 | 东南大学 | Handheld SLAM device integrating multiple sensors and control method thereof |
CN113091771A (en) * | 2021-04-13 | 2021-07-09 | 清华大学 | Laser radar-camera-inertial navigation combined calibration method and system |
CN113706626A (en) * | 2021-07-30 | 2021-11-26 | 西安交通大学 | Positioning and mapping method based on multi-sensor fusion and two-dimensional code correction |
CN113837277A (en) * | 2021-09-24 | 2021-12-24 | 东南大学 | Multisource fusion SLAM system based on visual point-line feature optimization |
Non-Patent Citations (3)
Title |
---|
Liu Xiangzhen. Research on SLAM outdoor scene localization and reconstruction based on vision-laser fusion with inertial navigation. Full text. *
A survey of visual SLAM methods for driverless vehicles; Li Xiyu; Zhong Shouren; Ma Fangwu; Automobile Digest (No. 07); full text *
Localization and mapping method fusing laser and visual point cloud information; Zhang Weiwei; Chen Chao; Xu Jun; Computer Applications and Software (No. 07); full text *
Also Published As
Publication number | Publication date |
---|---|
CN117168441A (en) | 2023-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109887057B (en) | Method and device for generating high-precision map | |
US8761439B1 (en) | Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit | |
US11064178B2 (en) | Deep virtual stereo odometry | |
US11227395B2 (en) | Method and apparatus for determining motion vector field, device, storage medium and vehicle | |
Chien et al. | Visual odometry driven online calibration for monocular lidar-camera systems | |
CN109631911B (en) | Satellite attitude rotation information determination method based on deep learning target recognition algorithm | |
CN110749308B (en) | SLAM-oriented outdoor positioning method using consumer-grade GPS and 2.5D building models | |
CN112556719A (en) | Visual inertial odometer implementation method based on CNN-EKF | |
Lacroix et al. | Digital elevation map building from low altitude stereo imagery | |
CN114964276A (en) | Dynamic vision SLAM method fusing inertial navigation | |
Chiu et al. | Augmented reality driving using semantic geo-registration | |
CN109341685B (en) | Fixed wing aircraft vision auxiliary landing navigation method based on homography transformation | |
CN113345032B (en) | Initialization map building method and system based on wide-angle camera large distortion map | |
CN112945233B (en) | Global drift-free autonomous robot simultaneous positioning and map construction method | |
CN117710476A (en) | Monocular vision-based unmanned aerial vehicle pose estimation and dense mapping method | |
CN117168441B (en) | Multi-sensor fusion SLAM positioning and reconstructing method and system | |
Kupervasser et al. | Robust positioning of drones for land use monitoring in strong terrain relief using vision-based navigation | |
Liu et al. | 6-DOF motion estimation using optical flow based on dual cameras | |
Conway et al. | Vision-based Velocimetry over Unknown Terrain with a Low-Noise IMU | |
Wang et al. | Low-speed unmanned vehicle localization based on sensor fusion of low-cost stereo camera and IMU | |
Xiong et al. | Scale-aware monocular visual-inertial pose estimation for aerial robots | |
CN112097758A (en) | Positioning method and device, robot positioning method and robot | |
Shi et al. | MFVS/MIMU integrated 6-DOF autonomous navigation in known environments with extremely simple landmarks | |
Grinstead et al. | Developing detailed a priori 3D models of large environments to aid in robotic navigation tasks | |
CN117649619B (en) | Unmanned aerial vehicle visual navigation positioning recovery method, system, device and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |