CN113052855A - Semantic SLAM method based on visual-IMU-wheel speed meter fusion - Google Patents
- Publication number
- CN113052855A (application number CN202110218495.5A)
- Authority
- CN
- China
- Prior art keywords
- imu
- wheel speed meter
- semantic
- trolley
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computing arrangements based on specific mathematical models
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Probability & Statistics with Applications (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Mathematical Analysis (AREA)
- Multimedia (AREA)
- Computational Mathematics (AREA)
- Algebra (AREA)
- Automation & Control Theory (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to the technical field of computer vision and discloses a semantic SLAM method based on visual-IMU-wheel speed meter fusion. Wheel speed meter measurements are fused into visual-inertial odometry by pre-integration and optimization, and road surface semantic segmentation is performed on key frames; the estimated pose of each image frame, the corresponding depth map and the 2D semantic segmentation result are then input into a 3D map building module to build a global 3D semantic mesh map. The road condition information is passed to the localization module at the SLAM front end through the 2D semantic segmentation result; whether the trolley is currently moving abnormally is judged from the road condition ahead of the trolley, the predicted-position error between the IMU and the wheel speed meter, and the alignment error between the IMU and wheel speed meter measurements. If abnormal motion is detected, the wheel speed meter pre-integration measurement of the current frame is actively deleted from the state estimation equation, improving the robustness of pose estimation in complex scenes.
Description
Technical Field
The invention relates to the technical field of computer vision, in particular to a semantic SLAM method based on visual-IMU-wheel speed meter fusion.
Background
While fusing inertial sensors with visual data can compensate for the scale uncertainty and poor fast-motion tracking inherent in purely visual pose estimation methods, the combined method is ineffective in texture-less or low-light environments where the visual sensors obtain no usable information. In such cases the visual-inertial method degrades to dead reckoning based on inertial navigation alone, and the attitude error grows rapidly over time. For a mobile robot equipped with a wheel speed sensor, the camera, inertial sensor and wheel speed sensor can be fused to improve the robustness of attitude estimation in complex scenes.
Ground robots often undergo restricted motion while navigating indoors (approximately planar, mostly moving along arcs or straight lines at constant velocity or acceleration), which changes the observability of the VINS and renders some additional degrees of freedom unobservable. When the local acceleration is constant, the magnitude of the true IMU acceleration cannot be distinguished from the magnitude of the accelerometer bias, since both are at least temporarily constant; the magnitude of the true IMU acceleration can therefore be arbitrary, resulting in scale ambiguity. When there is no rotational motion, the direction of the local gravitational acceleration cannot be distinguished from the direction of the accelerometer bias, again because both are at least temporarily constant; the roll and pitch angles therefore become ambiguous. Of these two unobservable cases, the second is easily eliminated by letting the robot deviate from its straight path, but making the scale observable is very challenging, as it would require the robot to constantly modify its acceleration, which would increase the wear of its drive system. The invention therefore solves this problem and ensures the observability of the VINS by extending the VINS to incorporate the measurements provided by the robot's wheel speed meter.
However, the wheel speed meter measurements are only correct when the trolley moves in a plane without motion anomalies: if the robot moves on inclined terrain (e.g., uneven road surfaces, slopes, potholes, speed bumps) or the wheels slip, erroneous wheel speed meter measurements prevent the SLAM algorithm from correctly estimating the robot's pose (the scale estimate becomes inaccurate), and localization may fail.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a semantic SLAM method based on visual-IMU-wheel speed meter fusion, which solves the problems existing in the prior art.
The technical scheme is as follows: the invention provides a semantic SLAM method based on visual-IMU-wheel speed meter fusion, which comprises the following steps:
S1: fixedly mounting a depth camera on a trolley, mounting a wheel speed meter on the trolley, and acquiring the color images and depth images, IMU (inertial measurement unit) measurements and wheel speed meter readings at the previous moment and the current moment;
S2: calculating the pre-integrations of the IMU and the wheel speed meter from the IMU measurements and wheel speed meter readings in S1;
S3: calculating the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error between two adjacent frames;
S4: pose estimation, namely performing nonlinear optimization on the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error in S3 to solve the poses of all frames in the sliding window;
S5: performing 2D semantic segmentation of the road surface with a convolutional neural network, segmenting uneven road surface areas and obtaining a 2D road surface semantic segmentation result map;
S6: inputting the pose of each frame in S4, the 2D road surface semantic segmentation result map in S5 and the depth map in S1 into a global 3D semantic map building module to build a global 3D semantic mesh map with semantic labels;
S7: detecting abnormal motion of the trolley;
S8: pose optimization, namely actively deleting the wheel speed meter pre-integration measurement of the current frame from the state estimation equation when abnormal motion of the trolley is detected.
Further, the pre-integration of the IMU and the wheel speed meter in S2 is calculated as follows:
where $i$ and $j$ denote the instants at which the camera captures the $k$-th and $(k+1)$-th images; $l$ and $l+1$ are two instants between $i$ and $j$; $R^{b}_{o}$ is the value of the extrinsic parameter between the IMU and the wheel speed meter used during the pre-integration stage; $\hat{a}_l$, $\hat{\omega}_l$ and $\hat{v}_l$ are the readings of the accelerometer, the gyroscope and the wheel speed meter, respectively; $\delta t_l$ is the time interval between instants $l$ and $l+1$; $b_{a_t}$ and $b_{g_t}$ denote the accelerometer and gyroscope biases at time $t$; $\hat{\alpha}^{b_i}_{b_j}$, $\hat{\beta}^{b_i}_{b_j}$ and $\hat{\gamma}^{b_i}_{b_j}$ are the pre-integration results, which can be regarded as the displacement, velocity change and relative rotation of the IMU; and $\hat{\eta}^{b_i}_{b_j}$ denotes the displacement measured by the wheel speed meter (symbols follow common VINS notation, as the published formula images are not reproduced in this text).
Further, the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error in S3 are calculated as follows:
where $g^{w}$ is the gravitational acceleration in the world coordinate system and $\Delta t_k$ is the time interval between image frames $k$ and $k+1$; $\hat{\alpha}^{b_k}_{b_{k+1}}$, $\hat{\beta}^{b_k}_{b_{k+1}}$, $\hat{\gamma}^{b_k}_{b_{k+1}}$ and $\hat{\eta}^{b_k}_{b_{k+1}}$ are the pre-integration results compensated for the variation of the IMU biases $b_a$, $b_g$ and for the IMU-wheel speed meter extrinsic parameters.
Further, the pose estimation method in S4 is:
where $C(x)$ represents the cost function; $l$ is the index of a map point and $B_l$ is the set of image frames in which map point $l$ is observed; $K$ is the number of frames in the sliding window; $e_r$ denotes the reprojection error, $e_b$ the IMU-wheel speed meter error, and $e_m$ the marginalization residual; $W_r$ is the information matrix of all reprojection terms. The quantity to be optimized, $x$, contains the state $x_k$ of each frame, the inverse depth $\lambda_l$ of each map point, the camera-to-IMU rotation $q^{b}_{c}$ and translation $p^{b}_{c}$, and the wheel-speed-meter-to-IMU rotation $q^{b}_{o}$ and translation $p^{b}_{o}$. The state $x_k$ of each frame comprises the displacement $p^{w}_{b_k}$, velocity $v^{w}_{b_k}$ and rotation $q^{w}_{b_k}$ of the IMU relative to the world coordinate system at the time of the frame, together with the IMU acceleration bias $b_a$ and angular velocity bias $b_g$.
Further, constructing the three-dimensional map in S6 further comprises: generating a three-dimensional point cloud from the RGB-D data, then obtaining a TSDF (truncated signed distance field) on each key frame by ray casting, and extracting a mesh from the TSDF using the marching cubes algorithm; specifically, the steps are:
1) inputting the 2D road surface semantic segmentation result with semantic labels for each key frame, then attaching the label to each three-dimensional point;
2) projecting the semantic labels during ray casting as well: for each ray, a label probability vector is built from the frequencies of the labels observed along the ray;
3) propagating this information only within the TSDF truncation distance, i.e. near the surface, to save computation time;
4) updating the label probability of each voxel using a Bayesian method;
5) after semantic ray casting, each voxel holds a label probability vector, from which the label with the highest probability is selected;
6) extracting the semantic mesh from the highest-probability labels of step 5) using the marching cubes algorithm.
Further, the voxel label probability update formula based on the Bayesian method in 4) is as follows:
where $k$ is the sequence number of the observed image frame, $I$ is the observed image frame, $i$ is the sequence number of the semantic class, and $P(\cdot)$ is the probability distribution of the $i$-th class; $u_{s,k}$ denotes the pixel coordinates of the $s$-th point element in the $k$-th frame; $O$ is a vector of $n$ elements, where $n$ is the number of classes; taking its $i$-th element yields the probability map of class $l_i$, from which the probability value at position $u$ is read and multiplied into the previous probability to perform one update.
Further, the detection of abnormal motion of the trolley in S7 includes:
1) detection of abnormal trolley motion based on road surface semantic segmentation: if uneven road surface is detected ahead of the trolley, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state;
2) detection of abnormal trolley motion based on the consistency of inertial navigation and the wheel speed meter: the position, velocity and attitude of the IMU in the world coordinate system at the most recent camera frame are obtained from the state estimator; starting from this known initial attitude and velocity, an inertial dead-reckoning algorithm predicts the short-term real-time attitude and velocity of the trolley from the IMU measurements with the gravitational acceleration removed;
the wheel speed meter pre-integration algorithm likewise predicts the real-time attitude and velocity of the trolley from the motion state of the previous frame; the robot position is dead-reckoned from both the IMU and the wheel speed meter pre-integration, and, using the position covariance obtained from the wheel speed meter pre-integration, if the Mahalanobis distance between the two predictions is greater than 1.5, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state;
3) detection of abnormal trolley motion based on alignment of the wheel speed meter and the IMU: the IMU pre-integration and the wheel speed meter pre-integration are directly aligned linearly, and the deviation between them is computed for the latest frame; if the Mahalanobis distance of the deviation is greater than 1.5, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state.
Beneficial effects:
1. The wheel speed meter measurements are fused into the visual-IMU odometry, which resolves the initial scale ambiguity of the VINS when the trolley moves on flat ground and improves localization robustness.
2. The invention constructs a semantic map and updates the probability distribution of the class of each map node with an incremental Bayesian method, resolving the inconsistency of class probabilities as the same spatial point is observed repeatedly.
3. The method detects abnormal trolley motion by a semantic method, avoiding the adverse effect of erroneous wheel speed meter measurements on the robot's attitude estimation.
Drawings
FIG. 1 is a flow chart of the system of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
On the basis of traditional visual-inertial SLAM, the method fuses wheel speed meter measurements using a pre-integration and optimization-based approach, performs road surface semantic segmentation on key frames, and constructs a global three-dimensional semantic map. Abnormal motion of the trolley is detected by combining the road surface semantic information, and when abnormal motion is detected the wheel speed meter pre-integration measurement of the current frame is actively deleted from the state estimation equation.
FIG. 1 shows a semantic SLAM method flow diagram based on visual-IMU-wheel speed meter fusion, according to one embodiment of the present invention. As shown in fig. 1, the method comprises the steps of:
s1: the depth camera is fixedly installed on a trolley, a wheel speed meter is installed on the trolley, and a color image and a depth image, an IMU (inertial measurement unit) measured value and a wheel speed meter reading at the previous moment and the current moment are acquired.
S2: pre-integration of the IMU and wheel speed are calculated for IMU measurements and wheel speed readings in S1.
The pre-integration of the IMU and the wheel speed meter is specifically calculated as follows:
where $i$ and $j$ denote the instants at which the camera captures the $k$-th and $(k+1)$-th images; $l$ and $l+1$ are two instants between $i$ and $j$; $R^{b}_{o}$ is the value of the extrinsic parameter between the IMU and the wheel speed meter used during the pre-integration stage; $\hat{a}_l$, $\hat{\omega}_l$ and $\hat{v}_l$ are the readings of the accelerometer, the gyroscope and the wheel speed meter, respectively; $\delta t_l$ is the time interval between instants $l$ and $l+1$; $b_{a_t}$ and $b_{g_t}$ denote the accelerometer and gyroscope biases at time $t$; $\hat{\alpha}^{b_i}_{b_j}$, $\hat{\beta}^{b_i}_{b_j}$ and $\hat{\gamma}^{b_i}_{b_j}$ are the pre-integration results, which can be regarded as the displacement, velocity change and relative rotation of the IMU; and $\hat{\eta}^{b_i}_{b_j}$ denotes the displacement measured by the wheel speed meter (symbols follow common VINS notation, as the published formula images are not reproduced in this text).
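In these symbols, the pre-integrated quantities can be written out as the following recursion (a minimal sketch assuming a VINS-Mono-style discretization; the exact published formulas are given as images in the patent and may differ in detail):

```latex
\hat{\alpha}^{b_i}_{b_j} = \sum_{l=i}^{j-1}\Big[\hat{\beta}^{b_i}_{b_l}\,\delta t_l
    + \tfrac{1}{2}\,R\big(\hat{\gamma}^{b_i}_{b_l}\big)\big(\hat{a}_l - b_{a_l}\big)\,\delta t_l^{2}\Big]
\qquad
\hat{\beta}^{b_i}_{b_j} = \sum_{l=i}^{j-1} R\big(\hat{\gamma}^{b_i}_{b_l}\big)\big(\hat{a}_l - b_{a_l}\big)\,\delta t_l
```
```latex
\hat{\gamma}^{b_i}_{b_j} = \prod_{l=i}^{j-1}
    \begin{bmatrix} 1 \\ \tfrac{1}{2}\big(\hat{\omega}_l - b_{g_l}\big)\,\delta t_l \end{bmatrix}
\qquad
\hat{\eta}^{b_i}_{b_j} = \sum_{l=i}^{j-1} R\big(\hat{\gamma}^{b_i}_{b_l}\big)\,R^{b}_{o}\,\hat{v}_l\,\delta t_l
```

where $R(\gamma)$ is the rotation matrix of quaternion $\gamma$ and the quaternion product for $\hat{\gamma}$ uses a small-angle approximation. All four quantities depend only on raw sensor readings between $i$ and $j$, which is why they need not be re-integrated when the estimated states change.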
S3: The visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error between two adjacent frames are calculated.
The visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error are calculated as follows:
where $g^{w}$ is the gravitational acceleration in the world coordinate system and $\Delta t_k$ is the time interval between image frames $k$ and $k+1$; $\hat{\alpha}^{b_k}_{b_{k+1}}$, $\hat{\beta}^{b_k}_{b_{k+1}}$, $\hat{\gamma}^{b_k}_{b_{k+1}}$ and $\hat{\eta}^{b_k}_{b_{k+1}}$ are the pre-integration results compensated for the variation of the IMU biases $b_a$, $b_g$ and for the IMU-wheel speed meter extrinsic parameters.
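For concreteness, a hedged sketch of the residuals in this notation (the exact forms in the published formula images may differ, e.g. in lever-arm terms for the wheel speed meter) is:

```latex
e_{\alpha} = R^{b_k}_{w}\Big(p^{w}_{b_{k+1}} - p^{w}_{b_k} - v^{w}_{b_k}\,\Delta t_k
    + \tfrac{1}{2}\,g^{w}\,\Delta t_k^{2}\Big) - \hat{\alpha}^{b_k}_{b_{k+1}}
\qquad
e_{\beta} = R^{b_k}_{w}\Big(v^{w}_{b_{k+1}} - v^{w}_{b_k} + g^{w}\,\Delta t_k\Big) - \hat{\beta}^{b_k}_{b_{k+1}}
```
```latex
e_{\gamma} = 2\Big[\big(\hat{\gamma}^{b_k}_{b_{k+1}}\big)^{-1} \otimes \big(q^{w}_{b_k}\big)^{-1} \otimes q^{w}_{b_{k+1}}\Big]_{xyz}
\qquad
e_{\eta} = R^{b_k}_{w}\big(p^{w}_{b_{k+1}} - p^{w}_{b_k}\big) - \hat{\eta}^{b_k}_{b_{k+1}}
\qquad
e_{r}^{(l,k)} = \pi\big(T^{c}_{b}\,T^{b_k}_{w}\,P^{w}_{l}\big) - \hat{u}^{(l,k)}
```

where $\pi(\cdot)$ is the camera projection function, $P^{w}_{l}$ the position of map point $l$ in the world frame, and $\hat{u}^{(l,k)}$ its observed pixel in frame $k$.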
S4: Pose estimation: nonlinear optimization is performed on the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error from S3 to solve the poses of all frames in the sliding window.
The pose estimation method comprises the following steps:
where $C(x)$ represents the cost function; $l$ is the index of a map point and $B_l$ is the set of image frames in which map point $l$ is observed; $K$ is the number of frames in the sliding window; $e_r$ denotes the reprojection error, $e_b$ the IMU-wheel speed meter error, and $e_m$ the marginalization residual; $W_r$ is the information matrix of all reprojection terms. The quantity to be optimized, $x$, contains the state $x_k$ of each frame, the inverse depth $\lambda_l$ of each map point, the camera-to-IMU rotation $q^{b}_{c}$ and translation $p^{b}_{c}$, and the wheel-speed-meter-to-IMU rotation $q^{b}_{o}$ and translation $p^{b}_{o}$. The state $x_k$ of each frame comprises the displacement $p^{w}_{b_k}$, velocity $v^{w}_{b_k}$ and rotation $q^{w}_{b_k}$ of the IMU relative to the world coordinate system at the time of the frame, together with the IMU acceleration bias $b_a$ and angular velocity bias $b_g$.
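As an illustrative reconstruction of the sliding-window objective (the robust kernel $\rho$ and the exact weighting are assumptions of this sketch, not taken from the published formula images):

```latex
\min_{x}\; C(x) = \big\|e_m\big\|^{2}
    + \sum_{k=0}^{K-1}\big\|e_b^{(k)}\big\|^{2}_{W_b^{(k)}}
    + \sum_{l}\sum_{k\in B_l}\rho\Big(\big\|e_r^{(l,k)}\big\|^{2}_{W_r}\Big)
```

where $e_b^{(k)}$ stacks the IMU and wheel speed meter pre-integration residuals between frames $k$ and $k+1$ with information matrix $W_b^{(k)}$; such a nonlinear least-squares problem is typically solved by Gauss-Newton or Levenberg-Marquardt iterations.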
S5: 2D semantic segmentation of the road surface is performed with a convolutional neural network, segmenting uneven road surface areas and producing a 2D road surface semantic segmentation result map.
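The patent only requires "a convolutional neural network" here; as a hedged illustration, a torchvision DeepLabV3 head with a hypothetical three-class road-surface label set could produce the required 2D result map:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Illustrative stand-in: the architecture and the class set
# {0: flat road, 1: uneven road, 2: background} are assumptions,
# and the model would need to be fine-tuned on road-surface data.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=3)
model.eval()

rgb = torch.rand(1, 3, 480, 640)       # one normalised colour frame (N, C, H, W)
with torch.no_grad():
    logits = model(rgb)["out"]         # (1, 3, 480, 640) per-pixel class scores
probs = logits.softmax(dim=1)          # per-pixel class probabilities
seg_map = probs.argmax(dim=1)          # 2D road surface segmentation result map
```

The per-pixel probability maps, not just the argmax labels, are what the Bayesian label fusion in S6 consumes.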
S6: inputting the pose of each frame in S4, the 2D road surface semantic segmentation result map in S5 and the depth map in S1 into a global 3D semantic map building module to build a global 3D semantic mesh map with semantic labels.
Constructing the three-dimensional map further comprises: generating a three-dimensional point cloud from the RGB-D data, then obtaining a TSDF (truncated signed distance field) on each key frame by ray casting, and extracting a mesh from the TSDF using the marching cubes algorithm (a code sketch follows the numbered steps below); specifically:
1) The 2D road surface semantic segmentation result with semantic labels is input for each key frame, and the label is attached to each three-dimensional point.
2) Semantic labels are also projected during ray casting: for each ray, a label probability vector is built from the frequencies of the labels observed along the ray.
3) This information is only propagated within the TSDF truncation distance, i.e. near the surface, to save computation time.
4) Using a Bayesian method, the label probability of each voxel is updated.
The voxel label probability updating formula based on the Bayesian method is as follows:
where $k$ is the sequence number of the observed image frame, $I$ is the observed image frame, $i$ is the sequence number of the semantic class, and $P(\cdot)$ is the probability distribution of the $i$-th class; $u_{s,k}$ denotes the pixel coordinates of the $s$-th point element in the $k$-th frame; $O$ is a vector of $n$ elements, where $n$ is the number of classes; taking its $i$-th element yields the probability map of class $l_i$, from which the probability value at position $u$ is read and multiplied into the previous probability to perform one update.
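Written out, the update described above is the recursive Bayesian fusion (a sketch in the notation of the preceding paragraph):

```latex
P\big(l_i \mid I_{1\ldots k}\big) \;=\;
  \frac{P\big(l_i \mid I_{1\ldots k-1}\big)\; O_i\big(u_{s,k}\big)}
       {\sum_{j=1}^{n} P\big(l_j \mid I_{1\ldots k-1}\big)\; O_j\big(u_{s,k}\big)}
```

i.e. the voxel's prior probability for class $l_i$ is multiplied by the probability of that class at pixel $u_{s,k}$ in the current frame's segmentation output and renormalized over the $n$ classes.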
5) After semantic ray casting, each voxel holds a label probability vector, from which the label with the highest probability is selected.
6) The semantic mesh is extracted from the highest-probability labels of step 5) using the marching cubes algorithm.
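A minimal Python sketch of steps 4)-6), using scikit-image's marching cubes on a dense voxel grid (the grid size, the class set and the synthetic sphere TSDF are illustrative placeholders, not the patent's implementation):

```python
import numpy as np
from skimage.measure import marching_cubes

N_CLASSES = 3          # hypothetical classes: flat road, uneven road, background
GRID = (64, 64, 64)

# Synthetic TSDF of a sphere, truncated to [-1, 1] (stands in for fused depth data).
zz, yy, xx = np.mgrid[:GRID[0], :GRID[1], :GRID[2]]
dist = np.sqrt((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) - 20.0
tsdf = np.clip(dist / 5.0, -1.0, 1.0)   # truncation distance of 5 voxels

# One label probability vector per voxel, initialised to uniform.
label_probs = np.full(GRID + (N_CLASSES,), 1.0 / N_CLASSES, dtype=np.float32)

def bayes_update(voxel, class_probs):
    """Step 4): multiply the voxel's prior by the observed class
    probabilities and renormalise (one Bayesian update)."""
    p = label_probs[voxel] * class_probs
    label_probs[voxel] = p / p.sum()

# Ray casting would call bayes_update() for every voxel inside the truncation
# band hit by a labelled ray; one illustrative observation:
bayes_update((32, 32, 12), np.array([0.1, 0.8, 0.1], dtype=np.float32))

# Steps 5)-6): extract the zero level set of the TSDF and label each mesh
# vertex with the most probable class of its nearest voxel.
verts, faces, _, _ = marching_cubes(tsdf, level=0.0)
vox = np.clip(np.rint(verts).astype(int), 0, np.array(GRID) - 1)
vert_labels = label_probs[vox[:, 0], vox[:, 1], vox[:, 2]].argmax(axis=1)
```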
S7: Abnormal motion of the trolley is detected.
The detection of the abnormal movement of the trolley comprises the following steps:
1) Detection of abnormal trolley motion based on road surface semantic segmentation: if uneven road surface is detected ahead of the trolley, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state.
2) Detection of abnormal trolley motion based on the consistency of inertial navigation and the wheel speed meter: the position, velocity and attitude of the IMU in the world coordinate system at the most recent camera frame are obtained from the state estimator; starting from this known initial attitude and velocity, an inertial dead-reckoning algorithm predicts the short-term real-time attitude and velocity of the trolley from the IMU measurements with the gravitational acceleration removed.
The wheel speed meter pre-integration algorithm likewise predicts the real-time attitude and velocity of the trolley from the motion state of the previous frame. The robot position is thus dead-reckoned from both the IMU and the wheel speed meter pre-integration; using the position covariance obtained from the wheel speed meter pre-integration, if the Mahalanobis distance between the two predictions is greater than 1.5 (the corresponding probability is about 13.4%), the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state.
3) Detection of abnormal trolley motion based on alignment of the wheel speed meter and the IMU: the IMU pre-integration and the wheel speed meter pre-integration are directly aligned linearly, and the deviation between them is computed for the latest frame; if the Mahalanobis distance of the deviation is greater than 1.5 (the corresponding probability is about 13.4%), the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state (a combined sketch of checks 2) and 3) follows).
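A compact sketch of checks 2) and 3), assuming a position-only 3-DoF test with the 1.5 gate from the text (the state layout and covariance values are illustrative):

```python
import numpy as np

def wheel_preintegration_consistent(p_imu, p_wheel, cov_wheel, gate=1.5):
    """Return False when the IMU and wheel speed meter dead-reckoned
    positions disagree by a Mahalanobis distance above the gate."""
    d = p_imu - p_wheel                                    # position disagreement
    m = float(np.sqrt(d @ np.linalg.solve(cov_wheel, d)))  # Mahalanobis distance
    return m <= gate

# Illustrative predictions over one camera interval (metres).
p_imu = np.array([0.52, 0.01, 0.00])      # inertial dead reckoning
p_wheel = np.array([0.48, 0.00, 0.00])    # wheel speed meter pre-integration
cov = np.diag([0.02, 0.02, 0.05]) ** 2    # wheel pre-integration position covariance

if not wheel_preintegration_consistent(p_imu, p_wheel, cov):
    # S8: drop the current frame's wheel pre-integration from the estimator.
    print("motion anomaly: delete wheel speed meter pre-integration")
```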
S8: Pose optimization: when abnormal motion of the trolley is detected, the wheel speed meter pre-integration measurement of the current frame is actively deleted from the state estimation equation.
The above embodiments are merely illustrative of the technical concepts and features of the present invention, and the purpose of the embodiments is to enable those skilled in the art to understand the contents of the present invention and implement the present invention, and not to limit the protection scope of the present invention. All equivalent changes and modifications made according to the spirit of the present invention should be covered within the protection scope of the present invention.
Claims (9)
1. A semantic SLAM method based on visual-IMU-wheel speed meter fusion is characterized by comprising the following steps:
S1: fixedly mounting a depth camera on a trolley, mounting a wheel speed meter on the trolley, and acquiring the color images and depth images, IMU (inertial measurement unit) measurements and wheel speed meter readings at the previous moment and the current moment;
S2: calculating the pre-integrations of the IMU and the wheel speed meter from the IMU measurements and wheel speed meter readings in S1;
S3: calculating the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error between two adjacent frames;
S4: pose estimation, namely performing nonlinear optimization on the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error in S3 to solve the poses of all frames in the sliding window;
S5: performing 2D semantic segmentation of the road surface with a convolutional neural network, segmenting uneven road surface areas and obtaining a 2D road surface semantic segmentation result map;
S6: inputting the pose of each frame in S4, the 2D road surface semantic segmentation result map in S5 and the depth map in S1 into a global 3D semantic map building module to build a global 3D semantic mesh map with semantic labels;
S7: detecting abnormal motion of the trolley;
S8: pose optimization, namely actively deleting the wheel speed meter pre-integration measurement of the current frame from the state estimation equation when abnormal motion of the trolley is detected.
2. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of claim 1, wherein the pre-integration of the IMU and the wheel speed meter in S2 is specifically:
where $i$ and $j$ denote the instants at which the camera captures the $k$-th and $(k+1)$-th images; $l$ and $l+1$ are two instants between $i$ and $j$; $R^{b}_{o}$ is the value of the extrinsic parameter between the IMU and the wheel speed meter used during the pre-integration stage; $\hat{a}_l$, $\hat{\omega}_l$ and $\hat{v}_l$ are the readings of the accelerometer, the gyroscope and the wheel speed meter, respectively; $\delta t_l$ is the time interval between instants $l$ and $l+1$; $b_{a_t}$ and $b_{g_t}$ denote the accelerometer and gyroscope biases at time $t$; $\hat{\alpha}^{b_i}_{b_j}$, $\hat{\beta}^{b_i}_{b_j}$ and $\hat{\gamma}^{b_i}_{b_j}$ are the pre-integration results, which can be regarded as the displacement, velocity change and relative rotation of the IMU; and $\hat{\eta}^{b_i}_{b_j}$ denotes the displacement measured by the wheel speed meter (symbols follow common VINS notation).
3. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of claim 1, wherein the visual reprojection error, the IMU pre-integration error and the wheel speed meter pre-integration error in S3 are calculated as follows:
4. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of claim 1, wherein the pose estimation method in S4 is:
where $C(x)$ represents the cost function; $l$ is the index of a map point and $B_l$ is the set of image frames in which map point $l$ is observed; $K$ is the number of frames in the sliding window; $e_r$ denotes the reprojection error, $e_b$ the IMU-wheel speed meter error, and $e_m$ the marginalization residual; $W_r$ is the information matrix of all reprojection terms. The quantity to be optimized, $x$, contains the state $x_k$ of each frame, the inverse depth $\lambda_l$ of each map point, the camera-to-IMU rotation $q^{b}_{c}$ and translation $p^{b}_{c}$, and the wheel-speed-meter-to-IMU rotation $q^{b}_{o}$ and translation $p^{b}_{o}$. The state $x_k$ of each frame comprises the displacement $p^{w}_{b_k}$, velocity $v^{w}_{b_k}$ and rotation $q^{w}_{b_k}$ of the IMU relative to the world coordinate system at the time of the frame, together with the IMU acceleration bias $b_a$ and angular velocity bias $b_g$.
5. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of claim 1, wherein constructing the three-dimensional map in S6 further comprises: generating a three-dimensional point cloud from the RGB-D data, then obtaining a TSDF (truncated signed distance field) on each key frame by ray casting, and extracting a mesh from the TSDF using the marching cubes algorithm; specifically:
1) inputting the 2D road surface semantic segmentation result with semantic labels for each key frame, then attaching the label to each three-dimensional point;
2) projecting the semantic labels during ray casting as well: for each ray, a label probability vector is built from the frequencies of the labels observed along the ray;
3) propagating this information only within the TSDF truncation distance, i.e. near the surface, to save computation time;
4) updating the label probability of each voxel using a Bayesian method;
5) after semantic ray casting, each voxel holds a label probability vector, from which the label with the highest probability is selected;
6) extracting the semantic mesh from the highest-probability labels of step 5) using the marching cubes algorithm.
6. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of claim 5, wherein the voxel label probability updating formula based on the Bayesian method in 4) is as follows:
where $k$ is the sequence number of the observed image frame, $I$ is the observed image frame, $i$ is the sequence number of the semantic class, and $P(\cdot)$ is the probability distribution of the $i$-th class; $u_{s,k}$ denotes the pixel coordinates of the $s$-th point element in the $k$-th frame; $O$ is a vector of $n$ elements, where $n$ is the number of classes; taking its $i$-th element yields the probability map of class $l_i$, from which the probability value at position $u$ is read and multiplied into the previous probability to perform one update.
7. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of any one of claims 1 to 6, wherein the detection of abnormal motion of the trolley in S7 comprises:
detection of abnormal trolley motion based on road surface semantic segmentation: if uneven road surface is detected ahead of the trolley, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state.
8. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of any one of claims 1 to 6, wherein the detection of abnormal motion of the trolley in S7 further comprises:
detection of abnormal trolley motion based on the consistency of inertial navigation and the wheel speed meter: the position, velocity and attitude of the IMU in the world coordinate system at the most recent camera frame are obtained from the state estimator; starting from this known initial attitude and velocity, an inertial dead-reckoning algorithm predicts the short-term real-time attitude and velocity of the trolley from the IMU measurements with the gravitational acceleration removed;
the wheel speed meter pre-integration algorithm likewise predicts the real-time attitude and velocity of the trolley from the motion state of the previous frame; the robot position is dead-reckoned from both the IMU and the wheel speed meter pre-integration, and, using the position covariance obtained from the wheel speed meter pre-integration, if the Mahalanobis distance between the two predictions is greater than 1.5, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state.
9. The semantic SLAM method based on visual-IMU-wheel speed meter fusion of any one of claims 1 to 6, wherein the detection of abnormal motion of the trolley in S7 further comprises:
detection of abnormal trolley motion based on alignment of the wheel speed meter and the IMU: the IMU pre-integration and the wheel speed meter pre-integration are directly aligned linearly, and the deviation between them is computed for the latest frame; if the Mahalanobis distance of the deviation is greater than 1.5, the wheel speed meter pre-integration result is regarded as abnormal and the trolley is in an abnormal motion state.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110218495.5A CN113052855B (en) | 2021-02-26 | 2021-02-26 | Semantic SLAM method based on visual-IMU-wheel speed meter fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110218495.5A CN113052855B (en) | 2021-02-26 | 2021-02-26 | Semantic SLAM method based on visual-IMU-wheel speed meter fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113052855A true CN113052855A (en) | 2021-06-29 |
CN113052855B CN113052855B (en) | 2021-11-02 |
Family
ID=76509163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110218495.5A Active CN113052855B (en) | 2021-02-26 | 2021-02-26 | Semantic SLAM method based on visual-IMU-wheel speed meter fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113052855B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113793381A (en) * | 2021-07-27 | 2021-12-14 | 武汉中海庭数据技术有限公司 | Monocular visual information and wheel speed information fusion positioning method and system |
CN114485653A (en) * | 2022-02-23 | 2022-05-13 | 广州高新兴机器人有限公司 | Positioning method, device, medium and equipment based on fusion of vision and wheel type odometer |
CN115164918A (en) * | 2022-09-06 | 2022-10-11 | 联友智连科技有限公司 | Semantic point cloud map construction method and device and electronic equipment |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101344391A (en) * | 2008-07-18 | 2009-01-14 | 北京工业大学 | Lunar vehicle pose self-confirming method based on full-function sun-compass |
US20110130926A1 (en) * | 2006-08-30 | 2011-06-02 | Ford Global Technologies | Integrated control system for stability control of yaw, roll and lateral motion of a driving vehicle using an integrated sensing system with pitch information |
CN106054896A (en) * | 2016-07-13 | 2016-10-26 | 武汉大学 | Intelligent navigation robot dolly system |
CN106274308A (en) * | 2015-06-24 | 2017-01-04 | 通用汽车环球科技运作有限责任公司 | Integrated sensing unit and the method being used for determining wheel of vehicle speed and tire pressure |
CN110345944A (en) * | 2019-05-27 | 2019-10-18 | 浙江工业大学 | Merge the robot localization method of visual signature and IMU information |
CN111854770A (en) * | 2019-04-30 | 2020-10-30 | 北京初速度科技有限公司 | Vehicle positioning system and method |
CN111882607A (en) * | 2020-07-14 | 2020-11-03 | 中国人民解放军军事科学院国防科技创新研究院 | Visual inertial navigation fusion pose estimation method suitable for augmented reality application |
CN111968129A (en) * | 2020-07-15 | 2020-11-20 | 上海交通大学 | Instant positioning and map construction system and method with semantic perception |
CN112197770A (en) * | 2020-12-02 | 2021-01-08 | 北京欣奕华数字科技有限公司 | Robot positioning method and positioning device thereof |
CN112219087A (en) * | 2019-08-30 | 2021-01-12 | 深圳市大疆创新科技有限公司 | Pose prediction method, map construction method, movable platform and storage medium |
CN112344923A (en) * | 2021-01-11 | 2021-02-09 | 浙江欣奕华智能科技有限公司 | Robot positioning method and positioning device thereof |
CN112734852A (en) * | 2021-03-31 | 2021-04-30 | 浙江欣奕华智能科技有限公司 | Robot mapping method and device and computing equipment |
CN113223161A (en) * | 2021-04-07 | 2021-08-06 | 武汉大学 | Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling |
- 2021-02-26: CN application CN202110218495.5A granted as CN113052855B (status: Active)
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110130926A1 (en) * | 2006-08-30 | 2011-06-02 | Ford Global Technologies | Integrated control system for stability control of yaw, roll and lateral motion of a driving vehicle using an integrated sensing system with pitch information |
CN101344391A (en) * | 2008-07-18 | 2009-01-14 | 北京工业大学 | Lunar vehicle pose self-confirming method based on full-function sun-compass |
CN106274308A (en) * | 2015-06-24 | 2017-01-04 | 通用汽车环球科技运作有限责任公司 | Integrated sensing unit and the method being used for determining wheel of vehicle speed and tire pressure |
CN106054896A (en) * | 2016-07-13 | 2016-10-26 | 武汉大学 | Intelligent navigation robot dolly system |
CN111854770A (en) * | 2019-04-30 | 2020-10-30 | 北京初速度科技有限公司 | Vehicle positioning system and method |
CN110345944A (en) * | 2019-05-27 | 2019-10-18 | 浙江工业大学 | Merge the robot localization method of visual signature and IMU information |
CN112219087A (en) * | 2019-08-30 | 2021-01-12 | 深圳市大疆创新科技有限公司 | Pose prediction method, map construction method, movable platform and storage medium |
CN111882607A (en) * | 2020-07-14 | 2020-11-03 | 中国人民解放军军事科学院国防科技创新研究院 | Visual inertial navigation fusion pose estimation method suitable for augmented reality application |
CN111968129A (en) * | 2020-07-15 | 2020-11-20 | 上海交通大学 | Instant positioning and map construction system and method with semantic perception |
CN112197770A (en) * | 2020-12-02 | 2021-01-08 | 北京欣奕华数字科技有限公司 | Robot positioning method and positioning device thereof |
CN112344923A (en) * | 2021-01-11 | 2021-02-09 | 浙江欣奕华智能科技有限公司 | Robot positioning method and positioning device thereof |
CN112734852A (en) * | 2021-03-31 | 2021-04-30 | 浙江欣奕华智能科技有限公司 | Robot mapping method and device and computing equipment |
CN113223161A (en) * | 2021-04-07 | 2021-08-06 | 武汉大学 | Robust panoramic SLAM system and method based on IMU and wheel speed meter tight coupling |
Non-Patent Citations (6)
Title |
---|
ABHAYJEET JUNEJA et al.: "A Comparative Study of SLAM Algorithms for Indoor Navigation of Autonomous Wheelchairs", 2019 IEEE International Conference on Cyborg and Bionic Systems (CBS) *
ZHANG Chao et al.: "Online correction of inertial navigation errors based on vision" (基于视觉的惯性导航误差在线修正), Navigation Positioning and Timing (导航定位与授时) *
PIAO Songhao et al.: "A survey of visual SLAM" (视觉SLAM综述), CAAI Transactions on Intelligent Systems (智能系统学报) *
LIN Peng: "Research on map building algorithms for indoor mobile robots fusing an IMU and an RGB-D camera" (融合IMU与RGB-D相机的室内移动机器人地图创建算法研究), China Masters' Theses Full-text Database, Information Science and Technology *
WANG Yandong: "Research on stereo visual-inertial odometry with multi-pose information fusion" (多位姿信息融合的双目视觉惯性里程计研究), China Doctoral Dissertations Full-text Database, Information Science and Technology *
BAI Shiyu et al.: "A plug-and-play multi-sensor factor-graph fusion method based on IMU/ODO pre-integration" (基于IMU/ODO预积分的多传感器即插即用因子图融合方法), Journal of Chinese Inertial Technology (中国惯性技术学报) *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113793381A (en) * | 2021-07-27 | 2021-12-14 | 武汉中海庭数据技术有限公司 | Monocular visual information and wheel speed information fusion positioning method and system |
CN114485653A (en) * | 2022-02-23 | 2022-05-13 | 广州高新兴机器人有限公司 | Positioning method, device, medium and equipment based on fusion of vision and wheel type odometer |
CN115164918A (en) * | 2022-09-06 | 2022-10-11 | 联友智连科技有限公司 | Semantic point cloud map construction method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN113052855B (en) | 2021-11-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220082386A1 (en) | Vision-aided inertial navigation | |
CN113052855B (en) | Semantic SLAM method based on visual-IMU-wheel speed meter fusion | |
CN110243358B (en) | Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system | |
CN113781582B (en) | Synchronous positioning and map creation method based on laser radar and inertial navigation combined calibration | |
US9071829B2 (en) | Method and system for fusing data arising from image sensors and from motion or position sensors | |
CN108731670A (en) | Inertia/visual odometry combined navigation locating method based on measurement model optimization | |
US8761439B1 (en) | Method and apparatus for generating three-dimensional pose using monocular visual sensor and inertial measurement unit | |
CN109991636A (en) | Map constructing method and system based on GPS, IMU and binocular vision | |
CN112815939B (en) | Pose estimation method of mobile robot and computer readable storage medium | |
CN107909614B (en) | Positioning method of inspection robot in GPS failure environment | |
CN111795686A (en) | Method for positioning and mapping mobile robot | |
CN114526745A (en) | Drawing establishing method and system for tightly-coupled laser radar and inertial odometer | |
CN114019552A (en) | Bayesian multi-sensor error constraint-based location reliability optimization method | |
CN114323033A (en) | Positioning method and device based on lane lines and feature points and automatic driving vehicle | |
CN113503873B (en) | Visual positioning method for multi-sensor fusion | |
CN117739972B (en) | Unmanned aerial vehicle approach stage positioning method without global satellite positioning system | |
CN115453599A (en) | Multi-sensor-cooperated pipeline robot accurate positioning method | |
Wu et al. | AFLI-Calib: Robust LiDAR-IMU extrinsic self-calibration based on adaptive frame length LiDAR odometry | |
CN117710476A (en) | Monocular vision-based unmanned aerial vehicle pose estimation and dense mapping method | |
CN117387604A (en) | Positioning and mapping method and system based on 4D millimeter wave radar and IMU fusion | |
CN112923934A (en) | Laser SLAM technology suitable for combining inertial navigation in unstructured scene | |
CN116202509A (en) | Passable map generation method for indoor multi-layer building | |
Chen et al. | River: A Tightly-coupled Radar-inertial Velocity Estimator Based on Continuous-time Optimization | |
Reina et al. | Iterative path reconstruction for large-scale inertial navigation on smartphones | |
CN113298796B (en) | Line characteristic SLAM initialization method based on maximum posterior IMU |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |