CN112712107A - Optimization-based vision and laser SLAM fusion positioning method - Google Patents

Optimization-based vision and laser SLAM fusion positioning method

Info

Publication number
CN112712107A
CN112712107A
Authority
CN
China
Prior art keywords
fusion
positioning
optimization
vision
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011461247.5A
Other languages
Chinese (zh)
Other versions
CN112712107B (en)
Inventor
李红
杨国青
辛越
吕攀
吴朝晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202011461247.5A priority Critical patent/CN112712107B/en
Publication of CN112712107A publication Critical patent/CN112712107A/en
Application granted granted Critical
Publication of CN112712107B publication Critical patent/CN112712107B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/15 Correlation function computation including computation of convolution operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Algebra (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Navigation (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an optimization-based vision and laser SLAM fusion positioning method. The method first obtains the positioning estimates produced by the visual-inertial odometer at the front end of the visual SLAM module and by the particle filter positioning algorithm of the laser SLAM module, and approximates the particle distribution by a Gaussian; it then corrects the visual SLAM estimate for the positioning time difference, and computes the optimal fusion coefficient from a set objective function and the corrected Gaussian parameters to obtain the final fusion result. The invention fuses two relatively independent, relatively high-accuracy positioning methods, so that the positioning system can still work normally when one of them fails, the accuracy is improved, and the real-time performance of the system is improved by improving the fusion method.

Description

Optimization-based vision and laser SLAM fusion positioning method
Technical Field
The invention belongs to the technical field of mobile robot positioning, and particularly relates to an optimization-based vision and laser SLAM fusion positioning method.
Background
The positioning problem has always been one of the core problems in the field of mobile robots, and the accuracy of the positioning result strongly affects subsequent modules of the mobile robot such as planning and control. At present there are many positioning methods for mobile robots, such as vision-based methods, lidar-based methods, RFID-based methods, GPS-based methods, landmark-based methods, and so on. There are two main ways to improve the positioning result: using higher-precision sensors, and improving the positioning algorithm. However, high-precision sensors are often expensive, and their cost effectiveness deteriorates rapidly as accuracy increases, so improving the positioning algorithm is the approach widely adopted in practical scenarios.
Current work on improving positioning algorithms focuses on two directions. The first is to run high-accuracy, computationally expensive positioning methods on rapidly developing hardware platforms: such methods involve a large amount of computation and could hardly run in real time on the hardware platforms of a few years ago, but their advantages have gradually become apparent with the rapid development of hardware technology in recent years; for example, the back-end processing of SLAM (Simultaneous Localization And Mapping) has evolved from extended Kalman filtering to graph optimization. The second is to exploit the characteristics and complementary advantages of different sensors and fuse their measurements to obtain a more accurate positioning result; for example, data from an IMU (Inertial Measurement Unit) are often used in visual and laser SLAM to obtain a positioning prediction, and a fused positioning result is obtained through Kalman filtering.
Most existing positioning fusion methods are based on Kalman filtering and its variants. The Chinese patent with publication number CN107289933B provides a positioning method fusing a MEMS sensor and a VLC sensor; in practice, however, moderately priced MEMS sensors tend to suffer from zero drift and poor measurement accuracy, which leads to poor PDR positioning accuracy, so the positioning result still depends to a large extent on the VLC positioning method. The Chinese patent with publication number CN105737832B introduces a positioning method in which, using a distributed structure, the whole state vector is divided into five-dimensional distributed state processing of robot pose estimation and landmark estimation and then fused: a parallel sub-filter is established for each valid landmark point, the robot pose estimates of the sub-filters are fused in a main filter, and the sub-filter fusion results are feedback-corrected by a global predictor to finally obtain a globally optimal robot pose estimate. However, the sub-filters in that patent use extended Kalman filtering, and the main filter obtains the optimal estimate as a weighted average of the sub-filter estimates, but how the weight coefficients of the sub-filters are obtained is not described.
Disclosure of Invention
In view of the above, the invention provides an optimization-based vision and laser SLAM fusion positioning method that fuses two relatively independent, relatively high-accuracy positioning methods, so that the positioning system can still work normally when one method fails and the accuracy is improved; finally, the real-time performance of the system is improved by improving the fusion method.
An optimization-based vision and laser SLAM fusion positioning method comprises the following steps:
(1) acquiring a robot position estimation result X and Gaussian parameters thereof obtained by a vision SLAM module through positioning calculation;
(2) acquiring a robot position estimation result Y obtained by a laser SLAM module through a particle filter positioning algorithm;
(3) performing approximate Gaussian processing on the particle distribution result in the calculation process of the laser SLAM module, and calculating to obtain a Gaussian parameter of a position estimation result Y;
(4) correcting the position estimation result X and its Gaussian parameters in time by using the time difference between the positioning outputs of the two modules;
(5) establishing an objective function from the above information and solving for the optimal fusion coefficient α by optimization;
(6) calculating the fused optimal robot position estimation result Z through a fusion function according to the optimal fusion coefficient α.
Further, the visual SLAM module fuses IMU data during the positioning calculation process.
Furthermore, the front end of the visual SLAM module uses a feature-based method to solve the camera motion from the epipolar constraint and takes it as the observation; an IMU and a wheel speed meter are further used to obtain a prediction from the robot motion model; the robot position estimation result X and its Gaussian parameters are then obtained by extended Kalman filtering.
Further, the expression of the objective function is as follows:
f(m, n) = (mσ_x1² + nσ_y1²)(mσ_x2² + nσ_y2²) − (mρ_x + nρ_y)²
wherein: m and n are intermediate variables, f(m, n) is the objective function of m and n, m = α², n = (1 − α)²; σ_x1 is the standard deviation of the x-axis component of the robot position estimate X, σ_x2 is the standard deviation of the y-axis component of X, and ρ_x is the covariance of X; σ_y1 is the standard deviation of the x-axis component of the robot position estimate Y, σ_y2 is the standard deviation of the y-axis component of Y, and ρ_y is the covariance of Y.
Further, the step (5) needs to satisfy the following constraint conditions in the optimization solving process:
√m + √n = 1, 0 ≤ m ≤ 1, 0 ≤ n ≤ 1
wherein: m and n are intermediate variables, n = (1 − α)², m = α².
Further, in step (6) the fused optimal robot position estimation result Z is calculated through the following fusion function:
Z = √m·X + √n·Y
wherein: m and n are intermediate variables, n = (1 − α)², m = α².
The front end of the visual SLAM module used by the SLAM fusion positioning method of the invention matches feature points or landmark points and is fused with an IMU and a wheel speed encoder through extended Kalman filtering to serve as a visual-inertial odometer (VIO); the laser SLAM module obtains a position estimate using a particle filter and computes an approximate Gaussian covariance matrix from the particle weights; finally, the position estimates and covariance matrices obtained by the vision and laser SLAM modules are input into the positioning fusion module, which optimally computes the weight coefficients of the two positioning methods according to the objective function to obtain the fused optimal position estimation result; the orientation angle estimate of the robot is obtained directly from the visual SLAM method without passing through the fusion module.
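To make this data flow concrete, the following is a minimal Python/numpy sketch of how the two module outputs could be wired into the fusion step. The function name fuse_pose, the dense grid search over the fusion coefficient (a simple stand-in for the Lagrange-multiplier solution described in the Detailed Description), and the example numbers are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def fuse_pose(vio_pose, vio_cov, laser_pos, laser_cov):
    """Combine the two module outputs (illustrative sketch):
    - position: linear fusion Z = a*X + (1-a)*Y, with a chosen to minimize
      the error-ellipse area of Z (grid search used here for simplicity);
    - heading: taken directly from the visual SLAM estimate, not fused."""
    X, Sx = vio_pose[:2], vio_cov[:2, :2]      # position part of the VIO estimate
    Y, Sy = laser_pos, laser_cov               # laser SLAM position estimate
    alphas = np.linspace(0.0, 1.0, 1001)
    # the ellipse area grows with sqrt(det(Sz)); minimize det(Sz) over alpha
    dets = [np.linalg.det(a**2 * Sx + (1.0 - a)**2 * Sy) for a in alphas]
    a = alphas[int(np.argmin(dets))]
    Z = a * X + (1.0 - a) * Y
    Sz = a**2 * Sx + (1.0 - a)**2 * Sy
    return np.array([Z[0], Z[1], vio_pose[2]]), Sz

# hypothetical module outputs: 30 Hz VIO pose [x, y, theta] and 10 Hz laser position [x, y]
pose, cov = fuse_pose(np.array([1.00, 2.00, 0.30]), np.diag([0.01, 0.04, 0.001]),
                      np.array([0.95, 2.02]), np.diag([0.05, 0.01]))
```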
Compared with the prior art, the invention has the following beneficial technical effects:
1. Higher positioning accuracy: the invention fuses two higher-precision positioning methods instead of using low-precision methods such as direct position estimation from an IMU or MEMS sensor, and obtains a more accurate fused positioning result.
2. Stronger robustness: when one positioning module fails, the accuracy of the fused positioning result is not lower than that of the other positioning module.
3. Low time complexity: the fusion method is based on optimization rather than Kalman filtering, the fusion result is obtained iteratively or analytically, and when the number of states is small the time complexity is lower than that of the matrix multiplication and inversion operations of Kalman filtering.
Drawings
Fig. 1 is a schematic diagram of a system implementation framework of the SLAM fusion positioning method of the present invention.
FIG. 2 is a schematic diagram illustrating an execution flow of the fusion module according to the present invention.
FIG. 3 is a schematic diagram showing the comparison of the fusion effect of the present invention under two situations of large difference in positioning accuracy.
Fig. 4 is a schematic diagram showing the comparison of the fusion effect of the present invention under the condition that the two positioning accuracies are substantially the same.
FIG. 5 is a comparison of the fusion effect of the present invention in a positioning failure situation.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
The main problem of visual odometry is how to estimate the motion state of the camera from adjacent image frames acquired by the camera. For the feature point method, feature points are first extracted from the images and the changes of the corresponding feature points between the two images are compared; for the landmark point method, the landmark points can be used directly as the feature points. The essence of both methods is to find position-invariant features in the map coordinate system. Suppose there are N common features in the image, denoted y_i (i = 1, 2, …, N); the state of the system at time k is x_k, and the observation noise ε_{i,k} obeys a Gaussian distribution, so the observation equation of the system is:
z_k = h(y_i, x_k) + ε_{i,k}    (1)
where the abstract function h maps the system state and the features to the observation and is related to the camera model. From the wheel speed meter and IMU measurement u_k, the system state at time k can be calculated from the system state at time k − 1; that is, the motion equation of the system is:
x_k = g(x_{k−1}, u_k) + δ_k    (2)
wherein: the abstract function g describes the motion model, and the system noise δ_k follows a Gaussian distribution.
Since the observation model and the motion model of the system are nonlinear, extended Kalman filtering is used for state estimation. Let the Jacobian matrices of the observation model and the motion model at time k be H_k and G_k respectively, with observation noise ε_{i,k} ~ N(0, R_k) and system noise δ_k ~ N(0, Q_k). The expected value and covariance prediction of the state estimate are then:
x̂_k⁻ = g(x̂_{k−1}, u_k)    (3)
P_k⁻ = G_k P_{k−1} G_kᵀ + Q_k    (4)
the kalman gain is:
K_k = P_k⁻ H_kᵀ (H_k P_k⁻ H_kᵀ + R_k)⁻¹    (5)
and finally updating the expected value and the covariance matrix of the state estimation according to the observation of the camera as follows:
x̂_k = x̂_k⁻ + K_k (z_k − h(y_i, x̂_k⁻))    (6)
P_k = (I − K_k H_k) P_k⁻    (7)
in plane motion, the pose state of the robot is represented by two-dimensional position coordinates and a yaw angle, and the system state xk=[x y θ]TAnd obtaining the positioning estimation and the covariance matrix of the visual SLAM module.
In the method, the laser SLAM module performs positioning estimation using a standard particle filter. The result of particle filtering is a set of discrete state-estimate samples and their corresponding confidences, but the subsequent optimization algorithm needs a continuous confidence estimate, so the method computes a Gaussian approximation from the particle weights, as follows:
Let the number of particles in the particle filter be N. At time k the filter outputs a set of state samples x_{k,i} (i = 1, 2, …, N) with corresponding normalized particle weights w_i (i = 1, 2, …, N). The mean of the Gaussian approximation is then:
μ_k = Σ_{i=1}^{N} w_i x_{k,i}    (8)
the covariance matrix is:
Σ_k = Σ_{i=1}^{N} w_i (x_{k,i} − μ_k)(x_{k,i} − μ_k)ᵀ    (9)
only the position estimation of the laser SLAM module is fused, so that only the Gaussian approximation of the position coordinates is calculated, and the position estimation of the laser SLAM module and the covariance matrix thereof are obtained.
After the Gaussian approximation of the laser positioning distribution is obtained, the state estimation mean and covariance of the visual SLAM module are time-difference corrected according to equations (3) and (4) to obtain the position estimate at the current moment. For ease of discussion, let the position estimates of the visual and laser SLAM modules be the random variables X = [X_1 X_2]ᵀ and Y = [Y_1 Y_2]ᵀ, both obeying two-dimensional Gaussian distributions:
X ~ N([p_x p_y]ᵀ, [σ_x1², ρ_x; ρ_x, σ_x2²])
Y ~ N([p_x p_y]ᵀ, [σ_y1², ρ_y; ρ_y, σ_y2²])
z is the fused position estimate, i.e.:
Z = αX + (1 − α)Y    (10)
where α is the fusion coefficient and Z obeys the following Gaussian distribution:
Z ~ N([p_x p_y]ᵀ, [σ_z1², ρ_z; ρ_z, σ_z2²])
in two-dimensional Gaussian distribution, the area of a standard deviation ellipse reflects the distribution concentration of random variables, and in the positioning problem, a position estimation result is expected to be concentrated near an expected value as much as possible; the relation between the error elliptical area S and the covariance matrix of the random variable Z is as follows:
Figure BDA0002828157800000062
wherein:
Figure BDA0002828157800000063
the covariance matrix element values of Z are:
σ_z1² = α² σ_x1² + (1 − α)² σ_y1²
σ_z2² = α² σ_x2² + (1 − α)² σ_y2²
ρ_z = E(Z_1 Z_2) − p_x p_y
wherein:
E(Z_1 Z_2) = α² E(X_1 X_2) + α(1 − α) E(X_1 Y_2) + α(1 − α) E(X_2 Y_1) + (1 − α)² E(Y_1 Y_2)
Because the Gaussian noise of the camera, the IMU and the lidar is mutually independent:
E(X_1 Y_2) = E(X_2 Y_1) = Cov(X_1, Y_2) + E(X_1)E(Y_2) = p_x p_y
and:
E(X_1 X_2) = ρ_x + p_x p_y
E(Y_1 Y_2) = ρ_y + p_x p_y
therefore:
ρ_z = α² ρ_x + (1 − α)² ρ_y
let m be alpha2,n=(1-α)2The optimized objective function is:
Figure BDA0002828157800000066
the constraint conditions are as follows:
√m + √n = 1, 0 ≤ m ≤ 1, 0 ≤ n ≤ 1
finally, m and n are obtained by optimization methods such as a Lagrange multiplier method and the like, and the fusion result is as follows:
Figure BDA0002828157800000071
Theory and practice show that the positioning accuracy after fusion is higher than before fusion.
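To make the optimization step concrete, the sketch below solves the constrained problem in (m, n) numerically with SciPy's SLSQP solver (rather than by the Lagrange-multiplier derivation) and then forms the fused estimate Z = √m·X + √n·Y. The determinant-of-covariance objective written here follows the derivation above, since the error-ellipse area grows with the square root of det(Σ_Z); it is an illustrative reconstruction, not the patent's literal formula.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_fusion(X, cov_x, Y, cov_y):
    """Fuse two independent 2-D position estimates by Z = sqrt(m)*X + sqrt(n)*Y,
    choosing (m, n) to minimize the error-ellipse area of Z subject to
    sqrt(m) + sqrt(n) = 1, 0 <= m, n <= 1."""
    def objective(v):
        m, n = v
        cov_z = m * cov_x + n * cov_y            # covariance of Z for given (m, n)
        return np.linalg.det(cov_z)              # proportional to the squared ellipse area

    res = minimize(objective, x0=[0.25, 0.25], method="SLSQP",
                   bounds=[(0.0, 1.0), (0.0, 1.0)],
                   constraints=[{"type": "eq",
                                 "fun": lambda v: np.sqrt(v[0]) + np.sqrt(v[1]) - 1.0}])
    m, n = res.x
    Z = np.sqrt(m) * X + np.sqrt(n) * Y          # fused position estimate
    return Z, m * cov_x + n * cov_y

# example: the vision estimate is tighter in x, the laser estimate tighter in y
X = np.array([1.02, 2.05]); cov_x = np.array([[0.01, 0.002], [0.002, 0.09]])
Y = np.array([0.98, 1.97]); cov_y = np.array([[0.08, -0.001], [-0.001, 0.02]])
Z, cov_z = optimal_fusion(X, cov_x, Y, cov_y)
```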
FIG. 1 shows the system implementation framework of the method of the present invention. The IMU integrates an accelerometer, a gyroscope and a magnetometer, with a measurement output frequency of 100 Hz. Because the speed obtained by integrating the accelerometer is not accurate enough, the invention uses a wheel speed meter module to calculate the speed; the wheel speed meter uses a photoelectric rotary encoder, and the speed output frequency is 100 Hz. The front end of the visual SLAM module uses a feature-based method, where the features can be feature points extracted by feature extraction algorithms such as ORB, or landmark points such as color blocks; an extended Kalman filtering algorithm then fuses them with the measurements of the IMU and the wheel speed meter to obtain a position estimate and covariance matrix, and the positioning frequency of the visual SLAM module is 30 Hz. The laser SLAM module uses the IMU and the wheel speed meter to estimate the initial pose in order to reduce the particle convergence time, and then obtains a position estimate and a set of particle weights through the particle filter positioning algorithm; the positioning frequency of the laser SLAM is 10 Hz. The weight distribution of the particles reflects the probability distribution of the position; it is approximated by a Gaussian to obtain a position probability distribution represented by a Gaussian distribution, and the position estimates and Gaussian parameters of the two modules are then input into the fusion module.
FIG. 2 shows the execution flow of the fusion module. After processing, two position probability distributions obeying Gaussian distributions are obtained: the position output of the visual SLAM module is recorded as the random variable X, and the position output of the laser SLAM module as the random variable Y. The fusion module performs the fusion after obtaining the position estimation result of the laser SLAM, but the results of the two positioning modules are not output simultaneously, so the position estimate of the visual SLAM must first be time-difference corrected according to the motion model and the IMU and wheel speed meter data to obtain the position estimate and Gaussian parameters at the current moment. The fusion module uses the linear fusion function Z = αX + (1 − α)Y with the goal of reducing the uncertainty of the fusion result; it solves for the fusion coefficient by optimization and outputs the final positioning result according to that coefficient.
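The time-difference correction mentioned above is one application of the prediction equations (3)-(4) over the latency Δt between the two module outputs. The small sketch below assumes a constant-velocity unicycle motion model and a hypothetical process-noise growth; both are illustrative choices, not values specified by the patent.

```python
import numpy as np

def correct_time_offset(x_vis, P_vis, v, omega, dt):
    """Propagate the visual SLAM pose [x, y, theta] and covariance forward by
    dt seconds (prediction step, eqs. 3-4) so it is time-aligned with the
    most recent laser SLAM output before fusion."""
    x, y, th = x_vis
    x_corr = np.array([x + v * dt * np.cos(th),
                       y + v * dt * np.sin(th),
                       th + omega * dt])
    G = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])              # Jacobian of the motion model
    Q = np.diag([1e-4, 1e-4, 1e-5]) * dt          # assumed process-noise growth over dt
    return x_corr, G @ P_vis @ G.T + Q

# e.g. bridge the ~0.1 s gap between a 30 Hz visual estimate and a 10 Hz laser estimate
x_c, P_c = correct_time_offset(np.array([1.0, 2.0, 0.3]), np.eye(3) * 0.01,
                               v=0.5, omega=0.1, dt=0.1)
```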
FIG. 3, FIG. 4 and FIG. 5 compare the positioning fusion effect of the system of the invention under different conditions. In each figure, the dots in the picture in the first row, second column represent the position distribution obtained by the visual SLAM; the dots in the picture in the second row, first column represent the position distribution obtained by the laser SLAM; the dots in the picture in the second row, second column represent the fused position distribution; the picture in the first row, first column compares the three distributions, where the ellipses A, B and C represent the position distributions of the visual SLAM, the laser SLAM and the fusion result respectively. FIG. 3 shows the case where the two positioning accuracies differ greatly: the fused result is slightly better than the laser SLAM result. FIG. 4 shows the case where the two positioning accuracies are similar: the fused result is clearly better than either result before fusion. FIG. 5 shows the case where one positioning module fails: the fused result is substantially the same as that of the positioning method that is still working normally.
The above embodiments are intended to explain the technical solution and effects of the present invention in detail so that those skilled in the art can understand and apply the invention. Those skilled in the art can readily make various modifications to the above embodiments and apply the general principles defined herein to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments, and improvements and modifications made by those skilled in the art based on the disclosure of the present invention shall fall within the protection scope of the present invention.

Claims (8)

1. An optimization-based vision and laser SLAM fusion positioning method comprises the following steps:
(1) acquiring a robot position estimation result X and Gaussian parameters thereof obtained by a vision SLAM module through positioning calculation;
(2) acquiring a robot position estimation result Y obtained by a laser SLAM module through a particle filter positioning algorithm;
(3) performing approximate Gaussian processing on the particle distribution result in the calculation process of the laser SLAM module, and calculating to obtain a Gaussian parameter of a position estimation result Y;
(4) correcting the position estimation result X and its Gaussian parameters in time by using the time difference between the positioning outputs of the two modules;
(5) establishing an objective function from the above information and solving for the optimal fusion coefficient α by optimization;
(6) calculating the fused optimal robot position estimation result Z through a fusion function according to the optimal fusion coefficient α.
2. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: the visual SLAM module fuses IMU data in the positioning calculation process.
3. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: the front end of the visual SLAM module uses a feature-based method to solve the camera motion from the epipolar constraint and takes it as the observation; an IMU and a wheel speed meter are further used to obtain a prediction from the robot motion model; the robot position estimation result X and its Gaussian parameters are then obtained by extended Kalman filtering.
4. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: the expression of the objective function is as follows:
f(m, n) = (mσ_x1² + nσ_y1²)(mσ_x2² + nσ_y2²) − (mρ_x + nρ_y)²
wherein: m and n are intermediate variables, f(m, n) is the objective function of m and n, m = α², n = (1 − α)²; σ_x1 is the standard deviation of the x-axis component of the robot position estimate X, σ_x2 is the standard deviation of the y-axis component of X, and ρ_x is the covariance of X; σ_y1 is the standard deviation of the x-axis component of the robot position estimate Y, σ_y2 is the standard deviation of the y-axis component of Y, and ρ_y is the covariance of Y.
5. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: in the step (5), the following constraint conditions are required to be met in the optimization solving process:
√m + √n = 1, 0 ≤ m ≤ 1, 0 ≤ n ≤ 1
wherein: m and n are intermediate variables, n = (1 − α)², m = α².
6. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: the optimal robot position estimation result Z after fusion is calculated through the following fusion function in the step (6);
Z = √m·X + √n·Y
wherein: m and n are intermediate variables, n = (1 − α)², m = α².
7. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: the front end of the visual SLAM module used by the fusion positioning method matches feature points or landmark points and is fused with an IMU and a wheel speed encoder through extended Kalman filtering to serve as a visual-inertial odometer; the laser SLAM module obtains a position estimate using particle filtering and computes an approximate Gaussian covariance matrix from the particle weights; finally, the position estimates and covariance matrices obtained by the vision and laser SLAM modules are input into the positioning fusion module, which optimally computes the weight coefficients of the two positioning methods according to the objective function to obtain the fused optimal position estimation result; the orientation angle estimate of the robot is obtained directly from the visual SLAM method without passing through the fusion module.
8. The optimization-based vision and laser SLAM fusion localization method of claim 1, wherein: the fusion positioning method fuses two higher-precision positioning methods instead of using low-precision methods such as direct position estimation from an IMU or MEMS sensor, and obtains a more accurate fused positioning result; when one positioning module fails, the accuracy of the positioning fusion result is not lower than that of the other positioning module; the fusion positioning method is based on optimization instead of Kalman filtering, the fusion result is obtained iteratively or analytically, and when the number of states is small the time complexity is lower than that of the matrix multiplication and inversion operations of Kalman filtering.
CN202011461247.5A 2020-12-10 2020-12-10 Optimization-based vision and laser SLAM fusion positioning method Active CN112712107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011461247.5A CN112712107B (en) 2020-12-10 2020-12-10 Optimization-based vision and laser SLAM fusion positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011461247.5A CN112712107B (en) 2020-12-10 2020-12-10 Optimization-based vision and laser SLAM fusion positioning method

Publications (2)

Publication Number Publication Date
CN112712107A true CN112712107A (en) 2021-04-27
CN112712107B CN112712107B (en) 2022-06-28

Family

ID=75543151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011461247.5A Active CN112712107B (en) 2020-12-10 2020-12-10 Optimization-based vision and laser SLAM fusion positioning method

Country Status (1)

Country Link
CN (1) CN112712107B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113490146A (en) * 2021-05-08 2021-10-08 湖南大学 SLAM method based on WiFi and visual fusion

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105865449A (en) * 2016-04-01 2016-08-17 深圳杉川科技有限公司 Laser and vision-based hybrid location method for mobile robot
CN108198223A (en) * 2018-01-29 2018-06-22 清华大学 A kind of laser point cloud and the quick method for precisely marking of visual pattern mapping relations
CN108694731A (en) * 2018-05-11 2018-10-23 武汉环宇智行科技有限公司 Fusion and positioning method and equipment based on low line beam laser radar and binocular camera
CN109633664A (en) * 2018-12-29 2019-04-16 南京理工大学工程技术研究院有限公司 Joint positioning method based on RGB-D Yu laser odometer
CN110174107A (en) * 2019-04-30 2019-08-27 厦门大学 A kind of guide to visitors robot localization builds figure laser vision fusion method and robot
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN111121767A (en) * 2019-12-18 2020-05-08 南京理工大学 GPS-fused robot vision inertial navigation combined positioning method
CN111337943A (en) * 2020-02-26 2020-06-26 同济大学 Mobile robot positioning method based on visual guidance laser repositioning
CN111429574A (en) * 2020-03-06 2020-07-17 上海交通大学 Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN111521176A (en) * 2020-04-27 2020-08-11 北京工业大学 Visual auxiliary inertial navigation method fusing laser
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105865449A (en) * 2016-04-01 2016-08-17 深圳杉川科技有限公司 Laser and vision-based hybrid location method for mobile robot
CN108198223A (en) * 2018-01-29 2018-06-22 清华大学 A kind of laser point cloud and the quick method for precisely marking of visual pattern mapping relations
CN108694731A (en) * 2018-05-11 2018-10-23 武汉环宇智行科技有限公司 Fusion and positioning method and equipment based on low line beam laser radar and binocular camera
CN109633664A (en) * 2018-12-29 2019-04-16 南京理工大学工程技术研究院有限公司 Joint positioning method based on RGB-D Yu laser odometer
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN110174107A (en) * 2019-04-30 2019-08-27 厦门大学 A kind of guide to visitors robot localization builds figure laser vision fusion method and robot
CN111121767A (en) * 2019-12-18 2020-05-08 南京理工大学 GPS-fused robot vision inertial navigation combined positioning method
CN111337943A (en) * 2020-02-26 2020-06-26 同济大学 Mobile robot positioning method based on visual guidance laser repositioning
CN111429574A (en) * 2020-03-06 2020-07-17 上海交通大学 Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
CN111521176A (en) * 2020-04-27 2020-08-11 北京工业大学 Visual auxiliary inertial navigation method fusing laser

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAO Gangtao et al.: "Relative pose estimation of a non-cooperative target by fusion of a monocular camera and a scanning-free 3D lidar", Journal of Astronautics (宇航学报) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113490146A (en) * 2021-05-08 2021-10-08 湖南大学 SLAM method based on WiFi and visual fusion

Also Published As

Publication number Publication date
CN112712107B (en) 2022-06-28

Similar Documents

Publication Publication Date Title
CN111121767B (en) GPS-fused robot vision inertial navigation combined positioning method
CN111415387B (en) Camera pose determining method and device, electronic equipment and storage medium
CN107516326B (en) Robot positioning method and system fusing monocular vision and encoder information
EP2549288B1 (en) Identifying true feature matches for vision based navigation
CN110517324B (en) Binocular VIO implementation method based on variational Bayesian adaptive algorithm
WO2020107931A1 (en) Pose information determination method and apparatus, and visual point cloud construction method and apparatus
US7747348B2 (en) Method and apparatus for using rotational movement amount of mobile device and computer-readable recording medium for storing computer program
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN113551665A (en) High dynamic motion state sensing system and sensing method for motion carrier
CN112347205A (en) Method and device for updating error state of vehicle
CN113066127A (en) Visual inertial odometer method and system for calibrating equipment parameters on line
CN114234967A (en) Hexapod robot positioning method based on multi-sensor fusion
CN112712107B (en) Optimization-based vision and laser SLAM fusion positioning method
CN114690229A (en) GPS-fused mobile robot visual inertial navigation method
CN116929338B (en) Map construction method, device and storage medium
CN111553954B (en) Online luminosity calibration method based on direct method monocular SLAM
CN116429098A (en) Visual navigation positioning method and system for low-speed unmanned aerial vehicle
CN115930948A (en) Orchard robot fusion positioning method
CN113503872B (en) Low-speed unmanned aerial vehicle positioning method based on fusion of camera and consumption-level IMU
Zarei et al. Performance improvement for mobile robot position determination using cubature Kalman filter
CN114690230A (en) Automatic driving vehicle navigation method based on visual inertia SLAM
Conway et al. Vision-based Velocimetry over Unknown Terrain with a Low-Noise IMU
CN114764830A (en) Object pose estimation method based on quaternion EKF and uncalibrated hand-eye system
Yu et al. Pose estimation of an aerial construction robot based on motion and dynamic constraints
CN110849392A (en) Robot mileage counting data correction method and robot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant