CN115930971B - Data fusion processing method for robot positioning and map building - Google Patents

Data fusion processing method for robot positioning and map building

Info

Publication number
CN115930971B
CN115930971B
Authority
CN
China
Prior art keywords: time, representing, iteration, state vector, covariance matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310049730.XA
Other languages
Chinese (zh)
Other versions
CN115930971A (en)
Inventor
朱冬
方向明
宋雯
唐国梅
张建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seven Teng Robot Co., Ltd.
Original Assignee
Seven Teng Robot Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seven Teng Robot Co., Ltd.
Priority to CN202310049730.XA
Publication of CN115930971A
Application granted
Publication of CN115930971B
Active legal status
Anticipated expiration legal status

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a data fusion processing method for robot positioning and mapping, which comprises the following steps: predicting the state vector of the robot at the current moment; calculating the prediction error covariance matrix of the current moment from the covariance matrix of the previous moment and the noise covariance matrix; predicting the offset information difference of the current moment; obtaining the observed quantity of the robot, and obtaining a joint posterior probability density function from the offset information difference, the predicted state vector, the prediction error covariance matrix and the observed quantity at the current moment; and assigning initial iteration parameters and iteratively updating the joint posterior probability density function by a variational inference method, ending the iterative update once the iteration condition of the variational inference is met, so as to obtain the fusion data of the robot at the current moment. By jointly estimating the state vector of the robot and the offset information difference, the method reduces the influence of the time difference between the measurement data and the visual information on the accuracy of the fused data, and improves the accuracy of the robot's own pose positioning.

Description

Data fusion processing method for robot positioning and map building
Technical Field
The invention belongs to the technical field of robot positioning and map building, and particularly relates to a data fusion processing method for robot positioning and map building.
Background
Simultaneous localization and mapping (SLAM) of a robot is the process by which a robot extracts and combines useful information as it moves in an unknown environment, thereby determining its own pose and gradually building a map of the surrounding environment. SLAM has been successfully applied to mobile robots, unmanned aerial vehicles and the like, and is attracting growing attention.
In the prior art, the methods for extracting and combining useful information to obtain the robot pose are as follows. The first method optimizes the pose using a monocular camera and a Kalman filter (KF); the optimization process computes updated values of the rotation vector and the translation vector and takes these updated values as the optimized pose. The second method is the more widely applied SLAM algorithm based on the extended Kalman filter (EKF), which uses the EKF to estimate the pose of the robot and the positions of map features; before the estimation, the motion model and the observation model in SLAM must be linearized by a first-order Taylor expansion. In addition, in a multi-sensor method, the mobile robot obtains the pose information of a navigation attitude reference system from the measured geomagnetic field intensity, the measured acceleration and the compensated and corrected angular velocity measurements, computes the pose information of a monocular camera, and fuses the monocular camera pose information with the navigation attitude reference system pose information by Kalman filtering to obtain multi-sensor fusion data, namely the robot's own pose. Compared with the first two methods, this method greatly reduces the influence of lighting on the monocular camera pose information and improves the accuracy of the robot pose.
However, in the multi-sensor fusion SLAM method there is a fusion error in the data fusion process of the multiple sensors. When the pose information of the attitude reference system and the pose information of the camera are fused, the pose information of the attitude reference system at time k is first acquired with some sensors, and the camera pose information at nominally the same moment is then acquired with other sensors. Because the information is not acquired by the same sensor, and the timestamps of the multiple sensors are not perfectly aligned when acquiring data, an error Δt between the acquisition times of the data is unavoidable; that is, the camera pose information nominally acquired at the same time k as the navigation attitude reference system pose information is in fact the camera pose information acquired at time k+Δt. Fusing sensor data from different moments means that the accuracy of the robot's own pose obtained after fusion remains low.
Disclosure of Invention
The invention aims to solve at least the above technical problems in the prior art, and provides a data fusion processing method for robot positioning and map building.
In order to achieve the above object of the present invention, according to a first aspect of the present invention, there is provided a data fusion processing method for robot positioning and mapping, comprising the following steps: predicting the state vector of the robot at the current moment from the state vector of the robot at the previous moment, the prediction error covariance matrix of the previous moment and the measurement data of the current moment; calculating the prediction error covariance matrix of the robot at the current moment from the error covariance matrix of the previous moment and the noise covariance matrix; predicting the offset information difference of the robot at the current moment from the offset information difference of the robot at the previous moment, the offset information difference being the information difference between the visual information and the measurement data; obtaining the observed quantity of the robot, and obtaining a joint posterior probability density function from the offset information difference of the current moment, the predicted state vector of the current moment, the prediction error covariance matrix and the observed quantity; assigning initial iteration parameters, iteratively updating the joint posterior probability density function by a variational inference method, ending the iterative update once the iteration condition of the variational inference is met, and obtaining the fusion data of the robot at the current moment from the joint posterior probability density function of the current moment; the fusion data of the robot comprise the state vector and prediction error covariance matrix of the current moment, and the offset information difference and offset covariance matrix of the current moment.
Further, the step of obtaining the joint posterior probability density function from the offset information difference of the robot at the current moment, the predicted state vector of the current moment, the prediction error covariance matrix and the observed quantity is specifically: establishing a prior probability density function from the predicted state vector of the robot at the current moment and the prediction error covariance matrix of the current moment; establishing a Gaussian distribution function of the offset information difference at the current moment from the offset information difference at the current moment and the visual information of the robot; establishing a likelihood function from the observed quantity of the current robot and the offset information difference of the current moment; and acquiring the joint posterior probability density function from the prior probability density function, the Gaussian distribution function and the likelihood function.
Further, the joint posterior probability density function is specifically:

$$p(X_k,\Delta X_k\mid Y_{1:k})=\frac{N\big(Y_k;\,H_kX_k+H_k\Delta X_k,\,\Sigma_k\big)\,N\big(X_k;\,X_{k|k-1},\,P_{k|k-1}\big)\,N\big(\Delta X_k;\,\Delta X_{k-1},\,R_{\Delta X}\big)}{p(Y_{1:k-1})}$$

where $p(X_k,\Delta X_k\mid Y_{1:k})$ represents the joint posterior probability density function value; $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $k$ represents the time; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$ represents the offset information difference at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $p(Y_{1:k-1})$ represents the observed-quantity probability distribution from time 1 to time $k-1$.
Further, the variational inference method is a variational Bayesian method, according to which the optimal solution formula satisfies:

$$\log q(\theta)=E_{\Theta_k^{(-\theta)}}\big[\log p(\Theta_k,Y_{1:k})\big]+C_\theta$$

where $\theta$ represents any one element of the set $\Theta_k$; $\Theta_k^{(-\theta)}$ represents the elements of $\Theta_k$ other than $\theta$; $C_\theta$ represents the quantity in the set $\Theta_k$ that is independent of the element $\theta$; $E[\cdot]$ represents the expected value; $p(\Theta_k,Y_{1:k})$ represents the joint probability density function. The step of iteratively updating the joint posterior probability density function according to the variational inference method comprises: substituting the iteration parameters and the joint posterior probability density function into the optimal solution formula to obtain the joint posterior probability density function at the current moment. The logarithm of the joint posterior probability density function is as follows:

$$\begin{aligned}\log p(X_k,\Delta X_k\mid Y_{1:k})=&-\tfrac{1}{2}\big(Y_k-H_kX_k-H_k\Delta X_k\big)^T\Sigma_k^{-1}\big(Y_k-H_kX_k-H_k\Delta X_k\big)\\&-\tfrac{1}{2}\big(X_k-X_{k|k-1}\big)^TP_{k|k-1}^{-1}\big(X_k-X_{k|k-1}\big)\\&-\tfrac{1}{2}\big(\Delta X_k-\Delta X_{k-1}\big)^TR_{\Delta X}^{-1}\big(\Delta X_k-\Delta X_{k-1}\big)+C_{\Theta_k}\end{aligned}$$

where $k$ represents the time; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$ represents the offset information difference at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $C_{\Theta_k}$ represents the quantity independent of the set $\Theta_k=\{X_k,\Delta X_k\}$.
Further, the step of iteratively updating the joint posterior probability density function according to the variational inference method further includes: obtaining an approximate probability density function of the state vector and an approximate probability density function of the offset information difference from the joint posterior probability density function at the current moment. The step of obtaining the fusion data of the robot at the current moment from the joint posterior probability density function at the current moment is specifically: obtaining the state vector and the prediction error covariance matrix at the current moment from the approximate probability density function of the state vector; and obtaining the offset information difference and the offset covariance matrix at the current moment from the approximate probability density function of the offset information difference.
Further, the approximate probability density function of the state vector is obtained as follows: substituting the optimal solution formula into the logarithm of the joint posterior probability density function and letting $\theta=X_k$ gives:

$$\begin{aligned}\log q^{(i+1)}(X_k)=&-\tfrac{1}{2}\big(Y_k-H_kX_k-H_kE^{(i)}[\Delta X_k]\big)^T\Sigma_k^{-1}\big(Y_k-H_kX_k-H_kE^{(i)}[\Delta X_k]\big)\\&-\tfrac{1}{2}\big(X_k-X_{k|k-1}\big)^TP_{k|k-1}^{-1}\big(X_k-X_{k|k-1}\big)+C_{X_k}\end{aligned}$$

where $i$ represents the iteration number and $k$ represents the time; $q^{(i+1)}(X_k)$ represents the approximate probability density function of the state vector after the $(i+1)$-th iteration at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$, the offset information difference at time $k$, is the iteration parameter of the $i$-th iteration; $E^{(i)}[\Delta X_k]$ represents the expected value of $\Delta X_k$ at the $i$-th iteration; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $C_{X_k}$ represents the quantity in the set $\Theta_k$ independent of $X_k$. From the form of $\log q^{(i+1)}(X_k)$ it is judged that the approximate probability density function of the state vector follows a Gaussian distribution, namely:

$$q^{(i+1)}(X_k)=N\big(X_k;\,\hat X_k^{(i+1)},\,P_{k|k}\big)$$

where $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$, and $P_{k|k}$ is the error covariance matrix at time $k$; the state vector and the prediction error covariance matrix are obtained according to a first solution formula.
Further, the first solution formula is specifically:

$$\hat X_k^{(i+1)}=X_{k|k-1}+K_k\big(Y_k-H_kX_{k|k-1}-H_kE^{(i)}[\Delta X_k]\big)$$

$$K_k=P_{k|k-1}H_k^T\big(H_kP_{k|k-1}H_k^T+\Sigma_k\big)^{-1}$$

$$P_{k|k}=(I_m-K_kH_k)P_{k|k-1}$$

where $i$ represents the iteration number and $k$ represents the time; $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $K_k$ represents the gain matrix at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $\Delta X_k$, the offset information difference at time $k$, is the iteration parameter of the $i$-th iteration; $E^{(i)}[\Delta X_k]$ represents its expected value at the $i$-th iteration; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $H_k^T$ represents the transpose of the observation matrix $H_k$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $P_{k|k}$ is the error covariance matrix at time $k$; $I_m$ represents the identity matrix.
Further, the approximate probability density function of the offset information difference is obtained as follows: substituting the optimal solution formula into the logarithm of the joint posterior probability density function and letting $\theta=\Delta X_k$ gives:

$$\begin{aligned}\log q^{(i+1)}(\Delta X_k)=&-\tfrac{1}{2}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_k\big)^T\Sigma_k^{-1}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_k\big)\\&-\tfrac{1}{2}\big(\Delta X_k-\Delta X_{k-1}\big)^TR_{\Delta X}^{-1}\big(\Delta X_k-\Delta X_{k-1}\big)+C_{\Delta X_k}\end{aligned}$$

where $i$ represents the iteration number and $k$ represents the time; $q^{(i+1)}(\Delta X_k)$ represents the approximate probability density function of the offset information difference after the $(i+1)$-th iteration at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $Y_k-H_kX_k$ is the iteration parameter of the $i$-th iteration, and $E^{(i)}[Y_k-H_kX_k]$ represents its expected value at the $i$-th iteration; $\Delta X_k$ represents the offset information difference at time $k$; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $C_{\Delta X_k}$ represents the quantity in the set $\Theta_k$ independent of $\Delta X_k$. From the form of $\log q^{(i+1)}(\Delta X_k)$ it is judged that the approximate probability density function of the offset information difference follows a Gaussian distribution, namely:

$$q^{(i+1)}(\Delta X_k)=N\big(\Delta X_k;\,\Delta\hat X_k^{(i+1)},\,P_{\Delta X,k}\big)$$

where $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $\Delta\hat X_k^{(i+1)}$ represents the offset information difference of the $(i+1)$-th iteration at time $k$; $P_{\Delta X,k}$ represents the error covariance matrix of the offset information difference at time $k$; the offset information difference and its covariance matrix are obtained according to a second solution formula.
Further, the second solution formula is specifically:

$$\Delta\hat X_k^{(i+1)}=\Delta X_{k-1}+K_{\Delta X}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_{k-1}\big)$$

$$K_{\Delta X}=R_{\Delta X}H_k^T\big(H_kR_{\Delta X}H_k^T+\Sigma_k\big)^{-1}$$

$$P_{\Delta X,k}=(I_m-K_{\Delta X}H_k)R_{\Delta X}$$

where $i$ represents the iteration number and $k$ represents the time; $\Delta\hat X_k^{(i+1)}$ represents the offset information difference of the $(i+1)$-th iteration at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $K_{\Delta X}$ represents the gain matrix of the offset information difference; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $Y_k-H_kX_k$ is the iteration parameter of the $i$-th iteration, and $E^{(i)}[Y_k-H_kX_k]$ represents its expected value at the $i$-th iteration; $P_{\Delta X,k}$ represents the error covariance matrix of the offset information difference at time $k$; $I_m$ represents the identity matrix; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $H_k^T$ represents the transpose of the observation matrix $H_k$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$.
Further, the iteration condition of the variational inference is specifically:

$$\frac{\big\|\hat X_k^{(i+1)}-\hat X_k^{(i)}\big\|}{\big\|\hat X_k^{(i)}\big\|}\le\varepsilon$$

where $i$ represents the iteration number, $k$ represents the time, $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$, $\hat X_k^{(i)}$ represents the state vector of the $i$-th iteration at time $k$, and $\varepsilon$ represents the iteration threshold. The step of ending the iterative update once the iteration condition of the variational inference is met is specifically: if, after the $(i+1)$-th iteration, the state vector of the current moment meets the iteration condition of the variational inference, the iteration ends; if it does not, the expected values of the iteration parameters of the current iteration are taken as the iteration parameters of the next iteration and the next iterative update begins.
Technical principle and beneficial effects of the invention: the offset information difference is defined as the information difference between the visual information and the measurement data caused by Δt, is introduced as one of the variables of the filtering fusion process, and is modelled as a Gaussian; the state vector and the offset information difference are updated in the iterative process by the variational inference method, and the offset information difference of the next moment is predicted from the offset information difference of the current moment; the fusion data are output once the iteration condition is met. The invention introduces the offset information difference into the fusion of the visual information and the measurement data, correcting the information difference caused by the data not being acquired at the same instant; the joint estimation of the robot state vector and the offset information difference by variational inference reduces the influence of the time difference between the measurement data and the visual information on the accuracy of the fused data, and improves the accuracy of the robot's own pose positioning.
Drawings
Fig. 1 is a flow chart of a data fusion processing method for robot positioning and mapping according to the present invention.
Detailed Description
Embodiments of the present invention are described in detail below; the embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the simultaneous localization and mapping of a robot, the multi-sensor fusion SLAM method fuses the monocular camera pose information with the navigation attitude reference system pose information by Kalman filtering to obtain multi-sensor fusion data, namely the robot's own pose; however, a fusion error exists in this data fusion process. In order to correct the information difference caused by the measurement data of multiple sensors not being acquired at the same instant, the invention provides a data fusion processing method for robot positioning and map building.
As shown in fig. 1, the invention provides a data fusion processing method for robot positioning and mapping, which comprises the following steps:
predicting the state vector of the robot at the current moment from the state vector of the robot at the previous moment, the prediction error covariance matrix of the previous moment and the measurement data of the current moment; the state vector comprises the three-dimensional position, velocity information and quaternion of the robot; at the first moment, the state vector and the prediction error covariance matrix are set according to the inertial measurement unit of the robot;
calculating the prediction error covariance matrix of the robot at the current moment from the error covariance matrix of the previous moment and the noise covariance matrix;
predicting the offset information difference of the robot at the current moment from the offset information difference of the robot at the previous moment; the offset information difference is the information difference between the visual information and the measurement data; specifically, the visual information is obtained by analysing the pictures taken by the camera; for the first iteration, an initial offset information difference is set manually according to the inertial measurement unit and the camera, and it is updated step by step in the subsequent iterations;
obtaining the observed quantity of the robot from the visual information; obtaining the joint posterior probability density function at the current moment from the offset information difference of the robot at the current moment, the predicted state vector of the current moment, the prediction error covariance matrix and the observed quantity;
assigning initial iteration parameters, iteratively updating the joint posterior probability density function by the variational inference method, ending the iterative update once the iteration condition of the variational inference is met, and obtaining the fusion data of the robot at the current moment from the joint posterior probability density function at the current moment; specifically, the fusion data of the robot comprise the state vector and prediction error covariance matrix at the current moment, and the offset information difference and offset covariance matrix at the current moment.
Specifically, the process of predicting the state vector of the robot at the current time is as follows:
$$X_{k|k-1}=F_{k-1}X_{k-1|k-1}$$

where $k$ represents the time, $X_{k|k-1}$ represents the predicted state vector at time $k$, $F_{k-1}$ represents the pre-integration of the measurement data, and $X_{k-1|k-1}$ represents the state vector at time $k-1$; the predicted state vector is $X_{k|k-1}=[P_{bk},V_{bk},Q_{bk}]$, where $P_{bk}$ and $V_{bk}$ are the three-dimensional position and velocity information of the mobile robot and $Q_{bk}$ represents the quaternion from the inertial-measurement-unit coordinate system to the world coordinate system;
in this embodiment, the measurement data is obtained by an inertial measurement unit of the robot; the inertial measurement unit comprises an accelerator and a gyroscope, the measurement data comprises an acceleration value and an angular velocity value, and the pre-integration of the measurement data is obtained as follows: and removing noise from the acceleration value measured by the accelerometer, and then performing primary integration to obtain the speed, and performing secondary integration to obtain the relative displacement. Removing noise from the angular velocity value measured by the gyroscope, and then integrating once to obtain the angular change in the motion process of the mobile robot; the removed noise is a data value output when the acceleration sensor is stationary; the step of removing noise is specifically to subtract the data value output at rest from the measured acceleration value.
Specifically, the prediction error covariance matrix is as follows:
$$P_{k|k-1}=F_{k-1}P_{k-1|k-1}F_{k-1}^T+Q_k$$

where $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$, $F_{k-1}$ represents the pre-integration of the measurement data, $P_{k-1|k-1}$ is the error covariance matrix at time $k-1$, $(\cdot)^T$ represents the transposed matrix, $F_{k-1}^T$ represents the transpose of the pre-integration, and $Q_k$ represents the system noise covariance matrix, which is set according to the inertial measurement unit.
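A minimal numpy sketch of the two prediction equations above (the function name and matrix shapes are illustrative assumptions):

```python
import numpy as np

def predict(x_prev, P_prev, F, Q):
    """X_{k|k-1} = F_{k-1} X_{k-1|k-1};  P_{k|k-1} = F_{k-1} P_{k-1|k-1} F^T + Q_k."""
    x_pred = F @ x_prev              # predicted state vector
    P_pred = F @ P_prev @ F.T + Q    # prediction error covariance matrix
    return x_pred, P_pred
```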
Specifically, the observed quantity includes three-dimensional pose and speed information of the robot, and satisfies:
$$Y_k=H_kX_{k|k-1}+H_k\Delta X_k+v_k$$

where $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$, i.e. the mapping between the observed quantity and the state vector of the robot (in this embodiment a diagonal matrix whose first six diagonal entries are 1 and whose remaining entries are 0); $\Delta X_k$ represents the offset information difference at time $k$; and $v_k$ represents the observation noise at time $k$, which is obtained from the camera.
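For illustration, assuming a 10-dimensional state (3 position, 3 velocity and 4 quaternion components, consistent with the state vector above; the dimension and the helper `observe` are our assumptions), the observation matrix and model could be sketched as:

```python
import numpy as np

m = 10                      # assumed: 3 position + 3 velocity + 4 quaternion
H = np.zeros((m, m))
H[:6, :6] = np.eye(6)       # diagonal: first six entries 1, the rest 0

def observe(x_pred, dx, v):
    """Y_k = H_k X_{k|k-1} + H_k ΔX_k + v_k."""
    return H @ x_pred + H @ dx + v
```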
Specifically, the process of assigning the initial iteration parameters is as follows:
$$E^{(0)}[\Delta X_k]=\Delta X_{k-1}$$

$$E^{(0)}[Y_k-H_kX_k]=Y_k-H_kX_{k|k-1}$$

where $E[\cdot]$ represents the expected value; $\Delta X_k$ represents the offset information difference at time $k$ and $\Delta X_{k-1}$ the offset information difference at time $k-1$; $\Delta X_{k-1}$ is taken as the expected value of the iteration parameter $\Delta X_k$ for the first iteration, and the expected value of an iteration parameter at one iteration serves as the iteration parameter of the next iteration.
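In code, this initial assignment might read as the small helper below (a sketch; `init_iteration` and its argument names are our own):

```python
def init_iteration(dx_prev, y, x_pred, H):
    """Assign the initial (i = 0) iteration parameters as above."""
    E_dx = dx_prev              # E^(0)[ΔX_k] = ΔX_{k-1}
    E_res = y - H @ x_pred      # E^(0)[Y_k - H_k X_k] = Y_k - H_k X_{k|k-1}
    return E_dx, E_res
```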
Further, the step of obtaining the joint posterior probability density function from the offset information difference of the robot at the current moment, the predicted state vector of the current moment, the prediction error covariance matrix and the observed quantity is specifically as follows:
establishing a prior probability density function according to a predicted state vector of the current moment of the robot and a predicted covariance matrix of the current moment:
$$p(X_k\mid Y_{k-1})=N\big(X_k;\,X_{k|k-1},\,P_{k|k-1}\big)$$

where $p(X_k\mid Y_{k-1})$ represents the prior probability density function, $X_k$ represents the state vector at time $k$, $Y_{k-1}$ represents the observed quantity at time $k-1$, $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function, $X_{k|k-1}$ represents the predicted state vector at time $k$, and $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$;
establishing a Gaussian distribution function of the current moment offset information difference according to the offset information difference of the current moment of the robot and the visual information:
$$p(\Delta X_k)=N\big(\Delta X_k;\,\Delta X_{k-1},\,R_{\Delta X}\big)$$

where $p(\Delta X_k)$ represents the Gaussian distribution function, $\Delta X_k$ represents the offset information difference at time $k$, $\Delta X_{k-1}$ represents the offset information difference at time $k-1$, and $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; the error covariance matrix of the offset information difference is obtained from the offset information difference at the current moment.

Specifically, $\Delta X_k=[\Delta P_k,\Delta V_k,\Delta Q_k]$, where $\Delta P_k$ is the offset of the three-dimensional position of the mobile robot at time $k$, $\Delta V_k$ the offset of the velocity information at time $k$, and $\Delta Q_k$ the offset information difference of the quaternion at time $k$.
Establishing a likelihood function according to the observed quantity of the current robot and the offset information difference of the current moment:

$$p(Y_k\mid X_{k|k-1})=N\big(Y_k;\,H_kX_{k|k-1}+H_k\Delta X_k,\,\Sigma_k\big)$$

where $p(Y_k\mid X_{k|k-1})$ represents the likelihood function, $Y_k$ represents the observed quantity at time $k$, $H_k$ represents the observation matrix at time $k$, $X_k$ represents the state vector at time $k$, $\Delta X_k$ represents the offset information difference at time $k$, and $\Sigma_k$ represents the covariance matrix corresponding to the likelihood function at time $k$; the covariance matrix corresponding to the likelihood function at the first moment is set manually according to the inertial measurement unit.
Acquiring the joint posterior probability density function according to the prior probability density function, the Gaussian distribution function and the likelihood function:

$$p(X_k,\Delta X_k\mid Y_{1:k})=\frac{N\big(Y_k;\,H_kX_k+H_k\Delta X_k,\,\Sigma_k\big)\,N\big(X_k;\,X_{k|k-1},\,P_{k|k-1}\big)\,N\big(\Delta X_k;\,\Delta X_{k-1},\,R_{\Delta X}\big)}{p(Y_{1:k-1})}$$

where $p(X_k,\Delta X_k\mid Y_{1:k})$ represents the joint posterior probability density function value, and $k$ represents the time; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$ represents the offset information difference at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $p(Y_{1:k-1})$ represents the observed-quantity probability distribution from time 1 to time $k-1$.
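As an illustrative check, the log of this joint posterior can be evaluated as the sum of the three Gaussian log-densities, dropping the constant evidence term $p(Y_{1:k-1})$; the sketch below uses scipy, and the variable names are our own:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def log_joint(x, dx, y, H, Sigma, x_pred, P_pred, dx_prev, R_dx):
    """log p(X_k, ΔX_k | Y_{1:k}) up to the constant -log p(Y_{1:k-1})."""
    log_lik = mvn.logpdf(y, mean=H @ x + H @ dx, cov=Sigma)   # likelihood term
    log_prior = mvn.logpdf(x, mean=x_pred, cov=P_pred)        # prior on X_k
    log_dx = mvn.logpdf(dx, mean=dx_prev, cov=R_dx)           # Gaussian on ΔX_k
    return log_lik + log_prior + log_dx
```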
In this embodiment, the variational inference method is a variational Bayesian method, according to which the optimal solution formula satisfies:

$$\log q(\theta)=E_{\Theta_k^{(-\theta)}}\big[\log p(\Theta_k,Y_{1:k})\big]+C_\theta$$

where $\theta$ represents any one element of the set $\Theta_k$; $\Theta_k^{(-\theta)}$ represents the elements of $\Theta_k$ other than $\theta$; $C_\theta$ represents the quantity in the set $\Theta_k$ independent of the element $\theta$; $E[\cdot]$ represents the expected value; and $p(\Theta_k,Y_{1:k})$ represents the joint probability density function;
the step of iteratively updating the joint posterior probability density function according to the variational inference method comprises: substituting the iteration parameters and the joint posterior probability density function into the optimal solution formula to obtain the logarithm of the joint posterior probability density function:

$$\begin{aligned}\log p(X_k,\Delta X_k\mid Y_{1:k})=&-\tfrac{1}{2}\big(Y_k-H_kX_k-H_k\Delta X_k\big)^T\Sigma_k^{-1}\big(Y_k-H_kX_k-H_k\Delta X_k\big)\\&-\tfrac{1}{2}\big(X_k-X_{k|k-1}\big)^TP_{k|k-1}^{-1}\big(X_k-X_{k|k-1}\big)\\&-\tfrac{1}{2}\big(\Delta X_k-\Delta X_{k-1}\big)^TR_{\Delta X}^{-1}\big(\Delta X_k-\Delta X_{k-1}\big)+C_{\Theta_k}\end{aligned}$$

where $k$ represents the time; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$ represents the offset information difference at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $C_{\Theta_k}$ represents the quantity independent of the set $\Theta_k=\{X_k,\Delta X_k\}$.
Preferably, the step of iteratively updating the joint posterior probability density function according to the variational inference method further comprises:
obtaining an approximate probability density function of the state vector and an approximate probability density function of the offset information difference from the joint posterior probability density function at the current moment;
the step of obtaining the fusion data of the robot at the current moment from the joint posterior probability density function at the current moment is specifically: obtaining the state vector and the prediction error covariance matrix at the current moment from the approximate probability density function of the state vector; and obtaining the offset information difference and the offset covariance matrix at the current moment from the approximate probability density function of the offset information difference. Specifically:
$$p(X_k,\Delta X_k\mid Y_{1:k})\approx q(X_k)\,q(\Delta X_k)$$

where $p(X_k,\Delta X_k\mid Y_{1:k})$ represents the joint posterior probability density function at time $k$, $q(X_k)$ represents the approximate probability density function of the state vector at time $k$, and $q(\Delta X_k)$ represents the approximate probability density function of the offset information difference at time $k$;
preferably, the approximate probability density function of the state vector is obtained as follows: bringing the optimal solution formula into a logarithmic function of the joint posterior probability density function, and letting θ=x k Obtaining:
wherein i represents the iteration number and k represents the time; q (i+1) (X k ) Representing an approximate probability density function of the state vector after the (i+1) th iteration of the moment k; y is Y k Representation ofAn observed quantity at time k; h k An observation matrix representing time k; x is X k A state vector representing time k; ΔX k Offset information difference Δx indicating time k k The iteration parameter is the ith iteration; e (E) (i) [ΔX k ]Represents the ith iteration DeltaX k Is a desired value of (2); () T Representing a transposed matrix;a covariance matrix representing a time k likelihood function; x is X k|k-1 A predicted state vector representing time k; p (P) k|k-1 A prediction error covariance matrix representing time k; />Representing the set Θ k Intermediate and X k An irrelevant amount;
according to log q (i+1) (X k ) If the approximate probability density function of the state vector is judged to follow the Gaussian distribution, the approximate probability density function of the state vector is as follows:
wherein N (;) represents a Gaussian distribution function;state vector representing the (i+1) th iteration of time k, P k|k An error covariance matrix at time k; the state vector and the prediction error covariance matrix are obtained according to a first solution formula.
Specifically, the first solution formula is:

$$\hat X_k^{(i+1)}=X_{k|k-1}+K_k\big(Y_k-H_kX_{k|k-1}-H_kE^{(i)}[\Delta X_k]\big)$$

$$K_k=P_{k|k-1}H_k^T\big(H_kP_{k|k-1}H_k^T+\Sigma_k\big)^{-1}$$

$$P_{k|k}=(I_m-K_kH_k)P_{k|k-1}$$

where $i$ represents the iteration number and $k$ represents the time; $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $K_k$ represents the gain matrix at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $\Delta X_k$, the offset information difference at time $k$, is the iteration parameter of the $i$-th iteration; $E^{(i)}[\Delta X_k]$ represents its expected value at the $i$-th iteration; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $H_k^T$ represents the transpose of the observation matrix $H_k$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $P_{k|k}$ is the error covariance matrix at time $k$; $I_m$ represents the identity matrix.
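A numpy sketch of this first solution formula (the gain $K_k$ is written in the standard Kalman form assumed by the reconstruction above; names are ours):

```python
import numpy as np

def update_state(x_pred, P_pred, y, H, Sigma, E_dx):
    """First solution formula: update X_k given the expected offset E^(i)[ΔX_k]."""
    S = H @ P_pred @ H.T + Sigma                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                # gain matrix K_k
    x_new = x_pred + K @ (y - H @ x_pred - H @ E_dx)   # offset-corrected innovation
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred     # P_{k|k} = (I_m - K_k H_k) P_{k|k-1}
    return x_new, P_new
```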
Preferably, the approximate probability density function of the offset information difference is obtained as follows: substituting the optimal solution formula into the logarithm of the joint posterior probability density function and letting $\theta=\Delta X_k$ gives:

$$\begin{aligned}\log q^{(i+1)}(\Delta X_k)=&-\tfrac{1}{2}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_k\big)^T\Sigma_k^{-1}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_k\big)\\&-\tfrac{1}{2}\big(\Delta X_k-\Delta X_{k-1}\big)^TR_{\Delta X}^{-1}\big(\Delta X_k-\Delta X_{k-1}\big)+C_{\Delta X_k}\end{aligned}$$

where $i$ represents the iteration number and $k$ represents the time; $q^{(i+1)}(\Delta X_k)$ represents the approximate probability density function of the offset information difference after the $(i+1)$-th iteration at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $Y_k-H_kX_k$ is the iteration parameter of the $i$-th iteration, and $E^{(i)}[Y_k-H_kX_k]$ represents its expected value at the $i$-th iteration; $\Delta X_k$ represents the offset information difference at time $k$; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $C_{\Delta X_k}$ represents the quantity in the set $\Theta_k$ independent of $\Delta X_k$;

from the form of $\log q^{(i+1)}(\Delta X_k)$ it is judged that the approximate probability density function of the offset information difference follows a Gaussian distribution:

$$q^{(i+1)}(\Delta X_k)=N\big(\Delta X_k;\,\Delta\hat X_k^{(i+1)},\,P_{\Delta X,k}\big)$$

where $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $\Delta\hat X_k^{(i+1)}$ represents the offset information difference of the $(i+1)$-th iteration at time $k$; $P_{\Delta X,k}$ represents the error covariance matrix of the offset information difference at time $k$; the offset information difference and its covariance matrix are obtained according to a second solution formula.
Specifically, the second solution formula is:

$$\Delta\hat X_k^{(i+1)}=\Delta X_{k-1}+K_{\Delta X}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_{k-1}\big)$$

$$K_{\Delta X}=R_{\Delta X}H_k^T\big(H_kR_{\Delta X}H_k^T+\Sigma_k\big)^{-1}$$

$$P_{\Delta X,k}=(I_m-K_{\Delta X}H_k)R_{\Delta X}$$

where $i$ represents the iteration number and $k$ represents the time; $\Delta\hat X_k^{(i+1)}$ represents the updated offset information difference of the $(i+1)$-th iteration at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $K_{\Delta X}$ represents the gain matrix of the offset information difference; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $Y_k-H_kX_k$ is the iteration parameter of the $i$-th iteration, and $E^{(i)}[Y_k-H_kX_k]$ represents its expected value at the $i$-th iteration; $P_{\Delta X,k}$ represents the error covariance matrix of the offset information difference at time $k$; $I_m$ represents the identity matrix; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $H_k^T$ represents the transpose of the observation matrix $H_k$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$.
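The analogous sketch for the second solution formula (again with the gain $K_{\Delta X}$ in the assumed standard form):

```python
import numpy as np

def update_offset(dx_prev, R_dx, y_res, H, Sigma):
    """Second solution formula: update ΔX_k from E^(i)[Y_k - H_k X_k] (= y_res)."""
    S = H @ R_dx @ H.T + Sigma                         # residual covariance
    K_dx = R_dx @ H.T @ np.linalg.inv(S)               # gain matrix K_ΔX
    dx_new = dx_prev + K_dx @ (y_res - H @ dx_prev)
    P_dx = (np.eye(len(dx_prev)) - K_dx @ H) @ R_dx    # offset error covariance
    return dx_new, P_dx
```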
Preferably, the iteration condition of the variational inference is specifically:

$$\frac{\big\|\hat X_k^{(i+1)}-\hat X_k^{(i)}\big\|}{\big\|\hat X_k^{(i)}\big\|}\le\varepsilon$$

where $i$ represents the iteration number, $k$ represents the time, $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$, $\hat X_k^{(i)}$ represents the state vector of the $i$-th iteration at time $k$, and $\varepsilon$ represents the iteration threshold; the iteration threshold is set manually.
The step of ending the iterative update once the iteration condition of the variational inference is met is specifically as follows:
if, after the $(i+1)$-th iteration, the state vector of the current moment meets the iteration condition of the variational inference, the iteration ends;
if, after the $(i+1)$-th iteration, the state vector of the current moment does not meet the iteration condition of the variational inference, the expected values of the iteration parameters of the current iteration are taken as the iteration parameters of the next iteration and the next iterative update begins.
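Putting the pieces together, one fusion step at time $k$ might look like the following hypothetical loop, reusing `update_state` and `update_offset` from the sketches above and the relative-change stopping rule:

```python
import numpy as np

def vb_fuse(x_pred, P_pred, dx_prev, R_dx, y, H, Sigma, eps=1e-4, max_iter=50):
    """Alternate the two variational updates until the relative change of the
    state vector falls below eps, then return the fused data for time k."""
    E_dx = dx_prev                                   # E^(0)[ΔX_k] = ΔX_{k-1}
    x_i = x_pred
    for _ in range(max_iter):
        x_new, P_new = update_state(x_pred, P_pred, y, H, Sigma, E_dx)
        dx_new, P_dx = update_offset(dx_prev, R_dx, y - H @ x_new, H, Sigma)
        if np.linalg.norm(x_new - x_i) <= eps * np.linalg.norm(x_i):
            break                                    # iteration condition met
        x_i, E_dx = x_new, dx_new                    # expectations feed iteration i+1
    return x_new, P_new, dx_new, P_dx
```

On exit, the returned quadruple corresponds to the fusion data named above: the state vector, the prediction error covariance matrix, the offset information difference and the offset covariance matrix; the next time step would then reuse `dx_new` as $\Delta X_{k-1}$ when predicting the offset.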
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. The data fusion processing method for robot positioning and mapping is characterized by comprising the following steps:
predicting the state vector of the robot at the current moment from the state vector of the robot at the previous moment, the prediction error covariance matrix of the previous moment and the measurement data of the current moment; calculating the prediction error covariance matrix of the robot at the current moment from the error covariance matrix of the previous moment and the noise covariance matrix;
predicting the offset information difference of the robot at the current moment from the offset information difference of the robot at the previous moment; the offset information difference is the information difference between the visual information and the measurement data;
obtaining the observed quantity of the robot, and obtaining a joint posterior probability density function from the offset information difference of the robot at the current moment, the predicted state vector of the current moment, the prediction error covariance matrix and the observed quantity;
assigning initial iteration parameters, iteratively updating the joint posterior probability density function according to a variational inference method, ending the iterative update once the iteration condition of the variational inference is met, and obtaining the fusion data of the robot at the current moment from the joint posterior probability density function at the current moment; the fusion data of the robot comprise the state vector and prediction error covariance matrix of the current moment, and the offset information difference and offset covariance matrix of the current moment;
the step of obtaining the joint posterior probability density function from the offset information difference of the robot at the current moment, the predicted state vector of the current moment, the prediction error covariance matrix and the observed quantity is specifically:
establishing a prior probability density function from the predicted state vector of the robot at the current moment and the prediction error covariance matrix of the current moment;
establishing a Gaussian distribution function of the offset information difference at the current moment from the offset information difference at the current moment and the visual information of the robot;
establishing a likelihood function from the observed quantity of the current robot and the offset information difference of the current moment;
acquiring the joint posterior probability density function from the prior probability density function, the Gaussian distribution function and the likelihood function;
the joint posterior probability density function is specifically:

$$p(X_k,\Delta X_k\mid Y_{1:k})=\frac{N\big(Y_k;\,H_kX_k+H_k\Delta X_k,\,\Sigma_k\big)\,N\big(X_k;\,X_{k|k-1},\,P_{k|k-1}\big)\,N\big(\Delta X_k;\,\Delta X_{k-1},\,R_{\Delta X}\big)}{p(Y_{1:k-1})}$$

where $p(X_k,\Delta X_k\mid Y_{1:k})$ represents the joint posterior probability density function value; $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $k$ represents the time; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$ represents the offset information difference at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $p(Y_{1:k-1})$ represents the observed-quantity probability distribution from time 1 to time $k-1$.
2. The data fusion processing method for robot positioning and mapping according to claim 1, wherein the variational inference method is a variational Bayesian method, according to which the optimal solution formula satisfies:

$$\log q(\theta)=E_{\Theta_k^{(-\theta)}}\big[\log p(\Theta_k,Y_{1:k})\big]+C_\theta$$

where $\theta$ represents any one element of the set $\Theta_k$; $\Theta_k^{(-\theta)}$ represents the elements of $\Theta_k$ other than $\theta$; $C_\theta$ represents the quantity in the set $\Theta_k$ independent of the element $\theta$; $E[\cdot]$ represents the expected value; $p(\Theta_k,Y_{1:k})$ represents the joint probability density function;
the step of iteratively updating the joint posterior probability density function according to the variational inference method comprises: substituting the iteration parameters and the joint posterior probability density function into the optimal solution formula to obtain the joint posterior probability density function at the current moment;
the logarithm of the joint posterior probability density function is as follows:

$$\begin{aligned}\log p(X_k,\Delta X_k\mid Y_{1:k})=&-\tfrac{1}{2}\big(Y_k-H_kX_k-H_k\Delta X_k\big)^T\Sigma_k^{-1}\big(Y_k-H_kX_k-H_k\Delta X_k\big)\\&-\tfrac{1}{2}\big(X_k-X_{k|k-1}\big)^TP_{k|k-1}^{-1}\big(X_k-X_{k|k-1}\big)\\&-\tfrac{1}{2}\big(\Delta X_k-\Delta X_{k-1}\big)^TR_{\Delta X}^{-1}\big(\Delta X_k-\Delta X_{k-1}\big)+C_{\Theta_k}\end{aligned}$$

where $k$ represents the time; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$ represents the offset information difference at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $C_{\Theta_k}$ represents the quantity independent of the set $\Theta_k=\{X_k,\Delta X_k\}$.
3. The data fusion processing method for robot positioning and mapping according to claim 2, wherein the step of iteratively updating the joint posterior probability density function according to the variational inference method further comprises: obtaining an approximate probability density function of the state vector and an approximate probability density function of the offset information difference from the joint posterior probability density function at the current moment;
the step of obtaining the fusion data of the robot at the current moment from the joint posterior probability density function at the current moment is specifically: obtaining the state vector and the prediction error covariance matrix at the current moment from the approximate probability density function of the state vector; and obtaining the offset information difference and the offset covariance matrix at the current moment from the approximate probability density function of the offset information difference.
4. The method for data fusion processing of robot positioning and mapping according to claim 3, wherein the approximate probability density function of the state vector is obtained as follows:
substituting the optimal solution formula into the logarithm of the joint posterior probability density function and letting $\theta=X_k$ gives:

$$\begin{aligned}\log q^{(i+1)}(X_k)=&-\tfrac{1}{2}\big(Y_k-H_kX_k-H_kE^{(i)}[\Delta X_k]\big)^T\Sigma_k^{-1}\big(Y_k-H_kX_k-H_kE^{(i)}[\Delta X_k]\big)\\&-\tfrac{1}{2}\big(X_k-X_{k|k-1}\big)^TP_{k|k-1}^{-1}\big(X_k-X_{k|k-1}\big)+C_{X_k}\end{aligned}$$

where $i$ represents the iteration number and $k$ represents the time; $q^{(i+1)}(X_k)$ represents the approximate probability density function of the state vector after the $(i+1)$-th iteration at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $\Delta X_k$, the offset information difference at time $k$, is the iteration parameter of the $i$-th iteration; $E^{(i)}[\Delta X_k]$ represents the expected value of $\Delta X_k$ at the $i$-th iteration; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $C_{X_k}$ represents the quantity in the set $\Theta_k$ independent of $X_k$;

from the form of $\log q^{(i+1)}(X_k)$ it is judged that the approximate probability density function of the state vector follows a Gaussian distribution:

$$q^{(i+1)}(X_k)=N\big(X_k;\,\hat X_k^{(i+1)},\,P_{k|k}\big)$$

where $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$, and $P_{k|k}$ is the error covariance matrix at time $k$; the state vector and the prediction error covariance matrix are obtained according to a first solution formula.
5. The method for processing data fusion of robot positioning and mapping according to claim 4, wherein the first solution formula is specifically:
$$\hat X_k^{(i+1)}=X_{k|k-1}+K_k\big(Y_k-H_kX_{k|k-1}-H_kE^{(i)}[\Delta X_k]\big)$$

$$K_k=P_{k|k-1}H_k^T\big(H_kP_{k|k-1}H_k^T+\Sigma_k\big)^{-1}$$

$$P_{k|k}=(I_m-K_kH_k)P_{k|k-1}$$

where $i$ represents the iteration number and $k$ represents the time; $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$; $X_{k|k-1}$ represents the predicted state vector at time $k$; $K_k$ represents the gain matrix at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $\Delta X_k$, the offset information difference at time $k$, is the iteration parameter of the $i$-th iteration; $E^{(i)}[\Delta X_k]$ represents its expected value at the $i$-th iteration; $P_{k|k-1}$ represents the prediction error covariance matrix at time $k$; $H_k^T$ represents the transpose of the observation matrix $H_k$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $P_{k|k}$ is the error covariance matrix at time $k$; $I_m$ represents the identity matrix.
6. The data fusion processing method for robot positioning and mapping according to claim 3, wherein the approximate probability density function of the offset information difference is obtained as follows: substituting the optimal solution formula into the logarithm of the joint posterior probability density function and letting $\theta=\Delta X_k$ gives:

$$\begin{aligned}\log q^{(i+1)}(\Delta X_k)=&-\tfrac{1}{2}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_k\big)^T\Sigma_k^{-1}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_k\big)\\&-\tfrac{1}{2}\big(\Delta X_k-\Delta X_{k-1}\big)^TR_{\Delta X}^{-1}\big(\Delta X_k-\Delta X_{k-1}\big)+C_{\Delta X_k}\end{aligned}$$

where $i$ represents the iteration number and $k$ represents the time; $q^{(i+1)}(\Delta X_k)$ represents the approximate probability density function of the offset information difference after the $(i+1)$-th iteration at time $k$; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $Y_k-H_kX_k$ is the iteration parameter of the $i$-th iteration, and $E^{(i)}[Y_k-H_kX_k]$ represents its expected value at the $i$-th iteration; $\Delta X_k$ represents the offset information difference at time $k$; $(\cdot)^T$ represents the transposed matrix; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $C_{\Delta X_k}$ represents the quantity in the set $\Theta_k$ independent of $\Delta X_k$;

from the form of $\log q^{(i+1)}(\Delta X_k)$ it is judged that the approximate probability density function of the offset information difference follows a Gaussian distribution:

$$q^{(i+1)}(\Delta X_k)=N\big(\Delta X_k;\,\Delta\hat X_k^{(i+1)},\,P_{\Delta X,k}\big)$$

where $N(\cdot;\cdot,\cdot)$ represents a Gaussian distribution function; $\Delta\hat X_k^{(i+1)}$ represents the offset information difference of the $(i+1)$-th iteration at time $k$; $P_{\Delta X,k}$ represents the error covariance matrix of the offset information difference at time $k$; the offset information difference and its covariance matrix are obtained according to a second solution formula.
7. The data fusion processing method for robot positioning and mapping according to claim 6, wherein the second solution formula is specifically:

$$\Delta\hat X_k^{(i+1)}=\Delta X_{k-1}+K_{\Delta X}\big(E^{(i)}[Y_k-H_kX_k]-H_k\Delta X_{k-1}\big)$$

$$K_{\Delta X}=R_{\Delta X}H_k^T\big(H_kR_{\Delta X}H_k^T+\Sigma_k\big)^{-1}$$

$$P_{\Delta X,k}=(I_m-K_{\Delta X}H_k)R_{\Delta X}$$

where $i$ represents the iteration number and $k$ represents the time; $\Delta\hat X_k^{(i+1)}$ represents the offset information difference of the $(i+1)$-th iteration at time $k$; $\Delta X_{k-1}$ represents the offset information difference at time $k-1$; $K_{\Delta X}$ represents the gain matrix of the offset information difference; $Y_k$ represents the observed quantity at time $k$; $H_k$ represents the observation matrix at time $k$; $X_k$ represents the state vector at time $k$; $Y_k-H_kX_k$ is the iteration parameter of the $i$-th iteration, and $E^{(i)}[Y_k-H_kX_k]$ represents its expected value at the $i$-th iteration; $P_{\Delta X,k}$ represents the error covariance matrix of the offset information difference at time $k$; $I_m$ represents the identity matrix; $R_{\Delta X}$ represents the error covariance matrix of the offset information difference; $H_k^T$ represents the transpose of the observation matrix $H_k$; $\Sigma_k$ represents the covariance matrix of the likelihood function at time $k$.
8. The data fusion processing method for robot positioning and mapping according to any one of claims 1 to 7, wherein the iteration condition of the variational inference is specifically:

$$\frac{\big\|\hat X_k^{(i+1)}-\hat X_k^{(i)}\big\|}{\big\|\hat X_k^{(i)}\big\|}\le\varepsilon$$

where $i$ represents the iteration number, $k$ represents the time, $\hat X_k^{(i+1)}$ represents the state vector of the $(i+1)$-th iteration at time $k$, $\hat X_k^{(i)}$ represents the state vector of the $i$-th iteration at time $k$, and $\varepsilon$ represents the iteration threshold;
the step of ending the iterative update once the iteration condition of the variational inference is met is specifically: if, after the $(i+1)$-th iteration, the state vector of the current moment meets the iteration condition of the variational inference, the iteration ends; if it does not, the expected values of the iteration parameters of the current iteration are taken as the iteration parameters of the next iteration and the next iterative update begins.
CN202310049730.XA 2023-02-01 2023-02-01 Data fusion processing method for robot positioning and map building Active CN115930971B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310049730.XA CN115930971B (en) 2023-02-01 2023-02-01 Data fusion processing method for robot positioning and map building

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310049730.XA CN115930971B (en) 2023-02-01 2023-02-01 Data fusion processing method for robot positioning and map building

Publications (2)

Publication Number Publication Date
CN115930971A CN115930971A (en) 2023-04-07
CN115930971B (en) 2023-09-19

Family

ID=86651027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310049730.XA Active CN115930971B (en) 2023-02-01 2023-02-01 Data fusion processing method for robot positioning and map building

Country Status (1)

Country Link
CN (1) CN115930971B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118034304A (en) * 2024-03-05 2024-05-14 广州市东鼎智能装备有限公司 Robot path planning method and system based on real-time modeling

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106052678A (en) * 2016-05-23 2016-10-26 中国空间技术研究院 Polymerization type star sensor and satellite attitude determination method thereof
CN109376785A (en) * 2018-10-31 2019-02-22 东南大学 Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision
CN109508445A (en) * 2019-01-14 2019-03-22 哈尔滨工程大学 A kind of method for tracking target for surveying noise and variation Bayesian adaptation Kalman filtering with colo(u)r specification
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information
WO2021035669A1 (en) * 2019-08-30 2021-03-04 深圳市大疆创新科技有限公司 Pose prediction method, map construction method, movable platform, and storage medium
WO2022062177A1 (en) * 2020-09-28 2022-03-31 南京邮电大学 Adaptive indoor fusion positioning method based on dynamic environment
CN115388899A (en) * 2022-09-15 2022-11-25 七腾机器人有限公司 Mobile robot vision inertia fusion SLAM method based on variational Bayes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110268354A (en) * 2019-05-09 2019-09-20 珊口(深圳)智能科技有限公司 Update the method and mobile robot of map

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106052678A (en) * 2016-05-23 2016-10-26 中国空间技术研究院 Polymerization type star sensor and satellite attitude determination method thereof
CN109376785A (en) * 2018-10-31 2019-02-22 东南大学 Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision
WO2020087846A1 (en) * 2018-10-31 2020-05-07 东南大学 Navigation method based on iteratively extended kalman filter fusion inertia and monocular vision
CN109508445A (en) * 2019-01-14 2019-03-22 哈尔滨工程大学 A kind of method for tracking target for surveying noise and variation Bayesian adaptation Kalman filtering with colo(u)r specification
CN110345944A (en) * 2019-05-27 2019-10-18 浙江工业大学 Merge the robot localization method of visual signature and IMU information
WO2021035669A1 (en) * 2019-08-30 2021-03-04 深圳市大疆创新科技有限公司 Pose prediction method, map construction method, movable platform, and storage medium
WO2022062177A1 (en) * 2020-09-28 2022-03-31 南京邮电大学 Adaptive indoor fusion positioning method based on dynamic environment
CN115388899A (en) * 2022-09-15 2022-11-25 七腾机器人有限公司 Mobile robot vision inertia fusion SLAM method based on variational Bayes

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on intelligent vehicle positioning technology based on visual-inertial fusion; Yu Zhelin; China Master's Theses Full-text Database, Engineering Science & Technology II (No. 5); C035-374 *

Also Published As

Publication number Publication date
CN115930971A (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111207774B (en) Method and system for laser-IMU external reference calibration
CN110706279B (en) Global position and pose estimation method based on information fusion of global map and multiple sensors
CN108519615B (en) Mobile robot autonomous navigation method based on combined navigation and feature point matching
CN111136660A (en) Robot pose positioning method and system
CN111337020A (en) Factor graph fusion positioning method introducing robust estimation
CN112965063B (en) Robot mapping and positioning method
CN112230242A (en) Pose estimation system and method
CN115930971B (en) Data fusion processing method for robot positioning and map building
CN114485643B (en) Coal mine underground mobile robot environment sensing and high-precision positioning method
CN113447949B (en) Real-time positioning system and method based on laser radar and prior map
CN110929402A (en) Probabilistic terrain estimation method based on uncertain analysis
CN112946681A (en) Laser radar positioning method fusing combined navigation information
CN112254728A (en) Method for enhancing EKF-SLAM global optimization based on key road sign
CN115388899A (en) Mobile robot vision inertia fusion SLAM method based on variational Bayes
Li et al. Aerial-triangulation aided boresight calibration for a low-cost UAV-LiDAR system
Peng et al. Vehicle odometry with camera-lidar-IMU information fusion and factor-graph optimization
CN114608568A (en) Multi-sensor-based information instant fusion positioning method
CN117388830A (en) External parameter calibration method, device, equipment and medium for laser radar and inertial navigation
CN109655057B (en) Filtering optimization method and system for accelerator measurement value of six-push unmanned aerial vehicle
CN115597595B (en) Multi-line laser positioning method and positioning device, computer equipment and storage medium
CN111812668B (en) Winding inspection device, positioning method thereof and storage medium
CN113503872B (en) Low-speed unmanned aerial vehicle positioning method based on fusion of camera and consumption-level IMU
CN115950414A (en) Adaptive multi-fusion SLAM method for different sensor data
CN115344033A (en) Monocular camera/IMU/DVL tight coupling-based unmanned ship navigation and positioning method
CN113340310A (en) Step terrain identification and positioning method for mobile robot and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Data Fusion Processing Method for Robot Localization and Mapping

Granted publication date: 20230919

Pledgee: Industrial and Commercial Bank of China Limited Chongqing Liangjiang Branch

Pledgor: Seven Teng Robot Co.,Ltd.

Registration number: Y2024980010233