CN110702107A - Monocular vision inertial combination positioning navigation method - Google Patents

Monocular vision inertial combination positioning navigation method

Info

Publication number
CN110702107A
Authority
CN
China
Prior art keywords
imu
visual
coordinate system
pose
state vector
Prior art date
Legal status
Pending
Application number
CN201911003161.5A
Other languages
Chinese (zh)
Inventor
康健
王志军
冯辰
于艺晗
Current Assignee
BEIJING VEKETEC TECHNOLOGY Co Ltd
Original Assignee
BEIJING VEKETEC TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by BEIJING VEKETEC TECHNOLOGY Co Ltd filed Critical BEIJING VEKETEC TECHNOLOGY Co Ltd
Priority to CN201911003161.5A
Publication of CN110702107A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/10 - Navigation by using measurements of speed or acceleration
    • G01C 21/12 - Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C 21/16 - Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C 21/165 - Inertial navigation combined with non-inertial navigation instruments
    • G01C 21/005 - Navigation with correlation of navigation data from several sources, e.g. map or contour matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)
  • Navigation (AREA)

Abstract

The invention provides a monocular vision inertial combination positioning navigation method, which comprises the following steps: collecting a video stream and an IMU data stream, and packing and aligning them; initializing, namely performing visual initialization and IMU initialization, determining the conversion relation between the visual coordinate system and the IMU world coordinate system, determining an initial value of the scale through nonlinear optimization, and refining the scale estimate with the λI-EKF; acquiring the inertial navigation data in the IMU data stream, obtaining the IMU pose by combining complementary filtering with pre-integration, tracking image features in the video stream, and obtaining the visual pose with the IMU pose variation as reference; and judging whether visual tracking is lost: if so, performing motion tracking with the IMU pose; if not, fusing the visual pose and the IMU pose by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method to obtain the final camera pose. Aiming at the defects of the prior art, the invention improves the precision of the scale and keeps high precision and high robustness throughout the positioning process.

Description

Monocular vision inertial combination positioning navigation method
Technical Field
The invention relates to the technical field of positioning and navigation, in particular to a monocular vision inertial combination positioning and navigation method.
Background
In recent years, with the development of emerging technologies such as autonomous driving, augmented reality, virtual reality, robotics and unmanned aerial vehicle navigation, integrated navigation has gradually become a popular research direction in the field of positioning and navigation. Among the different combinations of navigation sensors, the monocular visual-inertial combination provides an inexpensive and highly promising solution. A monocular camera is cheap, consumes little power and provides rich environmental information, but a purely monocular visual navigation method suffers from scale ambiguity and is easily affected by environmental conditions. An inertial sensor provides self-motion information and is not easily affected by occlusion or illumination, but a purely inertial navigation method is affected by its own drift and accumulated errors: it has good precision only near the initialization moment and gradually diverges as time goes on. Combining a monocular camera and an inertial sensor allows them to compensate for each other's limitations.
Disclosure of Invention
Aiming at the defects of the prior art, the invention adopts nonlinear optimization and the λI-EKF (Lambda Iterated Extended Kalman Filter) algorithm to refine the estimate of the scale during initialization, and adopts the IDSF (Dual Sensor Iterated Extended Kalman Filter) algorithm to fuse the monocular camera and the inertial sensor, thereby providing a monocular vision inertial combination positioning and navigation method that has higher scale precision and maintains high precision and high robustness during positioning.
The technical scheme adopted by the invention for solving the technical problems is as follows:
step one, acquiring a video stream through a monocular camera, and acquiring an IMU data stream through the IMU;
step two, aligning the collected video stream and the IMU data stream, and packing the aligned video stream and IMU data stream together;
step three, initializing system data; if the initialization is successful, sequentially executing step four and step five; if the initialization fails, initializing again; the initialization process comprises visual initialization, IMU initialization, determination of the conversion relation between the visual coordinate system and the IMU world coordinate system, determination of an initial scale value through nonlinear optimization, and refined estimation of the scale using the λI-EKF;
step four, acquiring the inertial navigation data in the IMU data stream, performing complementary filtering on the inertial navigation data to obtain the attitude variation, combining it with pre-integration to obtain the IMU pose, and passing the IMU pose variation to step five as the reference variation for calculating the pose of the current visual frame;
step five, extracting image features in the video stream frame by frame, and performing visual tracking and matching with the extracted image features to obtain the visual pose;
step six, judging whether the visual tracking in step five is lost; if the visual tracking is lost and relocalization fails, performing motion positioning with the IMU pose; if the visual tracking is not lost, fusing the visual pose and the IMU pose by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method to obtain the camera pose finally used for positioning and navigation.
The alignment in step two comprises timestamp synchronization; the packing comprises data packing.
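As an illustration of this packing step, the sketch below pairs each camera frame with the IMU samples that fall between consecutive frame timestamps; the data layout and field names are assumptions for illustration and are not taken from the filing.

```python
from bisect import bisect_left

def pack_streams(frames, imu_samples):
    """Pair each video frame with the IMU samples recorded since the previous
    frame, using timestamps for synchronization (illustrative sketch)."""
    imu_times = [s["t"] for s in imu_samples]   # assumed: samples sorted by time
    packets, prev_t = [], float("-inf")
    for frame in frames:
        lo = bisect_left(imu_times, prev_t)
        hi = bisect_left(imu_times, frame["t"])
        packets.append({"image": frame["image"],
                        "t": frame["t"],
                        "imu": imu_samples[lo:hi]})
        prev_t = frame["t"]
    return packets
```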
The initialization in step three specifically comprises:
3.1) visual initialization: selecting consecutive image frames whose number of feature points is larger than a set value for matching, and performing visual initialization when the number of matched point pairs of the current frame is larger than the set value; if the number of matched point pairs is smaller than the set value, reselecting image frames; if visual initialization can be carried out, computing the homography matrix and the fundamental matrix simultaneously with the RANSAC 8-point method, selecting the better of the two models to recover the camera pose, then continuing to compute the camera pose with a visual odometer, and at the same time converting the camera pose to obtain the visual pose;
3.2) outputting the visual pose obtained by visual initialization;
3.3) IMU initialization: configuring the camera-IMU extrinsic parameters, whose initial values are the calibrated values and which consist of the translation between the camera and the IMU and the rotation quaternion between the camera and the IMU; configuring the initial gravity vector g and calculating the pose of the IMU;
3.4) outputting the IMU pose and the gravity vector g obtained by IMU initialization;
3.5) configuring the initial gravity vector in the step 3.3): the camera pose of a certain still image frame is obtained through visual initialization, and the accelerometer reading of the IMU frame corresponding to the still image frame is used as a gravity vector g;
3.6) determining the conversion relation between the visual coordinate system and the world coordinate system of the IMU: the position of the still image frame is the origin of the visual coordinate system of the system, and its position and attitude define the visual coordinate system; this still image frame is also the first visual frame of the system, and the visual coordinate system is denoted vicon. The position and attitude of the IMU frame corresponding to the still image frame define the world coordinate system of the IMU; this IMU frame is also the first IMU frame of the system, and the world coordinate system of the IMU is denoted world. At this point the camera-IMU extrinsic parameters give the conversion relation between the visual coordinate system and the world coordinate system of the IMU, consisting of the translation between the visual coordinate system and the world coordinate system of the IMU and the rotation quaternion between the visual coordinate system and the world coordinate system of the IMU; the scale λ involved in this conversion relation takes its initially configured value;
3.7) determining the initial value of the scale by nonlinear optimization: since the motion of the camera and the motion of the IMU over the same time interval are actually consistent, the conversion relation between the IMU pose and the visual pose is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU, the IMU motion model and the known gravity vector, and a least-squares solution is then obtained through nonlinear optimization to give the initial estimate of the scale;
3.8) refining the scale with the λI-EKF: the value of the scale λ is further refined by the λI-EKF (Lambda Iterated Extended Kalman Filter) algorithm, and the state vector X is updated, where b_w and b_a are the zero drift of the gyroscope in the IMU and the zero drift of the accelerometer in the IMU, respectively.
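The expressions for X themselves are not reproduced in this text (they appear only as equation images). As a hedged reconstruction, assuming a Weiss-style visual-inertial filter state consistent with the quantities named here and with the 25-dimensional error state mentioned in step 4.3) below, the state vector could take a form such as:

```latex
X = \left[\; p_i^{w\top} \;\; v_i^{w\top} \;\; q_i^{w\top} \;\;
      b_w^{\top} \;\; b_a^{\top} \;\; \lambda \;\;
      p_c^{i\top} \;\; q_c^{i\top} \;\; q_w^{v\top} \;\right]^{\top}
```

With attitude errors expressed as three-dimensional rotation errors, this ordering gives 3+3+3+3+3+1+3+3+3 = 25 error dimensions, consistent with the text; the exact symbols and ordering in the filing may differ.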
The method for calculating the visual pose in step 3.1) specifically comprises: according to the visual coordinate system determined in step 3.6), converting the camera pose into the visual pose in the visual coordinate system, expressed as the camera translation in the visual coordinate system and the camera rotation quaternion in the visual coordinate system.
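A minimal sketch of applying the scaled conversion relation of step 3.6) to bring a pose expressed in the visual coordinate system into the IMU world coordinate system; the symbols q_vw and t_vw and the quaternion convention are assumptions for illustration, not notation from the filing.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def vision_to_imu_world(p_v, q_v, lam, q_vw, t_vw):
    """Apply the vision-to-IMU-world conversion: scale by lambda, rotate, translate."""
    R_vw = R.from_quat(q_vw)                   # rotation: visual frame -> IMU world frame
    p_w = lam * R_vw.apply(p_v) + t_vw         # scaled, rotated, translated position
    q_w = (R_vw * R.from_quat(q_v)).as_quat()  # composed orientation
    return p_w, q_w
```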
The method for calculating the IMU pose in step 3.3) specifically comprises: after the conversion relation between the visual coordinate system and the world coordinate system of the IMU has been determined according to step 3.6), the velocity is obtained by integrating successive accelerations and the position by integrating successive velocities; the attitude variation is obtained by complementary filtering of the inertial navigation data; and the IMU pose is obtained by combining the attitude variation and the velocity variation. The result is expressed as the IMU translation in the IMU's world coordinate system, the IMU rotation quaternion in the IMU's world coordinate system, and the IMU velocity in the IMU's world coordinate system.
Step 3.8) specifically comprises:
4.1) inputting the IMU pose and the gravity vector g obtained by IMU initialization, and inputting the initial value of the scale λ obtained in step 3.7);
4.2) constructing the state vector X: for convenience, shorthand symbols are introduced for its components, and the state vector X is assembled from them;
4.3) from the state vector X and the motion model and noise model of the IMU, the error of the state vector can be derived; the error vector is the 25-dimensional system state error term relative to the estimated value of the state vector, and the transition matrix of the error state is obtained through the prediction process of the extended Kalman filter (among its blocks, E = -A). The entries of this matrix involve the antisymmetric matrix of the angular velocity ω, the acceleration measurement a_m of the IMU, the angular velocity measurement ω_m of the IMU, the estimates of the accelerometer zero drift and of the gyroscope zero drift, the estimates of the acceleration and of the angular velocity, the time interval Δt between two state vectors, and the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU. From the derived matrix, the prediction error state and its covariance matrix are obtained according to the extended Kalman filter, so that the state vector is predicted and the corresponding prediction error covariance matrix is obtained;
4.4) inputting the visual pose obtained by visual initialization;
4.5) correcting the system state through the visual pose: an observation error model is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters, and the observation matrix H_v is obtained by discretizing the observation error model; its entries involve the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU. According to this derivation and the extended Kalman filter, the system state vector is updated, and the state error term is obtained through the Kalman gain, yielding the updated state vector and the state error covariance matrix;
4.6) judging whether the change of the scale λ in the state vector X before and after updating is smaller than a threshold μ; if so, outputting the updated scale λ and the updated state vector; if not, performing repeated iterative updates until the condition is met. Each iterative update proceeds as follows: perform a Taylor expansion of the observation model at the currently updated state vector to obtain a first-order approximation, recompute the Kalman gain according to the update process of the extended Kalman filter using the predicted state vector, and solve for the updated state vector and state error covariance matrix with the new Kalman gain; the change of the scale λ is then checked again to decide whether the result can be output.
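A minimal sketch of this iterated update loop, assuming generic callables h (observation model) and H_jac (its Jacobian) and a measurement noise covariance R; all names, the index lam_idx of the scale entry, and the default values are illustrative assumptions, not taken from the filing.

```python
import numpy as np

def lambda_iekf_update(x_pred, P_pred, z, h, H_jac, R, lam_idx, mu=1e-4, max_iter=10):
    """Iterated EKF measurement update: relinearize, recompute the gain, and stop
    once the scale entry of the state changes by less than the threshold mu."""
    x_i = x_pred.copy()
    for _ in range(max_iter):
        H = H_jac(x_i)                        # Taylor-expand the observation model at x_i
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x_new = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
        converged = abs(x_new[lam_idx] - x_i[lam_idx]) < mu
        x_i = x_new
        if converged:
            break
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_i, P_upd
```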
The inertial navigation data in step four are the three-axis acceleration and the three-axis angular velocity. Pre-integration refers to integrating the IMU measurements between key frames into a relative-motion constraint.
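A minimal sketch of accumulating the IMU samples between two key frames into relative rotation, velocity and position increments; it assumes bias-corrected measurements, a simple Euler integration and no noise terms, so it stands in for, rather than reproduces, the pre-integration used in the method.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(accs, gyros, dts, b_a, b_w):
    """Accumulate IMU samples between two key frames into relative rotation,
    velocity and position increments (simplified pre-integration sketch)."""
    dR = R.identity()
    dv = np.zeros(3)
    dp = np.zeros(3)
    for a_m, w_m, dt in zip(accs, gyros, dts):
        a = np.asarray(a_m) - b_a              # bias-corrected acceleration
        w = np.asarray(w_m) - b_w               # bias-corrected angular velocity
        dp = dp + dv * dt + 0.5 * dR.apply(a) * dt**2
        dv = dv + dR.apply(a) * dt
        dR = dR * R.from_rotvec(w * dt)         # integrate the rotation increment
    return dR, dv, dp
```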
In step four, performing complementary filtering on the inertial navigation data to obtain the attitude variation specifically includes:
6.1) normalizing the measured acceleration to obtain a_b;
6.2) transforming the standard gravitational acceleration g_n = [0 0 1]^T into the IMU coordinate system to obtain g_b;
6.3) taking the vector cross product of g_b and a_b to obtain the correction compensation value e of the angular velocity;
6.4) feeding the correction compensation value e back onto the filtered angular velocity with a PI controller to obtain the corrected angular velocity;
6.5) converting the corrected angular velocity into a quaternion through the quaternion differential equation, normalizing the resulting quaternion to obtain the attitude matrix, and computing the attitude variation.
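A minimal Mahony-style sketch of this complementary filter step, assuming a constant sample period and illustrative PI gains kp and ki (neither the gains nor the sign convention of the cross product is specified in the filing).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def complementary_filter_step(q, a_m, w_m, e_int, dt, kp=1.0, ki=0.01):
    """One complementary-filter step: PI-correct the gyro rate with the
    accelerometer gravity direction, then integrate the quaternion."""
    a_b = np.asarray(a_m) / np.linalg.norm(a_m)          # 6.1) normalized acceleration
    g_b = R.from_quat(q).inv().apply([0.0, 0.0, 1.0])    # 6.2) gravity direction in IMU frame
    e = np.cross(a_b, g_b)                               # 6.3) cross-product correction value
    e_int = e_int + e * dt                               # integral term of the PI controller
    w_corr = np.asarray(w_m) + kp * e + ki * e_int       # 6.4) corrected angular velocity
    dq = R.from_rotvec(w_corr * dt)                      # 6.5) quaternion update
    q_new = (R.from_quat(q) * dq).as_quat()
    return q_new / np.linalg.norm(q_new), e_int
```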
The image features in step five are local descriptors of the image.
In step six, fusing the visual pose and the IMU pose by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method specifically comprises:
8.1) inputting a plurality of IMU poses;
8.2) constructing the state vector X;
8.3) according to the extended Kalman filter, the prediction error state and its covariance matrix are computed from the state vector X and the motion model and noise model of the IMU; the system noise covariance matrix is added to compensate errors in the state prediction, and the prediction error state covariance matrix is weighted and inflated, so that the system state vector is predicted and a correspondingly adjusted prediction error covariance matrix is obtained;
8.4) inputting a visual pose;
8.5) correcting the system state through the visual pose, and updating the system state vector according to the extended Kalman filter using the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters;
8.6) judging whether the change of the trace of the state error covariance matrix before and after updating is smaller than a threshold δ; if so, outputting the updated state vector; if not, performing repeated iterative updates until the condition is met. Each iterative update proceeds as follows: perform a Taylor expansion of the observation model at the currently updated state vector to obtain a first-order approximation, recompute the Kalman gain according to the update process of the extended Kalman filter using the adjusted prediction error covariance matrix, and solve for the updated state vector and state error covariance matrix with the new Kalman gain; whether the change of the trace of the state error covariance matrix has converged below the set threshold δ is then checked again to decide whether the result can be output.
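A minimal sketch of the fusion loop described above, assuming a simple multiplicative inflation of the predicted covariance and a trace-based convergence test; the callables h and H_jac, the inflation factor and all other names are illustrative assumptions rather than quantities from the filing.

```python
import numpy as np

def idsf_update(x_pred, P_pred, z, h, H_jac, R, Q_sys,
                delta=1e-6, inflation=1.05, max_iter=10):
    """Dual-sensor iterated update: add system noise, inflate the predicted
    covariance, then iterate the correction until the trace of P converges."""
    P = inflation * (P_pred + Q_sys)          # weighted / expanded prediction covariance
    x_i = x_pred.copy()
    prev_trace = np.trace(P)
    for _ in range(max_iter):
        H = H_jac(x_i)                        # relinearize the observation model
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_i = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
        P_upd = (np.eye(len(x_pred)) - K @ H) @ P
        if abs(np.trace(P_upd) - prev_trace) < delta:   # trace-based convergence test
            break
        prev_trace = np.trace(P_upd)
    return x_i, P_upd
```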
Compared with the prior art, the invention has the following advantages and beneficial effects:
the method uses nonlinear optimization and lambda I-EKF to accurately estimate the scale of the system in initialization, and compared with the existing monocular vision inertial combination system, the method has higher scale accuracy; meanwhile, data fusion is carried out on the monocular camera and the inertial sensor by using the IDSF, so that the system achieves the effects of keeping higher precision and higher robustness in the positioning process.
Drawings
FIG. 1 is a flow chart of the monocular vision inertial combination positioning navigation method;
FIG. 2 is a flow chart of initialization in the monocular vision inertial combination positioning navigation method;
FIG. 3 is a flow chart of the λI-EKF in the monocular vision inertial combination positioning navigation method;
FIG. 4 is a flow chart of the IDSF in the monocular vision inertial combination positioning navigation method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In an embodiment of the present invention, as shown in fig. 1 to 4, the present invention provides a monocular vision inertial combination positioning navigation method, which includes steps one to six.
Step one, acquiring a video stream through a monocular camera; and collecting IMU data flow through the IMU.
And step two, aligning the acquired video stream and the IMU data stream, and packaging the aligned video stream and the IMU data stream together.
Wherein the packing alignment includes timestamp synchronization and data packing.
Step three, initializing system data; if the initialization is successful, step four and step five are executed in sequence; if the initialization fails, initialization is carried out again. The initialization process comprises visual initialization, IMU initialization, determination of the conversion relation between the visual coordinate system and the IMU world coordinate system, determination of an initial scale value through nonlinear optimization, and refined estimation of the scale using the λI-EKF.
The third step specifically comprises:
3.1) visual initialization: selecting consecutive image frames whose number of feature points is larger than a set value for matching, and performing visual initialization when the number of matched point pairs of the current frame is larger than the set value; if the number of matched point pairs is smaller than the set value, reselecting image frames; if visual initialization can be carried out, computing the homography matrix and the fundamental matrix simultaneously with the RANSAC 8-point method, selecting the better of the two models to recover the camera pose, then continuing to compute the camera pose with a visual odometer, and at the same time converting the camera pose by the method of step 3.8) to obtain the visual pose;
3.2) outputting the visual pose obtained by visual initialization;
3.3) IMU initialization: configuring the camera-IMU extrinsic parameters, whose initial values are the calibrated values and which consist of the translation between the camera and the IMU and the rotation quaternion between the camera and the IMU; configuring an initial covariance matrix P, configuring an initial gravity vector g, and calculating the pose of the IMU by the method of 3.7);
3.4) outputting the IMU pose and the gravity vector g obtained by IMU initialization;
3.5) there are many ways to configure the initial gravity vector described in step 3.3) above; one of them is the following: the camera pose of a still image frame is obtained through visual initialization, and the accelerometer reading of the IMU frame corresponding to that still image frame is used as the gravity vector g;
3.6) the conversion relation between the visual coordinate system and the world coordinate system of the IMU is determined as follows: if the gravity vector g is computed according to the method of step 3.5), the position of the still image frame is the origin of the visual coordinate system of the system, and its position and attitude define the visual coordinate system; this still image frame is also the first visual frame of the system, and the visual coordinate system is denoted vicon. The position and attitude of the IMU frame corresponding to the still image frame define the world coordinate system of the IMU; this IMU frame is also the first IMU frame of the system, and the world coordinate system of the IMU is denoted world. At this point the camera-IMU extrinsic parameters give the conversion relation between the visual coordinate system and the world coordinate system of the IMU, consisting of the translation between the visual coordinate system and the world coordinate system of the IMU and the rotation quaternion between the visual coordinate system and the world coordinate system of the IMU; the scale λ involved in this conversion relation takes its initially configured value;
3.7) the IMU pose calculation method in step 3.3) specifically comprises: after the conversion relation between the visual coordinate system and the world coordinate system of the IMU has been determined according to step 3.6), the velocity is obtained by integrating successive accelerations and the position by integrating successive velocities; the attitude variation is obtained by complementary filtering of the inertial navigation data; and the pose of the IMU is obtained by combining the attitude variation and the velocity variation. The result is expressed as the IMU translation in the IMU's world coordinate system, the IMU rotation quaternion in the IMU's world coordinate system, and the IMU velocity in the IMU's world coordinate system;
3.8) the method for calculating the visual pose in step 3.1) specifically comprises: according to the visual coordinate system determined in step 3.6), converting the camera pose into the visual pose in the visual coordinate system, expressed as the camera translation in the visual coordinate system and the camera rotation quaternion in the visual coordinate system;
3.9) initialization of the scale λ: since the motion of the camera and the motion of the IMU over the same time interval are actually consistent, the conversion relation between the IMU pose and the visual pose is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU, the IMU motion model and the known gravity vector, and a least-squares solution is then obtained through nonlinear optimization to give the initial estimate of the scale;
3.10) optimizing the scale for more accurate conversion between coordinate systems and more accurate pose calculation: the value of the scale λ is further refined by the λI-EKF (Lambda Iterated Extended Kalman Filter) algorithm and the state vector X is updated, where b_w and b_a are the zero drift of the gyroscope in the IMU and the zero drift of the accelerometer in the IMU, respectively.
Step 3.10) specifically comprises:
3.10.1) inputting the IMU pose and the gravity vector g obtained by IMU initialization, and inputting the initial value of the scale λ obtained in step 3.9);
3.10.2) constructing the state vector X: for convenience, shorthand symbols are introduced for its components, and the state vector X is assembled from them;
3.10.3) from the state vector X and the motion model and noise model of the IMU, the error of the state vector can be derived; the error vector is the 25-dimensional system state error term relative to the estimated value of the state vector, and the transition matrix of the error state is obtained through the prediction process of the extended Kalman filter (among its blocks, E = -A). The entries of this matrix involve the antisymmetric matrix of the angular velocity ω, the acceleration measurement a_m of the IMU, the angular velocity measurement ω_m of the IMU, the estimates of the accelerometer zero drift and of the gyroscope zero drift, the estimates of the acceleration and of the angular velocity, the time interval Δt between two state vectors, and the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU. From the derived matrix, the prediction error state and its covariance matrix are obtained according to the extended Kalman filter, so that the state vector is predicted and the corresponding prediction error covariance matrix is obtained;
3.10.4) inputting the visual pose obtained by visual initialization;
3.10.5) correcting the system state through the visual pose: an observation error model is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters, and the observation matrix H_v is obtained by discretizing the observation error model; its entries involve the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU. According to this derivation and the extended Kalman filter, the system state vector is updated, and the state error term is obtained through the Kalman gain, yielding the updated state vector and the state error covariance matrix;
3.10.6) judging whether the change of the scale λ in the state vector X before and after updating is smaller than a threshold μ; if so, outputting the updated scale λ and the updated state vector; if not, performing repeated iterative updates until the condition is met. Each iterative update proceeds as follows: perform a Taylor expansion of the observation model at the currently updated state vector to obtain a first-order approximation, recompute the Kalman gain according to the update process of the extended Kalman filter using the predicted state vector, and solve for the updated state vector and state error covariance matrix with the new Kalman gain; the change of the scale λ is then checked again to decide whether the result can be output.
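A minimal sketch of the prediction step described in 3.10.3), assuming a generic error-state transition matrix F and process noise covariance Q already assembled from the quantities named above; the propagate_nominal callable and the names are illustrative.

```python
def ekf_predict(x_hat, P, F, Q, propagate_nominal):
    """Error-state EKF prediction: propagate the nominal state with the IMU
    motion model and the error covariance with the transition matrix F."""
    x_pred = propagate_nominal(x_hat)   # integrate the IMU motion model over the interval
    P_pred = F @ P @ F.T + Q            # prediction error covariance
    return x_pred, P_pred
```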
And step four, acquiring the inertial navigation data in the IMU data stream, performing complementary filtering on the inertial navigation data to obtain the attitude variation, combining it with pre-integration to obtain the IMU pose, and passing the IMU pose variation to step five as the reference variation for calculating the pose of the current visual frame.
Pre-integration refers to integrating the IMU measurements between key frames into a relative-motion constraint.
Wherein the inertial navigation data comprises three-axis acceleration and three-axis angular velocity.
The step of obtaining the attitude variation by complementary filtering of the inertial navigation data specifically includes:
4.1) normalizing the measured acceleration to obtain a_b;
4.2) transforming the standard gravitational acceleration g_n = [0 0 1]^T into the IMU coordinate system to obtain g_b;
4.3) taking the vector cross product of g_b and a_b to obtain the correction compensation value e of the angular velocity;
4.4) feeding the correction compensation value e back onto the filtered angular velocity with a PI controller to obtain the corrected angular velocity;
4.5) converting the corrected angular velocity into a quaternion through the quaternion differential equation, normalizing the resulting quaternion to obtain the attitude matrix, and computing the attitude variation.
And step five, extracting image features in the video stream frame by frame, and performing visual tracking and matching with the extracted image features to obtain the visual pose.
Wherein the image feature is a local descriptor of the image.
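The filing does not name a specific descriptor; as an illustrative sketch, ORB features with brute-force Hamming matching (OpenCV) could serve as the local descriptors used for frame-to-frame tracking and matching.

```python
import cv2

def track_features(prev_img, cur_img, max_features=1000):
    """Extract local descriptors in two frames and match them (illustrative;
    the method only requires some local image descriptor)."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(prev_img, None)
    kp2, des2 = orb.detectAndCompute(cur_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```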
Step six, judging whether the visual tracking in step five is lost; if the visual tracking is lost and relocalization fails, performing motion positioning with the IMU pose; if the visual tracking is not lost, fusing the visual pose and the IMU pose by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method to obtain the camera pose finally used for positioning and navigation.
In step six, fusing the visual pose and the IMU pose by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method specifically comprises:
6.1) inputting a plurality of IMU poses;
6.2) constructing the state vector X;
6.3) according to the extended Kalman filter, the prediction error state and its covariance matrix are computed from the state vector X and the motion model and noise model of the IMU; the system noise covariance matrix is added to compensate errors in the state prediction, and the prediction error state covariance matrix is weighted and inflated, so that the system state vector is predicted and a correspondingly adjusted prediction error covariance matrix is obtained;
6.4) inputting a visual pose;
6.5) correcting the system state through the visual pose, and updating the system state vector according to the extended Kalman filter using the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters;
6.6) judging whether the change of the trace of the state error covariance matrix before and after updating is smaller than a threshold δ; if so, outputting the updated state vector; if not, performing repeated iterative updates until the condition is met. Each iterative update proceeds as follows: perform a Taylor expansion of the observation model at the currently updated state vector to obtain a first-order approximation, recompute the Kalman gain according to the update process of the extended Kalman filter using the adjusted prediction error covariance matrix, and solve for the updated state vector and state error covariance matrix with the new Kalman gain; whether the change of the trace of the state error covariance matrix has converged below the set threshold δ is then checked again to decide whether the result can be output.
The technical scheme of the invention is explained in detail below with reference to the accompanying drawings, for the convenience of those skilled in the art.
FIG. 1 is a flowchart of the monocular vision inertial combination positioning navigation method: first, the video stream and the IMU data stream are collected, packed and aligned; then initialization is performed; next, the inertial navigation data in the IMU data stream are acquired, the IMU pose is obtained by combining complementary filtering with pre-integration, the image features in the video stream are tracked, and the visual pose is obtained with the IMU pose variation as reference; finally, it is decided whether visual tracking is lost: if visual tracking is lost and relocalization fails, motion positioning is performed with the IMU pose; if visual tracking is not lost, the visual pose and the IMU pose are fused by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method to obtain the final camera pose.
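A minimal control-flow sketch of this pipeline; each argument is a callable standing in for one of the steps described above, and none of the names are APIs from the filing.

```python
def run_pipeline(packets, initialize, imu_step, vision_step, relocalize, fuse):
    """Top-level loop of the combined positioning method (illustrative).
    `packets` are the aligned and packed video/IMU data from step two."""
    state = initialize(packets)                                       # step three
    for packet in packets:
        imu_pose = imu_step(packet["imu"])                            # step four
        visual_pose, tracked = vision_step(packet["image"], imu_pose) # step five
        if not tracked and not relocalize(packet["image"]):
            state["pose"] = imu_pose                                  # IMU-only motion tracking
        else:
            state["pose"] = fuse(visual_pose, imu_pose, state)        # step six: IDSF fusion
    return state
```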
FIG. 2 is a flowchart of initialization in the monocular vision inertial combination positioning navigation method. The initialization process is as follows:
i. visual initialization: selecting consecutive image frames with a sufficient number of feature points for matching, and performing visual initialization when the number of matched point pairs of the current frame is sufficient; if the number of matched point pairs is insufficient, reselecting image frames; if visual initialization can be carried out, computing the homography matrix and the fundamental matrix simultaneously with the RANSAC 8-point method, selecting the better of the two models to recover the camera pose, then continuing to compute the camera pose with a visual odometer, and at the same time converting the camera pose by the method of step viii to obtain the visual pose;
ii. Outputting the visual pose obtained by visual initialization;
iii. IMU initialization: configuring the camera-IMU extrinsic parameters, whose initial values are the calibrated values and which consist of the translation between the camera and the IMU and the rotation quaternion between the camera and the IMU; configuring an initial covariance matrix P, configuring an initial gravity vector g, and calculating the pose of the IMU by the method of step vii;
iv, outputting the IMU pose and the gravity vector g obtained by IMU initialization;
v, there are many ways to configure the initial gravity vector in step iii above, and one of them is exemplified here: the camera pose of a certain still image frame is obtained through visual initialization, and the accelerometer reading of the IMU frame corresponding to the still image frame is used as a gravity vector g;
vi. the conversion relation between the visual coordinate system and the world coordinate system of the IMU is determined as follows: if the gravity vector g is computed according to the method of step v above, the position of the still image frame is the origin of the visual coordinate system of the system, and its position and attitude define the visual coordinate system; this still image frame is also the first visual frame of the system, and the visual coordinate system is denoted vicon. The position and attitude of the IMU frame corresponding to the still image frame define the world coordinate system of the IMU; this IMU frame is also the first IMU frame of the system, and the world coordinate system of the IMU is denoted world. At this point the camera-IMU extrinsic parameters give the conversion relation between the visual coordinate system and the world coordinate system of the IMU, consisting of the translation between the visual coordinate system and the world coordinate system of the IMU and the rotation quaternion between the visual coordinate system and the world coordinate system of the IMU; the scale λ involved in this conversion relation takes its initially configured value;
vii. the method for calculating the pose of the IMU in step iii specifically includes: after the conversion relation between the visual coordinate system and the world coordinate system of the IMU has been determined as described in step vi above, the velocity is obtained by integrating successive accelerations and the position by integrating successive velocities; the attitude variation is obtained by complementary filtering of the inertial navigation data; and the pose of the IMU is obtained by combining the attitude variation and the velocity variation. The result is expressed as the IMU translation in the IMU's world coordinate system, the IMU rotation quaternion in the IMU's world coordinate system, and the IMU velocity in the IMU's world coordinate system;
viii. the method for calculating the visual pose in step i includes: according to the visual coordinate system determined in step vi, converting the camera pose into the visual pose in the visual coordinate system, expressed as the camera translation in the visual coordinate system and the camera rotation quaternion in the visual coordinate system;
ix. initialization of the scale λ: since the motion of the camera and the motion of the IMU over the same time interval are actually consistent, the conversion relation between the IMU pose and the visual pose is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU, the IMU motion model and the known gravity vector, and a least-squares solution is then obtained through nonlinear optimization to give the initial estimate of the scale;
x. optimizing the scale for more accurate conversion between coordinate systems and more accurate pose calculation: the value of the scale λ is further refined by the λI-EKF (Lambda Iterated Extended Kalman Filter) algorithm and the state vector X is updated, where b_w and b_a are the zero drift of the gyroscope in the IMU and the zero drift of the accelerometer in the IMU, respectively.
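A minimal sketch of the least-squares scale estimate of step ix, assuming that, over matched and aligned intervals, the metric IMU translation increments should equal λ times the unit-less visual translation increments; the closed-form one-parameter fit below ignores the gravity and bias terms and stands in for the nonlinear optimization actually described.

```python
import numpy as np

def estimate_scale(visual_dp, imu_dp):
    """Initial scale estimate: least-squares fit of lambda such that
    imu_dp ≈ lambda * visual_dp for stacked, frame-aligned translation increments."""
    v = np.asarray(visual_dp, dtype=float).ravel()
    u = np.asarray(imu_dp, dtype=float).ravel()
    return float(v @ u) / float(v @ v)   # closed-form 1-D least squares
```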
FIG. 3 is a flow chart of the λI-EKF in the monocular vision inertial combination positioning navigation method. The λI-EKF is implemented as follows:
i. inputting the IMU pose and the gravity vector g obtained by IMU initialization, and the initial value of the scale λ;
ii. constructing the state vector X: for convenience, shorthand symbols are introduced for its components, and the state vector X is assembled from them;
iii. from the state vector X and the motion model and noise model of the IMU, the error of the state vector can be derived; the error vector is the 25-dimensional system state error term relative to the estimated value of the state vector, and the transition matrix of the error state is obtained through the prediction process of the extended Kalman filter (among its blocks, E = -A). The entries of this matrix involve the antisymmetric matrix of the angular velocity ω, the acceleration measurement a_m of the IMU, the angular velocity measurement ω_m of the IMU, the estimates of the accelerometer zero drift and of the gyroscope zero drift, the estimates of the acceleration and of the angular velocity, the time interval Δt between two state vectors, and the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU. From the derived matrix, the prediction error state and its covariance matrix are obtained according to the extended Kalman filter, so that the state vector is predicted and the corresponding prediction error covariance matrix is obtained;
iv. inputting the visual pose obtained by visual initialization;
v. correcting the system state through the visual pose: an observation error model is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters, and the observation matrix H_v is obtained by discretizing the observation error model; its entries involve the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU. According to this derivation and the extended Kalman filter, the system state vector is updated, and the state error term is obtained through the Kalman gain, yielding the updated state vector and the state error covariance matrix;
vi. judging whether the change of the scale λ in the state vector X before and after updating is smaller than a threshold μ; if so, outputting the updated scale λ and the updated state vector; if not, performing repeated iterative updates until the condition is met. Each iterative update proceeds as follows: perform a Taylor expansion of the observation model at the currently updated state vector to obtain a first-order approximation, recompute the Kalman gain according to the update process of the extended Kalman filter using the predicted state vector, and solve for the updated state vector and state error covariance matrix with the new Kalman gain; the change of the scale λ is then checked again to decide whether the result can be output.
FIG. 4 is a flowchart of the IDSF in the monocular vision inertial combination positioning navigation method. The IDSF is implemented as follows:
i. inputting a plurality of IMU poses;
ii. constructing the state vector X;
iii. according to the extended Kalman filter, the prediction error state and its covariance matrix are computed from the state vector X and the motion model and noise model of the IMU; the system noise covariance matrix is added to compensate errors in the state prediction, and the prediction error state covariance matrix is weighted and inflated, so that the system state vector is predicted and a correspondingly adjusted prediction error covariance matrix is obtained;
iv. inputting a visual pose;
v. correcting the system state through the visual pose, and updating the system state vector according to the extended Kalman filter using the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters;
vi. judging whether the change of the trace of the state error covariance matrix before and after updating is smaller than a threshold δ; if so, outputting the updated state vector; if not, performing repeated iterative updates until the condition is met. Each iterative update proceeds as follows: perform a Taylor expansion of the observation model at the currently updated state vector to obtain a first-order approximation, recompute the Kalman gain according to the update process of the extended Kalman filter using the adjusted prediction error covariance matrix, and solve for the updated state vector and state error covariance matrix with the new Kalman gain; whether the change of the trace of the state error covariance matrix has converged below the set threshold δ is then checked again to decide whether the result can be output.
Existing pure monocular visual navigation methods suffer from scale ambiguity and are easily affected by environmental conditions, while existing pure inertial navigation methods suffer from drift and accumulated errors, so that their precision gradually diverges over time. The invention adopts the IDSF algorithm to fuse the monocular camera and the inertial sensor, and thereby maintains high precision and high robustness during positioning. Existing monocular visual-inertial integrated navigation methods also set the initial scale value manually and obtain a final scale value that is not accurate enough; the invention instead determines the initial scale value through nonlinear optimization during initialization and refines the scale estimate with the λI-EKF algorithm, achieving higher scale precision.

Claims (10)

1. A monocular vision inertial combined positioning navigation method is characterized by comprising the following steps:
step one, acquiring a video stream through a monocular camera, and acquiring an IMU data stream through the IMU;
step two, aligning the collected video stream and the IMU data stream, and packing the aligned video stream and IMU data stream together;
step three, initializing system data; if the initialization is successful, sequentially executing step four and step five; if the initialization fails, initializing again; the initialization process comprises visual initialization, IMU initialization, determination of the conversion relation between the visual coordinate system and the IMU world coordinate system, determination of an initial scale value through nonlinear optimization, and refined estimation of the scale using the λI-EKF;
step four, acquiring the inertial navigation data in the IMU data stream, performing complementary filtering on the inertial navigation data to obtain the attitude variation, combining it with pre-integration to obtain the IMU pose, and passing the IMU pose variation to step five as the reference variation for calculating the pose of the current visual frame;
step five, extracting image features in the video stream frame by frame, and performing visual tracking and matching with the extracted image features to obtain the visual pose;
step six, judging whether the visual tracking in step five is lost; if the visual tracking is lost and relocalization fails, performing motion positioning with the IMU pose; if the visual tracking is not lost, fusing the visual pose and the IMU pose by the IDSF (Dual Sensor Iterated Extended Kalman Filter) method to obtain the camera pose finally used for positioning and navigation.
2. The method of claim 1, wherein the aligning in step two comprises timestamp synchronization and the packing comprises data packing.
3. The method according to claim 1, wherein the step of initializing in step three specifically comprises:
3.1) visual initialization: selecting consecutive image frames whose number of feature points is larger than a set value for matching, and performing visual initialization when the number of matched point pairs of the current frame is larger than the set value; if the number of matched point pairs is smaller than the set value, reselecting image frames; if visual initialization can be carried out, computing the homography matrix and the fundamental matrix simultaneously with the RANSAC 8-point method, selecting the better of the two models to recover the camera pose, then continuing to compute the camera pose with a visual odometer, and at the same time converting the camera pose to obtain the visual pose;
3.2) outputting the visual pose obtained by visual initialization;
3.3) IMU initialization: configuring the camera-IMU extrinsic parameters, whose initial values are the calibrated values and which consist of the translation between the camera and the IMU and the rotation quaternion between the camera and the IMU; configuring an initial gravity vector g, and calculating the pose of the IMU;
3.4) outputting the IMU pose and the gravity vector g obtained by IMU initialization;
3.5) configuring the initial gravity vector in the step 3.3): the camera pose of a certain still image frame is obtained through visual initialization, and the accelerometer reading of the IMU frame corresponding to the still image frame is used as a gravity vector g;
3.6) determining the conversion relation between the visual coordinate system and the world coordinate system of the IMU: the position of the still image frame is the origin of the visual coordinate system of the system, and its position and attitude define the visual coordinate system; this still image frame is also the first visual frame of the system, and the visual coordinate system is denoted vicon. The position and attitude of the IMU frame corresponding to the still image frame define the world coordinate system of the IMU; this IMU frame is also the first IMU frame of the system, and the world coordinate system of the IMU is denoted world. At this point the camera-IMU extrinsic parameters give the conversion relation between the visual coordinate system and the world coordinate system of the IMU, consisting of the translation between the visual coordinate system and the world coordinate system of the IMU and the rotation quaternion between the visual coordinate system and the world coordinate system of the IMU; the scale λ involved in this conversion relation takes its initially configured value;
3.7) determining the initial value of the scale by nonlinear optimization: since the motion of the camera and the motion of the IMU over the same time interval are actually consistent, the conversion relation between the IMU pose and the visual pose is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU, the IMU motion model and the known gravity vector, and a least-squares solution is then obtained through nonlinear optimization to give the initial estimate of the scale;
3.8) refining the scale with the λI-EKF: the value of the scale λ is further refined by the λI-EKF (Lambda Iterated Extended Kalman Filter) algorithm and the state vector X is updated, where b_w and b_a are the zero drift of the gyroscope in the IMU and the zero drift of the accelerometer in the IMU, respectively.
4. The method according to claim 3, wherein the method of calculating the visual pose in step 3.1) specifically comprises: according to the visual coordinate system determined in step 3.6), converting the camera pose into the visual pose in the visual coordinate system, with p_vc representing the camera translation in the visual coordinate system and q_vc representing the camera rotation quaternion in the visual coordinate system.
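A minimal sketch of the conversion described in this claim, assuming the pose of the still first frame (p0, q0) and the current camera pose (p, q) are expressed in the same reference frame and that quaternions follow the [x, y, z, w] layout used by SciPy; the function and variable names are illustrative, not taken from the claim.

import numpy as np
from scipy.spatial.transform import Rotation as R

def camera_pose_to_visual_frame(p0, q0, p, q):
    # Re-express the current camera pose relative to the still first frame,
    # whose pose defines the visual coordinate system.
    R0 = R.from_quat(q0)
    p_vc = R0.inv().apply(p - p0)                  # camera translation in the visual frame
    q_vc = (R0.inv() * R.from_quat(q)).as_quat()   # camera rotation in the visual frame
    return p_vc, q_vc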
5. The method according to claim 3 or 4, wherein the method of calculating the IMU pose in step 3.3) specifically comprises: after the conversion relation between the visual coordinate system and the world coordinate system of the IMU has been determined according to step 3.6), the velocity is obtained by integrating successive accelerations, the position is obtained by integrating successive velocities, the attitude variation is obtained by complementary filtering of the inertial navigation data, and the pose of the IMU is obtained by combining the attitude variation and the velocity variation, with p_wi representing the IMU translation in the world coordinate system of the IMU, q_wi representing the IMU rotation quaternion in the world coordinate system of the IMU, and v_wi representing the IMU velocity in the world coordinate system of the IMU.
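A minimal sketch of the integration described in this claim, assuming the attitude quaternion q (IMU body to world, [x, y, z, w]) is already provided by the complementary filter of claim 8 and that the gravity vector g is the one configured at initialization, expressed in the IMU world frame; the names and the simple Euler integration scheme are assumptions of this sketch.

import numpy as np
from scipy.spatial.transform import Rotation as R

def propagate_imu(p, v, q, a_body, g, dt):
    # Rotate the measured acceleration into the world frame and subtract the
    # gravity vector configured at initialization.
    a_world = R.from_quat(q).apply(a_body) - g
    # Integrate acceleration into velocity and velocity into position.
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt ** 2
    return p_next, v_next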
6. The method according to one of claims 3 to 5, characterized in that said step 3.8) comprises in particular:
4.1) inputting the IMU pose and the gravity vector g obtained by IMU initialization, and inputting the initial value of the scale λ obtained in step 3.7);
4.2) constructing the state vector X; for convenience a number of auxiliary quantities are defined and the state vector X is then written out in the formula images of the original claim, which are not reproduced here;
4.3) from the state vector X and the motion model and noise model of the IMU it can be derived that the error of the state vector is defined with respect to the estimate of the state vector (formula image not reproduced); this error vector is a 25-dimensional system state error term, and the transition matrix of the error state is obtained through the prediction process of the extended Kalman filter:
wherein the block matrices entering the error-state transition matrix are given by formula images of the original claim that are not reproduced here (among them E = -A); these blocks involve the antisymmetric matrix of the angular velocity ω, the acceleration measurement a_m of the IMU, the angular velocity measurement w_m of the IMU, the estimates of the accelerometer zero bias and of the gyroscope zero bias, the estimates of the acceleration and of the angular velocity, the time interval Δt between two state vectors, and the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU; according to the derived matrices and the extended Kalman filter, the predicted error state and its covariance matrix are obtained, so that the state vector is predicted and the corresponding predicted error covariance matrix is obtained;
4.4) inputting the visual pose obtained by visual initialization;
4.5) correcting the system state by means of the visual pose: an observation error model is obtained from the conversion relation between the visual coordinate system and the world coordinate system of the IMU and from the camera-IMU extrinsic parameters, and the observation matrix Hv is obtained by discretizing the observation error model (the formula images giving Hv, which involve the direction cosine matrix from the IMU coordinate system to the world coordinate system of the IMU, are not reproduced here); the system state vector is updated according to this derivation and the extended Kalman filter, the state error term is obtained through the Kalman gain, and the updated state vector and state error covariance matrix are obtained;
4.6) judging whether the change of the scale λ in the state vector X before and after the update is smaller than the threshold μ; if so, outputting the updated scale λ and the updated state vector; if not, performing repeated iterative updates until the condition is met and the result is output. Each iterative update proceeds as follows: the observation matrix is Taylor-expanded at the currently updated state vector to obtain a first-order approximation, the Kalman gain is recalculated according to the update process of the extended Kalman filter using the predicted state vector, the updated state vector and state error covariance matrix are solved with the new Kalman gain to obtain the state vector after this iteration, and the change of the scale λ is checked again to decide whether the result can be output.
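The iterated update of step 4.6) can be sketched generically as below: the observation model is relinearised at each iterate and the loop stops once the change of the scale λ falls below μ. Here h(x) and H_jac(x) stand for the (unspecified) visual-pose observation function and its Jacobian, and lambda_idx for the assumed position of λ inside the state vector; these names, the value of μ and the iteration cap are illustrative assumptions.

import numpy as np

def iterated_update(x_pred, P_pred, z, h, H_jac, R_meas, lambda_idx, mu=1e-4, max_iter=10):
    # Iterated extended Kalman filter measurement update: relinearise the
    # observation model around the current iterate x_i.
    x_i = x_pred.copy()
    for _ in range(max_iter):
        H = H_jac(x_i)
        S = H @ P_pred @ H.T + R_meas
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - h(x_i) - H @ (x_pred - x_i))
        converged = abs(x_new[lambda_idx] - x_i[lambda_idx]) < mu
        x_i = x_new
        if converged:
            break
    # Covariance update with the last gain and linearisation point.
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_i, P_upd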
7. The method of claim 1, wherein the inertial navigation data in step four are the three-axis acceleration and the three-axis angular velocity, and pre-integration refers to integrating the IMU measurements between key frames into a relative-motion constraint.
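A minimal sketch of the pre-integration idea mentioned in this claim: IMU samples between two key frames are accumulated once into relative position, velocity and rotation increments. Bias correction and noise propagation are omitted, and all names are illustrative assumptions.

import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(acc, gyro, dt):
    # acc, gyro: (N, 3) body-frame samples between two key frames; dt: sample period.
    dp = np.zeros(3)       # position increment in the first key frame's frame
    dv = np.zeros(3)       # velocity increment
    dq = R.identity()      # rotation increment
    for a, w in zip(acc, gyro):
        a_rot = dq.apply(a)                    # rotate the sample into the key-frame frame
        dp = dp + dv * dt + 0.5 * a_rot * dt ** 2
        dv = dv + a_rot * dt
        dq = dq * R.from_rotvec(w * dt)        # incremental rotation from the gyro sample
    return dp, dv, dq.as_quat()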
8. The method of claim 1, wherein the step of complementarily filtering the inertial navigation data to obtain the attitude change in the fourth step specifically includes:
6.1) normalizing the acceleration to obtain a_b;
6.2) converting the standard gravitational acceleration g_n = [0 0 1]^T into the IMU coordinate system to obtain g_b;
6.3) taking the vector cross product of g_b and a_b to obtain the correction compensation value e of the angular velocity;
6.4) feeding the correction compensation value e into the filtered angular velocity by a PI control method to obtain the corrected angular velocity;
6.5) converting the corrected angular velocity into a quaternion through a quaternion differential equation, normalizing the obtained quaternion to obtain an attitude matrix, and calculating the attitude variation.
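A minimal sketch of steps 6.1) to 6.5) in the style of a Mahony complementary filter: the cross product between the measured and the predicted gravity directions drives a PI correction of the gyroscope rate before the quaternion is integrated. The gains kp and ki, the sign convention of the cross product and the [x, y, z, w] quaternion layout are assumptions of this sketch, not values from the claim.

import numpy as np
from scipy.spatial.transform import Rotation as R

def complementary_step(q, acc, gyro, e_int, dt, kp=1.0, ki=0.01):
    a_b = acc / np.linalg.norm(acc)                      # 6.1) normalised acceleration
    g_b = R.from_quat(q).inv().apply([0.0, 0.0, 1.0])    # 6.2) gravity in the IMU frame
    e = np.cross(a_b, g_b)                               # 6.3) correction value from the cross product
    e_int = e_int + e * dt                               # integral part of the PI correction
    w_corr = gyro + kp * e + ki * e_int                  # 6.4) corrected angular velocity
    q_new = (R.from_quat(q) * R.from_rotvec(w_corr * dt)).as_quat()  # 6.5) integrate to a quaternion
    return q_new / np.linalg.norm(q_new), e_int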
9. The method according to claim 1, wherein the image features in step five are local descriptors of the image.
10. The method as claimed in claim 1, wherein fusing the visual pose and the IMU pose by the IDSF (iterated double-sensor extended Kalman filter) method in step six specifically comprises:
8.1) inputting a plurality of IMU poses;
8.2) constructing the state vector X (the formula image giving the state vector is not reproduced here);
8.3) according to the extended Kalman filter, the predicted error state and its covariance matrix are calculated from the state vector X and from the motion model and noise model of the IMU; the system noise covariance matrix is added to compensate errors in the state prediction, and the predicted error-state covariance matrix is at the same time weighted and expanded (see the covariance sketch after this claim), so that the system state vector is predicted and the correspondingly adjusted prediction error covariance matrix is obtained;
8.4) inputting a visual pose;
8.5) correcting the system state through the visual pose: the system state vector is updated according to the extended Kalman filter using the conversion relation between the visual coordinate system and the world coordinate system of the IMU and the camera-IMU extrinsic parameters;
8.6) judging whether the change of the trace of the state error covariance matrix before and after the update is smaller than the threshold δ; if so, outputting the updated state vector; if not, performing repeated iterative updates until the updated state vector meets the condition. Each iterative update proceeds as follows: the observation matrix is Taylor-expanded at the currently updated state vector to obtain a first-order approximation, the Kalman gain is recalculated according to the update process of the extended Kalman filter using the adjusted prediction error covariance matrix, the updated state vector and state error covariance matrix are solved with the new Kalman gain to obtain the state vector after this iteration, and it is checked again whether the change of the trace of the state error covariance matrix has converged below the set threshold δ, i.e. whether the result can be output.
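The covariance handling of steps 8.3) and 8.6) can be sketched as follows: the predicted error covariance is inflated by a weight after the process noise is added, and iteration stops once the change of the covariance trace falls below δ. The weight alpha, the threshold value and the function names are illustrative assumptions, not values from the claim.

import numpy as np

def predict_covariance(F, P, Q, alpha=1.05):
    # Covariance prediction with additive process noise, followed by the
    # weighted expansion described in step 8.3).
    return alpha * (F @ P @ F.T + Q)

def trace_converged(P_before, P_after, delta=1e-6):
    # Stopping test of step 8.6): change of the covariance trace below delta.
    return abs(np.trace(P_after) - np.trace(P_before)) < delta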
CN201911003161.5A 2019-10-22 2019-10-22 Monocular vision inertial combination positioning navigation method Pending CN110702107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911003161.5A CN110702107A (en) 2019-10-22 2019-10-22 Monocular vision inertial combination positioning navigation method


Publications (1)

Publication Number Publication Date
CN110702107A true CN110702107A (en) 2020-01-17

Family

ID=69200780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911003161.5A Pending CN110702107A (en) 2019-10-22 2019-10-22 Monocular vision inertial combination positioning navigation method

Country Status (1)

Country Link
CN (1) CN110702107A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931275A (en) * 2016-05-23 2016-09-07 北京暴风魔镜科技有限公司 Monocular and IMU fused stable motion tracking method and device based on mobile terminal
CN108036785A (en) * 2017-11-24 2018-05-15 浙江大学 A kind of aircraft position and orientation estimation method based on direct method and inertial navigation fusion
CN108981693A (en) * 2018-03-22 2018-12-11 东南大学 VIO fast joint initial method based on monocular camera
CN109376785A (en) * 2018-10-31 2019-02-22 东南大学 Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN109993113A (en) * 2019-03-29 2019-07-09 东北大学 A kind of position and orientation estimation method based on the fusion of RGB-D and IMU information

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
潘林豪等: "单目相机-IMU外参自动标定与在线估计的视觉-惯导SLAM" (Visual-Inertial SLAM with Automatic Calibration and Online Estimation of the Monocular Camera-IMU Extrinsic Parameters), Chinese Journal of Scientific Instrument (《仪器仪表学报》) *
熊敏君等: "基于单目视觉与惯导融合的无人机位姿估计" (UAV Pose Estimation Based on the Fusion of Monocular Vision and Inertial Navigation), Journal of Computer Applications (《计算机应用》) *
陈文钊: "基于多传感器融合的移动机器人定位与地图创建" (Mobile Robot Localization and Mapping Based on Multi-Sensor Fusion), China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326716B (en) * 2020-02-28 2024-03-01 北京创奇视界科技有限公司 Loop detection method for AR application positioning of assembly guidance of assembly site environment
CN113326716A (en) * 2020-02-28 2021-08-31 北京创奇视界科技有限公司 Loop detection method for guiding AR application positioning by assembling in-situ environment
CN111307176B (en) * 2020-03-02 2023-06-16 北京航空航天大学青岛研究院 Online calibration method for visual inertial odometer in VR head-mounted display equipment
CN111307176A (en) * 2020-03-02 2020-06-19 北京航空航天大学青岛研究院 Online calibration method for visual inertial odometer in VR head-mounted display equipment
CN111539982A (en) * 2020-04-17 2020-08-14 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
CN111539982B (en) * 2020-04-17 2023-09-15 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
CN111390940A (en) * 2020-04-20 2020-07-10 上海机器人产业技术研究院有限公司 Industrial robot automatic calibration system and method
CN111581322A (en) * 2020-05-12 2020-08-25 北京维盛沃德科技有限公司 Method, device and equipment for displaying interest area in video in map window
CN111581322B (en) * 2020-05-12 2023-07-18 北京维盛沃德科技有限公司 Method, device and equipment for displaying region of interest in video in map window
CN111580596A (en) * 2020-05-19 2020-08-25 北京数字绿土科技有限公司 Method, device and terminal for synchronizing multiple IMUs (inertial measurement units) in time
CN111580596B (en) * 2020-05-19 2022-04-15 北京数字绿土科技股份有限公司 Method, device and terminal for synchronizing multiple IMUs (inertial measurement units) in time
CN112179338A (en) * 2020-09-07 2021-01-05 西北工业大学 Low-altitude unmanned aerial vehicle self-positioning method based on vision and inertial navigation fusion
CN114199275A (en) * 2020-09-18 2022-03-18 阿里巴巴集团控股有限公司 Parameter determination method and device for sensor
CN112188037B (en) * 2020-09-24 2023-03-24 影石创新科技股份有限公司 Method for generating gyroscope rotation direction and computer equipment
CN112188037A (en) * 2020-09-24 2021-01-05 影石创新科技股份有限公司 Method for generating gyroscope rotation direction and computer equipment
CN112378396A (en) * 2020-10-29 2021-02-19 江苏集萃未来城市应用技术研究所有限公司 Hybrid high-precision indoor positioning method based on robust LM visual inertial odometer and UWB
CN112444246A (en) * 2020-11-06 2021-03-05 北京易达恩能科技有限公司 Laser fusion positioning method in high-precision digital twin scene
CN112444246B (en) * 2020-11-06 2024-01-26 北京易达恩能科技有限公司 Laser fusion positioning method in high-precision digital twin scene
CN112461237B (en) * 2020-11-26 2023-03-14 浙江同善人工智能技术有限公司 Multi-sensor fusion positioning method applied to dynamic change scene
CN112862768A (en) * 2021-01-28 2021-05-28 重庆邮电大学 Adaptive monocular VIO (visual image analysis) initialization method based on point-line characteristics
CN112862768B (en) * 2021-01-28 2022-08-02 重庆邮电大学 Adaptive monocular VIO (visual image analysis) initialization method based on point-line characteristics
CN112734852A (en) * 2021-03-31 2021-04-30 浙江欣奕华智能科技有限公司 Robot mapping method and device and computing equipment
CN112729294B (en) * 2021-04-02 2021-06-25 北京科技大学 Pose estimation method and system suitable for vision and inertia fusion of robot
CN112729294A (en) * 2021-04-02 2021-04-30 北京科技大学 Pose estimation method and system suitable for vision and inertia fusion of robot
CN113237478A (en) * 2021-05-27 2021-08-10 哈尔滨工业大学 Unmanned aerial vehicle attitude and position estimation method and unmanned aerial vehicle
CN113237478B (en) * 2021-05-27 2022-10-14 哈尔滨工业大学 Unmanned aerial vehicle attitude and position estimation method and unmanned aerial vehicle
CN113686334B (en) * 2021-07-07 2023-08-04 上海航天控制技术研究所 Method for improving on-orbit combined filtering precision of star sensor and gyroscope
CN113686334A (en) * 2021-07-07 2021-11-23 上海航天控制技术研究所 Method for improving on-orbit combined filtering precision of star sensor and gyroscope
CN113670327A (en) * 2021-08-11 2021-11-19 影石创新科技股份有限公司 Visual inertial odometer initialization method, device, equipment and storage medium
CN113936120A (en) * 2021-10-12 2022-01-14 北京邮电大学 Mark-free lightweight Web AR method and system
WO2023109249A1 (en) * 2021-12-13 2023-06-22 成都拟合未来科技有限公司 Method and system for aligning imu data and video data
CN114370870A (en) * 2022-01-05 2022-04-19 中国兵器工业计算机应用技术研究所 Filter updating information screening method suitable for pose measurement Kalman filtering
CN114370870B (en) * 2022-01-05 2024-04-12 中国兵器工业计算机应用技术研究所 Filter update information screening method suitable for pose measurement Kalman filtering
CN114593735A (en) * 2022-01-26 2022-06-07 奥比中光科技集团股份有限公司 Pose prediction method and device
CN114593735B (en) * 2022-01-26 2024-05-31 奥比中光科技集团股份有限公司 Pose prediction method and device
CN115128655A (en) * 2022-08-31 2022-09-30 智道网联科技(北京)有限公司 Positioning method and device for automatic driving vehicle, electronic equipment and storage medium
CN115752442A (en) * 2022-12-07 2023-03-07 无锡恺韵来机器人有限公司 Auxiliary inertial positioning method based on monocular vision
CN115752442B (en) * 2022-12-07 2024-03-12 运来智能装备(无锡)有限公司 Monocular vision-based auxiliary inertial positioning method

Similar Documents

Publication Publication Date Title
CN110702107A (en) Monocular vision inertial combination positioning navigation method
CN110030994B (en) Monocular-based robust visual inertia tight coupling positioning method
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN109029433B (en) Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform
CN110009681B (en) IMU (inertial measurement unit) assistance-based monocular vision odometer pose processing method
CN111426318B (en) Low-cost AHRS course angle compensation method based on quaternion-extended Kalman filtering
CN106679648B (en) Visual inertia combination SLAM method based on genetic algorithm
CN111780754B (en) Visual inertial odometer pose estimation method based on sparse direct method
CN109520497B (en) Unmanned aerial vehicle autonomous positioning method based on vision and imu
CN112240768A (en) Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
CN112649016A (en) Visual inertial odometer method based on point-line initialization
CN111780781B (en) Template matching vision and inertia combined odometer based on sliding window optimization
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN112556719B (en) Visual inertial odometer implementation method based on CNN-EKF
CN110207693B (en) Robust stereoscopic vision inertial pre-integration SLAM method
CN108731700B (en) Weighted Euler pre-integration method in visual inertial odometer
CN114001733B (en) Map-based consistent efficient visual inertial positioning algorithm
CN111340851A (en) SLAM method based on binocular vision and IMU fusion
CN111862316A (en) IMU tight coupling dense direct RGBD three-dimensional reconstruction method based on optimization
CN109211230A (en) A kind of shell posture and accelerometer constant error estimation method based on Newton iteration method
CN114485640A (en) Monocular vision inertia synchronous positioning and mapping method and system based on point-line characteristics
CN115046545A (en) Positioning method combining deep network and filtering
CN114234967A (en) Hexapod robot positioning method based on multi-sensor fusion
WO2022057350A1 (en) Inertial pre-integration method for combined motion measurement system based on nonlinear integral compensation
CN112284381B (en) Visual inertia real-time initialization alignment method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200117