CN109238277B - Positioning method and device for visual inertial data depth fusion - Google Patents

Positioning method and device for visual inertial data depth fusion

Info

Publication number
CN109238277B
Authority
CN
China
Prior art keywords
visual
measurement
state
measurement value
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810860266.1A
Other languages
Chinese (zh)
Other versions
CN109238277A (en)
Inventor
程农 (Cheng Nong)
李建 (Li Jian)
李清 (Li Qing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201810860266.1A priority Critical patent/CN109238277B/en
Publication of CN109238277A publication Critical patent/CN109238277A/en
Application granted granted Critical
Publication of CN109238277B publication Critical patent/CN109238277B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a positioning method and a positioning device for visual inertial data depth fusion, wherein the method comprises the following steps: step S1: keeping the system stationary at an initial moment and collecting accelerometer measurements and gyroscope measurements over an initialization period to estimate the initial state of the system; step S2: after the initial state is obtained, propagating the system state according to the accelerometer and gyroscope measurements and updating the covariance matrix of the system; step S3: after an image is obtained, tracking the feature points by using an IMU (inertial measurement unit) assisted outlier rejection method; step S4: for the feature points whose tracking has failed, constructing visual measurements according to the visual measurement information and updating the system state. The method makes full use of the various sensor data in the visual-inertial fusion system, can effectively use the IMU data to improve tracking effect and efficiency, and can accurately estimate the pose of the system in real time.

Description

Positioning method and device for visual inertial data depth fusion
Technical Field
The invention relates to the technical field of unmanned aerial vehicle navigation positioning, in particular to a positioning method and device for visual inertial data depth fusion.
Background
Owing to their small size, high maneuverability and convenient hovering, unmanned aerial vehicles are becoming an ideal platform for tasks such as monitoring, exploration and rescue.
The main problems to be solved for autonomous navigation of an unmanned aerial vehicle without GPS (Global Positioning System) include environment awareness, state estimation, mission planning and flight control. Among these, state estimation, i.e. the positioning function in the general sense, is the most basic and important: on the one hand, the high mobility of the drone and the unknown environment require the position and attitude of the aircraft to be determined at a high frequency; on the other hand, accurate positioning is the basis and precondition for the remaining functions such as mission planning and flight control.
Inertial navigation devices have a high state update rate, are sensitive to attitude angles and can therefore handle large rotations well, are sufficiently accurate over short time spans, are little affected by the environment, and can recover absolute scale; however, their position is obtained by integrating acceleration, so it accumulates error that inertial navigation alone cannot eliminate. Camera-based visual estimation, by contrast, is better suited to long translational motions and can eliminate accumulated error through visual data association, but it estimates rotation-dominated scenes poorly, suffers from scale uncertainty, and is easily affected by illumination, texture and other environmental factors. This complementarity between visual and inertial sensors, together with their low price and easy availability, has led to extensive research into the systems they form, which are regarded in the robotics field as the minimal sensor configuration for autonomous navigation.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, an object of the present invention is to provide a positioning method for depth fusion of visual inertial data, which makes full use of various sensor data in a visual inertial fusion system, can effectively utilize IMU (inertial measurement unit) data to improve tracking effect and efficiency, and can estimate the pose of the system accurately in real time.
The invention also aims to provide a positioning device for the depth fusion of the visual inertial data.
In order to achieve the above object, an embodiment of one aspect of the present invention provides a positioning method for depth fusion of visual inertial data, including the following steps: step S1: keeping the system stationary at an initial moment and collecting accelerometer measurements and gyroscope measurements over an initialization period to estimate the initial state of the system; step S2: after the initial state is obtained, propagating the system state according to the accelerometer and gyroscope measurements, and updating the covariance matrix of the system; step S3: after an image is obtained, tracking the feature points by using an IMU (inertial measurement unit) assisted outlier rejection method; and step S4: for the feature points whose tracking has failed, constructing visual measurements according to the visual measurement information, and updating the system state.
According to the positioning method for visual inertial data depth fusion of the embodiment of the invention, the accelerometer and gyroscope data read from the IMU are used to estimate the initial attitude of the system and complete its initialization; after initialization, the accelerometer and gyroscope measurements are used to propagate the state of the system; and after a new image is obtained, the coordinates of the feature points of the previous frame in the current frame are obtained and visual measurements are constructed for the update. In this way the various sensor data in the visual-inertial fusion system are fully utilized, the IMU data can be used effectively to improve tracking effect and efficiency, and the pose of the system can be accurately estimated in real time.
In addition, the positioning method for the depth fusion of the visual inertial data according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the step S3 further includes: adopting an IMU (inertial measurement unit) assisted visual tracking algorithm and using the rotation between adjacent frames obtained by IMU integration to form the epipolar geometric constraint, so as to reduce the number of point pairs required to solve the epipolar constraint model and reduce the number of iterations.
Further, in an embodiment of the present invention, in the step S4, the visual measurement is constructed by calculating the three-dimensional position of the feature point using the pixel positions of the feature point in the images that satisfy a preset condition, and linearizing the visual measurement so as to marginalize out the error terms related to the feature point.
Further, in an embodiment of the present invention, the step S2 further includes: propagating the state position, velocity and attitude of the system using the accelerometer measurements and the gyroscope measurements, and constructing a covariance matrix update equation from the initial state of the system to update covariance.
Further, in an embodiment of the present invention, the step S4 further includes: for a feature point whose tracking has failed, if the number of frames over which it was tracked is greater than or equal to a preset number of frames, estimating its position in a least-squares manner using all of its past observations and the corresponding estimated camera poses.
In order to achieve the above object, another embodiment of the present invention provides a positioning apparatus for depth fusion of visual inertial data, including: an estimation module, configured to place the system in a static state at an initial moment and to collect accelerometer measurements and gyroscope measurements over an initialization period so as to estimate the initial state of the system; a propagation module, configured to propagate the system state according to the accelerometer and gyroscope measurements after the initial state is obtained, and to update the covariance matrix of the system; a rejecting module, configured to track the feature points by using an IMU (inertial measurement unit) assisted outlier rejection method after an image is acquired; and a building module, configured to construct visual measurements according to the visual measurement information for the feature points whose tracking has failed, and to update the system state.
According to the positioning device for visual inertial data depth fusion of the embodiment of the invention, the accelerometer and gyroscope data read from the IMU are used to estimate the initial attitude of the system and complete its initialization; after initialization, the accelerometer and gyroscope measurements are used to propagate the state of the system; and after a new image is obtained, the coordinates of the feature points of the previous frame in the current frame are obtained and visual measurements are constructed for the update. In this way the various sensor data in the visual-inertial fusion system are fully utilized, the IMU data can be used effectively to improve tracking effect and efficiency, and the pose of the system can be accurately estimated in real time.
In addition, the positioning apparatus for depth fusion of visual inertial data according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the culling module is further configured to adopt an IMU-assisted visual tracking algorithm and use the rotation between adjacent frames obtained by IMU integration to form the epipolar geometric constraint, so as to reduce the number of point pairs required to solve the epipolar constraint model and reduce the number of iterations.
Further, in an embodiment of the present invention, the visual measurement is constructed by calculating the three-dimensional position of the feature point using the pixel positions of the feature point in the images that satisfy a preset condition, and linearizing the visual measurement so as to marginalize out the error terms related to the feature point.
Further, in one embodiment of the present invention, the propagation module is further configured to propagate the state position, velocity and attitude of the system using the measurements of the accelerometer and the measurements of the gyroscope, and to construct a covariance matrix update equation from the initial state of the system to update the covariance.
Further, in an embodiment of the present invention, the building module is further configured to, for the feature point that fails to be tracked, if the number of tracked frames is greater than or equal to a preset number of frames, estimate the position of the feature point that fails to be tracked based on a least square method by using all of the past multiple observations and the estimated corresponding camera poses of the feature point.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a localization method for visual inertial data depth fusion according to one embodiment of the present invention;
FIG. 2 is a flow chart of a method for visual inertial data depth fusion localization according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a positioning apparatus for depth fusion of visual inertial data according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The following describes a positioning method and apparatus for depth fusion of visual inertial data according to an embodiment of the present invention with reference to the accompanying drawings, and first, a positioning method for depth fusion of visual inertial data according to an embodiment of the present invention will be described with reference to the accompanying drawings.
FIG. 1 is a flow chart of a localization method of visual inertial data depth fusion according to an embodiment of the present invention.
As shown in fig. 1, the positioning method for visual inertial data depth fusion includes the following steps:
step S1: the system is placed in a static state at an initial time, and the measurement values of the accelerometer and the measurement values of the gyroscope within an initialization period are collected to estimate the initial state of the system.
It will be appreciated that, as shown in FIG. 2, the system is kept stationary at the initial moment, accelerometer measurements and gyroscope measurements are collected during an initialization period, and the initial state of the system is estimated from the collected measurements.
Specifically, by collecting accelerometer and gyroscope measurements during an initial stationary time period, the attitude, position, velocity, accelerometer zero-bias and gyroscope zero-bias at the initial time are determined. This step requires the visual-inertial system to be at rest at the initial moment. The mean of the accelerometer measurements collected within the initialization period, $\bar{a}$, is computed and normalized, $\hat{a} = \bar{a}/\|\bar{a}\|$; together with the direction of gravitational acceleration at rest, $g_B = [0\ 0\ 1]^T$, the rotation matrix $R$ that maps $\hat{a}$ onto $g_B$ is calculated. This rotation $R$ is set as the initial attitude of the system. The initial position, velocity and accelerometer zero-bias of the system are set to 0, and the gyroscope zero-bias is set to the mean of the gyroscope measurements over the initialization period.
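For illustration only (not part of the original disclosure), a minimal numpy sketch of this static initialization is given below; the Rodrigues-style alignment of the normalized accelerometer mean with $g_B$ and all function names are assumptions of the sketch, not the patented procedure.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def static_initialization(acc_samples, gyro_samples):
    """Estimate the initial attitude and gyroscope bias from a stationary period.

    acc_samples, gyro_samples: (N, 3) arrays of raw IMU measurements.
    Returns the initial rotation matrix R0 and the gyroscope zero-bias estimate.
    """
    a_mean = acc_samples.mean(axis=0)
    a_hat = a_mean / np.linalg.norm(a_mean)      # normalized accelerometer mean
    g_B = np.array([0.0, 0.0, 1.0])              # gravity direction at rest

    # Rotation that maps a_hat onto g_B (Rodrigues / axis-angle construction).
    v = np.cross(a_hat, g_B)
    s = np.linalg.norm(v)                        # sin(theta)
    c = float(a_hat @ g_B)                       # cos(theta)
    if s < 1e-8:
        # Already aligned, or exactly opposite (rotate 180 deg about the x axis).
        R0 = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        V = skew(v)
        R0 = np.eye(3) + V + V @ V * ((1.0 - c) / s**2)

    gyro_bias = gyro_samples.mean(axis=0)        # zero-bias over the static period
    return R0, gyro_bias
```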
Step S2: and after the initial state is obtained, the system state is propagated according to the measurement value of the accelerometer and the measurement value of the gyroscope, and the covariance matrix of the system is updated.
In one embodiment of the present invention, step S2 further includes: the state position, velocity and attitude of the system are propagated using the accelerometer measurements and gyroscope measurements, and a covariance matrix update equation is constructed from the initial state of the system to update the covariance.
It will be appreciated that, as shown in FIG. 2, after measurements of the IMU have been obtained, the state position, velocity and attitude of the system are propagated using the accelerometer and gyroscope measurements, and the covariance matrix update equation is constructed using the estimated system state, resulting in an update of the covariance
Step S3: after the image is acquired, tracking the characteristic points by utilizing an IMU assisted outlier rejection method.
In one embodiment of the present invention, step S3 further includes: adopting an IMU (inertial measurement unit) assisted visual tracking algorithm and using the rotation between adjacent frames obtained by IMU integration to form the epipolar geometric constraint, so as to reduce the number of point pairs required to solve the epipolar constraint model and reduce the number of iterations.
It can be understood that the embodiment of the invention adopts an IMU-assisted visual tracking algorithm, computes the epipolar geometric constraint using the rotation between adjacent frames obtained by IMU integration, and reduces the number of point pairs required to solve the epipolar constraint model, thereby reducing the number of iterations of the RANSAC algorithm.
Specifically, after the camera acquires a new image, the gyroscope measurements of the IMU between the previous frame and the current frame are integrated to estimate the rotation $R_{IMU}$ between the two images. Based on the pure rotation $R_{IMU}$, the corresponding homography matrix $H = K R_{IMU} K^{-1}$ is used to compute, for a feature point $x_p = (x, y)$ of the previous frame, its predicted initial coordinates $Hx_p$ in the current frame. Using the predicted pixel coordinates $Hx_p$ as the initial value, an optical-flow tracking algorithm is applied to obtain the final position $x_c$ of the feature point of the previous frame in the current frame by minimizing the photometric error over a search window, where $w_x, w_y$ are half the window size around the feature point in the current frame image $J$, and the displacement $(d_x, d_y)$ is initialized to $Hx_p - (x, y)$.
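The prediction-plus-tracking step can be sketched as follows, assuming OpenCV's pyramidal Lucas-Kanade tracker with the predicted coordinates supplied as the initial flow; the window size, pyramid depth and function names are illustrative choices, not taken from the disclosure.

```python
import numpy as np
import cv2

def track_with_imu_prediction(img_prev, img_curr, pts_prev, R_imu, K):
    """Predict feature locations in the current frame from the IMU rotation
    R_imu between the two frames, then refine them with LK optical flow.

    pts_prev: (N, 2) pixel coordinates in the previous frame.
    K: 3x3 camera intrinsic matrix.
    """
    # Pure-rotation homography, as in the text above: H = K * R_imu * K^-1.
    H = K @ R_imu @ np.linalg.inv(K)
    pts_h = np.hstack([pts_prev, np.ones((len(pts_prev), 1))])   # homogeneous coords
    pred = (H @ pts_h.T).T
    pred = pred[:, :2] / pred[:, 2:3]

    # LK optical flow, using the predicted coordinates as the initial guess.
    p0 = pts_prev.reshape(-1, 1, 2).astype(np.float32)
    p1_init = pred.reshape(-1, 1, 2).astype(np.float32)
    pts_curr, status, err = cv2.calcOpticalFlowPyrLK(
        img_prev, img_curr, p0, p1_init,
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    return pts_curr.reshape(-1, 2), status.ravel().astype(bool)
```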
After the feature points are tracked by the optical flow algorithm, outliers need to be removed with a RANSAC algorithm. Since the rotation $R_{IMU}$ between the two adjacent frames has already been calculated in the tracking step above, only two point pairs need to be selected to solve the epipolar geometric constraint model

$x_c^T [t]_\times R\, x_p = 0,$

where $t$ and $R$ are the relative translation and rotation between the two frames. Since the rotation is already known from the IMU integration, the formula can be modified as

$x_c^T [t]_\times (R x_p) = 0;$

due to the fact that $x_c^T [t]_\times y = (y \times x_c)^T t$ for any vector $y$, the above equation can thus be converted into:

$(R x_p \times x_c)^T t = 0.$

Two such equations are constructed from the two point pairs, and a solution for $t$ (up to scale) can be obtained based on SVD (Singular Value Decomposition). After the relative translation between the two frames is calculated, the error of each point pair with respect to the epipolar geometric constraint model is computed as $e = |x_c^T [t]_\times R\, x_p|$ and used to separate inliers from outliers.
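The two-point translation solve and the epipolar error used inside the RANSAC loop can be sketched as follows, assuming normalized (calibrated) homogeneous image coordinates and the IMU-derived rotation $R$; the function names are illustrative.

```python
import numpy as np

def solve_translation_two_points(x_prev, x_curr, R):
    """Solve the epipolar constraint x_c^T [t]x R x_p = 0 for t (up to scale)
    from two point correspondences, given the known rotation R.

    x_prev, x_curr: (2, 3) homogeneous normalized image coordinates.
    """
    # Each correspondence gives one linear equation (R x_p x x_c)^T t = 0.
    A = np.vstack([np.cross(R @ x_prev[i], x_curr[i]) for i in range(2)])
    # t is the null vector of the 2x3 matrix A -> last row of V^T from the SVD.
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]
    return t / np.linalg.norm(t)

def epipolar_error(x_p, x_c, R, t):
    """Algebraic epipolar error |x_c^T [t]x R x_p| used to score inliers."""
    t_skew = np.array([[0, -t[2], t[1]],
                       [t[2], 0, -t[0]],
                       [-t[1], t[0], 0]])
    return abs(x_c @ (t_skew @ (R @ x_p)))
```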
Step S4: and for the characteristic points with failed tracking, constructing visual measurement according to the visual measurement information, and updating the system state.
It can be understood that, as shown in fig. 2, the embodiment of the present invention uses all the visual measurement information of the feature points with failed tracking to construct the visual measurement, and updates the state of the system.
Further, in one embodiment of the present invention, in step S4 the visual measurement is constructed by calculating the three-dimensional position of the feature point using the pixel positions of the feature point in the images that satisfy the preset condition, and linearizing the visual measurement so as to marginalize out the error terms related to the feature point.
Further, in an embodiment of the present invention, the step S4 further includes: for a feature point whose tracking has failed, if the number of frames over which it was tracked is greater than or equal to a preset number of frames, estimating its position in a least-squares manner using all of its past observations and the corresponding estimated camera poses.
Specifically, for a feature point whose tracking has failed, if the number of frames over which it was tracked is greater than or equal to 3, its position is estimated in a least-squares manner using all of its past n observations and the corresponding estimated camera poses.
The parallax angle between the observation in the first frame and each of the following n-1 observations is first calculated from the corresponding bearing vectors $b_1$ and $b_i$ as $\theta_i = \arccos\!\big(b_1^T b_i / (\|b_1\|\,\|b_i\|)\big)$. The position of the point is triangulated using the first frame and the frame having the largest angle with the first frame, and is then refined by minimizing the least-squares objective

$\min_{\alpha,\beta,\rho} \sum_{i=1}^{n} \big\| z_i - h_i(\alpha,\beta,\rho) \big\|^2,$

where $(\alpha,\beta,\rho)$ is the inverse-depth representation of the feature point $f_j$ and the function $h_i$ projects the point, expressed through these inverse-depth parameters, into the image plane of the $i$-th observing camera.
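For illustration, a sketch of the triangulation and least-squares refinement is given below; it uses a plain 3D-point parameterization and scipy's least_squares instead of the inverse-depth formulation described above, so it approximates the step under those stated assumptions rather than reproducing the exact procedure.

```python
import numpy as np
from scipy.optimize import least_squares

def triangulate_linear(z1, z2, R1, p1, R2, p2):
    """Linear two-view triangulation. z1, z2 are normalized image observations
    (x, y); (R_i, p_i) map world points into camera i: X_ci = R_i @ (X_w - p_i)."""
    def two_rows(z, R, p):
        P = np.hstack([R, (-R @ p).reshape(3, 1)])   # 3x4 projection (normalized coords)
        return np.vstack([z[0] * P[2] - P[0],
                          z[1] * P[2] - P[1]])
    A = np.vstack([two_rows(z1, R1, p1), two_rows(z2, R2, p2)])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

def refine_point(X0, observations, poses):
    """Refine the 3D point by minimizing the reprojection error over all
    n observations. The patent text uses an inverse-depth parameterization;
    a plain 3D point is used here for brevity."""
    def residuals(X):
        res = []
        for z, (R, p) in zip(observations, poses):
            Xc = R @ (X - p)
            res.extend([Xc[0] / Xc[2] - z[0], Xc[1] / Xc[2] - z[1]])
        return np.array(res)
    return least_squares(residuals, X0).x
```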
After triangulation of the feature point, the following EKF (Extended Kalman Filter) measurement equation is constructed:

$r^j = z^j - \hat{z}^j \approx H_x^j\,\delta x + H_f^j\,\delta p_{f_j} + n^j,$

where $H_x^j$ and $H_f^j$ are the Jacobians of the stacked measurements of feature $f_j$ with respect to the state vector and to the feature position, respectively ($s_i$ is the index, within the sliding window, of the frame corresponding to the $i$-th observation of feature $f_j$). Since the position of feature point $f_j$ is not estimated in the state variables, the term involving $\delta p_{f_j}$ can be marginalized by the method of left-multiplying with the left null space of $H_f^j$. Suppose the left null space of $H_f^j$ is $V$, satisfying $V^T H_f^j = 0$; left-multiplying both sides of the above formula by $V^T$ gives:

$r_o^j = V^T r^j \approx V^T H_x^j\,\delta x + V^T n^j = H_o^j\,\delta x + n_o^j.$
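The left null-space projection can be sketched as follows, assuming scipy's null_space helper (the left null space of $H_f^j$ is the null space of its transpose); dimensions and names are illustrative.

```python
import numpy as np
from scipy.linalg import null_space

def marginalize_feature(r_j, H_x_j, H_f_j):
    """Project the residual and state Jacobian onto the left null space of the
    feature Jacobian H_f_j, removing the dependence on the feature-position error."""
    V = null_space(H_f_j.T)   # columns of V span the left null space of H_f_j
    r_o = V.T @ r_j           # projected residual      r_o = V^T r_j
    H_o = V.T @ H_x_j         # projected state Jacobian H_o = V^T H_x_j
    return r_o, H_o
```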
Since the RANSAC step may not remove all outliers during the tracking process, a Mahalanobis gating test is applied to each residual $r_o^j$:

$\gamma_j = (r_o^j)^T \big(H_o^j P H_o^{j\,T} + R_o^j\big)^{-1} r_o^j,$

and the feature point is accepted only if $\gamma_j$ is below the corresponding chi-square threshold.
Stacking the residual equations of all the feature points that pass the Mahalanobis gating test gives:

$r_o = H_o\,\delta x + n_o,$

where $r_o$, $H_o$ and $n_o$ are formed by stacking the $r_o^j$, $H_o^j$ and $n_o^j$ of the accepted feature points, and $R_o$ is the covariance of the stacked measurement noise $n_o$.
A standard EKF update procedure is then performed:

$K = P_{l+1|l} H_o^T \big(H_o P_{l+1|l} H_o^T + R_o\big)^{-1}$

$P_{l+1|l+1} = (I - K H_o)\, P_{l+1|l}\, (I - K H_o)^T + K R_o K^T$

$\hat{x}_{l+1|l+1} = \hat{x}_{l+1|l} + K\, r_o$
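The gating test and the stacked EKF update above can be sketched as follows, assuming numpy and a chi-square threshold from scipy.stats; the additive state correction is a simplification, since attitude components would normally be updated multiplicatively on the manifold.

```python
import numpy as np
from scipy.stats import chi2

def gating_test(r_o, H_o, P, R_o, confidence=0.95):
    """Mahalanobis gating: accept the feature if the normalized residual is
    below the chi-square threshold for its dimension."""
    S = H_o @ P @ H_o.T + R_o
    gamma = float(r_o @ np.linalg.solve(S, r_o))
    return gamma < chi2.ppf(confidence, df=len(r_o))

def ekf_update(x, P, r_o, H_o, R_o):
    """Standard EKF update with Joseph-form covariance, matching the equations above."""
    S = H_o @ P @ H_o.T + R_o
    K = P @ H_o.T @ np.linalg.inv(S)
    I_KH = np.eye(P.shape[0]) - K @ H_o
    P_new = I_KH @ P @ I_KH.T + K @ R_o @ K.T
    x_new = x + K @ r_o      # simplified additive correction
    return x_new, P_new
```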
According to the positioning method for visual inertial data depth fusion provided by the embodiment of the invention, the accelerometer and gyroscope data read from the IMU are used to estimate the initial attitude of the system and complete its initialization; after initialization, the accelerometer and gyroscope measurements are used to propagate the state of the system; and after a new image is obtained, the coordinates of the feature points of the previous frame in the current frame are obtained and visual measurements are constructed for the update. In this way the various sensor data in the visual-inertial fusion system are fully utilized, the IMU data can be used effectively to improve tracking effect and efficiency, and the pose of the system can be accurately estimated in real time.
The positioning device for the visual inertial data depth fusion proposed by the embodiment of the invention is described next with reference to the accompanying drawings.
FIG. 3 is a schematic structural diagram of a positioning apparatus for depth fusion of visual inertial data according to an embodiment of the present invention.
As shown in fig. 3, the positioning device 10 for depth fusion of visual inertial data includes: estimation module 100, propagation module 200, culling module 300 and construction module 400.
The estimation module 100 is configured to place the system in a static state at an initial time, and collect measurement values of an accelerometer and a gyroscope in an initialization period to estimate an initial state of the system. The propagation module 200 is configured to propagate the system state according to the measurement value of the accelerometer and the measurement value of the gyroscope after obtaining the initial state, and update the covariance matrix of the system. The elimination module 300 is configured to track feature points by using an IMU-assisted outlier elimination method after an image is acquired. The building module 400 is configured to build a visual measurement according to the visual measurement information for the feature point with the tracking failure, and update the system state. The device 10 of the embodiment of the invention fully utilizes various sensor data in the visual inertial fusion system, can effectively utilize IMU data to improve the tracking effect and efficiency, and can accurately estimate the pose of the system in real time.
Further, in an embodiment of the present invention, the culling module 300 is further configured to adopt an IMU-assisted visual tracking algorithm and use the rotation between adjacent frames obtained by IMU integration to form the epipolar geometric constraint, so as to reduce the number of point pairs required to solve the epipolar constraint model and reduce the number of iterations.
Further, in an embodiment of the present invention, the visual measurement is constructed by calculating the three-dimensional position of the feature point using the pixel positions of the feature point in the images that satisfy a preset condition, and linearizing the visual measurement so as to marginalize out the error terms related to the feature point.
Further, in an embodiment of the present invention, the propagation module 200 is further configured to propagate the state position, velocity and attitude of the system using the measurements of the accelerometer and the measurements of the gyroscope, and construct a covariance matrix update equation according to the initial state of the system to update the covariance.
Further, in an embodiment of the present invention, the building module 400 is further configured to, for a feature point that fails to track, if the number of tracked frames is greater than or equal to a preset number of frames, estimate a position of the feature point that fails to track based on a least square method by using all of the past multiple observations and the estimated corresponding camera pose.
It should be noted that the explanation of the embodiment of the positioning method for depth fusion of visual inertial data is also applicable to the positioning device for depth fusion of visual inertial data of the embodiment, and is not repeated here.
According to the positioning device for visual inertial data depth fusion provided by the embodiment of the invention, the accelerometer and gyroscope data read from the IMU are used to estimate the initial attitude of the system and complete its initialization; after initialization, the accelerometer and gyroscope measurements are used to propagate the state of the system; and after a new image is obtained, the coordinates of the feature points of the previous frame in the current frame are obtained and visual measurements are constructed for the update. In this way the various sensor data in the visual-inertial fusion system are fully utilized, the IMU data can be used effectively to improve tracking effect and efficiency, and the pose of the system can be accurately estimated in real time.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (2)

1. A positioning method for visual inertial data depth fusion is characterized by comprising the following steps:
step S1: placing the system in a static state at an initial moment, and collecting a measurement value of an accelerometer and a measurement value of a gyroscope in an initialization time period to estimate the initial state of the system;
step S2: after obtaining the initial state, propagating the system state according to the measurement value of the accelerometer and the measurement value of the gyroscope, and updating a covariance matrix of the system, where the step S2 further includes: propagating the state position, velocity and attitude of the system using the accelerometer measurements and the gyroscope measurements, and constructing a covariance matrix update equation from the initial state of the system to update covariance;
step S3: after the image is acquired, tracking the feature points by using an IMU-assisted outlier rejection method, where the step S3 further includes: adopting an IMU-assisted visual tracking algorithm and using the rotation between adjacent frames obtained by IMU integration to form the epipolar geometric constraint, so as to reduce the number of point pairs required for solving the epipolar geometric constraint model and reduce the number of iterations; and
step S4: for a feature point whose tracking has failed, constructing a visual measurement according to the visual measurement information and updating the system state, wherein in step S4 the visual measurement is constructed by calculating the three-dimensional position of the feature point using the pixel positions of the feature point in the images that satisfy a preset condition, and linearizing the visual measurement so as to marginalize out the error terms related to the feature point; the step S4 further includes: for the feature point whose tracking has failed, if the number of frames over which it was tracked is greater than or equal to a preset number of frames, estimating its position in a least-squares manner using all of its past observations and the corresponding estimated camera poses.
2. A visual inertial data depth fusion positioning device, comprising:
an estimation module, configured to place the system in a static state at an initial moment and to collect accelerometer measurements and gyroscope measurements over an initialization period so as to estimate the initial state of the system;
the propagation module is used for propagating the state of the system according to the measurement value of the accelerometer and the measurement value of the gyroscope after the initial state is obtained, and updating the covariance matrix of the system, and the propagation module is further used for propagating the state position, the speed and the attitude of the system according to the measurement value of the accelerometer and the measurement value of the gyroscope, and constructing a covariance matrix updating equation according to the initial state of the system so as to update the covariance;
an elimination module, configured to track the feature points by using an IMU (inertial measurement unit) assisted outlier rejection method after the image is acquired, and further configured to adopt an IMU-assisted visual tracking algorithm and use the rotation between adjacent frames obtained by IMU integration to form the epipolar geometric constraint, so as to reduce the number of point pairs required for solving the epipolar geometric constraint model and reduce the number of iterations; and
a construction module, configured to construct a visual measurement according to the visual measurement information for a feature point whose tracking has failed and to update the system state, wherein the visual measurement is constructed by calculating the three-dimensional position of the feature point using the pixel positions of the feature point in the images that satisfy a preset condition and linearizing the visual measurement so as to marginalize out the error terms related to the feature point; and the construction module is further configured to, for a feature point whose tracking has failed, if the number of frames over which it was tracked is greater than or equal to a preset number of frames, estimate its position in a least-squares manner using all of its past observations and the corresponding estimated camera poses.
CN201810860266.1A 2018-08-01 2018-08-01 Positioning method and device for visual inertial data depth fusion Active CN109238277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810860266.1A CN109238277B (en) 2018-08-01 2018-08-01 Positioning method and device for visual inertial data depth fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810860266.1A CN109238277B (en) 2018-08-01 2018-08-01 Positioning method and device for visual inertial data depth fusion

Publications (2)

Publication Number Publication Date
CN109238277A CN109238277A (en) 2019-01-18
CN109238277B true CN109238277B (en) 2020-10-27

Family

ID=65073358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810860266.1A Active CN109238277B (en) 2018-08-01 2018-08-01 Positioning method and device for visual inertial data depth fusion

Country Status (1)

Country Link
CN (1) CN109238277B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112798010B (en) * 2019-11-13 2023-05-09 北京三快在线科技有限公司 Initializing method and device of VIO system of visual inertial odometer
CN111288989B (en) * 2020-02-25 2021-11-05 浙江大学 Visual positioning method for small unmanned aerial vehicle
CN113159197A (en) * 2021-04-26 2021-07-23 北京华捷艾米科技有限公司 Pure rotation motion state judgment method and device
CN113503872B (en) * 2021-06-03 2024-04-12 浙江工业大学 Low-speed unmanned aerial vehicle positioning method based on fusion of camera and consumption-level IMU
CN114022556A (en) * 2021-11-16 2022-02-08 浙江商汤科技开发有限公司 Positioning initialization method, device and computer readable storage medium
CN114563018B (en) * 2022-03-04 2024-06-25 闪耀现实(无锡)科技有限公司 Method and apparatus for calibrating head mounted display device
CN117128865B (en) * 2023-07-13 2024-07-02 清华大学 Computer vision structure displacement measuring method, system, terminal and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8855929B2 (en) * 2010-01-18 2014-10-07 Qualcomm Incorporated Using object to align and calibrate inertial navigation system
KR101208448B1 (en) * 2010-12-22 2012-12-05 국방과학연구소 Aided navigation apparatus using 3 dimensional image and inertial navigation system using the same
CN102768042B (en) * 2012-07-11 2015-06-24 清华大学 Visual-inertial combined navigation method
CN105444766B (en) * 2015-12-16 2018-04-10 清华大学 Indoor navigation method based on deep learning
CN106708066B (en) * 2015-12-20 2019-07-26 中国电子科技集团公司第二十研究所 View-based access control model/inertial navigation unmanned plane independent landing method
CN107687850B (en) * 2017-07-26 2021-04-23 哈尔滨工业大学深圳研究生院 Unmanned aerial vehicle pose estimation method based on vision and inertia measurement unit

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Motion state estimation method for a miniature unmanned aerial vehicle based on binocular vision and inertial devices; Li Jian et al.; Acta Aeronautica et Astronautica Sinica; 31 Dec. 2011; Vol. 32, No. 12; pp. 2310-2316 *

Also Published As

Publication number Publication date
CN109238277A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109238277B (en) Positioning method and device for visual inertial data depth fusion
US10295365B2 (en) State estimation for aerial vehicles using multi-sensor fusion
CN109520497B (en) Unmanned aerial vehicle autonomous positioning method based on vision and imu
EP3158412B1 (en) Sensor fusion using inertial and image sensors
EP3158293B1 (en) Sensor fusion using inertial and image sensors
Weiss et al. Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments
CN111792034B (en) Method and system for estimating state information of movable object using sensor fusion
CN107850899B (en) Sensor fusion using inertial and image sensors
Shen et al. Optical Flow Sensor/INS/Magnetometer Integrated Navigation System for MAV in GPS‐Denied Environment
CN106030430A (en) Multi-sensor fusion for robust autonomous filght in indoor and outdoor environments with a rotorcraft micro-aerial vehicle (MAV)
Chambers et al. Robust multi-sensor fusion for micro aerial vehicle navigation in GPS-degraded/denied environments
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN108846857A (en) The measurement method and visual odometry of visual odometry
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
Mercado et al. Gps/ins/optic flow data fusion for position and velocity estimation
CN111623773B (en) Target positioning method and device based on fisheye vision and inertial measurement
Popov et al. Optical flow and inertial navigation system fusion in the UAV navigation
CN117710476A (en) Monocular vision-based unmanned aerial vehicle pose estimation and dense mapping method
CN111811421B (en) High-speed real-time deformation monitoring method and system
CN113465596A (en) Four-rotor unmanned aerial vehicle positioning method based on multi-sensor fusion
Meier et al. Detection and characterization of moving objects with aerial vehicles using inertial-optical flow
Hosen et al. A vision-aided nonlinear observer for fixed-wing UAV navigation
Li et al. Real-time indoor navigation of uav based on visual delay compensation
Deng et al. Measurement model and observability analysis for optical flow-aided inertial navigation
Zhou et al. Adaptive noise identification in vision-assisted motion estimation for unmanned aerial vehicles

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant