CN113670327A - Visual inertial odometer initialization method, device, equipment and storage medium - Google Patents

Visual inertial odometer initialization method, device, equipment and storage medium

Info

Publication number
CN113670327A
CN113670327A (application CN202110917529.XA)
Authority
CN
China
Prior art keywords
data
initialization
visual
inertial odometer
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110917529.XA
Other languages
Chinese (zh)
Inventor
赖东东
谭明朗
谢亮
付伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Insta360 Innovation Technology Co Ltd
Original Assignee
Insta360 Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Insta360 Innovation Technology Co Ltd filed Critical Insta360 Innovation Technology Co Ltd
Priority to CN202110917529.XA priority Critical patent/CN113670327A/en
Publication of CN113670327A publication Critical patent/CN113670327A/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The application relates to a visual inertial odometer initialization method, a device, computer equipment and a storage medium. The method begins initialization of the visual inertial odometer in dynamic mode and, during the dynamic initialization, obtains still detection parameters of the image acquisition device carrying the visual inertial odometer to determine whether the conditions for static initialization are met. When the still detection parameters indicate that the image acquisition device is at rest, i.e. the static initialization conditions are met, the dynamic initialization is terminated, so that the faster static initialization can be performed on the visual inertial odometer.

Description

Visual inertial odometer initialization method, device, equipment and storage medium
Technical Field
The present application relates to the field of computers, and in particular, to a method and an apparatus for initializing a visual inertial odometer, a computer device, and a storage medium.
Background
A visual-inertial odometer (VIO), sometimes called a visual-inertial system, fuses data from an image acquisition device and an inertial measurement unit (IMU) to achieve simultaneous localization and mapping. After an IMU is added to a visual odometer, its measurements are inserted as constraints between the camera poses solved from adjacent key frames; since inertial navigation can output various navigation parameters, more detailed motion information becomes available for the camera at every frame. Current visual-inertial odometers fall into two broad categories: loosely coupled and tightly coupled. In loose coupling, the camera performs motion estimation while the IMU independently performs motion calculation, and the two results are then fused; in tight coupling, the states of the camera and the IMU are combined to jointly construct motion and observation equations, and motion estimation is performed on the combined state. A tightly coupled visual-inertial odometer is a highly nonlinear system. Because monocular vision provides no metric scale, and the rotation of the image acquisition device about the gravity direction is unobservable in a purely visual simultaneous localization and mapping system, good initial values for the monocular scale and the gravity vector must be estimated when the visual and IMU data are fused.
In practical applications, the visual inertial odometer often needs to be initialized during movement; for example, it may lose tracking and need to be restarted while the image acquisition device is moving.
Meanwhile, in practical applications, initialization of the visual inertial odometer is generally handled entirely in dynamic initialization mode, but dynamic initialization is slow and computationally expensive, which limits the initialization efficiency of the visual inertial odometer system.
Disclosure of Invention
In view of the above, it is desirable to provide a visual inertial odometer initialization method, device, computer device and storage medium capable of effectively improving the initialization efficiency of the visual inertial odometer.
A visual inertial odometer initialization method, the method comprising:
acquiring an initialization request corresponding to a visual inertial odometer, wherein the visual inertial odometer is loaded on image acquisition equipment;
dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer.
In one embodiment, the still detection parameters include pixel data of feature points in an image frame acquired by an image acquisition device;
the method further comprises the following steps:
in the dynamic initialization process, identifying matching feature point pairs of a latest image frame and an adjacent image frame acquired by the image acquisition equipment, wherein the adjacent image frame is a last image frame of the latest image frame;
if the pixel difference between the feature points of a matched feature point pair is smaller than a preset pixel difference threshold, identifying the matched feature point pair as a similar matched feature point pair, and acquiring the number of similar matched feature point pairs;
determining the proportion of similar matched feature point pairs among the matched feature point pairs according to the number of similar matched feature point pairs and the number of matched feature point pairs;
and when the number ratio is larger than a preset ratio threshold value, determining that the static detection parameters represent that the image acquisition equipment is in a static state.
In one embodiment, the still detection parameters comprise accelerometer measurement values and gyroscope measurement values corresponding to image frames acquired by an image acquisition device;
the method further comprises the following steps:
in the dynamic initialization process, identifying first variance data of the accelerometer measurement values and second variance data of the gyroscope measurement values between the latest image frame acquired by the image acquisition equipment and an adjacent image frame, wherein the adjacent image frame is a previous image frame of the latest image frame;
and when the first variance data is smaller than a preset first measured value variance threshold value and the second variance data is smaller than a preset second measured value variance threshold value, determining that the static detection parameter represents that the image acquisition equipment is in a static state.
In one embodiment, the statically initializing the visual inertial odometer comprises:
acquiring gravity acceleration data corresponding to the image acquisition equipment;
acquiring attitude data and gravity vector direction data according to the gravity acceleration data, and acquiring gyroscope measured value data measured by a gyroscope;
acquiring gyroscope offset data according to the gyroscope measurement value data, and setting position data and speed data corresponding to the visual inertial odometer to zero;
statically initializing the visual inertial odometer according to the gravity vector direction data, the gyroscope bias data, the attitude data, the position data, and the velocity data.
In one embodiment, the dynamically initializing the visual inertial odometer according to the initialization request comprises:
acquiring a relative rotation amount and a relative translation amount corresponding to a latest image frame and an adjacent image frame, and performing pre-integration processing on IMU data between the latest image frame and the adjacent image frame to acquire a pre-integration result;
acquiring gyroscope bias data corresponding to the visual inertial odometer through rotation constraint based on the relative rotation amount and the pre-integration result;
aligning visual data and IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result to acquire gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer;
dynamically initializing the visual inertial odometer based on the gyroscope bias data, the gravity direction data, the velocity data, the position data, the attitude data, and the scale data.
In one embodiment, the aligning the vision data and the IMU data according to the relative rotation amount, the relative translation amount, and the pre-integration result, and acquiring the gravity direction data, the speed data, the position data, the attitude data, and the scale data corresponding to the vision inertial odometer includes:
and aligning visual data and IMU data based on linear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer.
In one embodiment, the aligning the vision data and the IMU data according to the relative rotation amount, the relative translation amount, and the pre-integration result, and acquiring the gravity direction data, the speed data, the position data, the attitude data, and the scale data corresponding to the vision inertial odometer includes:
and aligning visual data and IMU data based on nonlinear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer.
A visual inertial odometer initialization device, the device comprising:
the device comprises a request acquisition module, a display module and a display module, wherein the request acquisition module is used for acquiring an initialization request corresponding to a visual inertial odometer, and the visual inertial odometer is loaded on image acquisition equipment;
the static initialization module is used for dynamically initializing the visual inertial odometer according to the initialization request and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and the dynamic initialization module is used for terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer when the static detection parameters represent that the state of the image acquisition equipment is a static state.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring an initialization request corresponding to a visual inertial odometer, wherein the visual inertial odometer is loaded on image acquisition equipment;
dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring an initialization request corresponding to a visual inertial odometer, wherein the visual inertial odometer is loaded on image acquisition equipment;
dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer.
With the visual inertial odometer initialization method, device, computer device and storage medium above, an initialization request corresponding to the visual inertial odometer is acquired; the visual inertial odometer is dynamically initialized according to the initialization request, and still detection parameters of the image acquisition device are acquired during the dynamic initialization; when the still detection parameters represent that the image acquisition device is in a static state, the dynamic initialization is terminated and the visual inertial odometer is statically initialized. Dynamic initialization is terminated as soon as the still detection parameters indicate that the device is at rest, i.e. the static initialization conditions are met, so that the faster static initialization can be applied to the visual inertial odometer.
Drawings
FIG. 1 is a schematic flow chart diagram of a visual inertial odometer initialization method in one embodiment;
FIG. 2 is a flowchart illustrating the steps of identifying whether the image capture device is in a quiescent state, in one embodiment;
FIG. 3 is a flow diagram illustrating the static initialization step in one embodiment;
FIG. 4 is a flow diagram illustrating the dynamic initialization step in one embodiment;
FIG. 5 is a block diagram of the visual inertial odometer initialization device in one embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, a visual inertial odometer initialization method is provided, and this embodiment is illustrated by applying the method to a terminal, and it is to be understood that the method may also be applied to a server, and may also be applied to a system including a terminal and a server, and is implemented by interaction between the terminal and the server. In this embodiment, the terminal may specifically be a processor on an image acquisition device, and the method includes the following steps:
and 102, acquiring an initialization request corresponding to the visual inertial odometer, wherein the visual inertial odometer is loaded on the image acquisition equipment.
The visual inertial odometer, also called VIO, fuses data from the image acquisition device and the IMU to achieve simultaneous localization and mapping. The initialization request asks the terminal's processor to initialize the visual inertial odometer. The quantities that need to be estimated during initialization are: the scale, the gravity vector direction, the gyroscope bias, and the pose and speed of each frame.
Specifically, the method mainly solves the initialization problem of the visual inertial odometer by automatically detecting and switching between static and dynamic initialization. Before initializing the visual inertial odometer, it is determined whether an initialization request corresponding to the visual inertial odometer has been received. The initialization request may, for example, be sent to the processor when the visual inertial odometer loses tracking, and the processor then performs the initialization process of the visual inertial odometer based on that request.
Step 104, dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process.
Here, dynamic initialization refers to initialization of the visual inertial odometer completed while the image acquisition device is moving, and the still detection parameter is used to determine whether the image acquisition device is currently in a static state or a moving state. Dynamic initialization can run while the device moves, but its initialization speed is slow, generally requiring two to three seconds.
Specifically, when initializing the visual inertial odometer, the initialization operation may be performed by default in a dynamic initialization manner, and in this process, the still detection parameters corresponding to the image capturing device may be synchronously obtained to determine whether the current state of the image capturing device is in a still state or a moving state.
Step 106, when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and performing static initialization on the visual inertial odometer.
In the process of dynamic initialization, if the still detection parameters indicate that the image acquisition device is in a static state, static initialization becomes possible; at this point, to improve the initialization efficiency of the visual inertial odometer, the dynamic initialization can be terminated directly and static initialization performed instead. The advantage of static initialization is speed: it can be completed with only one frame. Its requirement is that the image acquisition device remain at rest throughout the initialization, which is why still detection is performed via the still detection parameters during dynamic initialization, and why dynamic initialization is terminated only after the device enters a static state. In this way, initialization of the visual inertial odometer can be completed in a very short time, improving the efficiency of the initialization process.
In the visual inertial odometer initialization method above, an initialization request corresponding to the visual inertial odometer is acquired; the visual inertial odometer is dynamically initialized according to the initialization request, and still detection parameters of the image acquisition equipment are acquired during the dynamic initialization; when the still detection parameters represent that the image acquisition equipment is in a static state, the dynamic initialization is terminated and the visual inertial odometer is statically initialized. Because dynamic initialization runs first and the still detection parameters are monitored throughout, the dynamic initialization is terminated as soon as the static initialization conditions are met, allowing the faster static initialization to be applied to the visual inertial odometer.
In one embodiment, the still detection parameters include pixel data for feature points in an image frame acquired by the image acquisition device.
As shown in fig. 2, the method further comprises:
step 201, in the process of dynamic initialization, identifying matching feature point pairs between the latest image frame acquired by the image acquisition device and an adjacent image frame, where the adjacent image frame is a previous image frame of the latest image frame.
In step 203, if the pixel difference between the feature points of the matched feature point pair is smaller than the preset pixel difference threshold, identifying the pixel matched feature point pair as a similar matched feature point pair, and obtaining the number of the similar matched feature point pairs.
And step 205, determining the number ratio of the similar matching characteristic point pairs in the matching characteristic point pairs according to the number of the similar matching characteristic point pairs and the number of the matching characteristic point pairs.
And step 207, when the number proportion is larger than a preset proportion threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
The pixel difference represents whether the pixel position of a feature point has changed, so whether the image acquisition device is static can be determined from how well two consecutive frames captured by it match. First, during dynamic initialization, the matched feature point pairs between the latest image frame and its adjacent frame are identified; the pairs whose pixel difference is smaller than the preset pixel difference threshold are marked as similar matched feature point pairs, and their number is counted. The proportion of similar matched pairs among all matched pairs is then computed from the two counts, and when this proportion exceeds the preset proportion threshold, the still detection parameter is taken to indicate that the image acquisition device is static. In other words, the method checks whether the pixel positions of the matchable feature points change between adjacent frames; when the pixel positions of most matched feature points are essentially unchanged, the image acquisition device can be judged to be static. In a specific embodiment, when 90% of the matched feature points have pixel differences smaller than the preset pixel difference threshold, the image acquisition device may be determined to be in a static state.
In this embodiment, whether the image acquisition device is static is determined by comparing the pixel data of feature points in consecutive frames, which effectively ensures the accuracy of still detection and thus the accuracy of the visual inertial odometer initialization.
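The feature-point check described above can be sketched in a few lines. This is an illustrative sketch, not the patent's implementation: the function name `is_static_by_pixels`, the default thresholds, and the (N, 2) matched-coordinate layout are assumptions.

```python
import numpy as np

def is_static_by_pixels(pts_prev, pts_curr, pixel_thresh=1.0, ratio_thresh=0.9):
    """Return True if the camera appears static between two frames.

    pts_prev, pts_curr: (N, 2) arrays holding the pixel coordinates of the
    N matched feature point pairs in the adjacent (previous) frame and the
    latest frame, row i of each array being one matched pair.
    """
    pts_prev = np.asarray(pts_prev, dtype=float)
    pts_curr = np.asarray(pts_curr, dtype=float)
    if len(pts_prev) == 0:
        return False                                    # nothing matched: cannot claim stillness
    diffs = np.linalg.norm(pts_curr - pts_prev, axis=1)  # per-pair pixel displacement
    similar = np.count_nonzero(diffs < pixel_thresh)     # "similar matched feature point pairs"
    return similar / len(diffs) > ratio_thresh           # proportion vs. preset threshold
```

With the 0.9 default ratio this reproduces the "90% of matched feature points essentially unchanged" rule from the embodiment.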
In one embodiment, the method further comprises: in the dynamic initialization process, identifying first variance data of the accelerometer measurement values and second variance data of the gyroscope measurement values between the latest image frame acquired by the image acquisition equipment and an adjacent image frame, wherein the adjacent image frame is a previous image frame of the latest image frame; and when the first variance data is smaller than a preset first measurement variance threshold and the second variance data is smaller than a preset second measurement variance threshold, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
The image acquisition equipment comprises an accelerometer and a gyroscope which are respectively used for acquiring acceleration sensing data and angular velocity sensing data of the image acquisition equipment.
Specifically, when still detection is required, whether the device was static between two consecutive frames can be determined from the first variance data of the accelerometer measurements and the second variance data of the gyroscope measurements collected between the latest image frame and its adjacent frame. When the first variance data is below the preset first measurement variance threshold and the second variance data is below the preset second measurement variance threshold, no motion occurred between the two frames, and the still detection parameter is determined to represent that the image acquisition device is in a static state. In this embodiment, comparing the sensor data associated with consecutive frames effectively ensures the accuracy of still detection, and thus the accuracy of the visual inertial odometer initialization.
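A minimal sketch of the variance test above; the function name, threshold values, and per-axis-maximum reduction are assumptions for illustration, not values taken from the patent.

```python
import numpy as np

def is_static_by_imu(accel, gyro, accel_var_thresh=0.05, gyro_var_thresh=0.005):
    """Decide stillness from raw IMU samples collected between two frames.

    accel: (N, 3) accelerometer readings (m/s^2) between the two frames.
    gyro:  (N, 3) gyroscope readings (rad/s) over the same interval.
    Both variance statistics must fall below their preset thresholds.
    """
    accel = np.asarray(accel, dtype=float)
    gyro = np.asarray(gyro, dtype=float)
    first_variance = np.var(accel, axis=0).max()   # "first variance data"
    second_variance = np.var(gyro, axis=0).max()   # "second variance data"
    return bool(first_variance < accel_var_thresh and
                second_variance < gyro_var_thresh)
```

At rest the accelerometer reads a nearly constant gravity vector and the gyroscope reads nearly constant bias, so both variances collapse toward zero.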
In one embodiment, as shown in fig. 3, the static initialization of the visual inertial odometer comprises:
Step 302, acquiring gravity acceleration data corresponding to the image acquisition device.
Step 304, acquiring attitude data and gravity vector direction data from the gravity acceleration data, and acquiring gyroscope measurement data measured by the gyroscope.
Step 306, acquiring gyroscope bias data from the gyroscope measurement data, and setting the position data and speed data corresponding to the visual inertial odometer to zero.
Step 308, statically initializing the visual inertial odometer according to the gravity vector direction data, the gyroscope bias data, the attitude data, the position data and the speed data.
Specifically, completing initialization of the visual inertial odometer requires the scale, the gravity vector direction, the gyroscope bias, and the pose and speed of each frame. Static initialization uses only a single image frame, so only the position and speed of that one frame are needed. When the image acquisition device is at rest, its accelerometer measures the gravity vector direction directly, and the attitude of the device, i.e. its initial orientation quaternion q_0, is calculated from that measurement: the pitch angle and the roll angle are solved directly from the accelerometer, while the yaw angle can be set to 0. Because the accelerometer at rest measures the acceleration of gravity, which is vertically upward in the world coordinate system, the accelerometer measurement is the z-axis of the world coordinate system expressed in the IMU coordinate system. Normalizing the accelerometer measurement at rest,

a_hat = a / ||a|| = [a_x, a_y, a_z]^T,

there are:

roll = atan2(a_y, a_z), pitch = -arcsin(a_x), yaw = 0

and, according to the Euler angle definition, the rotation matrix and hence the quaternion corresponding to the attitude are obtained. The position is set to

p_0 = 0

and the speed to

v_0 = 0.

The scale, meanwhile, relates the relative position between two frames and the coordinates of 3D points; with only one frame there is no relative position, i.e. no scale exists to be estimated. The gyroscope bias b_g0 can be obtained from the gyroscope measurement data, specifically as the average of the gyroscope readings over a period at rest. The visual inertial odometer is then statically initialized according to the determined gravity vector direction data, gyroscope bias data, attitude data, position data and speed data. In this embodiment, static initialization is performed by first calculating the initialization-related data (gravity vector direction, gyroscope bias, attitude, position and speed), which effectively ensures the accuracy of the static initialization.
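The static-initialization arithmetic above can be sketched as follows. The function name and the assumption that the accelerometer reports the gravity direction with z up in the world frame are illustrative choices, not taken from the patent.

```python
import numpy as np

def static_init(accel_samples, gyro_samples):
    """Static initialization from one stationary window of IMU samples.

    accel_samples, gyro_samples: (N, 3) readings collected while at rest.
    Returns Euler angles (roll, pitch, yaw), gyroscope bias b_g0, and the
    zeroed initial position p0 and speed v0.
    """
    a = np.mean(np.asarray(accel_samples, dtype=float), axis=0)
    a = a / np.linalg.norm(a)              # normalized gravity direction: world z-axis in IMU frame
    ax, ay, az = a
    roll = np.arctan2(ay, az)              # roll  = atan2(a_y, a_z)
    pitch = -np.arcsin(ax)                 # pitch = -arcsin(a_x)
    yaw = 0.0                              # yaw is unobservable from gravity; set to 0
    b_g0 = np.mean(gyro_samples, axis=0)   # gyro bias = average reading while at rest
    p0 = np.zeros(3)                       # position set to zero
    v0 = np.zeros(3)                       # speed set to zero
    return roll, pitch, yaw, b_g0, p0, v0
```

A rotation matrix or quaternion can then be built from (roll, pitch, yaw) per the Euler angle definition, as described above.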
In one embodiment, as shown in fig. 4, dynamically initializing the visual odometer according to the initialization request includes:
step 401, obtaining a relative rotation amount and a relative translation amount corresponding to the latest image frame and the adjacent image frame, and performing pre-integration processing on the IMU data between the latest image frame and the adjacent image frame to obtain a pre-integration result.
And step 403, acquiring gyroscope bias data corresponding to the visual inertial odometer through rotation constraint based on the relative rotation amount and the pre-integration result.
And 405, aligning the vision data and the IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result to acquire gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer.
And step 407, dynamically initializing the visual inertial odometer according to the gyroscope bias data, the gravity direction data, the speed data, the position data, the attitude data and the scale data.
First, when dynamic initialization is performed, preprocessing is needed. The preprocessing includes generating the relative rotation amount and relative translation amount corresponding to the latest image frame and the adjacent image frame, and performing pre-integration processing on the IMU data between the latest image frame and the adjacent image frame to obtain a pre-integration result. To calculate the relative rotation q_{c0\_ci+1} and relative translation t_{c0\_ci+1} between image frames, a visual odometer or SLAM (simultaneous localization and mapping) can be run on the video collected by the image acquisition device, generating the relative rotations q_{c0\_ci+1} and relative translations t_{c0\_ci+1} corresponding to all image frames as well as a corresponding three-dimensional map for alignment with the IMU data. Then, pre-integration processing is performed on the IMU data to obtain a pre-integration result. The main aim of IMU pre-integration is to combine, by integration, the many IMU measurements between two image frames into a single measurement, and to propagate the state quantities and their covariance from the i-th image frame to the (i+1)-th image frame, so that constraint equations or residual equations can be constructed together with the poses of the visual SLAM. The IMU measures linear acceleration and angular velocity. The velocity in the state quantity is an integral of the acceleration, the position is a double integral of the acceleration, and the attitude is an integral of the angular velocity.
Therefore, the IMU measurements can be integrated in the current (IMU) coordinate system, and when needed, only a coordinate-system conversion plus additions and subtractions are required; this simplifies the integration operation and saves computation in the subsequent calculation process.
The main tasks of pre-integration are therefore:
propagating the state quantity pvq (position, velocity, rotation) from the image time of frame i to the image time of frame i+1, which yields the relative position, velocity and rotation between two adjacent image frames;
propagating the measurement noise of the state quantity pvq and its covariance from frame i to frame i+1;
propagating, from frame i to frame i+1, the Jacobian matrix of the state quantity pvq with respect to the IMU biases b_a, b_g: the Jacobian matrix linearizes the integral formula of the state quantity at the measured value, so that when the biases b_a, b_g change during optimization, the state quantities can be updated using the Jacobian matrix without a second integration, where b_a is the zero drift/bias/zero offset of the accelerometer in the IMU, and b_g is the zero drift/bias/zero offset of the gyroscope in the IMU;
providing the residual equation and the formula for the Jacobian matrix of the residual with respect to the state quantities: in optimization-based initialization, residual terms are constructed and the state quantities are updated using the Jacobian matrix.
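The propagation of the pvq state between two image frames can be sketched as follows; this is a simplified Euler-integration version that ignores biases, noise, covariance and Jacobian propagation (hypothetical helper names, not the patent's implementation):

```python
import math

def quat_mul(q, r):
    # Hamilton product of two quaternions (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q: q * (0, v) * q^-1.
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0,) + tuple(v)), qc)
    return (x, y, z)

def preintegrate(measurements, dt):
    """measurements: list of (accel, gyro) body-frame samples between two frames."""
    dp, dv, dq = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], (1.0, 0.0, 0.0, 0.0)
    for acc, gyr in measurements:
        a = quat_rotate(dq, acc)  # rotate accel into the frame-i body frame
        for k in range(3):
            dp[k] += dv[k] * dt + 0.5 * a[k] * dt * dt
            dv[k] += a[k] * dt
        # Integrate rotation with a small-angle quaternion.
        angle = math.sqrt(sum(g * g for g in gyr)) * dt
        if angle > 0.0:
            axis = [g * dt / angle for g in gyr]
            half = angle / 2.0
            dq = quat_mul(dq, (math.cos(half),
                               axis[0] * math.sin(half),
                               axis[1] * math.sin(half),
                               axis[2] * math.sin(half)))
    return tuple(dp), tuple(dv), dq
```

With a constant body-frame acceleration of 1 m/s² along x and no rotation for 1 s (100 samples at dt = 0.01), the relative velocity converges to 1 m/s and the relative position to 0.5 m, as expected from the integrals described above.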
After the preprocessing is completed, dynamic initialization can be performed. The gyroscope bias b_g can be estimated from the rotation constraint: given the rotation quaternions q_{c0\_ci}, q_{c0\_ci+1} of adjacent SLAM image frames in the camera coordinate system (the image acquisition device being a camera), the rotation q_{bi\_bi+1} between IMU coordinate systems (body coordinate systems) calculated by pre-integration, and the extrinsic rotation q_{b\_c} between camera and IMU, the objective function for the corrected gyroscope bias is:

\min_{\delta b_g} \sum_{i \in B} \left\| \left[ q_{c0\_bi+1}^{-1} \otimes q_{c0\_bi} \otimes q_{bi\_bi+1} \right]_{vec} \right\|^2

where [q]_{vec} denotes taking the imaginary part of the quaternion q, and

q_{c0\_bi} = q_{c0\_ci} \otimes q_{b\_c}^{-1}

q_{c0\_bi+1} = q_{c0\_ci+1} \otimes q_{b\_c}^{-1}
q_{bi\_bi+1} consists of an estimated value and the gyroscope bias correction \delta b_g; since \delta b_g is typically a small quantity, a Taylor expansion with first-order approximation in its vicinity gives:

q_{bi\_bi+1} \approx \hat{q}_{bi\_bi+1} \otimes \begin{bmatrix} 1 \\ \frac{1}{2} J^{q}_{b_g} \delta b_g \end{bmatrix}
Here B denotes all image frames in the sliding window, and

J^{q}_{b_g} = \frac{\partial q_{bi\_bi+1}}{\partial b_g}

is the Jacobian of the pre-integrated rotation with respect to the gyroscope bias. Since the relative rotation between adjacent image frames has already been calculated during the IMU pre-integration described above, this Jacobian with respect to \delta b_g can be determined directly.
The objective function attains its minimum when the quaternion product equals the unit quaternion (indicating no residual rotation), so it can be rewritten as:

q_{c0\_bi+1}^{-1} \otimes q_{c0\_bi} \otimes \hat{q}_{bi\_bi+1} \otimes \begin{bmatrix} 1 \\ \frac{1}{2} J^{q}_{b_g} \delta b_g \end{bmatrix} = \begin{bmatrix} 1 \\ \mathbf{0} \end{bmatrix}
Considering only the imaginary part, then there is:

J^{q}_{b_g} \delta b_g = 2 \left[ \hat{q}_{bi\_bi+1}^{-1} \otimes q_{c0\_bi}^{-1} \otimes q_{c0\_bi+1} \right]_{vec}

where [q]_{vec} again denotes taking the imaginary part of the quaternion q.
Converting the above equation into a positive definite (normal-equation) form gives:

\sum_{i \in B} (J^{q}_{b_g})^T J^{q}_{b_g} \, \delta b_g = \sum_{i \in B} 2 (J^{q}_{b_g})^T \left[ \hat{q}_{bi\_bi+1}^{-1} \otimes q_{c0\_bi}^{-1} \otimes q_{c0\_bi+1} \right]_{vec}
Solving this system of equations yields the bias b_g that minimizes the objective function, which completes the estimation of the gyroscope bias. The vision and IMU data are then aligned according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain the gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer. There are two alignment schemes: a vision and IMU data alignment scheme based on a system of linear equations, and a vision and IMU data alignment scheme based on nonlinear optimization. The IMU data can provide the visual inertial odometer with scale information and pose estimates during rapid movement, while the vision data can effectively solve the static drift problem of the IMU; aligning the two improves the accuracy and robustness of the SLAM system.
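The positive definite system above can be solved with a small normal-equations routine; the sketch below assumes, hypothetically, that each sliding-window entry supplies the 3×3 Jacobian J and the right-hand-side vector r = 2[q̂⁻¹⊗q_i⁻¹⊗q_{i+1}]_vec already computed during pre-integration:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system A x = b.
    A = [row[:] for row in A]
    b = b[:]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def estimate_gyro_bias(constraints):
    # Accumulate sum(J^T J) * delta_bg = sum(J^T r) over the sliding window,
    # then solve the resulting 3x3 positive definite system.
    H = [[0.0] * 3 for _ in range(3)]
    g = [0.0] * 3
    for J, r in constraints:
        for a in range(3):
            for b_ in range(3):
                H[a][b_] += sum(J[k][a] * J[k][b_] for k in range(3))
            g[a] += sum(J[k][a] * r[k] for k in range(3))
    return solve3(H, g)
```

In practice the per-frame Jacobians are anisotropic, but stacking them into the normal equations as above is exactly the structure of the positive definite system in the text.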
In one embodiment, the vision and IMU alignment scheme based on a system of linear equations has two steps. First, a) the gravity direction, the velocities and the initial value of the scale are estimated using the translation constraints.
The variables to be estimated are:

\chi = \left[ v^{b_0}_{b_0}, v^{b_1}_{b_1}, \ldots, v^{b_n}_{b_n}, g^{c0}, s \right]^T

where v^{b_i}_{b_i} is the velocity of the IMU coordinate system at time i, expressed in the IMU coordinate system at time i; g^{c0} is the representation of the gravity vector in the frame-0 camera coordinate system; and s is the scale factor of the visual SLAM relative to the absolute scale. Note: the inter-frame translations and landmark points computed by monocular vision carry no scale, which is why the scale factor s must be estimated.
Three equations of position, velocity and rotation constraints can be constructed using the relative relation between the IMU pre-integration and the visual SLAM. The rotation constraint was already used, equivalently, when solving the gyroscope bias; in the position and velocity constraints, the world coordinate system w is converted into the c0 system (the frame-0 camera coordinate system). Because the translations of the visual SLAM carry no scale information, every translation obtained from the visual SLAM is multiplied by the scale factor s. Moving the terms containing the quantities to be estimated to one side of the equation and the remaining terms to the other side yields a system of equations of the form H\chi = b, which is converted into a linear least-squares problem to solve the state quantities:

\min_{\chi} \left\| H\chi - b \right\|^2

Solving this yields the estimate of \chi; the quantities to be estimated include the velocities of all keyframes in the sliding window B, the components of the gravity vector in the c0 coordinate system, and the monocular scale factor s.
The coefficient matrix H and the vector b are assembled from all the position and velocity constraint equations in the sliding window B.
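As a toy illustration of what the least-squares alignment recovers, the sketch below solves only for the monocular scale factor s, given scale-free SLAM displacements and corresponding metric displacements derived from IMU pre-integration. This is a deliberate 1-D simplification of the full Hχ = b system, and all names are hypothetical:

```python
def estimate_scale(slam_disp, metric_disp):
    # Least-squares fit of metric ≈ s * slam over all displacement components:
    # the closed-form 1-D least-squares solution is s = sum(x*y) / sum(x*x).
    xs = [c for d in slam_disp for c in d]
    ys = [c for d in metric_disp for c in d]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

The real system solves jointly for velocities, gravity and s, but the normal-equations machinery is the same.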
Then, the second step, b): fine adjustment of the direction of the gravitational acceleration.
In the solution above, the gravity vector g^{c0} is solved as a three-dimensional vector with three degrees of freedom, whereas in practice the magnitude of gravity is fixed, i.e. about 9.81 m/s². The gravity vector therefore has only two degrees of freedom; a tangent basis can be constructed by Gram-Schmidt orthogonalization, and the vector re-parameterized on a sphere of radius 9.81. The re-parameterized gravity vector is:
\hat{g} = \|g\| \cdot \bar{\hat{g}} + w_1 b_1 + w_2 b_2

where w_1, w_2 are the variables to be optimized, \|g\| is the magnitude of the gravity vector (typically 9.81), \bar{\hat{g}} is the unit vector along the current gravity estimate, and b_1, b_2 are two unit vectors spanning the tangent plane orthogonal to \bar{\hat{g}}:

b_1 = \frac{\bar{\hat{g}} \times [1, 0, 0]^T}{\left\| \bar{\hat{g}} \times [1, 0, 0]^T \right\|}

b_2 = \bar{\hat{g}} \times b_1
The re-parameterized gravity vector is then substituted back into the system of equations H\chi = b from the first step, and the same least-squares method as in the first step is used to solve it, thereby realizing the fine adjustment of the gravitational acceleration direction. In this embodiment, the alignment of vision and IMU can be effectively realized through the linear-optimization-based vision and IMU alignment scheme, thereby ensuring the initialization accuracy of the dynamic initialization of the visual inertial odometer.
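The two-degree-of-freedom re-parameterization of gravity above can be sketched as follows; the tangent-basis construction shown is one common choice and is an assumption here, not quoted from the patent:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def tangent_basis(g_unit):
    # Pick a reference axis not parallel to gravity, then build two
    # orthonormal vectors spanning the tangent plane of the sphere.
    ref = (1.0, 0.0, 0.0) if abs(g_unit[0]) < 0.9 else (0.0, 1.0, 0.0)
    b1 = normalize(cross(g_unit, ref))
    b2 = cross(g_unit, b1)
    return b1, b2

def reparameterized_gravity(g_est, w1, w2, G=9.81):
    # g = ||g|| * g_unit + w1*b1 + w2*b2, with w1, w2 the variables to optimize.
    g_unit = normalize(g_est)
    b1, b2 = tangent_basis(g_unit)
    return tuple(G * g_unit[i] + w1 * b1[i] + w2 * b2[i] for i in range(3))
```

With w1 = w2 = 0 the result is simply the current gravity direction rescaled to magnitude 9.81; optimizing w1, w2 perturbs the direction within the tangent plane without changing the magnitude to first order.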
In yet another embodiment, the gravity direction data, velocity data, position data, attitude data and scale data corresponding to the visual inertial odometer may also be obtained by using a vision and IMU alignment scheme based on nonlinear optimization.
This method is a maximum a posteriori estimation of the IMU parameters, with the aim of obtaining their optimal estimate.
In the preprocessing stage, the data obtained after the visual SLAM runs stably contains the rotations q_{c0\_ci+1} and translations t_{c0\_ci+1} of a number of keyframes together with a corresponding three-dimensional map, as well as the pre-integration information of the IMU measurements between these keyframes. Corresponding state vectors can be constructed from this information, so as to construct an optimization problem that solves for the optimal IMU parameter estimates. The optimization variables here are the IMU biases, the scale, the gravity-direction correction and the keyframe velocities:

\chi = \left[ b_a, b_g, s, \delta g^{c0}, v^{b_0}_{b_0}, \ldots, v^{b_n}_{b_n} \right]

\delta g^{c0} = [\delta\alpha, \delta\beta, \delta\gamma]
the Inertial residual equation for this step is the residual of pvq built in IMU pre-integration, below rotation
Figure BDA0003206185880000135
Speed of rotation
Figure BDA0003206185880000136
And position
Figure BDA0003206185880000137
Residual terms of constraint construction:
Figure BDA0003206185880000141
Figure BDA0003206185880000142
Figure BDA0003206185880000143
where s represents the scale factor of the visual SLAM relative to the absolute scale, and \bar{p} denotes the scale-free position quantity from the SLAM.
The difference between this residual equation and the residual in the IMU pre-integration is that the position and velocity terms are multiplied by the scale s, and the scale is optimized as an explicit optimization variable; when the scale is an explicit optimization variable, convergence is much faster than when it is optimized implicitly. Meanwhile, the value g^{c0} of the gravity vector in the c0 coordinate system needs to be estimated: an initial value g_{init} is estimated by averaging all the gravity vector measurements in the sliding window, and the error \delta g^{c0} relative to this initial value is estimated in the optimization. The Jacobian formulas of the residuals with respect to the optimization variables are also easily obtained from the above residual equations. An optimization problem is then constructed with an optimization library such as Ceres or g2o and solved to obtain the initialized gravity direction data, speed data, position data, attitude data and scale data of the visual inertial odometer. In this embodiment, the vision and IMU alignment can be effectively realized through the nonlinear-optimization-based vision and IMU alignment scheme, thereby ensuring the initialization accuracy of the dynamic initialization of the visual inertial odometer.
It should be understood that although the various steps in the flow charts of figs. 1-4 are shown in an order indicated by the arrows, the steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 1-4 may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a visual inertial odometer initializing device including:
the request obtaining module 502 is configured to obtain an initialization request corresponding to a visual inertial odometer, where the visual inertial odometer is loaded on an image acquisition device.
The dynamic initialization module 504 is configured to perform dynamic initialization on the visual inertial odometer according to the initialization request, and obtain a static detection parameter of the image acquisition device during the dynamic initialization.
And the static initialization module 506 is configured to terminate the dynamic initialization and perform static initialization on the visual inertial odometer when the static detection parameter indicates that the state of the image capturing device is a static state.
In one embodiment, the still detection parameters include pixel data of feature points in an image frame acquired by the image acquisition device; the apparatus also includes a stationary identification module to: in the dynamic initialization process, identifying matching characteristic point pairs of a latest image frame acquired by image acquisition equipment and an adjacent image frame, wherein the adjacent image frame is a previous image frame of the latest image frame; if the pixel difference between the characteristic points of the matched characteristic point pairs is smaller than a preset pixel difference threshold value, identifying the pixel matched characteristic point pairs as similar matched characteristic point pairs, and acquiring the number of the similar matched characteristic point pairs; determining the number ratio of the similar matching characteristic point pairs in the matching characteristic point pairs according to the number of the similar matching characteristic point pairs and the number of the matching characteristic point pairs; and when the number proportion is larger than a preset proportion threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
In one embodiment, the static identification module is further configured to: in the dynamic initialization process, identifying a latest image frame acquired by image acquisition equipment, first variance data corresponding to a measurement value of an accelerometer in an adjacent image frame, and second variance data corresponding to a measurement value of a gyroscope, wherein the adjacent image frame is a previous image frame of the latest image frame; and when the first variance data is smaller than a preset first measured value variance threshold value and the second variance data is smaller than a preset second measured value variance threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
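The two stationarity checks described by these modules (feature-point pixel differences between the latest and adjacent frames, and accelerometer/gyroscope measurement variances) can be sketched as follows; the threshold values and function names are illustrative assumptions, not taken from the patent:

```python
def is_static_visual(pixel_diffs, diff_thresh=1.0, ratio_thresh=0.9):
    # pixel_diffs: pixel distance for each matched feature pair between the
    # latest image frame and the adjacent (previous) image frame.
    if not pixel_diffs:
        return False
    similar = sum(1 for d in pixel_diffs if d < diff_thresh)
    # Static if the proportion of "similar" matched pairs exceeds the threshold.
    return similar / len(pixel_diffs) > ratio_thresh

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

def is_static_imu(accel_norms, gyro_norms, var_a_thresh=0.01, var_g_thresh=0.001):
    # Static when both measurement variances fall below their thresholds.
    return (variance(accel_norms) < var_a_thresh
            and variance(gyro_norms) < var_g_thresh)
```

Either check (or a conjunction of both) can serve as the static detection parameter that triggers the switch from dynamic to static initialization.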
In one embodiment, the static initialization module 506 is specifically configured to: acquiring gravity acceleration data corresponding to the image acquisition equipment; acquiring attitude data and gravity vector direction data according to the gravity acceleration data, and acquiring gyroscope measured value data measured by a gyroscope; acquiring gyroscope offset data according to the gyroscope measurement value data, and setting position data and speed data corresponding to the visual inertial odometer to zero; and statically initializing the visual inertial odometer according to the gravity vector direction data, the gyroscope bias data, the attitude data, the position data and the speed data.
In one embodiment, the dynamic initialization module 504 is specifically configured to: acquiring a relative rotation amount and a relative translation amount corresponding to the latest image frame and the adjacent image frame, and performing pre-integration processing on IMU data between the latest image frame and the adjacent image frame to acquire a pre-integration result; acquiring gyroscope bias data corresponding to the visual inertial odometer through rotation constraint based on the relative rotation amount and the pre-integration result; aligning the vision and IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain gravity direction data, speed data, position data, attitude data and scale data corresponding to the vision inertial odometer; and dynamically initializing the visual inertial odometer according to the gyroscope bias data, the gravity direction data, the speed data, the position data, the attitude data and the scale data.
In one embodiment, the dynamic initialization module 504 is further configured to: and aligning the vision data and the IMU data based on linear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain gravity direction data, speed data, position data, posture data and scale data corresponding to the vision inertial odometer.
In one embodiment, the dynamic initialization module 504 is further configured to: and aligning the vision data and the IMU data based on nonlinear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the vision inertial odometer.
For specific limitations of the initialization device of the visual inertial odometer, reference may be made to the above limitations of the initialization method of the visual inertial odometer, and details thereof are not repeated here. The various modules in the visual inertial odometer initialization apparatus described above may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 6. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a visual inertial odometer initialization method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the shell of the computer device, or an external keyboard, touch pad or mouse, and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 6 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring an initialization request corresponding to a visual inertial odometer, wherein the visual inertial odometer is loaded on image acquisition equipment;
dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer.
In one embodiment, the processor, when executing the computer program, further performs the steps of: in the dynamic initialization process, identifying matching characteristic point pairs of a latest image frame acquired by image acquisition equipment and an adjacent image frame, wherein the adjacent image frame is a previous image frame of the latest image frame; if the pixel difference between the characteristic points of the matched characteristic point pairs is smaller than a preset pixel difference threshold value, identifying the pixel matched characteristic point pairs as similar matched characteristic point pairs, and acquiring the number of the similar matched characteristic point pairs; determining the number ratio of the similar matching characteristic point pairs in the matching characteristic point pairs according to the number of the similar matching characteristic point pairs and the number of the matching characteristic point pairs; and when the number proportion is larger than a preset proportion threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
In one embodiment, the processor, when executing the computer program, further performs the steps of: in the dynamic initialization process, identifying a latest image frame acquired by image acquisition equipment, first variance data corresponding to a measurement value of an accelerometer in an adjacent image frame, and second variance data corresponding to a measurement value of a gyroscope, wherein the adjacent image frame is a previous image frame of the latest image frame; and when the first variance data is smaller than a preset first measured value variance threshold value and the second variance data is smaller than a preset second measured value variance threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring gravity acceleration data corresponding to the image acquisition equipment; acquiring attitude data and gravity vector direction data according to the gravity acceleration data, and acquiring gyroscope measured value data measured by a gyroscope; acquiring gyroscope offset data according to the gyroscope measurement value data, and setting position data and speed data corresponding to the visual inertial odometer to zero; and statically initializing the visual inertial odometer according to the gravity vector direction data, the gyroscope bias data, the attitude data, the position data and the speed data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a relative rotation amount and a relative translation amount corresponding to the latest image frame and the adjacent image frame, and performing pre-integration processing on IMU data between the latest image frame and the adjacent image frame to acquire a pre-integration result; acquiring gyroscope bias data corresponding to the visual inertial odometer through rotation constraint based on the relative rotation amount and the pre-integration result; aligning the vision and IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer; and dynamically initializing the visual inertial odometer according to the gyroscope bias data, the gravity direction data, the speed data, the position data, the attitude data and the scale data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and aligning the vision data and the IMU data based on linear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain gravity direction data, speed data, position data, posture data and scale data corresponding to the vision inertial odometer.
In one embodiment, the processor, when executing the computer program, further performs the steps of: and aligning the vision data and the IMU data based on nonlinear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the vision inertial odometer.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an initialization request corresponding to a visual inertial odometer, wherein the visual inertial odometer is loaded on image acquisition equipment;
dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer.
In one embodiment, the computer program when executed by the processor further performs the steps of: in the dynamic initialization process, identifying matching characteristic point pairs of a latest image frame acquired by image acquisition equipment and an adjacent image frame, wherein the adjacent image frame is a previous image frame of the latest image frame; if the pixel difference between the characteristic points of the matched characteristic point pairs is smaller than a preset pixel difference threshold value, identifying the pixel matched characteristic point pairs as similar matched characteristic point pairs, and acquiring the number of the similar matched characteristic point pairs; determining the number ratio of the similar matching characteristic point pairs in the matching characteristic point pairs according to the number of the similar matching characteristic point pairs and the number of the matching characteristic point pairs; and when the number proportion is larger than a preset proportion threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
In one embodiment, the computer program when executed by the processor further performs the steps of: in the dynamic initialization process, identifying a latest image frame acquired by image acquisition equipment, first variance data corresponding to a measurement value of an accelerometer in an adjacent image frame, and second variance data corresponding to a measurement value of a gyroscope, wherein the adjacent image frame is a previous image frame of the latest image frame; and when the first variance data is smaller than a preset first measured value variance threshold value and the second variance data is smaller than a preset second measured value variance threshold value, determining that the still detection parameter represents that the image acquisition equipment is in a still state.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring gravity acceleration data corresponding to the image acquisition equipment; acquiring attitude data and gravity vector direction data according to the gravity acceleration data, and acquiring gyroscope measured value data measured by a gyroscope; acquiring gyroscope offset data according to the gyroscope measurement value data, and setting position data and speed data corresponding to the visual inertial odometer to zero; and statically initializing the visual inertial odometer according to the gravity vector direction data, the gyroscope bias data, the attitude data, the position data and the speed data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a relative rotation amount and a relative translation amount corresponding to the latest image frame and the adjacent image frame, and performing pre-integration processing on IMU data between the latest image frame and the adjacent image frame to acquire a pre-integration result; acquiring gyroscope bias data corresponding to the visual inertial odometer through rotation constraint based on the relative rotation amount and the pre-integration result; aligning the vision and IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer; and dynamically initializing the visual inertial odometer according to the gyroscope bias data, the gravity direction data, the speed data, the position data, the attitude data and the scale data.
In one embodiment, the computer program when executed by the processor further performs the steps of: and aligning the vision data and the IMU data based on linear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result to obtain gravity direction data, speed data, position data, posture data and scale data corresponding to the vision inertial odometer.
In one embodiment, the computer program when executed by the processor further performs the steps of: and aligning the vision data and the IMU data based on nonlinear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the vision inertial odometer.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, and the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A visual inertial odometer initialization method, the method comprising:
acquiring an initialization request corresponding to a visual inertial odometer, wherein the visual inertial odometer is loaded on image acquisition equipment;
dynamically initializing the visual inertial odometer according to the initialization request, and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and when the static detection parameters represent that the state of the image acquisition equipment is a static state, terminating the dynamic initialization and carrying out static initialization on the visual inertial odometer.
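The control flow claimed above, i.e. running dynamic initialization while monitoring for a static state and switching to static initialization as soon as one is detected, can be sketched as follows. All callables here are hypothetical placeholders, not names from the patent:

```python
def initialize_vio(frames, is_static, dynamic_step, static_initialize):
    """Run dynamic initialization frame by frame; terminate it and fall
    back to static initialization when the static detector fires."""
    for frame in frames:
        if is_static(frame):
            # Static detection parameters indicate a static state:
            # terminate dynamic initialization, initialize statically.
            return static_initialize(frame)
        result = dynamic_step(frame)   # one dynamic-initialization iteration
        if result is not None:         # dynamic initialization converged
            return result
    return None                        # not yet initialized
```

The point of the switch is that dynamic initialization needs sufficient motion excitation to recover scale; once the device stops moving, a static procedure converges faster and more reliably.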
2. The method of claim 1, wherein the static detection parameters comprise pixel data of feature points in image frames acquired by the image acquisition equipment;
the method further comprises the following steps:
in the dynamic initialization process, identifying matching feature point pairs of a latest image frame and an adjacent image frame acquired by the image acquisition equipment, wherein the adjacent image frame is a last image frame of the latest image frame;
if the pixel difference between the feature points of a matching feature point pair is smaller than a preset pixel difference threshold, identifying that matching feature point pair as a similar matching feature point pair, and acquiring the number of similar matching feature point pairs;
determining the number ratio of the similar matching feature point pairs in the matching feature point pairs according to the number of the similar matching feature point pairs and the number of the matching feature point pairs;
and when the number ratio is larger than a preset ratio threshold value, determining that the static detection parameters represent that the image acquisition equipment is in a static state.
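A minimal sketch of this feature-point-based static check follows; the pixel and ratio thresholds are illustrative defaults, as the claim does not specify concrete values:

```python
import numpy as np

def is_static_by_features(pts_prev, pts_cur, pix_thresh=1.0, ratio_thresh=0.9):
    """Static check from matched feature points of two adjacent frames.

    pts_prev, pts_cur: (N, 2) pixel coordinates of matched point pairs.
    """
    # Pixel displacement of each matched pair between the adjacent frames.
    disp = np.linalg.norm(pts_cur - pts_prev, axis=1)
    similar = disp < pix_thresh                  # "similar matching" pairs
    ratio = np.count_nonzero(similar) / len(disp)
    # Static when most matched points barely moved in the image.
    return ratio > ratio_thresh
```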
3. The method of claim 1, wherein the static detection parameters comprise accelerometer measurements and gyroscope measurements corresponding to image frames acquired by the image acquisition equipment;
the method further comprises the following steps:
in the dynamic initialization process, identifying first variance data corresponding to the accelerometer measurements and second variance data corresponding to the gyroscope measurements between the latest image frame acquired by the image acquisition equipment and an adjacent image frame, wherein the adjacent image frame is a previous image frame of the latest image frame;
and when the first variance data is smaller than a preset first measured value variance threshold value and the second variance data is smaller than a preset second measured value variance threshold value, determining that the static detection parameter represents that the image acquisition equipment is in a static state.
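This IMU-variance check can be sketched as below; the variance thresholds are illustrative assumptions, not values given in the claim:

```python
import numpy as np

def is_static_by_imu(accel_window, gyro_window,
                     accel_var_thresh=0.05, gyro_var_thresh=0.01):
    """Static check from IMU variance over the window between two frames.

    accel_window, gyro_window: (N, 3) arrays of samples between the
    adjacent image frame and the latest image frame.
    """
    accel_var = np.var(accel_window, axis=0).max()   # first variance data
    gyro_var = np.var(gyro_window, axis=0).max()     # second variance data
    # Static when both variances stay below their thresholds.
    return accel_var < accel_var_thresh and gyro_var < gyro_var_thresh
```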
4. The method of claim 1, wherein the statically initializing the visual inertial odometer comprises:
acquiring gravity acceleration data corresponding to the image acquisition equipment;
acquiring attitude data and gravity vector direction data according to the gravity acceleration data, and acquiring gyroscope measured value data measured by a gyroscope;
acquiring gyroscope bias data according to the gyroscope measurement value data, and setting position data and speed data corresponding to the visual inertial odometer to zero;
statically initializing the visual inertial odometer according to the gravity vector direction data, the gyroscope bias data, the attitude data, the position data, and the velocity data.
5. The method of claim 1, wherein the dynamically initializing the visual inertial odometer according to the initialization request comprises:
acquiring a relative rotation amount and a relative translation amount corresponding to a latest image frame and an adjacent image frame, and performing pre-integration processing on IMU data between the latest image frame and the adjacent image frame to acquire a pre-integration result;
acquiring gyroscope bias data corresponding to the visual inertial odometer through rotation constraint based on the relative rotation amount and the pre-integration result;
aligning the visual data and the IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result to acquire gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer;
dynamically initializing the visual inertial odometer based on the gyroscope bias data, the gravity direction data, the velocity data, the position data, the attitude data, and the scale data.
6. The method according to claim 5, wherein the aligning the visual data and the IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result, and the obtaining the gravity direction data, the speed data, the position data, the attitude data and the scale data corresponding to the visual inertial odometer comprises:
and aligning visual data and IMU data based on linear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer.
7. The method according to claim 5, wherein the aligning the visual data and the IMU data according to the relative rotation amount, the relative translation amount and the pre-integration result, and the obtaining the gravity direction data, the speed data, the position data, the attitude data and the scale data corresponding to the visual inertial odometer comprises:
and aligning visual data and IMU data based on nonlinear optimization according to the relative rotation amount, the relative translation amount and the pre-integration result, and acquiring gravity direction data, speed data, position data, attitude data and scale data corresponding to the visual inertial odometer.
8. A visual inertial odometer initialization device, the device comprising:
the device comprises a request acquisition module, a display module and a display module, wherein the request acquisition module is used for acquiring an initialization request corresponding to a visual inertial odometer, and the visual inertial odometer is loaded on image acquisition equipment;
the dynamic initialization module is used for dynamically initializing the visual inertial odometer according to the initialization request and acquiring static detection parameters of the image acquisition equipment in the dynamic initialization process;
and the static initialization module is used for terminating the dynamic initialization and statically initializing the visual inertial odometer when the static detection parameters represent that the state of the image acquisition equipment is a static state.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202110917529.XA 2021-08-11 2021-08-11 Visual inertial odometer initialization method, device, equipment and storage medium Pending CN113670327A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110917529.XA CN113670327A (en) 2021-08-11 2021-08-11 Visual inertial odometer initialization method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113670327A true CN113670327A (en) 2021-11-19

Family

ID=78542239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110917529.XA Pending CN113670327A (en) 2021-08-11 2021-08-11 Visual inertial odometer initialization method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113670327A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117705094A (en) * 2023-05-16 2024-03-15 荣耀终端有限公司 Navigation positioning method and terminal equipment

Citations (6)

Publication number Priority date Publication date Assignee Title
CN109211277A (en) * 2018-10-31 2019-01-15 北京旷视科技有限公司 The state of vision inertia odometer determines method, apparatus and electronic equipment
US20190163198A1 (en) * 2017-11-29 2019-05-30 Qualcomm Incorporated Radar aided visual inertial odometry initialization
CN110702107A (en) * 2019-10-22 2020-01-17 北京维盛泰科科技有限公司 Monocular vision inertial combination positioning navigation method
US20200217873A1 (en) * 2019-01-08 2020-07-09 Qualcomm Incorporated In-motion initialization of accelerometer for accurate vehicle positioning
CN112649016A (en) * 2020-12-09 2021-04-13 南昌大学 Visual inertial odometer method based on point-line initialization
CN112798010A (en) * 2019-11-13 2021-05-14 北京三快在线科技有限公司 Initialization method and device for VIO system of visual inertial odometer


Non-Patent Citations (1)

Title
XIONG, Minjun; LU, Huimin; XIONG, Dan; XIAO, Junhao; LV, Ming: "UAV Pose Estimation Based on Fusion of Monocular Vision and Inertial Navigation", Journal of Computer Applications, no. 2 *


Similar Documents

Publication Publication Date Title
CN111811506B (en) Visual/inertial odometer combined navigation method, electronic equipment and storage medium
Qin et al. Vins-mono: A robust and versatile monocular visual-inertial state estimator
US11295456B2 (en) Visual-inertial odometry with an event camera
CN110763251B (en) Method and system for optimizing visual inertial odometer
US9071829B2 (en) Method and system for fusing data arising from image sensors and from motion or position sensors
Lupton et al. Visual-inertial-aided navigation for high-dynamic motion in built environments without initial conditions
Indelman et al. Information fusion in navigation systems via factor graph based incremental smoothing
CN110084832B (en) Method, device, system, equipment and storage medium for correcting camera pose
Kottas et al. Efficient and consistent vision-aided inertial navigation using line observations
CN112815939B (en) Pose estimation method of mobile robot and computer readable storage medium
US20140316698A1 (en) Observability-constrained vision-aided inertial navigation
US20220051031A1 (en) Moving object tracking method and apparatus
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN110260861B (en) Pose determination method and device and odometer
CN111932674A (en) Optimization method of line laser vision inertial system
CN114013449A (en) Data processing method and device for automatic driving vehicle and automatic driving vehicle
CN113066127A (en) Visual inertial odometer method and system for calibrating equipment parameters on line
CN112179373A (en) Measuring method of visual odometer and visual odometer
Pöppl et al. Integrated trajectory estimation for 3D kinematic mapping with GNSS, INS and imaging sensors: A framework and review
Xian et al. Fusing stereo camera and low-cost inertial measurement unit for autonomous navigation in a tightly-coupled approach
CN108827287B (en) Robust visual SLAM system in complex environment
Choi et al. Monocular SLAM with undelayed initialization for an indoor robot
CN113670327A (en) Visual inertial odometer initialization method, device, equipment and storage medium
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
Gutierrez-Gomez et al. True scaled 6 DoF egocentric localisation with monocular wearable systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination