CN108827315A - Visual-inertial odometer pose estimation method and device based on manifold pre-integration - Google Patents

Visual-inertial odometer pose estimation method and device based on manifold pre-integration

Info

Publication number
CN108827315A
CN108827315A (application CN201810939064.6A)
Authority
CN
China
Prior art keywords
inertia
frame
integration
vision
optimization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810939064.6A
Other languages
Chinese (zh)
Other versions
CN108827315B (en)
Inventor
刘富春
苏泫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201810939064.6A priority Critical patent/CN108827315B/en
Publication of CN108827315A publication Critical patent/CN108827315A/en
Application granted granted Critical
Publication of CN108827315B publication Critical patent/CN108827315B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual-inertial odometer pose estimation method and device based on manifold pre-integration. After the initialization of the visual-inertial odometry system is completed, the method performs the following steps: alignment of the visual and inertial data; visual optical-flow pose calculation; inertial pre-integration; joint visual-inertial optimization; sliding-window marginalization; and finally the above steps are repeated, achieving continuous estimation of the camera pose. The visual-inertial odometry system based on the on-manifold pre-integration algorithm used by the present invention achieves higher positioning accuracy than a standalone visual odometry system: the manifold-based pre-integration algorithm makes effective use of the inertial information in the odometry system, suppresses the propagation of noise through the system, and reduces the influence of inertial drift on the positioning accuracy of the odometer.

Description

Visual-inertial odometer pose estimation method and device based on manifold pre-integration
Technical field
The present invention relates to a visual-inertial odometer pose estimation method, in particular to a visual-inertial odometer pose estimation method and device based on manifold pre-integration, and belongs to the field of autonomous navigation.
Background technique
Visual odometry (VO) and simultaneous localization and mapping (SLAM) have become important components of autonomous navigation research, because they provide a convenient and reliable alternative to classical robot/vehicle localization solutions. Compared with environment-sensing sensors such as laser scanners, cameras have the advantages of being lightweight, of being able to detect most solid materials, and of being read at a fairly high rate. They are therefore of great importance for navigation, motion planning and obstacle avoidance. In typical applications, however, the field of view of a camera is limited: close obstacles block most of the scene, structureless surfaces often lack visual cues, and repeated textures complicate the search for correspondences. Using inertial sensors can help to mitigate these effects. A sensor setup that includes a camera and an IMU is very well suited to navigation, but it also poses considerable research challenges. Fusing inertial and visual cues is very useful because they provide complementary information that improves the robustness of the estimation process.
In a pure visual odometry system, the system acquires image information of the surrounding environment with a camera sensor and estimates the motion state of the system by analysing the images. A monocular vision system, however, suffers from the scale problem: the system cannot determine the physical scale of the motion it has estimated and can only obtain relative length information. At the same time, a monocular vision system also has an initialization problem: when the system is initialized, the camera has to perform a translational motion for the initialization to work well; if only a pure rotation is performed, the initialization process of the system fails.
For a visual-inertial odometry system, since inertial sensor information is added to the system, the scale problem and the initialization problem that a pure vision system cannot overcome can be solved by performing scale alignment using the inertial information, so that the scale information of the system is obtained smoothly and the initialization process is completed. However, the inertial sensor accumulates drift during long-term operation. Therefore, using the visual information to correct the drift of the measurements of the inertial element during long-term operation becomes the key to improving the positioning accuracy of the combined odometry system.
Summary of the invention
The purpose of the present invention is to overcome the above-mentioned defects of the prior art by providing a visual-inertial odometer pose estimation method based on manifold pre-integration. The method makes effective use of the inertial information in the odometry system, suppresses the noise propagation of the system, and reduces the influence of inertial drift on the positioning accuracy of the odometer.
Another object of the present invention is to provide a visual-inertial odometer pose estimation device based on manifold pre-integration.
The object of the present invention can be achieved by adopting the following technical solution:
A visual-inertial odometer pose estimation method based on manifold pre-integration, in which, after the initialization of the visual-inertial odometry system is completed, the method performs the following steps:
By marking timestamps on the visual data and the inertial data, all inertial data between two frames of visual images are obtained;
For each newly acquired frame of visual image, LK optical-flow matching is performed against the previous frame of visual image stored by the system to obtain the tracked-point correspondences between the two frames of visual images; using the tracked points, PnP matching is performed on the two frames of visual image data, and an estimated camera pose based on the visual image information is obtained;
Pre-integration is performed on all inertial data between the two frames of visual images, the defined pre-integration variables are computed, and the error propagation is updated, yielding the motion constraint relationship between the two frames of visual images;
Taking the pose, velocity and drift of the camera motion as state variables, the system state is optimized using the preliminary estimate from the image data and the inertial motion constraint; the g2o library is used as the optimization tool, and the rotation of the camera motion is represented with Lie algebra, which effectively reduces the computational cost of the optimization. The camera pose estimate obtained by the tightly coupled visual-inertial optimization reduces the influence of the inertial data drifting over time, effectively suppresses the propagation of errors, and achieves higher positioning accuracy;
It is judged whether the current frame is a keyframe: if it is, and the current optimization window is full, then when the next frame arrives the oldest frame is marginalized out of the optimization window and the current frame is added to the optimization window; if it is not, the current frame data is directly discarded from the optimization window. This keeps the computational load of the system within a fixed limit and allows the odometer to run in real time;
The above steps are repeated to achieve continuous estimation of the camera pose.
Further, completing the initialization of the visual-inertial odometry system specifically means: obtaining the scale of the visual images using the inertial information, thereby completing the initialization operation of the visual-inertial odometry system.
Further, obtaining the scale of the visual images using the inertial information to complete the initialization operation specifically includes:
performing pure visual motion estimation to obtain a scale-free relative motion estimate between two corresponding time instants;
integrating the inertial information to obtain the camera motion relationship based on the inertial information;
comparing the vision-based scale-free motion relationship with the inertia-based actual motion relationship to obtain the scale factor of the visual estimation process, completing the initialization process of the visual-inertial odometry system.
Further, performing LK optical-flow matching between each newly acquired visual image frame and the previous frame of visual image stored by the system specifically includes:
performing a Taylor expansion of the image constraint equation;
computing the optical-flow vector with the LK optical flow method according to the image constraint equation after the Taylor expansion;
obtaining, from the optical-flow vector, the corresponding position of each visual image point of the current frame in the next frame of visual image, which completes the LK optical-flow matching.
Further, the image constraint equation is:
I(x, y, z, t) = I(x + δx, y + δy, z + δz, t + δt)
where I(x, y, z, t) is the voxel at position (x, y, z) at time t;
a Taylor expansion of the image constraint equation gives:
Ix·Vx + Iy·Vy + Iz·Vz = -It
where Vx, Vy, Vz are the x, y, z components of the optical-flow vector of I(x, y, z, t), and Ix, Iy, Iz, It are the differences of the image at the point (x, y, z, t) in the corresponding directions;
the above formula is abbreviated as:
∇Iᵀ·V = -It
according to the image constraint equation after the Taylor expansion, the optical-flow vector is computed with the LK optical flow method from the equations:
Ix1·Vx + Iy1·Vy + Iz1·Vz = -It1
Ix2·Vx + Iy2·Vy + Iz2·Vz = -It2
...
Ixn·Vx + Iyn·Vy + Izn·Vz = -Itn
and this system of equations is solved by least squares:
V = (AᵀA)⁻¹·Aᵀ·b
where A is the n×3 matrix whose i-th row is [Ixi, Iyi, Izi] and b = -[It1, ..., Itn]ᵀ.
Further, performing pre-integration on all inertial data between two frames of visual images and computing the defined pre-integration variables specifically includes:
modelling the inertial measurement process as:
ω̃(t) = ω(t) + bg(t) + ηg(t)
ã(t) = R_WBᵀ(t)·(a_W(t) + g_W) + ba(t) + ηa(t)
where B denotes the inertial (body) coordinate frame, W the world coordinate frame, bg and ba the gyroscope and accelerometer drifts, ηg and ηa the measurement noise, and g_W the gravitational acceleration in the world frame;
considering the motion model of the inertial element:
Ṙ_WB = R_WB·ω^,  v̇_W = a_W,  ṗ_W = v_W
integrating over the interval [t, t + Δt], discretizing, and combining with the measurement equation gives:
R(t + Δt) = R(t)·Exp((ω̃ - bg - ηg)Δt)
v(t + Δt) = v(t) - g_W·Δt + R(t)·(ã - ba - ηa)Δt
p(t + Δt) = p(t) + v(t)·Δt - ½g_W·Δt² + ½R(t)·(ã - ba - ηa)Δt²
the variables in the above formulas that are related to the inertial measurements are collected together and pre-integrated, giving the pre-integration result;
the pre-integration variables ΔRij, Δvij, Δpij are defined as:
ΔRij = ∏(k = i .. j-1) Exp((ω̃k - bg)Δt)
Δvij = Σ(k = i .. j-1) ΔRik·(ãk - ba)Δt
Δpij = Σ(k = i .. j-1) [Δvik·Δt + ½ΔRik·(ãk - ba)Δt²]
after the pre-integration variables are obtained, the estimated pose based on on-manifold pre-integration is computed by the following formulas:
Rj = Ri·ΔRij
vj = vi - g_W·Δtij + Ri·Δvij
pj = pi + vi·Δtij - ½g_W·Δtij² + Ri·Δpij
Further, updating the error propagation to obtain the motion constraint relationship between two frames of visual images specifically includes:
separating the error terms in the pre-integration variables:
ΔR̃ij ≈ ΔRij·Exp(δφij), Δṽij ≈ Δvij + δvij, Δp̃ij ≈ Δpij + δpij
where δφij, δvij, δpij are the noise terms corresponding to the pre-integration variables; the operator (·)^ denotes the antisymmetric matrix corresponding to a vector;
constructing the error transfer equation by defining the noise state ηij = [δφij, δvij, δpij]ᵀ as the noise corresponding to the pre-integration variables, and the noise input ηdj = [ηgj, ηaj]ᵀ as the raw noise of the inertial data;
and rearranging to obtain the noise transfer equation of the inertial pre-integration:
ηij = Aj-1·ηi,j-1 + Bj-1·ηdj-1
Further, taking the pose, velocity and drift of the camera motion as state variables and optimizing the system state using the preliminary estimate from the image data and the inertial motion constraint specifically includes:
defining the state vector of the visual-inertial odometry system as:
χ = [p, v, q, ba, bg]
the state vector of the system is a 15-dimensional vector containing the translation, velocity and rotation of the camera motion together with the inertial acceleration drift and the angular-velocity drift; graph optimization is performed on this state vector with the objective function defined as:
min over χ of { Σ ||r_I(zij, χ)||² + Σ ||r_C(zl, χ)||² }
where r_I(zij, χ) denotes the optimization residual corresponding to the inertial pre-integration variables and r_C(zl, χ) denotes the optimization residual corresponding to the visual measurements.
Further, judging whether the current frame is a keyframe specifically means: judging, according to the pose estimation result of the current frame, whether the current frame is a keyframe; wherein the selection criterion for a keyframe is that the motion distance between the current frame and the previous keyframe exceeds a set threshold.
Another object of the present invention can be achieved by adopting the following technical solution:
A visual-inertial odometer pose estimation device based on manifold pre-integration, the device comprising:
an initialization module, for completing the initialization of the visual-inertial odometry system;
an inertial data alignment module, for obtaining all inertial data between two frames of visual images by marking timestamps on the visual data and the inertial data;
a visual optical-flow pose calculation module, for performing LK optical-flow matching between each newly acquired visual image frame and the previous frame of visual image stored by the system, obtaining the tracked-point correspondences between the two frames of visual images, performing PnP matching on the two frames of visual image data using the tracked points, and obtaining an estimated camera pose based on the visual image information;
an inertial pre-integration module, for performing pre-integration on all inertial data between two frames of visual images, computing the defined pre-integration variables, and updating the error propagation to obtain the motion constraint relationship between the two frames of visual images;
a first optimization module, for taking the pose, velocity and drift of the camera motion as state variables and optimizing the system state using the preliminary estimate from the image data and the inertial motion constraint;
a second optimization module, for judging whether the current frame is a keyframe: if it is, and the current optimization window is full, then when the next frame arrives the oldest frame is marginalized out of the optimization window and the current frame is added to the optimization window; if it is not, the current frame data is directly discarded from the optimization window.
Compared with the prior art, the present invention has the following beneficial effects:
1. The present invention processes the visual image information with an optical flow method to obtain the initial pose estimate of the system, performs on-manifold pre-integration of the inertial information to form inter-frame constraints on the visual information, and fuses and optimizes the visual-inertial output data with a joint optimizer. It can be shown that the visual-inertial odometry system based on manifold pre-integration achieves higher positioning accuracy than a standalone visual odometry system: the manifold-based pre-integration algorithm makes effective use of the inertial information in the odometry system, suppresses the propagation of noise through the system, and reduces the influence of inertial drift on the positioning accuracy of the odometer.
2. The basic principle of the optical flow method used by the present invention is that, while the camera is moving, it is assumed that over a short time interval the position of a given point in the previous image does not change abruptly and its brightness remains essentially unchanged, and that neighbouring points in the previous image project onto neighbouring points in the new image and move with consistent velocity. Under these assumptions the partial derivatives of the image grey level with respect to position are used to find, for each point in the previous image, its corresponding position in the next image, from which the motion state of the camera is estimated. By using an image pyramid and searching for the optical-flow correspondences within the pyramid, a more accurate optical-flow match is obtained and the estimation error is suppressed.
3. In the processing of the inertial information, in order to avoid repeatedly re-integrating the accelerometer and gyroscope information during the optimization, the present invention separates out the part of the inertial information that does not depend on the current system state and constructs the pre-integration variables, which greatly reduces the computational cost of the optimization. The rotation of the camera is represented with Lie algebra, and the operational properties of the Lie group manifold are used to derive the noise transfer equation of the odometry system, which suppresses the growth of errors during system operation and improves the positioning accuracy of the odometer; at the same time, performing the computation and optimization in the Lie algebra reduces the dimension of the system state vector and the computational load of the system.
4. In the mathematical model of the joint optimizer, the present invention optimizes the sensor information of the odometer with a sliding-window approach. The odometer keeps the visual and inertial sensor information of a fixed number of frames and adjusts and optimizes the sensor data inside the window; when new sensor information enters the window, the oldest information is marginalized out of the window and the new window information is optimized. For the combination of the visual and inertial information, a tightly coupled optimization is used: the output information of the two subsystems is fused and optimized at the feature level, and the optimization result is output as the positioning result. This combination scheme fully exploits the information advantages of each subsystem, forms complementary information, improves the stability and robustness of the system operation, and achieves higher positioning accuracy.
Detailed description of the invention
Fig. 1 is the structural block diagram of the visual-inertial odometry system of the invention.
Fig. 2 is the flow chart of the visual-inertial odometer pose estimation method of the invention.
Fig. 3 is the calculation flow chart of the inertial pre-integration of the invention.
Fig. 4 is the comparison of the positioning results of the method of the invention, the ground-truth trajectory and the classical Okvis method on a public dataset.
Fig. 5a is the error comparison, along the x axis, of the positioning results of the method of the invention and the classical Okvis method on a public dataset.
Fig. 5b is the error comparison, along the y axis, of the positioning results of the method of the invention and the classical Okvis method on a public dataset.
Fig. 5c is the error comparison, along the z axis, of the positioning results of the method of the invention and the classical Okvis method on a public dataset.
Fig. 5d is the error comparison, in total displacement, of the positioning results of the method of the invention and the classical Okvis method on a public dataset.
Fig. 6 is the structural block diagram of the visual-inertial odometer pose estimation device of the invention.
Specific embodiment
The present invention will now be described in further detail with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1:
The structure of the visual-inertial odometry system of this embodiment is shown in Fig. 1. The key steps in the visual-inertial odometry system are the design of the processing of the inertial information and of the joint optimizer that combines the visual and inertial information. The main role of the joint optimizer is to act as the interface through which the two positioning subsystems are combined, and to fuse the positioning information received from them. In the study of the inertial subsystem, the on-manifold pre-integration method plays an important role: after the inertial information is obtained, the inertial measurements are passed through the pre-integration process on the Riemannian manifold, forming a constraint between two frames of visual measurements. Using the joint optimizer, the visual information and the inertial information are optimized jointly, which improves the positioning accuracy of the visual-inertial odometry system.
As shown in Fig. 2, this embodiment provides a visual-inertial odometer pose estimation method based on manifold pre-integration, which comprises the following steps:
S101, initialization of the visual-inertial odometry system
The scale of the visual images is obtained using the inertial information, completing the initialization operation of the odometry system. First, pure visual motion estimation is performed to obtain a scale-free relative motion estimate between two corresponding time instants. Then, the inertial information is integrated to obtain the camera motion relationship based on the inertial information. Since over a short time the drift of the inertial information can be approximately neglected, the inertial information can be regarded as relatively accurate during this interval. Comparing the vision-based scale-free motion relationship with the inertia-based actual motion relationship yields the scale factor of the visual estimation process and completes the initialization process of the odometry system. A sketch of this scale alignment is given below.
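As an illustration only (not taken from the patent), the following Python sketch estimates the metric scale factor by comparing, over several short intervals, the vision-only (scale-free) relative translations with the translations obtained by integrating the inertial measurements; the function name, the variable names and the simple one-dimensional least-squares formulation are assumptions.

```python
import numpy as np

def estimate_scale(vis_rel_trans, imu_rel_trans):
    """Least-squares scale s minimizing sum_k ||s * d_vis_k - d_imu_k||^2.

    vis_rel_trans: list of 3-vectors, scale-free relative translations from pure vision
    imu_rel_trans: list of 3-vectors, relative translations from integrating the IMU
                   over the same short intervals (drift assumed negligible)
    """
    d_vis = np.concatenate([np.asarray(d, float) for d in vis_rel_trans])
    d_imu = np.concatenate([np.asarray(d, float) for d in imu_rel_trans])
    # Closed-form 1-D least squares: s = <d_vis, d_imu> / <d_vis, d_vis>
    return float(d_vis @ d_imu) / float(d_vis @ d_vis)

# Toy usage: the IMU translations are twice the (unit-less) vision translations,
# so the recovered scale factor is ~2.0
vis = [np.array([0.10, 0.02, 0.00]), np.array([0.08, 0.00, 0.01])]
imu = [2.0 * v for v in vis]
print(estimate_scale(vis, imu))
```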
S102, alignment of the visual and inertial data
Since the on-manifold pre-integration algorithm needs to pre-integrate all inertial data between two visual image frames, the visual and inertial data are aligned by stamping timestamps when the images and the inertial data are acquired. At the same time, a simple mean filter is applied to the raw inertial data to remove outliers: the random errors produced by the selected accelerometer strongly affect the precision of the subsequent data, so it is necessary to pre-process the raw data from the start. In the data-reading process a mean filter is used: five consecutively read acceleration values are averaged, which effectively removes spurious values. A sketch of this alignment and filtering step follows.
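A minimal sketch of the data-alignment and mean-filtering step described above; the buffer structure, the ImuSample tuple and the other names are illustrative rather than taken from the patent, while the window length of five samples follows the text.

```python
from collections import namedtuple
import numpy as np

ImuSample = namedtuple("ImuSample", "t acc gyro")  # timestamp, 3-vector acc, 3-vector gyro

def imu_between(imu_buffer, t_prev_img, t_curr_img):
    """Return all IMU samples whose timestamps fall between two image timestamps."""
    return [s for s in imu_buffer if t_prev_img < s.t <= t_curr_img]

def mean_filter(samples, window=5):
    """Average every `window` consecutive readings to suppress outliers (spurious values)."""
    out = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        acc = np.mean([s.acc for s in chunk], axis=0)
        gyro = np.mean([s.gyro for s in chunk], axis=0)
        out.append(ImuSample(chunk[-1].t, acc, gyro))
    return out
```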
S103, visual optical-flow pose calculation
The vision processing algorithm used in this embodiment is an optical flow method: each newly acquired frame of visual image is matched against the previous frame of visual image stored by the system with the LK optical flow method, the tracked-point correspondences between the two frames of visual images are obtained, PnP matching is performed on the two frames of visual image data using the tracked points, and an estimated camera pose based on the visual image information is obtained.
During optical-flow tracking, the number of tracked points decreases continuously because of the motion of the camera. To solve this problem, the system replenishes the optical-flow points: at each optical-flow match, the points still being tracked are used for the optical-flow tracking of the subsequent image, while the points whose tracking has failed are considered to have left the field of view and are discarded; FAST corners are then extracted from the image again to bring the number of tracked optical-flow points back up to the set value for the subsequent images. A sketch of this tracking-and-replenishment loop is given below.
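The following sketch illustrates the pyramidal LK tracking with FAST-corner replenishment described above, using OpenCV; the target point count and the parameter values are assumptions made for illustration.

```python
import cv2
import numpy as np

TARGET_POINTS = 150  # assumed target number of tracked points

def track_and_replenish(prev_img, curr_img, prev_pts):
    """Track prev_pts from prev_img to curr_img with pyramidal LK, then refill with FAST corners."""
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, curr_img, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1                      # drop points whose tracking failed
    prev_pts, curr_pts = prev_pts[good], curr_pts[good]

    if len(curr_pts) < TARGET_POINTS:               # replenish with FAST corners
        fast = cv2.FastFeatureDetector_create(threshold=20)
        kps = fast.detect(curr_img, None)
        kps = sorted(kps, key=lambda k: -k.response)[:TARGET_POINTS - len(curr_pts)]
        new_pts = np.array([k.pt for k in kps], np.float32).reshape(-1, 1, 2)
        if len(new_pts):
            curr_pts = np.vstack([curr_pts, new_pts])
    return prev_pts, curr_pts
```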
The LK optical flow method is the most commonly used optical flow method at present; it calculates the motion of each pixel position between the times of two frames. Since it is based on a Taylor series expansion of the image signal, the method is called a differential method, that is, partial derivatives are taken with respect to the spatial and temporal coordinates.
The image constraint equation is:
I(x, y, z, t) = I(x + δx, y + δy, z + δz, t + δt)
where I(x, y, z, t) is the voxel at position (x, y, z) at time t.
Assuming that the motion is sufficiently small, a Taylor expansion of the image constraint equation gives:
Ix·Vx + Iy·Vy + Iz·Vz = -It
where Vx, Vy, Vz are the x, y, z components of the optical-flow vector of I(x, y, z, t), and Ix, Iy, Iz, It are the differences of the image at the point (x, y, z, t) in the corresponding directions.
The above formula is abbreviated as:
∇Iᵀ·V = -It
where the unknown is the three-dimensional optical-flow vector (Vx, Vy, Vz). This single equation is under-determined (the aperture problem), and the LK optical flow method computes the optical-flow vector with a non-iterative method.
Assuming that the flow (Vx, Vy, Vz) is constant within a small window of size m*m*m (m > 1), the pixels 1, 2, ..., n (n = m³) give the following set of equations:
Ix1·Vx + Iy1·Vy + Iz1·Vz = -It1
Ix2·Vx + Iy2·Vy + Iz2·Vz = -It2
...
Ixn·Vx + Iyn·Vy + Izn·Vz = -Itn
This system is over-determined and is solved by least squares:
V = (AᵀA)⁻¹·Aᵀ·b
where A is the n×3 matrix whose i-th row is [Ixi, Iyi, Izi] and b = -[It1, ..., Itn]ᵀ.
From the optical-flow vector, the corresponding position of each visual image point of the current frame in the next frame of visual image is obtained, which completes the LK optical-flow matching. A small numerical illustration of the least-squares step above is given below.
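As a small numerical illustration of the least-squares step above (written for the usual two-dimensional image case, so only Vx and Vy are solved for), assuming the spatial and temporal differences inside one window are already available; the names are illustrative.

```python
import numpy as np

def lk_flow_for_window(Ix, Iy, It):
    """Solve [Ix Iy] * [Vx, Vy]^T = -It in the least-squares sense over one window.

    Ix, Iy, It: 2-D arrays of the spatial and temporal image differences inside
    the window around the tracked point.
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # n x 2 matrix of image gradients
    b = -It.ravel()                                  # right-hand side
    V, *_ = np.linalg.lstsq(A, b, rcond=None)        # equivalent to (A^T A)^-1 A^T b
    return V                                         # [Vx, Vy]
```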
From the image information alone a camera pose estimate can thus be computed, but the pose calculated from pure image optical-flow tracking followed by the PnP matching algorithm has a large error and cannot be used directly in practical applications. To solve this problem, this embodiment uses the inertial pre-integration algorithm and performs a tightly coupled optimization of the visual and inertial data, obtaining a more accurate camera positioning result. A sketch of the visual PnP step follows.
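A minimal sketch of the PnP step: given tracked points with known 3-D positions (for example, previously triangulated landmarks) and their pixel locations in the current frame, the camera pose is estimated with OpenCV's RANSAC PnP solver; the intrinsic matrix K and all variable names are placeholders, and lens distortion is ignored.

```python
import cv2
import numpy as np

def pnp_pose(points_3d, points_2d, K):
    """Estimate the camera pose (R, t) of the current frame from 3D-2D correspondences."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, np.float64),   # landmark positions in the world frame
        np.asarray(points_2d, np.float64),   # their pixel coordinates in the current frame
        K, None)                             # intrinsic matrix, no distortion assumed
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)               # rotation vector -> rotation matrix
    return R, tvec.ravel(), inliers
```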
S104, inertial pre-integration
In a visual-inertial odometry system, the rate of the inertial data is higher than the rate of the image keyframes. Through re-parameterization, inertial pre-integration integrates the inertial measurements between keyframes into a relative-motion constraint, avoiding the repeated integration that would otherwise be caused by changes of the initial conditions. The pre-integration process is shown in Fig. 3.
The inertial data contain three-dimensional acceleration information and three-dimensional angular-velocity information. Since the inertial data are contaminated by measurement noise and drift, the inertial measurement process is first modelled so that the camera pose can be estimated more accurately:
ω̃(t) = ω(t) + bg(t) + ηg(t)
ã(t) = R_WBᵀ(t)·(a_W(t) + g_W) + ba(t) + ηa(t)
where B denotes the inertial (body) coordinate frame, W the world coordinate frame, bg and ba the gyroscope and accelerometer drifts, ηg and ηa the measurement noise, and g_W the gravitational acceleration in the world frame.
The motion model of the inertial element is:
Ṙ_WB = R_WB·ω^,  v̇_W = a_W,  ṗ_W = v_W
Integrating over the interval [t, t + Δt], discretizing, and combining with the measurement equation gives:
R(t + Δt) = R(t)·Exp((ω̃ - bg - ηg)Δt)
v(t + Δt) = v(t) - g_W·Δt + R(t)·(ã - ba - ηa)Δt
p(t + Δt) = p(t) + v(t)·Δt - ½g_W·Δt² + ½R(t)·(ã - ba - ηa)Δt²
By these formulas the system state at the current time is obtained by integrating the state of the system at the previous time with the inertial data. In a visual odometry system, however, the system poses are continuously re-optimized; whenever the pose at the previous time changes, the pose at the current time would have to be re-integrated. To avoid these repeated integrations, this embodiment uses the pre-integration algorithm.
The variables in the above formulas that are related to the inertial measurements are first collected together and pre-integrated; the current pose of the system is then computed from the pre-integration result.
The pre-integration variables ΔRij, Δvij, Δpij are defined as:
ΔRij = ∏(k = i .. j-1) Exp((ω̃k - bg)Δt)
Δvij = Σ(k = i .. j-1) ΔRik·(ãk - ba)Δt
Δpij = Σ(k = i .. j-1) [Δvik·Δt + ½ΔRik·(ãk - ba)Δt²]
After the pre-integration variables are obtained, the estimated pose based on on-manifold pre-integration is computed by the following formulas:
Rj = Ri·ΔRij
vj = vi - g_W·Δtij + Ri·Δvij
pj = pi + vi·Δtij - ½g_W·Δtij² + Ri·Δpij
A numerical sketch of this accumulation is given below.
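A minimal numerical sketch of accumulating the pre-integration variables ΔR, Δv, Δp from raw gyroscope and accelerometer samples, assuming the drift (bias) is constant over the interval; exp_so3 is an illustrative helper implementing the SO(3) exponential map (Rodrigues formula), and the names are not taken from the patent.

```python
import numpy as np

def skew(v):
    """Antisymmetric (skew-symmetric) matrix of a 3-vector, i.e. the (.)^ operator."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(phi):
    """SO(3) exponential map (Rodrigues formula): rotation vector -> rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-10:
        return np.eye(3) + skew(phi)
    axis = phi / angle
    K = skew(axis)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def preintegrate(gyro, acc, dt, bg, ba):
    """Accumulate dR, dv, dp over the IMU samples between image frames i and j."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, acc):
        a_corr = np.asarray(a) - ba
        dp = dp + dv * dt + 0.5 * (dR @ a_corr) * dt * dt   # delta-p update
        dv = dv + (dR @ a_corr) * dt                        # delta-v update
        dR = dR @ exp_so3((np.asarray(w) - bg) * dt)        # delta-R update
    return dR, dv, dp
```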
The noise propagation during the inertial pre-integration is considered next.
It is assumed that the inertial drift between two adjacent image frames is constant, and the influence of white noise on the pre-integration variables is considered.
The error terms in the pre-integration variables are separated:
ΔR̃ij ≈ ΔRij·Exp(δφij), Δṽij ≈ Δvij + δvij, Δp̃ij ≈ Δpij + δpij
where δφij, δvij, δpij are the noise terms corresponding to the pre-integration variables; the operator (·)^ denotes the antisymmetric matrix corresponding to a vector.
To construct the error transfer equation, the noise state ηij = [δφij, δvij, δpij]ᵀ is defined as the noise corresponding to the pre-integration variables, and the noise input ηdj = [ηgj, ηaj]ᵀ as the raw noise of the inertial data.
Rearranging yields the noise transfer equation of the inertial pre-integration:
ηij = Aj-1·ηi,j-1 + Bj-1·ηdj-1
where Aj-1 and Bj-1 are the Jacobians of the pre-integration noise with respect to the previous noise state and to the raw inertial noise, respectively.
Using this noise transfer equation, the noise of the inertial system can be estimated recursively by iteration, and the noise terms can then be removed directly in the optimization estimation, which greatly improves the positioning accuracy of the odometry system. A sketch of the corresponding covariance propagation follows.
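A short sketch of propagating the pre-integration noise covariance with the recursion above; the 9x9 ordering [δφ, δv, δp] and the Jacobians A and B are assumed to be available from the linearization at each IMU step, and their exact entries are not reproduced here.

```python
import numpy as np

def propagate_covariance(Sigma, A, B, Q_imu):
    """One step of the noise recursion eta_ij = A*eta_{i,j-1} + B*eta^d.

    Sigma : 9x9 covariance of [dphi, dv, dp] accumulated so far
    A     : 9x9 Jacobian with respect to the previous noise state
    B     : 9x6 Jacobian with respect to the raw IMU noise [gyro, acc]
    Q_imu : 6x6 covariance of the raw IMU noise for one sample
    """
    return A @ Sigma @ A.T + B @ Q_imu @ B.T

# Usage over one pre-integration interval: start from zero uncertainty and accumulate
Sigma = np.zeros((9, 9))
# for A, B in per_sample_jacobians:          # produced by the linearization step
#     Sigma = propagate_covariance(Sigma, A, B, Q_imu)
```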
S105, joint visual-inertial optimization
After the vision-based tracking estimate and the pre-integration-based inertial estimate of the camera pose are obtained, this visual-inertial odometry system performs a tightly coupled optimization of the visual and inertial data by means of graph optimization, obtaining a more accurate camera pose estimate.
The state vector of this system is defined as:
χ = [p, v, q, ba, bg]
The state vector of the system is a 15-dimensional vector containing the translation, velocity and rotation of the camera motion together with the inertial acceleration drift and the angular-velocity drift. Graph optimization is performed on this state vector to obtain a more accurate system pose estimate. This system performs the graph optimization with the g2o library, with the objective function defined as:
min over χ of { Σ ||r_I(zij, χ)||² + Σ ||r_C(zl, χ)||² }
where r_I(zij, χ) denotes the optimization residual corresponding to the inertial pre-integration variables and r_C(zl, χ) denotes the optimization residual corresponding to the visual measurements. By performing the inertial pre-integration process and the g2o-based graph optimization, the estimated camera pose is finally obtained. Since the inertial information and the visual measurement information are fused in the calculation, the visual-inertial odometry system of this embodiment achieves higher positioning accuracy than an ordinary visual odometry system. A sketch of the inertial residual terms used in such a joint optimization is given below.
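For illustration, the following sketch shows how the inertial pre-integration residuals between two keyframes i and j could be formed, following the pose-update equations given above; log_so3 is an assumed helper (the inverse of the SO(3) exponential map), and in the actual system such residuals would be registered as edges of a g2o graph rather than evaluated stand-alone.

```python
import numpy as np

def log_so3(R):
    """SO(3) logarithm map: rotation matrix -> rotation vector (assumed helper)."""
    cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.arccos(cos_angle)
    if angle < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle / (2.0 * np.sin(angle)) * w

def imu_residual(Ri, vi, pi, Rj, vj, pj, dR, dv, dp, g_w, dt_ij):
    """Inertial residual [r_dR, r_dv, r_dp] between keyframes i and j.

    Uses the pose-update convention vj = vi - g_w*dt + Ri*dv and
    pj = pi + vi*dt - 0.5*g_w*dt^2 + Ri*dp from the text above.
    """
    r_dR = log_so3(dR.T @ Ri.T @ Rj)
    r_dv = Ri.T @ (vj - vi + g_w * dt_ij) - dv
    r_dp = Ri.T @ (pj - pi - vi * dt_ij + 0.5 * g_w * dt_ij ** 2) - dp
    return np.concatenate([r_dR, r_dv, r_dp])
```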
S106, sliding-window marginalization
This embodiment uses a sliding-window approach to keep the computational load of the visual-inertial odometry system within a fixed limit, thereby guaranteeing the real-time operation of the visual-inertial odometry system. The visual and inertial measurements that need to be optimized are placed in the optimization window, while those that no longer need to be optimized are removed from it, so that the amount of computation of the visual-inertial odometry system remains constant. After every joint pose optimization, the sliding window has to be adjusted according to the positioning result, so that optimization can still be carried out when new data arrive. Specifically (see the sketch after this list):
1) Determine whether the current frame is a keyframe. According to the pose estimation result of the current frame, it is first judged whether the current frame is a keyframe; the selection criterion for a keyframe is that the motion distance between the current frame and the previous keyframe exceeds a threshold.
2) If the current frame is not a keyframe, the system has moved only a short distance since the previous keyframe, and the current frame data is directly discarded from the optimization window.
3) If the current frame is a keyframe, the oldest frame in the sliding window is removed from the optimization window, and the current frame becomes the second-to-last frame.
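A minimal sketch of the keyframe decision and window bookkeeping described in steps 1) to 3); the distance threshold, the window size and the marginalize() placeholder are assumptions (a real implementation would fold the information of the removed frame into a prior rather than simply drop it).

```python
import numpy as np

KEYFRAME_DIST = 0.10   # assumed motion-distance threshold in metres
WINDOW_SIZE = 10       # assumed number of keyframes kept in the optimization window

def marginalize(frame, window):
    """Placeholder: fold the constraints of the removed frame into a prior on the window."""
    pass

def update_window(window, last_keyframe_pos, frame):
    """window: list of frames kept for optimization; frame: dict with a 'pos' 3-vector."""
    if np.linalg.norm(frame["pos"] - last_keyframe_pos) <= KEYFRAME_DIST:
        return window, last_keyframe_pos          # not a keyframe: simply drop the frame
    if len(window) >= WINDOW_SIZE:
        oldest = window.pop(0)                    # remove the oldest frame from the window
        marginalize(oldest, window)               # keep its information as a prior
    window.append(frame)                          # add the current (key)frame to the window
    return window, frame["pos"]
```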
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the method of the above embodiment can be completed by a program instructing the relevant hardware, and the corresponding program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc.
The comparison of the positioning results of the method of the present invention, the ground-truth trajectory and the classical Okvis method is shown in Fig. 4; it can be seen that the positioning of the method of the present invention is closer to the ground-truth trajectory. The error comparisons of the positioning results of the method of the present invention and the classical Okvis method along the x axis, the y axis, the z axis and in total displacement are shown in Fig. 5a to Fig. 5d respectively; it can be seen that the error of the method of the present invention is smaller.
Embodiment 2:
As shown in Fig. 6, this embodiment provides a visual-inertial odometer pose estimation device based on manifold pre-integration. The device includes an initialization module, an inertial data alignment module, a visual optical-flow pose calculation module, an inertial pre-integration module, a first optimization module and a second optimization module; the functions of the modules are as follows:
the initialization module is for completing the initialization of the visual-inertial odometry system;
the inertial data alignment module is for obtaining all inertial data between two frames of visual images by marking timestamps on the visual data and the inertial data;
the visual optical-flow pose calculation module is for performing LK optical-flow matching between each newly acquired visual image frame and the previous frame of visual image stored by the system, obtaining the tracked-point correspondences between the two frames of visual images, performing PnP matching on the two frames of visual image data using the tracked points, and obtaining an estimated camera pose based on the visual image information;
the inertial pre-integration module is for performing pre-integration on all inertial data between two frames of visual images, computing the defined pre-integration variables, and updating the error propagation to obtain the motion constraint relationship between the two frames of visual images;
the first optimization module is for taking the pose, velocity and drift of the camera motion as state variables and optimizing the system state using the preliminary estimate from the image data and the inertial motion constraint;
the second optimization module is for judging whether the current frame is a keyframe: if it is, and the current optimization window is full, then when the next frame arrives the oldest frame is marginalized out of the optimization window and the current frame is added to the optimization window; if it is not, the current frame data is directly discarded from the optimization window.
It should be noted that the device of the above embodiment is only illustrated by the division into the above functional modules. In practical applications, the above functions can be allocated to different functional modules as required, that is, the internal structure can be divided into different functional modules to complete all or part of the functions described above.
It will be understood that the terms "first", "second" and the like used in the device of the above embodiment can be used to describe various modules, but these modules are not limited by these terms; these terms are only used to distinguish one module from another. For example, without departing from the scope of the present invention, the first optimization module could be called the second optimization module and, similarly, the second optimization module could be called the first optimization module; the first optimization module and the second optimization module are both optimization modules, but they are not the same optimization module.
In conclusion, the present invention processes the visual image information with an optical flow method to obtain the initial pose estimate of the system, performs on-manifold pre-integration of the inertial information to form inter-frame constraints on the visual information, and fuses and optimizes the visual-inertial output data with a joint optimizer. It can be shown that the visual-inertial odometry system based on manifold pre-integration achieves higher positioning accuracy than a standalone visual odometry system: the manifold-based pre-integration algorithm makes effective use of the inertial information in the odometry system, suppresses the propagation of noise through the system, and reduces the influence of inertial drift on the positioning accuracy of the odometer.
The above are only preferred embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent substitution or change made by a person skilled in the art within the scope disclosed by the present invention, according to the technical solution of the present invention and its inventive concept, falls within the scope of protection of the present invention.

Claims (10)

1. A visual-inertial odometer pose estimation method based on manifold pre-integration, characterized in that: after the initialization of the visual-inertial odometry system is completed, the method performs the following steps:
obtaining all inertial data between two frames of visual images by marking timestamps on the visual data and the inertial data;
performing, for each newly acquired frame of visual image, LK optical-flow matching against the previous frame of visual image stored by the system to obtain the tracked-point correspondences between the two frames of visual images, and performing PnP matching on the two frames of visual image data using the tracked points to obtain an estimated camera pose based on the visual image information;
performing pre-integration on all inertial data between the two frames of visual images, computing the defined pre-integration variables, and updating the error propagation to obtain the motion constraint relationship between the two frames of visual images;
taking the pose, velocity and drift of the camera motion as state variables, and optimizing the system state using the preliminary estimate from the image data and the inertial motion constraint;
judging whether the current frame is a keyframe: if it is, and the current optimization window is full, then when the next frame arrives marginalizing the oldest frame out of the optimization window and adding the current frame to the optimization window; if it is not, directly discarding the current frame data from the optimization window;
repeating the above steps to achieve continuous estimation of the camera pose.
2. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 1, characterized in that: completing the initialization of the visual-inertial odometry system specifically means: obtaining the scale of the visual images using the inertial information, thereby completing the initialization operation of the visual-inertial odometry system.
3. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 2, characterized in that: obtaining the scale of the visual images using the inertial information to complete the initialization operation specifically includes:
performing pure visual motion estimation to obtain a scale-free relative motion estimate between two corresponding time instants;
integrating the inertial information to obtain the camera motion relationship based on the inertial information;
comparing the vision-based scale-free motion relationship with the inertia-based actual motion relationship to obtain the scale factor of the visual estimation process, completing the initialization process of the visual-inertial odometry system.
4. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 1, characterized in that: performing LK optical-flow matching between each newly acquired visual image frame and the previous frame of visual image stored by the system specifically includes:
performing a Taylor expansion of the image constraint equation;
computing the optical-flow vector with the LK optical flow method according to the image constraint equation after the Taylor expansion;
obtaining, from the optical-flow vector, the corresponding position of each visual image point of the current frame in the next frame of visual image, which completes the LK optical-flow matching.
5. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 4, characterized in that: the image constraint equation is:
I(x, y, z, t) = I(x + δx, y + δy, z + δz, t + δt)
where I(x, y, z, t) is the voxel at position (x, y, z) at time t;
a Taylor expansion of the image constraint equation gives:
Ix·Vx + Iy·Vy + Iz·Vz = -It
where Vx, Vy, Vz are the x, y, z components of the optical-flow vector of I(x, y, z, t), and Ix, Iy, Iz, It are the differences of the image at the point (x, y, z, t) in the corresponding directions;
the above formula is abbreviated as:
∇Iᵀ·V = -It
according to the image constraint equation after the Taylor expansion, the optical-flow vector is computed with the LK optical flow method from the equations:
Ix1·Vx + Iy1·Vy + Iz1·Vz = -It1
Ix2·Vx + Iy2·Vy + Iz2·Vz = -It2
...
Ixn·Vx + Iyn·Vy + Izn·Vz = -Itn
and this system of equations is solved by least squares:
V = (AᵀA)⁻¹·Aᵀ·b
where A is the n×3 matrix whose i-th row is [Ixi, Iyi, Izi] and b = -[It1, ..., Itn]ᵀ.
6. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 1, characterized in that: performing pre-integration on all inertial data between two frames of visual images and computing the defined pre-integration variables specifically includes:
modelling the inertial measurement process as:
ω̃(t) = ω(t) + bg(t) + ηg(t)
ã(t) = R_WBᵀ(t)·(a_W(t) + g_W) + ba(t) + ηa(t)
where B denotes the inertial (body) coordinate frame, W the world coordinate frame, bg and ba the gyroscope and accelerometer drifts, ηg and ηa the measurement noise, and g_W the gravitational acceleration in the world frame;
considering the motion model of the inertial element:
Ṙ_WB = R_WB·ω^,  v̇_W = a_W,  ṗ_W = v_W;
integrating over the interval [t, t + Δt], discretizing, and combining with the measurement equation to obtain:
R(t + Δt) = R(t)·Exp((ω̃ - bg - ηg)Δt)
v(t + Δt) = v(t) - g_W·Δt + R(t)·(ã - ba - ηa)Δt
p(t + Δt) = p(t) + v(t)·Δt - ½g_W·Δt² + ½R(t)·(ã - ba - ηa)Δt²;
collecting together the variables in the above formulas that are related to the inertial measurements and pre-integrating them to obtain the pre-integration result;
defining the pre-integration variables ΔRij, Δvij, Δpij as:
ΔRij = ∏(k = i .. j-1) Exp((ω̃k - bg)Δt)
Δvij = Σ(k = i .. j-1) ΔRik·(ãk - ba)Δt
Δpij = Σ(k = i .. j-1) [Δvik·Δt + ½ΔRik·(ãk - ba)Δt²];
and, after the pre-integration variables are obtained, computing the estimated pose based on on-manifold pre-integration by the following formulas:
Rj = Ri·ΔRij
vj = vi - g_W·Δtij + Ri·Δvij
pj = pi + vi·Δtij - ½g_W·Δtij² + Ri·Δpij.
7. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 6, characterized in that: updating the error propagation to obtain the motion constraint relationship between two frames of visual images specifically includes:
separating the error terms in the pre-integration variables:
ΔR̃ij ≈ ΔRij·Exp(δφij), Δṽij ≈ Δvij + δvij, Δp̃ij ≈ Δpij + δpij
where δφij, δvij, δpij are the noise terms corresponding to the pre-integration variables, and the operator (·)^ denotes the antisymmetric matrix corresponding to a vector;
constructing the error transfer equation by defining the noise state ηij = [δφij, δvij, δpij]ᵀ as the noise corresponding to the pre-integration variables and the noise input ηdj = [ηgj, ηaj]ᵀ as the raw noise of the inertial data;
and rearranging to obtain the noise transfer equation of the inertial pre-integration:
ηij = Aj-1·ηi,j-1 + Bj-1·ηdj-1.
8. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 1, characterized in that: taking the pose, velocity and drift of the camera motion as state variables and optimizing the system state using the preliminary estimate from the image data and the inertial motion constraint specifically includes:
defining the state vector of the visual-inertial odometry system as:
χ = [p, v, q, ba, bg]
the state vector of the system is a 15-dimensional vector containing the translation, velocity and rotation of the camera motion together with the inertial acceleration drift and the angular-velocity drift; graph optimization is performed on this state vector with the objective function defined as:
min over χ of { Σ ||r_I(zij, χ)||² + Σ ||r_C(zl, χ)||² }
where r_I(zij, χ) denotes the optimization residual corresponding to the inertial pre-integration variables and r_C(zl, χ) denotes the optimization residual corresponding to the visual measurements.
9. The visual-inertial odometer pose estimation method based on manifold pre-integration according to claim 1, characterized in that: judging whether the current frame is a keyframe specifically means: judging, according to the pose estimation result of the current frame, whether the current frame is a keyframe; wherein the selection criterion for a keyframe is that the motion distance between the current frame and the previous keyframe exceeds a set threshold.
10. A visual-inertial odometer pose estimation device based on manifold pre-integration, characterized in that the device comprises:
an initialization module, for completing the initialization of the visual-inertial odometry system;
an inertial data alignment module, for obtaining all inertial data between two frames of visual images by marking timestamps on the visual data and the inertial data;
a visual optical-flow pose calculation module, for performing LK optical-flow matching between each newly acquired visual image frame and the previous frame of visual image stored by the system, obtaining the tracked-point correspondences between the two frames of visual images, performing PnP matching on the two frames of visual image data using the tracked points, and obtaining an estimated camera pose based on the visual image information;
an inertial pre-integration module, for performing pre-integration on all inertial data between two frames of visual images, computing the defined pre-integration variables, and updating the error propagation to obtain the motion constraint relationship between the two frames of visual images;
a first optimization module, for taking the pose, velocity and drift of the camera motion as state variables and optimizing the system state using the preliminary estimate from the image data and the inertial motion constraint;
a second optimization module, for judging whether the current frame is a keyframe: if it is, and the current optimization window is full, then when the next frame arrives marginalizing the oldest frame out of the optimization window and adding the current frame to the optimization window; if it is not, directly discarding the current frame data from the optimization window.
CN201810939064.6A 2018-08-17 2018-08-17 Manifold pre-integration-based visual inertial odometer pose estimation method and device Active CN108827315B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810939064.6A CN108827315B (en) 2018-08-17 2018-08-17 Manifold pre-integration-based visual inertial odometer pose estimation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810939064.6A CN108827315B (en) 2018-08-17 2018-08-17 Manifold pre-integration-based visual inertial odometer pose estimation method and device

Publications (2)

Publication Number Publication Date
CN108827315A true CN108827315A (en) 2018-11-16
CN108827315B CN108827315B (en) 2021-03-30

Family

ID=64150264

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810939064.6A Active CN108827315B (en) 2018-08-17 2018-08-17 Manifold pre-integration-based visual inertial odometer pose estimation method and device

Country Status (1)

Country Link
CN (1) CN108827315B (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109631938A (en) * 2018-12-28 2019-04-16 湖南海迅自动化技术有限公司 Development machine autonomous positioning orientation system and method
CN109658507A (en) * 2018-11-27 2019-04-19 联想(北京)有限公司 Information processing method and device, electronic equipment
CN109798889A (en) * 2018-12-29 2019-05-24 航天信息股份有限公司 Optimization method, device, storage medium and electronic equipment based on monocular VINS system
CN109917644A (en) * 2018-12-26 2019-06-21 达闼科技(北京)有限公司 It is a kind of improve vision inertial navigation system robustness method, apparatus and robot device
CN110207692A (en) * 2019-05-13 2019-09-06 南京航空航天大学 A kind of inertia pre-integration pedestrian navigation method of map auxiliary
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 The unmanned vehicle indoor and outdoor localization method and system of multi-source fusion
CN110260861A (en) * 2019-06-13 2019-09-20 北京华捷艾米科技有限公司 Pose determines method and device, odometer
CN110296702A (en) * 2019-07-30 2019-10-01 清华大学 Visual sensor and the tightly coupled position and orientation estimation method of inertial navigation and device
CN110411475A (en) * 2019-07-24 2019-11-05 南京航空航天大学 A kind of robot vision odometer assisted based on template matching algorithm and IMU
CN110428452A (en) * 2019-07-11 2019-11-08 北京达佳互联信息技术有限公司 Detection method, device, electronic equipment and the storage medium of non-static scene point
CN110455301A (en) * 2019-08-01 2019-11-15 河北工业大学 A kind of dynamic scene SLAM method based on Inertial Measurement Unit
CN110617813A (en) * 2019-09-26 2019-12-27 中国科学院电子学研究所 Monocular visual information and IMU (inertial measurement Unit) information fused scale estimation system and method
CN110717927A (en) * 2019-10-10 2020-01-21 桂林电子科技大学 Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110763251A (en) * 2019-10-18 2020-02-07 华东交通大学 Method and system for optimizing visual inertial odometer
CN110874569A (en) * 2019-10-12 2020-03-10 西安交通大学 Unmanned aerial vehicle state parameter initialization method based on visual inertia fusion
CN111220155A (en) * 2020-03-04 2020-06-02 广东博智林机器人有限公司 Method, device and processor for estimating pose based on binocular vision inertial odometer
CN111272192A (en) * 2020-02-11 2020-06-12 清华大学 Combined pose determination method and device for skid-steer robot
CN111307176A (en) * 2020-03-02 2020-06-19 北京航空航天大学青岛研究院 Online calibration method for visual inertial odometer in VR head-mounted display equipment
CN111351433A (en) * 2020-04-14 2020-06-30 深圳市异方科技有限公司 Handheld volume measuring device based on inertial equipment and camera
CN111429524A (en) * 2020-03-19 2020-07-17 上海交通大学 Online initialization and calibration method and system for camera and inertial measurement unit
CN111507132A (en) * 2019-01-31 2020-08-07 杭州海康机器人技术有限公司 Positioning method, device and equipment
CN111780754A (en) * 2020-06-23 2020-10-16 南京航空航天大学 Visual inertial odometer pose estimation method based on sparse direct method
CN111932616A (en) * 2020-07-13 2020-11-13 清华大学 Binocular vision inertial odometer method for accelerating by utilizing parallel computing
CN111983639A (en) * 2020-08-25 2020-11-24 浙江光珀智能科技有限公司 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN111982148A (en) * 2020-07-06 2020-11-24 杭州易现先进科技有限公司 Processing method, device and system for VIO initialization and computer equipment
CN112129272A (en) * 2019-06-25 2020-12-25 京东方科技集团股份有限公司 Method and device for realizing visual odometer
CN112284381A (en) * 2020-10-19 2021-01-29 北京华捷艾米科技有限公司 Visual inertia real-time initialization alignment method and system
CN112556692A (en) * 2020-11-27 2021-03-26 绍兴市北大信息技术科创中心 Vision and inertia odometer method and system based on attention mechanism
CN112683305A (en) * 2020-12-02 2021-04-20 中国人民解放军国防科技大学 Visual-inertial odometer state estimation method based on point-line characteristics
CN112710308A (en) * 2019-10-25 2021-04-27 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN113034538A (en) * 2019-12-25 2021-06-25 杭州海康威视数字技术股份有限公司 Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
CN113074754A (en) * 2021-03-27 2021-07-06 上海智能新能源汽车科创功能平台有限公司 Visual inertia SLAM system initialization method based on vehicle kinematic constraint
CN113587916A (en) * 2021-07-27 2021-11-02 北京信息科技大学 Real-time sparse visual odometer, navigation method and system
CN113865584A (en) * 2021-08-24 2021-12-31 知微空间智能科技(苏州)有限公司 UWB three-dimensional object finding method and device based on visual inertial odometer
CN113936120A (en) * 2021-10-12 2022-01-14 北京邮电大学 Mark-free lightweight Web AR method and system
CN113936120B (en) * 2021-10-12 2024-07-12 北京邮电大学 Label-free lightweight Web AR method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160305784A1 (en) * 2015-04-17 2016-10-20 Regents Of The University Of Minnesota Iterative kalman smoother for robust 3d localization for vision-aided inertial navigation
CN106815861A (en) * 2017-01-17 2017-06-09 湖南优象科技有限公司 A kind of optical flow computation method and apparatus of compact
CN107025657A (en) * 2016-01-31 2017-08-08 天津新天星熠测控技术有限公司 A kind of vehicle action trail detection method based on video image
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 A kind of indoor orientation method and device based on inertial data and visual signature
CN107767425A (en) * 2017-10-31 2018-03-06 南京维睛视空信息科技有限公司 A kind of mobile terminal AR methods based on monocular vio

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160305784A1 (en) * 2015-04-17 2016-10-20 Regents Of The University Of Minnesota Iterative kalman smoother for robust 3d localization for vision-aided inertial navigation
CN107025657A (en) * 2016-01-31 2017-08-08 天津新天星熠测控技术有限公司 A kind of vehicle action trail detection method based on video image
CN106815861A (en) * 2017-01-17 2017-06-09 湖南优象科技有限公司 A kind of optical flow computation method and apparatus of compact
CN107255476A (en) * 2017-07-06 2017-10-17 青岛海通胜行智能科技有限公司 A kind of indoor orientation method and device based on inertial data and visual signature
CN107767425A (en) * 2017-10-31 2018-03-06 南京维睛视空信息科技有限公司 A kind of mobile terminal AR methods based on monocular vio

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FUCHUN LIU ET AL.: "IMU Preintegration for Visual-Inertial Odometry Pose Estimation", Proceedings of the 37th Chinese Control Conference *

Cited By (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658507A (en) * 2018-11-27 2019-04-19 联想(北京)有限公司 Information processing method and device, electronic equipment
CN109917644A (en) * 2018-12-26 2019-06-21 达闼科技(北京)有限公司 A method, apparatus and robot device for improving the robustness of a visual-inertial navigation system
US11188754B2 (en) 2018-12-26 2021-11-30 Cloudminds (Beijing) Technologies Co., Ltd. Method for improving robustness of visual-inertial navigation system, and robot thereof
CN109631938A (en) * 2018-12-28 2019-04-16 湖南海迅自动化技术有限公司 Development machine autonomous positioning and orientation system and method
CN109798889A (en) * 2018-12-29 2019-05-24 航天信息股份有限公司 Optimization method, device, storage medium and electronic equipment based on monocular VINS system
CN111507132B (en) * 2019-01-31 2023-07-07 杭州海康机器人股份有限公司 Positioning method, device and equipment
CN111507132A (en) * 2019-01-31 2020-08-07 杭州海康机器人技术有限公司 Positioning method, device and equipment
CN110243358B (en) * 2019-04-29 2023-01-03 武汉理工大学 Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN110243358A (en) * 2019-04-29 2019-09-17 武汉理工大学 Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN110207692A (en) * 2019-05-13 2019-09-06 南京航空航天大学 A map-aided inertial pre-integration pedestrian navigation method
CN110260861A (en) * 2019-06-13 2019-09-20 北京华捷艾米科技有限公司 Pose determination method and device, and odometer
CN110260861B (en) * 2019-06-13 2021-07-27 北京华捷艾米科技有限公司 Pose determination method and device and odometer
CN112129272B (en) * 2019-06-25 2022-04-26 京东方科技集团股份有限公司 Method and device for realizing visual odometer
CN112129272A (en) * 2019-06-25 2020-12-25 京东方科技集团股份有限公司 Method and device for realizing visual odometer
CN110428452B (en) * 2019-07-11 2022-03-25 北京达佳互联信息技术有限公司 Method and device for detecting non-static scene points, electronic equipment and storage medium
CN110428452A (en) * 2019-07-11 2019-11-08 北京达佳互联信息技术有限公司 Method and device for detecting non-static scene points, electronic equipment and storage medium
CN110411475A (en) * 2019-07-24 2019-11-05 南京航空航天大学 A robot visual odometer based on a template matching algorithm and assisted by an IMU
CN110296702A (en) * 2019-07-30 2019-10-01 清华大学 Pose estimation method and device with tightly coupled visual sensor and inertial navigation
CN110455301A (en) * 2019-08-01 2019-11-15 河北工业大学 A dynamic scene SLAM method based on an inertial measurement unit
CN110617813A (en) * 2019-09-26 2019-12-27 中国科学院电子学研究所 Scale estimation system and method fusing monocular visual information and IMU (inertial measurement unit) information
CN110717927A (en) * 2019-10-10 2020-01-21 桂林电子科技大学 Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110874569A (en) * 2019-10-12 2020-03-10 西安交通大学 Unmanned aerial vehicle state parameter initialization method based on visual inertia fusion
CN110874569B (en) * 2019-10-12 2022-04-22 西安交通大学 Unmanned aerial vehicle state parameter initialization method based on visual inertia fusion
CN110763251A (en) * 2019-10-18 2020-02-07 华东交通大学 Method and system for optimizing visual inertial odometer
CN110763251B (en) * 2019-10-18 2021-07-13 华东交通大学 Method and system for optimizing visual inertial odometer
CN112710308B (en) * 2019-10-25 2024-05-31 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN112710308A (en) * 2019-10-25 2021-04-27 阿里巴巴集团控股有限公司 Positioning method, device and system of robot
CN113034538A (en) * 2019-12-25 2021-06-25 杭州海康威视数字技术股份有限公司 Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
CN113034538B (en) * 2019-12-25 2023-09-05 杭州海康威视数字技术股份有限公司 Pose tracking method and device of visual inertial navigation equipment and visual inertial navigation equipment
CN111272192A (en) * 2020-02-11 2020-06-12 清华大学 Combined pose determination method and device for skid-steer robot
CN111307176A (en) * 2020-03-02 2020-06-19 北京航空航天大学青岛研究院 Online calibration method for visual inertial odometer in VR head-mounted display equipment
CN111307176B (en) * 2020-03-02 2023-06-16 北京航空航天大学青岛研究院 Online calibration method for visual inertial odometer in VR head-mounted display equipment
CN111220155A (en) * 2020-03-04 2020-06-02 广东博智林机器人有限公司 Method, device and processor for estimating pose based on binocular vision inertial odometer
CN111429524A (en) * 2020-03-19 2020-07-17 上海交通大学 Online initialization and calibration method and system for camera and inertial measurement unit
CN111429524B (en) * 2020-03-19 2023-04-18 上海交通大学 Online initialization and calibration method and system for camera and inertial measurement unit
CN111351433A (en) * 2020-04-14 2020-06-30 深圳市异方科技有限公司 Handheld volume measuring device based on inertial equipment and camera
CN111780754B (en) * 2020-06-23 2022-05-20 南京航空航天大学 Visual inertial odometer pose estimation method based on sparse direct method
CN111780754A (en) * 2020-06-23 2020-10-16 南京航空航天大学 Visual inertial odometer pose estimation method based on sparse direct method
CN111982148A (en) * 2020-07-06 2020-11-24 杭州易现先进科技有限公司 Processing method, device and system for VIO initialization and computer equipment
CN111932616A (en) * 2020-07-13 2020-11-13 清华大学 Binocular vision inertial odometer method accelerated by utilizing parallel computation
CN111932616B (en) * 2020-07-13 2022-10-14 清华大学 Binocular vision inertial odometer method accelerated by utilizing parallel computation
CN111983639B (en) * 2020-08-25 2023-06-02 浙江光珀智能科技有限公司 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN111983639A (en) * 2020-08-25 2020-11-24 浙江光珀智能科技有限公司 Multi-sensor SLAM method based on Multi-Camera/Lidar/IMU
CN112284381B (en) * 2020-10-19 2022-09-13 北京华捷艾米科技有限公司 Visual inertia real-time initialization alignment method and system
CN112284381A (en) * 2020-10-19 2021-01-29 北京华捷艾米科技有限公司 Visual inertia real-time initialization alignment method and system
CN112556692A (en) * 2020-11-27 2021-03-26 绍兴市北大信息技术科创中心 Vision and inertia odometer method and system based on attention mechanism
CN112683305A (en) * 2020-12-02 2021-04-20 中国人民解放军国防科技大学 Visual-inertial odometer state estimation method based on point-line characteristics
CN113074754A (en) * 2021-03-27 2021-07-06 上海智能新能源汽车科创功能平台有限公司 Visual inertia SLAM system initialization method based on vehicle kinematic constraint
CN113587916A (en) * 2021-07-27 2021-11-02 北京信息科技大学 Real-time sparse visual odometer, navigation method and system
CN113587916B (en) * 2021-07-27 2023-10-03 北京信息科技大学 Real-time sparse vision odometer, navigation method and system
CN113865584A (en) * 2021-08-24 2021-12-31 知微空间智能科技(苏州)有限公司 UWB three-dimensional object finding method and device based on visual inertial odometer
CN113865584B (en) * 2021-08-24 2024-05-03 知微空间智能科技(苏州)有限公司 UWB three-dimensional object searching method and device based on visual inertial odometer
CN113936120A (en) * 2021-10-12 2022-01-14 北京邮电大学 Markerless lightweight Web AR method and system
CN113936120B (en) * 2021-10-12 2024-07-12 北京邮电大学 Markerless lightweight Web AR method and system

Also Published As

Publication number Publication date
CN108827315B (en) 2021-03-30

Similar Documents

Publication Publication Date Title
CN108827315A (en) Vision inertia odometer position and orientation estimation method and device based on manifold pre-integration
US11519729B2 (en) Vision-aided inertial navigation
CN112484725B (en) Intelligent automobile high-precision positioning and space-time situation safety method based on multi-sensor fusion
CN110375738B (en) Monocular synchronous positioning and mapping attitude calculation method fused with inertial measurement unit
Iocchi et al. Visually realistic mapping of a planar environment with stereo
CN111220153B (en) Positioning method based on visual topological node and inertial navigation
CN110243358A (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN107909614B (en) Positioning method of inspection robot in GPS failure environment
CN113781582A (en) Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration
Gemeiner et al. Simultaneous motion and structure estimation by fusion of inertial and vision data
CN114526745B (en) Mapping method and system for tightly coupled laser radar and inertial odometer
CN111595334B (en) Indoor autonomous positioning method based on tight coupling of visual point-line characteristics and IMU (inertial measurement Unit)
CN111780781B (en) Template matching vision and inertia combined odometer based on sliding window optimization
CN112734841B (en) Method for realizing positioning by using wheel type odometer-IMU and monocular camera
CN106489170A (en) Vision-based inertial navigation
CN109059907A (en) Trajectory data processing method, device, computer equipment and storage medium
CN112254729A (en) Mobile robot positioning method based on multi-sensor fusion
Zhang et al. Vision-aided localization for ground robots
CN113189613B (en) Robot positioning method based on particle filtering
CN115371665B (en) Mobile robot positioning method based on depth camera and inertial fusion
CN113192140A (en) Binocular vision inertial positioning method and system based on point-line characteristics
CN115930977A (en) Method and system for positioning in feature-degraded scenes, electronic equipment and readable storage medium
CN112945233B (en) Global drift-free autonomous robot simultaneous positioning and map construction method
CN109785428A (en) A handheld three-dimensional reconstruction method based on a multi-state constrained Kalman filter
CN117075158A (en) Pose estimation method and system of unmanned deformation motion platform based on laser radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant