CN110084832A - Camera pose correction method, apparatus, system, device, and storage medium - Google Patents

Camera pose correction method, apparatus, system, device, and storage medium

Info

Publication number
CN110084832A
CN110084832A (application CN201910338855.8A)
Authority
CN
China
Prior art keywords
initial
image frame
current image
translation vector
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910338855.8A
Other languages
Chinese (zh)
Other versions
CN110084832B (en)
Inventor
Inventor not disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hiscene Information Technology Co Ltd
Original Assignee
Bright Wind Taiwan (shanghai) Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bright Wind Taiwan (shanghai) Mdt Infotech Ltd
Priority to CN201910338855.8A
Publication of CN110084832A
Application granted
Publication of CN110084832B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation by integrating acceleration or speed, i.e. inertial navigation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the invention disclose a camera pose correction method, apparatus, system, device, and storage medium. The method comprises: obtaining an initial pre-integration value corresponding to the current image frame and an initial translation vector of the camera's initial pose, where the initial pre-integration value and the initial translation vector are determined by pre-integrating information collected by an IMU (inertial measurement unit); and, from the camera's image-frame processing information, the initial pre-integration value, and the initial translation vector, calculating a total estimation error corresponding to the initial translation vector and correcting the initial translation vector based on that error to determine the target translation vector corresponding to the current image frame. The technical solution of the embodiments controls IMU noise and reduces its influence, improving the accuracy and precision of the initial pose estimate and thereby of the final camera pose estimate.

Description

Camera pose correction method, apparatus, system, device, and storage medium
Technical field
Embodiments of the present invention relate to the field of pose estimation, and in particular to a camera pose correction method, apparatus, system, device, and storage medium.
Background art
In computer vision research, camera pose is usually estimated from a sequence of video frames. SLAM (Simultaneous Localization and Mapping) is a common technique: by tracking the pose of a sensor (usually a camera), it constructs the sensor's 3D trajectory and maps the environment. SLAM is widely applied, for example in robot navigation, autonomous driving, and augmented reality. A SLAM system generally consists of a front-end visual odometry (VO) module and a back-end optimization module. Visual odometry estimates the camera motion between adjacent images and builds a local map; back-end optimization refines the estimated motion using loop-closure detection information to obtain a globally consistent trajectory and map.
Generally, by the number and type of cameras, SLAM can be divided into monocular SLAM, binocular (stereo) SLAM, and RGB-D SLAM. Among these, monocular SLAM has received increasing research attention thanks to its low cost, small device size, and ability to operate in large-scale environments. However, monocular SLAM cannot recover scale information, i.e. the ratio between true and observed values, when estimating camera pose, so the estimated trajectory differs from the real trajectory by a scale factor. To solve this problem, the prior art combines an IMU (inertial measurement unit), whose measurements carry scale, with visual information (the images captured by the camera); a monocular SLAM system fused with IMU information has scale and can achieve better tracking precision and a scaled trajectory estimate.
However, in the prior art, pre-integrated IMU data is fed directly into pose-optimization modules such as sliding-window optimization, so the final trajectory estimate depends heavily on the optimization result, and no noise control is applied to the measurement noise and random-walk noise of the gyroscope and accelerometer in the IMU. This noise frequently reduces the accuracy of the estimated camera trajectory, and for cheap IMU devices the noise is larger still, making the estimate even less controllable and even causing accumulated drift in the camera trajectory estimate.
Summary of the invention
Embodiments of the invention provide a camera pose correction method, apparatus, system, device, and storage medium that control IMU noise, reduce its influence, and improve the accuracy and precision of the initial pose estimate, thereby improving the accuracy and precision of the final camera pose estimate.
In a first aspect, an embodiment of the invention provides a camera pose correction method, comprising:
obtaining an initial pre-integration value corresponding to the current image frame and an initial translation vector of the camera's initial pose, wherein the initial pre-integration value and the initial translation vector are determined by pre-integrating information collected by an IMU (inertial measurement unit);
calculating a total estimation error corresponding to the initial translation vector from the camera's image-frame processing information, the initial pre-integration value, and the initial translation vector, and correcting the initial translation vector based on the total estimation error to determine a target translation vector corresponding to the current image frame.
Optionally, after determining the target translation vector corresponding to the current image frame, the method further comprises:
correcting the direction of the initial velocity of the current image frame according to that initial velocity, the initial translation vector, and the target translation vector, to determine a target velocity corresponding to the current image frame.
Optionally, after determining the target translation vector corresponding to the current image frame, the method further comprises:
correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame and the target translation vector and initial velocity corresponding to the previous image frame, to determine a target pre-integration value corresponding to the current image frame.
Optionally, after determining the target velocity corresponding to the current image frame, the method further comprises:
correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector and current target velocity corresponding to the current image frame and the target translation vector and target velocity corresponding to the previous image frame, to determine the target pre-integration value corresponding to the current image frame.
In a second aspect, an embodiment of the invention further provides a camera pose correction apparatus, comprising:
an initial information acquisition module, for obtaining an initial pre-integration value corresponding to the current image frame and an initial translation vector of the camera's initial pose, wherein the initial pre-integration value and the initial translation vector are determined by pre-integrating information collected by the IMU (inertial measurement unit);
an initial translation vector correction module, for calculating a total estimation error corresponding to the initial translation vector from the camera's image-frame processing information, the initial pre-integration value, and the initial translation vector, and correcting the initial translation vector based on the total estimation error to determine a target translation vector corresponding to the current image frame.
In a third aspect, an embodiment of the invention further provides a camera pose correction system, the system comprising a preprocessing module, an initialization module, and a pose correction module; wherein
the preprocessing module is used to run detection processing on the image information captured by the camera to determine image-frame processing information, and to pre-integrate the information collected by the IMU (inertial measurement unit) to determine the initial pre-integration value and initial pose corresponding to each image frame, the initial pose including an initial translation vector;
the initialization module is used to perform system initialization according to the image-frame processing information, the initial pre-integration value, and the initial pose;
the pose correction module is used to implement the camera pose correction method provided by any embodiment of the invention.
In a fourth aspect, an embodiment of the invention further provides a device, the device comprising:
one or more processors; and
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the camera pose correction method provided by any embodiment of the invention.
In a fifth aspect, an embodiment of the invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the camera pose correction method provided by any embodiment of the invention.
An embodiment of the invention obtains the initial pre-integration value corresponding to the current image frame and the initial translation vector of the camera's initial pose, where both are determined by pre-integrating the information collected by the IMU. Using the image-frame processing information obtained by processing the images captured by the camera, the initial pre-integration value, and the initial translation vector, the total estimation error corresponding to the initial translation vector is calculated, and based on this error the initial translation vector can be corrected, yielding the corrected target translation vector corresponding to the current image frame. By using the camera's visual information to correct the initial translation vector determined from IMU information, the invention controls IMU noise and reduces its influence on the initial translation vector, improving the accuracy and precision of the initial translation vector estimate and hence of the initial pose. After the initial translation vector has been corrected, the initial velocity and/or the initial pre-integration value of the current image frame can also be corrected, improving their accuracy and precision as well, so that the accuracy and precision of the final camera pose estimate are further improved.
Brief description of the drawings
Fig. 1 is a flowchart of a camera pose correction method provided by Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a camera pose correction method provided by Embodiment 2 of the present invention;
Fig. 3(a) is an example, referred to in Embodiment 2, of the true change of the initial velocity between two adjacent image frames;
Fig. 3(b) is an example, referred to in Embodiment 2, of the change of the initial velocity between two adjacent image frames as computed from the kinematics formula;
Fig. 4 is an example, referred to in Embodiment 2, of correcting the initial velocity;
Fig. 5 is a flowchart of a camera pose correction method provided by Embodiment 3 of the present invention;
Fig. 6 is a structural schematic diagram of a camera pose correction apparatus provided by Embodiment 4 of the present invention;
Fig. 7 is a structural schematic diagram of a camera pose correction system provided by Embodiment 5 of the present invention;
Fig. 8 is a structural schematic diagram of a device provided by Embodiment 6 of the present invention.
Specific embodiments
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of a camera pose correction method provided by Embodiment 1 of the present invention. This embodiment applies to correcting an initial translation vector determined from IMU information, and in particular to scenarios in a SLAM system or VIO (Visual-Inertial Odometry) system containing a pre-integration module, where the initial translation vector produced by the pre-integration module is corrected. The method can be executed by a camera pose correction apparatus, which can be implemented in software and/or hardware and integrated into a device with camera pose estimation capability. The method specifically includes the following steps:
S110: obtain the initial pre-integration value corresponding to the current image frame and the initial translation vector of the camera's initial pose, where the initial pre-integration value and the initial translation vector are determined by pre-integrating the information collected by the IMU (inertial measurement unit).
Here, the current image frame refers to the image frame captured by the camera at the current time. Because the camera moves, its pose in the world coordinate system changes continuously, so the camera pose corresponding to each captured image frame must be estimated. The IMU uses a gyroscope and an accelerometer to collect scaled acceleration information and angular velocity information of the camera. The initial pose refers to the camera pose obtained by a pre-integration operation on the information collected by the IMU; it may include the camera's initial translation vector and initial rotation matrix. An initial pre-integration value is a quantity obtained by pre-integrating the IMU measurements: integrating the collected acceleration once yields a velocity value; integrating it twice yields a displacement value; integrating the collected angular velocity yields a rotation angle value. In this embodiment the initial pre-integration values may comprise a first, a second, and a third initial pre-integration value, where the first initial pre-integration value is the initial relative displacement change between the current image frame and the previous image frame; the second initial pre-integration value is the initial relative velocity change between the current image frame and the previous image frame; and the third initial pre-integration value is the initial relative rotation angle change between the current image frame and the previous image frame.
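As an illustrative sketch (not the patent's implementation), the three initial pre-integration values described above can be obtained by integrating the raw IMU samples collected between two image frames. A minimal pure-Python version using simple Euler integration, with all names assumed for illustration:

```python
def preintegrate(imu_samples, dt):
    """Integrate body-frame accelerometer (m/s^2) and gyroscope (rad/s)
    samples taken between two image frames into the three values:
      alpha - relative displacement change   (first pre-integration value)
      beta  - relative velocity change       (second pre-integration value)
      theta - relative rotation angle change (third pre-integration value)
    """
    alpha = [0.0, 0.0, 0.0]
    beta = [0.0, 0.0, 0.0]
    theta = [0.0, 0.0, 0.0]
    for accel, gyro in imu_samples:
        for k in range(3):
            # displacement accumulates current velocity plus 0.5*a*dt^2
            alpha[k] += beta[k] * dt + 0.5 * accel[k] * dt * dt
            # velocity integrates acceleration once
            beta[k] += accel[k] * dt
            # rotation angle integrates angular velocity once
            theta[k] += gyro[k] * dt
    return alpha, beta, theta
```

A production pre-integration would integrate rotations on-manifold and track bias and noise Jacobians; this sketch only shows where each of the three values comes from.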
Specifically, this embodiment can pre-integrate the information collected by the IMU in advance to determine the initial pre-integration value and initial pose of the camera for the current image frame. Illustratively, a pre-integration operation on the IMU information collected between the current image frame and the previous image frame yields the initial pre-integration value corresponding to the current image frame; the initial pose corresponding to the current image frame (i.e. the initial translation vector and initial rotation matrix) is then determined from the kinematics formula and the initial pre-integration value. Because of the influence of IMU noise, the initial translation vector in the initial pose obtained at this point carries a relatively large error.
Illustratively, this embodiment can pre-integrate the acceleration information collected by the IMU between the current image frame and the previous image frame to determine the first initial pre-integration value corresponding to the current image frame, i.e. the initial relative displacement change between the current image frame and the previous image frame, and can determine the initial translation vector corresponding to the current image frame based on the following kinematics formula:

p^w_j = p^w_{j-1} + v^w_{j-1} Δt − (1/2) g^w Δt² + (R^{j-1}_w)^T α_{j-1,j}

where p^w_j is the initial translation vector corresponding to the current image frame j, i.e. the displacement from the camera coordinate system corresponding to frame j to the world coordinate system (since the camera moves, the camera coordinate system, whose origin is the camera's optical center, is not fixed); p^w_j can also be understood as the position of the camera in the world coordinate system at the current time. R^{j-1}_w is the rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j-1; rotation matrices convert between coordinate systems, and the transpose here carries the pre-integrated displacement into the world frame. p^w_{j-1} is the target translation vector corresponding to the previous image frame j-1; v^w_{j-1} is the initial velocity corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; g^w is the gravitational acceleration in the world coordinate system; and α_{j-1,j} is the first initial pre-integration value from the previous image frame j-1 to the current image frame, i.e. the relative displacement change.

Here, this embodiment can correct the initial translation vector corresponding to the previous image frame j-1 by the camera pose correction procedure provided in steps S110-S120, thereby obtaining the target translation vector p^w_{j-1} corresponding to the previous image frame.
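The kinematics relation above can be sketched in a few lines of Python. `R_prev_to_world` below denotes the rotation that maps the previous camera frame into the world frame (the inverse of the world-to-previous-frame rotation described above); all names are illustrative assumptions:

```python
def initial_translation(p_prev, v_prev, R_prev_to_world, alpha, g_w, dt):
    """p_j = p_{j-1} + v_{j-1}*dt - 0.5*g_w*dt^2 + R*alpha, where R maps
    the previous camera frame into the world frame and alpha is the
    first pre-integration value (relative displacement change)."""
    def matvec(R, v):
        return [sum(R[r][c] * v[c] for c in range(3)) for r in range(3)]

    Ra = matvec(R_prev_to_world, alpha)  # pre-integrated displacement in world frame
    return [p_prev[k] + v_prev[k] * dt - 0.5 * g_w[k] * dt * dt + Ra[k]
            for k in range(3)]
```

For example, with identity rotation, previous position at the origin, velocity 1 m/s along x, gravity along z, dt = 0.1 s, and a 1 cm pre-integrated displacement, the new position moves 11 cm along x and drops by 0.5*g*dt² along z.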
S120: from the camera's image-frame processing information, the initial pre-integration value, and the initial translation vector, calculate the total estimation error corresponding to the initial translation vector, and correct the initial translation vector based on the total estimation error to determine the target translation vector corresponding to the current image frame.
Here, image-frame processing information refers to the visual information obtained by processing the images captured by the camera. Illustratively, it may be the feature point information obtained by running feature detection and tracking on the image frames captured by the camera. The total estimation error corresponding to the initial translation vector may include a vision error related to the image-frame processing information and an IMU error caused by IMU noise. The target translation vector corresponding to the current image frame refers to the more accurate translation vector obtained after correcting the initial translation vector.
Specifically, the vision error corresponding to the initial translation vector can be calculated from the camera's image-frame processing information, and the IMU error corresponding to the initial translation vector can be calculated from the initial pre-integration value and the camera's kinematics formula, yielding the total estimation error corresponding to the initial translation vector. By minimizing this total estimation error, the initial translation vector is corrected, producing the more accurate corrected translation vector, i.e. the target translation vector corresponding to the current image frame. This embodiment corrects the IMU-derived initial translation vector using the image-frame processing information, i.e. visual information, thereby controlling IMU noise and reducing its influence on the initial translation vector.
It should be noted that experiments verify that the initial rotation obtained from IMU information is more accurate than that obtained from visual information, so visual information cannot further improve the accuracy and precision of the IMU-derived initial rotation matrix. This embodiment therefore uses visual information to correct only the initial translation vector in the initial pose, which yields a more accurate initial translation vector and thus improves the accuracy of the initial pose.
It should also be noted that after the initial translation vector corresponding to the current image frame has been corrected and the target translation vector obtained, that target translation vector can be optimized further, e.g. by sliding-window optimization, to estimate the final camera pose corresponding to the current image frame. Optimizing from the target translation vector greatly speeds up the convergence of the optimization and improves the efficiency of camera pose estimation.
In the technical solution of this embodiment, the initial pre-integration value corresponding to the current image frame and the initial translation vector of the camera's initial pose are obtained, where both are determined by pre-integrating the information collected by the IMU. Using the image-frame processing information obtained by processing the images captured by the camera, the initial pre-integration value, and the initial translation vector, the total estimation error corresponding to the initial translation vector is calculated, and based on this error the initial translation vector can be corrected, yielding the corrected target translation vector of the current image frame. By using the camera's visual information to correct the initial translation vector determined from IMU information, the invention controls IMU noise and reduces its influence on the initial translation vector, improving the accuracy and precision of the initial translation vector estimate in the initial pose and hence the accuracy and precision of the final camera pose estimate.
Building on the above technical solution, S120 may include: calculating the vision error corresponding to the current image frame from the pixel coordinates of each target feature point in the current image frame, the three-dimensional coordinates of each target feature point in a target image frame preceding the current image frame, and the transition matrix between the current image frame and the target image frame, where the transition matrix is determined from the initial translation vector and the initial rotation corresponding to the current image frame; calculating the IMU error corresponding to the current image frame from the initial translation vector corresponding to the current image frame, the time interval between the current image frame and the previous image frame, the target translation vector and initial velocity corresponding to the previous image frame, and the initial pre-integration value corresponding to the current image frame; adding the vision error and the IMU error to determine the total estimation error corresponding to the initial translation vector; and adjusting the magnitude of the initial translation vector so as to minimize the total estimation error, taking the initial translation vector that yields the smallest total estimation error as the target translation vector corresponding to the current image frame.
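The patent does not spell out how the total estimation error is minimized. As a hedged stand-in for a proper solver, a derivative-free per-axis search over the translation vector illustrates the "adjust the initial translation vector until the total estimation error is smallest" step; `total_error` would be the sum of the vision error and the IMU error, and all names here are illustrative:

```python
def correct_translation(p_init, total_error, step=0.1, iters=50):
    """Greedy per-axis search that adjusts a 3-vector p to minimize
    total_error(p); the step size is halved whenever no axis move
    improves the error. Illustrative placeholder for a real solver."""
    p = list(p_init)
    best = total_error(p)
    s = step
    for _ in range(iters):
        improved = False
        for k in range(3):
            for d in (s, -s):
                cand = list(p)
                cand[k] += d
                e = total_error(cand)
                if e < best:
                    p, best = cand, e
                    improved = True
        if not improved:
            s *= 0.5  # refine the step once the current scale stalls
    return p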
Here, a target feature point in the current image frame refers to a feature point that satisfies preset screening conditions. The preset screening conditions may require that the feature point in the current image frame also appeared in an image frame preceding the current image frame and that the depth information of the feature point has been retained. Illustratively, in a SLAM system, for each feature point in the current image frame, any feature point that also appears on a key frame in the sliding window can be taken as a target feature point. A target image frame refers to an image frame in which a target feature point appeared before the current image frame and for which the depth information of the target feature point has been retained, so that the three-dimensional coordinates of the target feature point can be obtained. Illustratively, in a SLAM system, for each target feature point, the key image frame in the sliding window in which that target feature point first appeared can be taken as its target image frame, where the sliding window contains multiple key image frames and the current image frame. The current image frame may or may not be a key image frame; if it is a key image frame, it is retained in the sliding window after sliding-window optimization, and if it is a non-key image frame, it is not retained in the sliding window after the optimization.
The transition matrix between the current image frame and the target image frame refers to the transformation from the target image frame to the current image frame; it can be determined from the translation vector and the rotation matrix from the target image frame to the current image frame. The translation vector from the target image frame to the current image frame can be determined from the translation vector corresponding to the target image frame (the translation from the camera coordinate system of the target image frame to the world coordinate system) and the initial translation vector corresponding to the current image frame (the initial translation from the camera coordinate system of the current image frame to the world coordinate system). The rotation matrix from the target image frame to the current image frame can be determined from the rotation matrix corresponding to the target image frame (from the camera coordinate system of the target image frame to the world coordinate system) and the initial rotation corresponding to the current image frame (from the camera coordinate system of the current image frame to the world coordinate system).
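The screening described above, keeping a feature only if it also appears with retained depth in a sliding-window key frame and recording the first such key frame as its target image frame, can be sketched as follows; the dict-based structures are illustrative assumptions, not the patent's data model:

```python
def select_target_features(current_feature_ids, window_frames):
    """Map each qualifying feature id in the current frame to the id of
    the first sliding-window key frame that observed it with a known
    depth (that key frame is the feature's 'target image frame')."""
    targets = {}
    for fid in current_feature_ids:
        for frame_id, observations in window_frames:  # oldest key frame first
            obs = observations.get(fid)
            if obs is not None and obs.get("depth") is not None:
                targets[fid] = frame_id  # first observation with depth wins
                break
    return targets
```

Features never seen in the window, or seen without depth, are simply excluded, which matches the requirement that a target feature point must have both a prior observation and retained depth information.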
Illustratively, the vision error corresponding to the current image frame can be calculated based on the following formula:

r_proj = Σ_{k∈F} Σ_{i∈C} ρ( ‖ u^j_i − π( T^j_k (λ_i P_i) ) ‖² )

where r_proj is the vision error corresponding to the current image frame j; u^j_i is the pixel coordinate of the i-th target feature point in the current image frame j; P_i is the normalized three-dimensional coordinate of the i-th target feature point in the target image frame k preceding the current image frame j; λ_i is the depth value corresponding to the i-th target feature point; T^j_k is the transition matrix from the target image frame k to the current image frame j; π projects a three-dimensional coordinate onto the two-dimensional plane of the current image frame; ρ is the Huber loss function; C is the set of target feature points in the current image frame j; and F is the set of target image frames.

Specifically, this embodiment can calculate the vision error corresponding to the current image frame from reprojection errors. Projecting the depth-bearing three-dimensional coordinate λ_i P_i of a target feature point in the target image frame onto the two-dimensional plane of the current image frame yields the two-dimensional coordinate π(T^j_k (λ_i P_i)) of that feature point. Because the camera's initial pose carries error, the pixel coordinate u^j_i of the target feature point observed in the current image frame does not coincide with this two-dimensional coordinate, so a reprojection error can be calculated for each target feature point in the current image frame, and adding the reprojection errors of all target feature points gives the vision error corresponding to the current image frame. It should be noted that the Huber loss function ρ is a robust-regression loss; it can be used to reduce the influence of outliers on the initial pose and the subsequent optimization, so that the optimization converges faster and the final camera pose is more accurate.
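A minimal sketch of one term of the vision error, with π as the pinhole projection onto the normalized image plane and ρ as a Huber loss applied to the squared reprojection residual. The exact Huber form and all names are illustrative assumptions:

```python
import math

def huber(s, delta=1.0):
    # Huber loss on a squared residual s: quadratic for small residuals,
    # linear in the residual norm for outliers (robust regression).
    return s if s <= delta * delta else 2.0 * delta * math.sqrt(s) - delta * delta

def reprojection_error(u_obs, P_norm, depth, R, t):
    """One term of r_proj: rho(|| u_obs - pi(T (depth * P_norm)) ||^2),
    with the transform T given as rotation R (3x3 lists) plus translation t."""
    X = [depth * P_norm[c] for c in range(3)]          # depth-bearing 3-D point
    Xc = [sum(R[r][c] * X[c] for c in range(3)) + t[r] # into current camera frame
          for r in range(3)]
    u_hat = (Xc[0] / Xc[2], Xc[1] / Xc[2])             # pi: perspective projection
    s = (u_obs[0] - u_hat[0]) ** 2 + (u_obs[1] - u_hat[1]) ** 2
    return huber(s)
```

When the pose is exact, the observed and projected coordinates coincide and the term is zero; for large residuals the Huber loss grows only linearly in the residual norm, which is what damps outliers during optimization.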
Illustratively, the IMU error corresponding to the current image frame can be calculated based on the following formula:

r_IMU = (R_{j-1}^w)ᵀ · ( p_j^w − p̂_{j-1}^w − v_{j-1}^w · Δt + (1/2) · g^w · Δt² ) − α_{j-1,j}

Wherein, r_IMU is the IMU error corresponding to the current image frame j; R_{j-1}^w is the first rotation matrix between the world coordinate system and the camera coordinate system corresponding to the previous image frame j-1; p_j^w is the initial translation vector corresponding to the current image frame j, i.e. the displacement of the camera coordinate system corresponding to the current image frame j relative to the world coordinate system; p̂_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; v_{j-1}^w is the previous initial velocity of the previous image frame in the world coordinate system; g^w is the gravitational acceleration in the world coordinate system; α_{j-1,j} is the initial pre-integration value between the previous image frame j-1 and the current image frame.
Wherein, the previous initial velocity of the previous image frame can refer to the camera motion velocity estimated at the previous image frame.
Specifically, for each image frame captured by the camera, the initial translation vector corresponding to that image frame can be corrected based on the operations of the above steps S110-S120, obtaining the target translation vector corresponding to each image frame. This embodiment can calculate the IMU error corresponding to the current image frame based on the above kinematic formula, that is, the right-hand side of the kinematic formula is subtracted from the left-hand side, and the resulting difference is the IMU error corresponding to the current image frame. It should be noted that before the initial translation vector corresponding to the current image frame is corrected, the IMU error of the current image frame is zero, since the initial translation vector is itself determined from the above kinematic formula.
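The "left-hand side minus right-hand side" residual can be sketched directly. This is a hedged illustration with assumed names and a numpy convention in which `R_w_prev` rotates the previous body frame into the world frame; it is not claimed to be the patented code.

```python
import numpy as np

def imu_position_residual(R_w_prev, p_curr, p_prev, v_prev, g_w, dt, alpha):
    """IMU error of the position term: the kinematic model's left-hand
    side minus its right-hand side,

        r = R_w_prev^T (p_j - p_{j-1} - v_{j-1} dt + 0.5 g_w dt^2) - alpha

    where alpha is the pre-integrated relative displacement between
    the previous frame and the current frame.
    """
    return R_w_prev.T @ (p_curr - p_prev - v_prev * dt + 0.5 * g_w * dt ** 2) - alpha
```

If the candidate translation `p_curr` exactly satisfies the kinematic model, the residual is zero, matching the note that the IMU error vanishes before any correction is applied.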
After adding the visual error and the IMU error to obtain the overall estimation error corresponding to the initial translation vector, this embodiment can adjust the value of the initial translation vector so as to obtain the smallest overall estimation error. By minimizing the overall estimation error, the sum of the visual error and the IMU error of the corrected initial translation vector becomes minimal, so that the initial translation vector obtained from the IMU information is corrected using visual information, the accuracy of the initial pose of the camera is improved, and the influence of IMU noise is reduced.
Based on the above technical solution, after step S120, the method may further include: correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame, the previous target translation vector corresponding to the previous image frame, and the previous initial velocity, and determining the target pre-integration value corresponding to the current image frame.
Specifically, since this embodiment corrects the initial translation vector, the initial pre-integration value corresponding to the current image frame can be corrected based on the kinematic formula, according to the target translation vectors corresponding to the two adjacent image frames and the previous initial velocity corresponding to the previous image frame, obtaining the corrected initial pre-integration value, i.e. the target pre-integration value. It should be noted that this embodiment can calculate the first initial pre-integration value, i.e. the relative displacement change between the current image frame and the previous image frame, based on the initial translation vector of the current image frame, so that the first initial pre-integration value can be corrected using the corrected translation vector, determining the corrected first target pre-integration value. By correcting the initial pre-integration value of the image frame, the error introduced when the time-continuous model is discretized can be reduced, thereby further improving the accuracy and precision of pose estimation.
Illustratively, this embodiment can correct the first initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame, the previous target translation vector and previous initial velocity corresponding to the previous image frame, and the time interval between the current image frame and the previous image frame, determining the first target pre-integration value corresponding to the current image frame. Specifically, the first target pre-integration value corresponding to the current image frame can be determined based on the following formula:

α̂_{j-1,j} = (R_{j-1}^w)ᵀ · ( p̂_j^w − p̂_{j-1}^w − v_{j-1}^w · Δt + (1/2) · g^w · Δt² )

Wherein, α̂_{j-1,j} is the first target pre-integration value corresponding to the current image frame j; R_{j-1}^w is the first rotation matrix between the world coordinate system and the camera coordinate system corresponding to the previous image frame j-1; p̂_j^w is the current target translation vector corresponding to the current image frame j; p̂_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; v_{j-1}^w is the previous initial velocity corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; g^w is the gravitational acceleration in the world coordinate system.
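This correction simply inverts the kinematic model using the corrected translations. A minimal sketch under the same assumed conventions as above (hypothetical names, `R_w_prev` rotating the previous body frame into the world frame):

```python
import numpy as np

def correct_pre_integration(R_w_prev, p_target_curr, p_target_prev, v_prev, g_w, dt):
    """First target pre-integration value: recompute the relative
    displacement from the corrected (target) translation vectors,

        alpha_target = R_w_prev^T (p_j - p_{j-1} - v_{j-1} dt + 0.5 g_w dt^2)
    """
    return R_w_prev.T @ (p_target_curr - p_target_prev
                         - v_prev * dt + 0.5 * g_w * dt ** 2)
```

By construction, plugging the corrected translations and this target pre-integration value back into the kinematic model leaves a zero residual.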
Based on the above technical solution, after the first target pre-integration value corresponding to the current image frame is determined, the method further includes: taking the first target pre-integration value as constraint information, optimizing the target translation vector and the initial rotation vector in the initial pose, and determining the optimized camera pose.
Specifically, this embodiment can optimize the initial rotation vector in the initial pose and the target translation vector based on the corrected pre-integration value, so as to obtain a more accurate camera pose for the current image frame. Illustratively, in a SLAM system, the initial rotation vector in the initial pose corresponding to the current image frame and the corrected target translation vector can be used as the initial values of the camera pose in sliding-window optimization, and the corrected pre-integration value can be used as constraint information to optimize the camera pose, so that the pose optimization result of the current image frame is more accurate, thereby greatly improving the accuracy of camera pose estimation.
Embodiment 2
Fig. 2 is a flowchart of a correcting method of camera pose provided by Embodiment 2 of the present invention. On the basis of the above embodiment, after the target translation vector corresponding to the current image frame is determined, this embodiment further includes: "correcting the direction of the initial velocity according to the initial velocity, the initial translation vector and the target translation vector of the current image frame, and determining the target velocity corresponding to the current image frame". Explanations of terms identical or corresponding to those in the above embodiment are not repeated herein.
Referring to Fig. 2, the correcting method of camera pose provided in this embodiment specifically includes the following steps:
S210, obtaining the initial pre-integration value corresponding to the current image frame and the initial translation vector in the initial pose of the camera, wherein the initial pre-integration value and the initial translation vector are determined by pre-integrating the information collected by an IMU (inertial measurement unit).
S220, calculating the overall estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value and the initial translation vector, correcting the initial translation vector based on the overall estimation error, and determining the target translation vector corresponding to the current image frame.
S230, correcting the direction of the initial velocity according to the initial velocity, the initial translation vector and the target translation vector of the current image frame, and determining the target velocity corresponding to the current image frame.
Wherein, the initial velocity corresponding to the current image frame can be determined by pre-integrating the information collected by the IMU. The initial velocity in this embodiment is a vector comprising a velocity magnitude and a velocity direction. Illustratively, by pre-integrating the acceleration information collected by the IMU between the current image frame and the previous image frame, the second initial pre-integration value, i.e. the relative velocity change between the current image frame and the previous image frame, can be obtained, and the initial velocity of the current image frame can be determined based on the following kinematic formula, according to the second initial pre-integration value, the target velocity of the previous image frame, and the time interval between the current image frame and the previous image frame:

v_j^w = v̂_{j-1}^w − g^w · Δt + R_{j-1}^w · β_{j-1,j}

Wherein, v_j^w is the initial velocity corresponding to the current image frame j; R_{j-1}^w is the first rotation matrix between the world coordinate system and the camera coordinate system corresponding to the previous image frame j-1; v̂_{j-1}^w is the target velocity corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; g^w is the gravitational acceleration in the world coordinate system; β_{j-1,j} is the second initial pre-integration value corresponding to the current image frame j.
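This velocity propagation step can be sketched as a one-liner. The same hedged conventions as above apply (assumed names; `R_w_prev` rotating the previous body frame into the world frame; `beta` the pre-integrated relative-velocity change):

```python
import numpy as np

def propagate_velocity(R_w_prev, v_target_prev, g_w, dt, beta):
    """Initial velocity of the current frame from the kinematic model:

        v_j = v_{j-1} - g_w dt + R_w_prev beta
    """
    return v_target_prev - g_w * dt + R_w_prev @ beta
```

Because `beta` is accumulated from noisy accelerometer samples, the direction of the resulting velocity can drift, which is exactly the error the following correction addresses.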
Fig. 3(a) gives an example of the change of the true initial velocity between two adjacent image frames; Fig. 3(b) gives an example of the change of the initial velocity between two adjacent image frames as calculated by the kinematic formula. The black circular blocks in Figs. 3(a) and 3(b) indicate the endpoint positions at which the camera captures the image frames; the black rectangular blocks indicate the intermediate positions corresponding to the IMU data collected between the two image frames; the arrows indicate the direction of velocity; p̂_l^w is the target translation vector corresponding to the previous image frame (the l-th frame) of the current image frame (the j-th frame), which can be understood as the position of the camera corresponding to the previous image frame (the l-th frame) in the world coordinate system; p_j^w is the initial translation vector corresponding to the current image frame (the j-th frame), which can be understood as the position of the camera corresponding to the current image frame in the world coordinate system; v̂_l^w is the target velocity corresponding to the previous image frame; v_j^w is the initial velocity corresponding to the current image frame.
As shown in Figs. 3(a) and 3(b), since the IMU acquisition frequency is higher than the camera frame rate, there are more IMU data than camera frames, so that there are multiple intermediate positions corresponding to IMU data between two adjacent image frames (i.e. between two endpoint positions). The initial velocity v_j^w in Fig. 3(b) is calculated based on the kinematic formula; comparing it with the true initial velocity given in Fig. 3(a), it can be seen that the direction of the initial velocity in Fig. 3(b) has a large error due to the influence of IMU noise, and therefore this embodiment also corrects the direction of the initial velocity.
Specifically, Fig. 4 gives an example of correcting the initial velocity. As shown in Fig. 4, after the initial translation vector corresponding to the current image frame is corrected to the target translation vector, the direction of the initial velocity also changes, but the angle between the direction of the velocity and the camera position remains constant, i.e. the two angles marked in Fig. 4 are equal. Therefore the angle between the initial velocity and the initial translation vector of the current image frame can be calculated, and the direction of the target velocity corresponding to the current image frame can be determined according to this angle and the target translation vector, so that the direction of the initial velocity is corrected and the accuracy of the initial velocity is improved.
In the technical solution of this embodiment, after the initial translation vector of the current image frame is corrected, the direction of the initial velocity of the current image frame can be corrected according to the target translation vector obtained by the correction, so that the influence of IMU noise on the velocity is further reduced and the accuracy and precision of the initial velocity estimation are improved.
Based on the above technical solution, S230 may include: calculating the rotation vector and the angle between the initial velocity corresponding to the current image frame and the initial translation vector; calculating the second rotation matrix between the initial velocity and the initial translation vector according to the rotation vector and the angle; determining the target direction according to the second rotation matrix and the target translation vector corresponding to the current image frame; and determining the target velocity corresponding to the current image frame according to the magnitude of the initial velocity and the target direction.
Specifically, this embodiment can calculate the rotation vector and the angle between the initial velocity corresponding to the current image frame and the initial translation vector based on the following formulas:

n = (p_j^w × v_j^w) / ‖p_j^w × v_j^w‖,  θ = arccos( p_j^w · v_j^w / (‖p_j^w‖ · ‖v_j^w‖) )

Wherein, n is the rotation vector between the initial velocity v_j^w corresponding to the current image frame and the initial translation vector p_j^w; θ is the angle between them; p_j^w is the initial translation vector corresponding to the current image frame, which can be understood as the position of the camera corresponding to the current image frame in the world coordinate system.
When calculating the second rotation matrix between the initial velocity and the initial translation vector, since this embodiment only corrects the direction of the initial velocity and does not correct its magnitude, the velocity direction can be changed by a rotation matrix, which preserves the vector's magnitude. Illustratively, this embodiment can determine the second rotation matrix between the initial velocity and the initial translation vector based on the following (Rodrigues) formula:

R₂ = cos θ · I + (1 − cos θ) · n·nᵀ + sin θ · n^

Wherein, R₂ is the second rotation matrix between the initial velocity v_j^w and the initial translation vector p_j^w; I is the identity matrix; nᵀ is the transpose of the rotation vector n; n^ is the antisymmetric matrix of the rotation vector n.
In this embodiment, after the initial translation vector is corrected, the angle between the direction of the velocity and the camera position remains unchanged, so that the direction of the initial velocity can be corrected based on the following formula to determine the target direction:

d̂_j = R₂ · p̂_j^w / ‖p̂_j^w‖

Wherein, d̂_j is the target direction, and p̂_j^w is the target translation vector corresponding to the current image frame.
After the target direction is obtained, the corrected target velocity can be obtained from the magnitude of the initial velocity of the current image frame and the target direction, i.e. the target velocity v̂_j^w corresponding to the current image frame can be determined based on the formula v̂_j^w = ‖v_j^w‖ · d̂_j.
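The three steps of S230 — axis/angle between velocity and translation, a Rodrigues rotation, and rescaling to the original speed — can be sketched together. This is an illustrative reconstruction under stated assumptions (hypothetical names, numpy, rotation taken about the axis p × v, θ in [0, π]), not the patented implementation:

```python
import numpy as np

def skew(n):
    """Antisymmetric (cross-product) matrix n^ of a 3-vector n."""
    return np.array([[0.0, -n[2], n[1]],
                     [n[2], 0.0, -n[0]],
                     [-n[1], n[0], 0.0]])

def correct_velocity_direction(v_init, p_init, p_target):
    """Rotate the initial velocity so it keeps the same angle to the
    translation vector after the translation has been corrected,
    while the speed (magnitude) is left unchanged."""
    # rotation vector (axis) and angle between translation and velocity
    axis = np.cross(p_init, v_init)
    axis = axis / np.linalg.norm(axis)
    cos_t = np.dot(p_init, v_init) / (np.linalg.norm(p_init) * np.linalg.norm(v_init))
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t ** 2))
    # Rodrigues formula: a pure rotation, so vector magnitudes are preserved
    R2 = cos_t * np.eye(3) + (1.0 - cos_t) * np.outer(axis, axis) + sin_t * skew(axis)
    # apply the translation-to-velocity rotation to the corrected translation direction
    direction = R2 @ (p_target / np.linalg.norm(p_target))
    return np.linalg.norm(v_init) * direction   # same speed, corrected direction
```

Because R2 maps the initial translation direction onto the initial velocity direction, applying it to the corrected translation reproduces the same relative angle, which is the invariant the correction relies on.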
Based on the above technical solution, S220 may include: calculating the visual error corresponding to the current image frame according to the pixel coordinates of the target feature points in the current image frame, the three-dimensional coordinates of the target feature points in target image frames before the current image frame, and the transformation matrix between the current image frame and the target image frame, wherein the transformation matrix is determined according to the initial translation vector and the initial rotation vector corresponding to the current image frame; calculating the IMU error corresponding to the current image frame according to the initial translation vector corresponding to the current image frame, the time interval between the current image frame and the previous image frame, the previous target translation vector and previous target velocity corresponding to the previous image frame, and the initial pre-integration value corresponding to the current image frame; adding the visual error and the IMU error to determine the overall estimation error corresponding to the initial translation vector; and minimizing the overall estimation error by adjusting the value of the initial translation vector, and determining the initial translation vector corresponding to the smallest overall estimation error as the target translation vector corresponding to the current image frame.
Wherein, this embodiment can correct the direction of the initial velocity corresponding to each image frame captured by the camera based on the operations of steps S210-S230, obtaining the target velocity corresponding to each image frame. For the specific way of calculating the visual error corresponding to the current image frame, reference can be made to the related description in Embodiment 1 above. When calculating the IMU error corresponding to the current image frame, this embodiment can use the previous target velocity, obtained by correcting the initial velocity of the previous image frame, to determine the IMU error corresponding to the current image frame; compared with calculating the IMU error directly from the initial velocity of the previous image frame, the IMU error calculated in this embodiment is more accurate, which improves the correction effect on the initial translation vector, thereby further improving the accuracy and precision of the initial pose estimation and hence of the final camera pose estimation.
Illustratively, this embodiment can calculate the IMU error corresponding to the current image frame based on the following formula:

r_IMU = (R_{j-1}^w)ᵀ · ( p_j^w − p̂_{j-1}^w − v̂_{j-1}^w · Δt + (1/2) · g^w · Δt² ) − α_{j-1,j}

Wherein, r_IMU is the IMU error corresponding to the current image frame j; R_{j-1}^w is the first rotation matrix between the world coordinate system and the camera coordinate system corresponding to the previous image frame j-1; p_j^w is the initial translation vector corresponding to the current image frame j, i.e. the displacement of the camera coordinate system corresponding to the current image frame j relative to the world coordinate system; p̂_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; v̂_{j-1}^w is the previous target velocity of the previous image frame in the world coordinate system; g^w is the gravitational acceleration in the world coordinate system; α_{j-1,j} is the initial pre-integration value between the previous image frame j-1 and the current image frame.
It should be noted that after the initial translation vector and the initial velocity corresponding to the current image frame are corrected to obtain the target translation vector and the target velocity, the target translation vector and the target velocity can be further optimized, for example by sliding-window optimization, so as to estimate the final camera pose corresponding to the current image frame. When optimizing with the target translation vector and the target velocity, the convergence of the optimization can be greatly accelerated and the estimation efficiency of the camera pose improved.
Embodiment 3
Fig. 5 is a flowchart of a correcting method of camera pose provided by Embodiment 3 of the present invention. On the basis of the above embodiments, after the target velocity corresponding to the current image frame is determined, this embodiment further includes: "correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector and current target velocity corresponding to the current image frame and the previous target translation vector and previous target velocity corresponding to the previous image frame, and determining the target pre-integration value corresponding to the current image frame". Explanations of terms identical or corresponding to those in the above embodiments are not repeated herein.
Referring to Fig. 5, the correcting method of camera pose provided in this embodiment specifically includes the following steps:
S310, obtaining the initial pre-integration value corresponding to the current image frame and the initial translation vector in the initial pose of the camera, wherein the initial pre-integration value and the initial translation vector are determined by pre-integrating the information collected by an IMU (inertial measurement unit).
S320, calculating the overall estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value and the initial translation vector, correcting the initial translation vector based on the overall estimation error, and determining the target translation vector corresponding to the current image frame.
S330, correcting the direction of the initial velocity according to the initial velocity, the initial translation vector and the target translation vector of the current image frame, and determining the target velocity corresponding to the current image frame.
S340, correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector and current target velocity corresponding to the current image frame and the previous target translation vector and previous target velocity corresponding to the previous image frame, and determining the target pre-integration value corresponding to the current image frame.
Specifically, after the initial translation vector and initial velocity corresponding to the image frames captured by the camera are corrected based on the operations of steps S310-S330, obtaining the previous target translation vector and previous target velocity corresponding to the previous image frame and the current target translation vector and current target velocity corresponding to the current image frame, the initial pre-integration value corresponding to the current image frame can be corrected, so that a more accurate pre-integration value is obtained and the influence of IMU noise on the pre-integration value is further reduced; the pose can then be optimized using the more accurate pre-integration value, further improving the accuracy and precision of pose estimation.
The initial pre-integration value in this embodiment may include a first initial pre-integration value, a second initial pre-integration value and a third initial pre-integration value; wherein the first initial pre-integration value is the initial relative displacement change between the current image frame and the previous image frame; the second initial pre-integration value is the initial relative velocity change between the current image frame and the previous image frame; and the third initial pre-integration value is the initial relative rotation angle change between the current image frame and the previous image frame. It should be noted that since this embodiment only corrects the initial translation vector and the initial velocity, and does not correct the initial rotation vector, the first initial pre-integration value and the second initial pre-integration value can be corrected based on the target translation vector and target velocity obtained by the correction, so that more accurate pre-integration values are obtained.
Illustratively, S340 may include: correcting the first initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame, the previous target translation vector and previous target velocity corresponding to the previous image frame, and the time interval between the current image frame and the previous image frame, and determining the first target pre-integration value corresponding to the current image frame; correcting the second initial pre-integration value corresponding to the current image frame according to the current target velocity corresponding to the current image frame, the previous target velocity, and the time interval between the current image frame and the previous image frame, and determining the second target pre-integration value corresponding to the current image frame.
Wherein, the first target pre-integration value refers to the corrected first initial pre-integration value, and the second target pre-integration value refers to the corrected second initial pre-integration value.
Illustratively, this embodiment can determine the first target pre-integration value corresponding to the current image frame based on the following formula:

α̂_{j-1,j} = (R_{j-1}^w)ᵀ · ( p̂_j^w − p̂_{j-1}^w − v̂_{j-1}^w · Δt + (1/2) · g^w · Δt² )

Wherein, α̂_{j-1,j} is the first target pre-integration value corresponding to the current image frame j; R_{j-1}^w is the first rotation matrix between the world coordinate system and the camera coordinate system corresponding to the previous image frame j-1; p̂_j^w is the current target translation vector corresponding to the current image frame j; p̂_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; v̂_{j-1}^w is the previous target velocity corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; g^w is the gravitational acceleration in the world coordinate system.
Illustratively, this embodiment can determine the second target pre-integration value corresponding to the current image frame based on the following formula:

β̂_{j-1,j} = (R_{j-1}^w)ᵀ · ( v̂_j^w − v̂_{j-1}^w + g^w · Δt )

Wherein, β̂_{j-1,j} is the second target pre-integration value corresponding to the current image frame j; R_{j-1}^w is the first rotation matrix between the world coordinate system and the camera coordinate system corresponding to the previous image frame j-1; v̂_j^w is the current target velocity corresponding to the current image frame j; v̂_{j-1}^w is the previous target velocity corresponding to the previous image frame j-1; Δt is the time interval between the current image frame j and the previous image frame j-1; g^w is the gravitational acceleration in the world coordinate system.
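The second correction is the velocity analogue of the first: invert the velocity kinematic model using the corrected (target) velocities. A minimal sketch under the same assumed conventions (hypothetical names, `R_w_prev` rotating the previous body frame into the world frame):

```python
import numpy as np

def correct_velocity_pre_integration(R_w_prev, v_target_curr, v_target_prev, g_w, dt):
    """Second target pre-integration value: recompute the relative
    velocity change from the corrected (target) velocities,

        beta_target = R_w_prev^T (v_j - v_{j-1} + g_w dt)
    """
    return R_w_prev.T @ (v_target_curr - v_target_prev + g_w * dt)
```

Substituting this value back into the velocity kinematic model reproduces the current target velocity exactly, so the corrected pre-integration value is self-consistent with the corrected states.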
In the technical solution of this embodiment, the initial pre-integration value corresponding to the current image frame can be corrected according to the previous target translation vector and previous target velocity corresponding to the previous image frame and the current target translation vector and current target velocity corresponding to the current image frame, so that more accurate pre-integration values are obtained and the influence of IMU noise on the pre-integration values is further reduced; the pose can then be optimized using the more accurate pre-integration values, further improving the accuracy and precision of pose estimation.
Based on the above technical solution, after the target pre-integration value corresponding to the current image frame is determined, the method further includes: taking the target pre-integration value as constraint information, optimizing the target translation vector and the initial rotation vector in the initial pose, and determining the optimized camera pose.
Specifically, after the initial translation vector, the initial velocity and the initial pre-integration value are corrected using the image frame processing information, the target translation vector and target velocity obtained by the correction can be used as the initial values of the corresponding parameters in the subsequent pose optimization process, and the target pre-integration value obtained after the correction can be used as constraint information for the pose optimization, so that the optimized final pose is more accurate, thereby improving the estimation accuracy and tracking precision of the camera pose.
The following is an embodiment of the correcting device of camera pose provided by the embodiments of the present invention. The device and the correcting methods of camera pose of the above embodiments belong to the same inventive concept; for details not described in the embodiment of the correcting device of camera pose, reference can be made to the embodiments of the correcting method of camera pose above.
Embodiment 4
Fig. 6 is a structural schematic diagram of a correcting device of camera pose provided by Embodiment 4 of the present invention. This embodiment is applicable to the case of correcting the initial translation vector determined based on IMU information. The device may specifically include: an initial information obtaining module 410 and an initial translation vector correcting module 420.
Wherein, the initial information obtaining module 410 is configured to obtain the initial pre-integration value corresponding to the current image frame and the initial translation vector in the initial pose of the camera, wherein the initial pre-integration value and the initial translation vector are determined by pre-integrating the information collected by an IMU (inertial measurement unit); and the initial translation vector correcting module 420 is configured to calculate the overall estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value and the initial translation vector, correct the initial translation vector based on the overall estimation error, and determine the target translation vector corresponding to the current image frame.
Optionally, the device further includes: an initial velocity correcting module, configured to, after the target translation vector corresponding to the current image frame is determined, correct the direction of the initial velocity according to the initial velocity, the initial translation vector and the target translation vector of the current image frame, and determine the target velocity corresponding to the current image frame.
Optionally, the device further includes: a first initial pre-integration correcting module, configured to, after the target translation vector corresponding to the current image frame is determined, correct the initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame, the previous target translation vector corresponding to the previous image frame, and the previous initial velocity, and determine the target pre-integration value corresponding to the current image frame.
Optionally, the device further includes: a second initial pre-integration correcting module, configured to, after the target velocity corresponding to the current image frame is determined, correct the initial pre-integration value corresponding to the current image frame according to the current target translation vector and current target velocity corresponding to the current image frame and the previous target translation vector and previous target velocity corresponding to the previous image frame, and determine the target pre-integration value corresponding to the current image frame.
Optionally, the initial translation vector correcting module 420 is specifically configured to: calculate the visual error corresponding to the current image frame according to the pixel coordinates of the target feature points in the current image frame, the three-dimensional coordinates of the target feature points in target image frames before the current image frame, and the transformation matrix between the current image frame and the target image frame, wherein the transformation matrix is determined according to the initial translation vector and the initial rotation vector corresponding to the current image frame; calculate the IMU error corresponding to the current image frame according to the initial translation vector corresponding to the current image frame, the time interval between the current image frame and the previous image frame, the previous target translation vector and previous target velocity corresponding to the previous image frame, and the initial pre-integration value corresponding to the current image frame; add the visual error and the IMU error to determine the overall estimation error corresponding to the initial translation vector; and minimize the overall estimation error by adjusting the value of the initial translation vector, and determine the initial translation vector corresponding to the smallest overall estimation error as the target translation vector corresponding to the current image frame.
Optionally, the device further includes: a target image frame determining module, configured to, before the visual error corresponding to the current image frame is calculated, determine, for each target feature point, the key image frame in the sliding window in which the target feature point first appears as the target image frame corresponding to that target feature point, where the sliding window includes multiple key image frames and the current image frame.
Optionally, the visual error corresponding to the current image frame is calculated based on the following formula:

r_{proj} = \sum_{k \in F} \sum_{i \in C} \rho\left( \left\| \hat{u}_i^j - \pi\left( T_k^j \, \lambda_i P_i \right) \right\| \right)

where r_{proj} is the visual error corresponding to the current image frame j; \hat{u}_i^j is the pixel coordinate of the i-th target feature point in the current image frame j; P_i is the normalized three-dimensional coordinate of the i-th target feature point in the target image frame k preceding the current image frame j; \lambda_i is the depth value corresponding to the i-th target feature point; T_k^j is the transformation matrix from the target image frame k to the current image frame j; \pi denotes projecting a three-dimensional coordinate onto the two-dimensional image plane of the current image frame; \rho is the Huber loss function; C is the set of target feature points in the current image frame j; and F is the set of target image frames.
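The visual error just defined can be sketched in a few lines of Python. Points on the normalized image plane are used and the transformation is passed as a rotation matrix R and translation t; all names and the unit Huber threshold are illustrative assumptions, not part of the patent.

```python
import math

def huber(r, delta=1.0):
    # Huber loss: quadratic near zero, linear for large residuals.
    a = abs(r)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def visual_error(observations, landmarks, depths, R, t):
    # observations: pixel coords (u, v) on the normalized plane of frame j
    # landmarks: normalized 3D coords P_i in the target frame; depths: lambda_i
    # (R, t): transformation from the target frame to the current frame
    err = 0.0
    for (u, v), P, lam in zip(observations, landmarks, depths):
        X = [sum(R[r][c] * (lam * P[c]) for c in range(3)) + t[r] for r in range(3)]
        du, dv = u - X[0] / X[2], v - X[1] / X[2]  # pi: project to image plane
        err += huber(math.hypot(du, dv))
    return err
```

A perfectly reprojected point contributes zero error; small pixel offsets contribute quadratically.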
Optionally, the IMU error corresponding to the current image frame is calculated based on the following formula:

r_{IMU} = R_w^{j-1}\left( p_j^w - p_{j-1}^w - v_{j-1}^w \Delta t + \tfrac{1}{2} g^w \Delta t^2 \right) - \hat{\alpha}_{j-1}^{j}

where r_{IMU} is the IMU error corresponding to the current image frame j; R_w^{j-1} is the first rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j-1; p_j^w is the initial translation vector corresponding to the current image frame j, i.e. the displacement of the camera coordinate system corresponding to the current image frame j relative to the world coordinate system; p_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; \Delta t is the time interval between the current image frame j and the previous image frame j-1; v_{j-1}^w is the previous target velocity of the previous image frame in the world coordinate system; g^w is the gravitational acceleration in the world coordinate system; and \hat{\alpha}_{j-1}^{j} is the initial pre-integration value between the previous image frame j-1 and the current image frame.
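The IMU-error term above translates directly into code. The sketch below uses plain Python 3-vectors and a 3x3 rotation given as nested lists; all names are illustrative assumptions.

```python
def imu_residual(R_prev, p_cur, p_prev, v_prev, g_w, dt, alpha):
    # R_prev: 3x3 rotation from world to the camera frame of image j-1
    # p_cur, p_prev: current initial / previous target translation vectors
    # v_prev: previous target velocity; g_w: gravity in world frame
    # alpha: initial pre-integration value between frames j-1 and j
    d = [p_cur[i] - p_prev[i] - v_prev[i] * dt + 0.5 * g_w[i] * dt * dt
         for i in range(3)]
    rotated = [sum(R_prev[r][c] * d[c] for c in range(3)) for r in range(3)]
    return [rotated[i] - alpha[i] for i in range(3)]
```

When the translation, velocity, and pre-integration value are mutually consistent, the residual vanishes.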
Optionally, the initial velocity correction module is specifically configured to: calculate the rotation vector and angle between the initial velocity corresponding to the current image frame and the initial translation vector; calculate a second rotation matrix between the initial velocity and the initial translation vector according to the rotation vector and the angle; determine the target direction according to the second rotation matrix and the target translation vector corresponding to the current image frame; and determine the target velocity corresponding to the current image frame according to the magnitude of the initial velocity and the target direction.
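A minimal sketch of this velocity-direction correction follows, assuming the second rotation matrix is the Rodrigues rotation taking the initial-translation direction onto the initial-velocity direction, then applied to the target translation vector; the parallel-vector guard and all names are illustrative assumptions.

```python
import math

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(a):
    return math.sqrt(_dot(a, a))

def correct_velocity(v_init, p_init, p_target, eps=1e-12):
    # Rotation vector k and angle theta between p_init and v_init.
    k = _cross(p_init, v_init)
    s, c = _norm(k), _dot(p_init, v_init)
    theta = math.atan2(s, c)
    if s < eps:  # (anti)parallel: no rotation needed
        d = list(p_target)
    else:
        k = [x / s for x in k]
        kxu, kdu = _cross(k, p_target), _dot(k, p_target)
        # Rodrigues formula applied to the target translation vector.
        d = [p_target[i] * math.cos(theta) + kxu[i] * math.sin(theta)
             + k[i] * kdu * (1.0 - math.cos(theta)) for i in range(3)]
    n = _norm(d)
    speed = _norm(v_init)  # magnitude of the initial velocity is kept
    return [speed * x / n for x in d]
```

The corrected velocity keeps the initial speed but points along the rotated target direction.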
Optionally, the initial pre-integration value includes a first initial pre-integration value and a second initial pre-integration value, where the first initial pre-integration value is the initial relative displacement change between the current image frame and the previous image frame, and the second initial pre-integration value is the initial relative velocity change between the current image frame and the previous image frame.
Correspondingly, the second initial pre-integration correction module is specifically configured to: correct the first initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame, the previous target translation vector and previous target velocity corresponding to the previous image frame, and the time interval between the current image frame and the previous image frame, and determine a first target pre-integration value corresponding to the current image frame; and correct the second initial pre-integration value corresponding to the current image frame according to the current target velocity corresponding to the current image frame, the previous target velocity, and the time interval between the current image frame and the previous image frame, and determine a second target pre-integration value corresponding to the current image frame.
Optionally, the first target pre-integration value corresponding to the current image frame is determined based on the following formula:

\alpha_{j-1}^{j} = R_w^{j-1}\left( p_j^w - p_{j-1}^w - v_{j-1}^w \Delta t + \tfrac{1}{2} g^w \Delta t^2 \right)

where \alpha_{j-1}^{j} is the first target pre-integration value corresponding to the current image frame j; R_w^{j-1} is the first rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j-1; p_j^w is the current target translation vector corresponding to the current image frame j; p_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; v_{j-1}^w is the previous target velocity corresponding to the previous image frame j-1; \Delta t is the time interval between the current image frame j and the previous image frame j-1; and g^w is the gravitational acceleration in the world coordinate system.
Optionally, the second target pre-integration value corresponding to the current image frame is determined based on the following formula:

\beta_{j-1}^{j} = R_w^{j-1}\left( v_j^w - v_{j-1}^w + g^w \Delta t \right)

where \beta_{j-1}^{j} is the second target pre-integration value corresponding to the current image frame j; R_w^{j-1} is the first rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j-1; v_j^w is the current target velocity corresponding to the current image frame j; v_{j-1}^w is the previous target velocity corresponding to the previous image frame j-1; \Delta t is the time interval between the current image frame j and the previous image frame j-1; and g^w is the gravitational acceleration in the world coordinate system.
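Both corrected pre-integration values can be computed together, since they share the previous frame's rotation; the sketch below uses plain Python 3-vectors, and all names are illustrative assumptions.

```python
def corrected_preintegration(R_prev, p_cur, p_prev, v_cur, v_prev, g_w, dt):
    # First target pre-integration value: corrected relative displacement
    # between frames j-1 and j, expressed in the camera frame of j-1.
    da = [p_cur[i] - p_prev[i] - v_prev[i] * dt + 0.5 * g_w[i] * dt * dt
          for i in range(3)]
    # Second target pre-integration value: corrected relative velocity change.
    db = [v_cur[i] - v_prev[i] + g_w[i] * dt for i in range(3)]
    rot = lambda d: [sum(R_prev[r][c] * d[c] for c in range(3)) for r in range(3)]
    return rot(da), rot(db)
```

With an identity rotation, the two outputs reduce to the bracketed world-frame expressions.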
Optionally, the device further includes: a camera pose optimization module, configured to, after the target pre-integration value corresponding to the current image frame is determined, optimize the target translation vector and the initial rotation matrix in the initial pose using the target pre-integration value as constraint information, and determine the optimized camera pose.
The camera pose correction device provided by the embodiments of the present invention can execute the camera pose correction method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the execution of that method.
Embodiment five
Fig. 7 is a structural schematic diagram of a camera pose correction system provided by Embodiment Five of the present invention. Referring to Fig. 7, the system includes: a preprocessing module 510, an initialization module 520, and a pose correction module 530.
The preprocessing module 510 is configured to perform detection processing on the image information captured by the camera to determine image frame processing information, and to perform pre-integration on the information collected by the IMU (inertial measurement unit) to determine the initial pre-integration value and initial pose corresponding to each image frame, where the initial pose includes the initial translation vector. The initialization module 520 is configured to perform system initialization according to the image frame processing information, the initial pre-integration values, and the initial poses. The pose correction module 530 is configured to implement the camera pose correction method provided by any embodiment of the present invention.
The preprocessing module 510 may include an image frame processing unit and an IMU pre-integration unit, where the image frame processing unit is configured to perform detection processing on the image information captured by the camera and determine the image frame processing information, and the IMU pre-integration unit is configured to perform pre-integration on the information collected by the IMU and determine the initial pre-integration value and initial pose corresponding to each image frame.
The camera pose correction system provided in this embodiment works as follows. First, the preprocessing module 510 performs detection processing on the image information captured by the camera to determine the image frame processing information, performs pre-integration on the information collected by the IMU to determine the initial pre-integration value and initial pose corresponding to each image frame, and outputs the image frame processing information, the initial pre-integration values, and the initial poses to the initialization module 520. The initialization module 520 performs system initialization according to the output of the preprocessing module 510, carrying out visual-inertial alignment between the scale-free image frame processing information and the scaled IMU information, so as to complete the initialization of the gyroscope bias, gravitational acceleration, scale, and initial velocity. After successful initialization, the pose correction module 530 corrects at least one parameter among the initial translation vector, the initial velocity, and the initial pre-integration value, so as to reduce the influence of IMU noise, and optimizes the pose using the corrected parameters to obtain a more accurate optimized camera pose.
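The staged ordering of the correction module (translation, then velocity, then pre-integration, then pose optimization) can be sketched as a small pipeline; the class name and the caller-supplied callables are purely illustrative assumptions about structure, not the patented implementation.

```python
class PoseCorrectionModule:
    """Toy skeleton of a staged correction module: each stage is a callable
    supplied by the surrounding system and transforms a shared state."""

    def __init__(self, fix_translation, fix_velocity, fix_preintegration, optimize):
        # Order matters: each later stage consumes the previous corrections.
        self._stages = [fix_translation, fix_velocity, fix_preintegration, optimize]

    def run(self, state):
        for stage in self._stages:
            state = stage(state)
        return state
```

A trivial usage with tagging stubs confirms the stage order.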
It should be noted that after the initialization module 520 successfully initializes the system based on the information of the current image frame, no further initialization is needed when estimating the camera pose of the next image frame, unless target tracking fails and relocalization is required, in which case initialization is performed again.
When the camera pose correction system in this embodiment corrects the initial translation vector, the initial velocity, and the initial pre-integration value, only about 2-3 ms of additional system run time is required, while the accuracy of the system's camera pose estimation can be greatly improved.
Illustratively, for a SLAM system, the pose correction module 530 may include a pose correction unit, a velocity correction unit, a pre-integration value correction unit, a sliding-window optimization unit, and a global pose optimization unit. The pose correction unit is configured to correct the initial translation vector determined from the IMU information according to the image frame processing information of the camera; the velocity correction unit is configured to correct the initial velocity determined from the IMU information according to the corrected initial translation vector; the pre-integration value correction unit is configured to correct the initial pre-integration value according to the corrected initial translation vector and the corrected initial velocity; the sliding-window optimization unit is configured to optimize the camera pose according to the corrected initial translation vector, the corrected initial velocity, and the corrected initial pre-integration value, and to perform triangulation and marginalization to obtain a more accurate pose and other state quantities. The global pose optimization unit is configured to optimize the pose over four degrees of freedom to obtain a globally consistent pose estimate. The global pose optimization unit may further include a loop-closure detection subunit, configured to detect whether the camera has returned to a previous position and to provide the detected information to the global pose optimization unit for camera pose optimization.
When the SLAM system is initialized and no local map exists, the initialization module 520 can obtain some initial 3D points, which are used to optimize the estimated pose; according to the optimized pose, newly detected 2D points are converted to 3D (triangulation estimates a depth, and a 3D point is constructed using this depth) and updated into the local map. For example, in an unmodified SLAM system (where the pose correction module 530 only includes the sliding-window optimization unit and the global pose optimization unit), if the system has no local map at initialization, the preprocessing module 510 obtains the initial poses and the initialization module 520 obtains some initial 3D points, which are used to optimize the initial poses in the sliding-window optimization unit and to update the 3D points of the local map. In the improved SLAM system with the correction units added, at system initialization the preprocessing module 510 obtains the initial poses and the initialization module 520 obtains some initial 3D points; these initial 3D points are used, on the one hand, to correct the pre-integration values and initial poses in the correction units and, on the other hand, to further optimize the poses in the sliding-window optimization unit and to update the 3D points of the local map.
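The "2D point to 3D" step mentioned above (lifting a newly detected feature using a triangulated depth) can be illustrated with a pinhole back-projection; the intrinsic parameter values used below are made-up examples.

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    # Lift pixel (u, v) with an estimated depth to a 3D point in the
    # camera frame, using pinhole intrinsics (fx, fy, cx, cy).
    return [depth * (u - cx) / fx, depth * (v - cy) / fy, depth]
```

The resulting camera-frame point is then transformed by the optimized pose before being inserted into the local map.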
Illustratively, for a VIO system, the pose correction module 530 may include a pose correction unit, a velocity correction unit, a pre-integration value correction unit, and a pose optimization unit. The pose correction unit is configured to correct the initial translation vector determined from the IMU information according to the image frame processing information of the camera; the velocity correction unit is configured to correct the initial velocity determined from the IMU information according to the corrected initial translation vector; the pre-integration value correction unit is configured to correct the initial pre-integration value according to the corrected initial translation vector and the corrected initial velocity; and the pose optimization unit is configured to optimize the camera pose according to the corrected initial translation vector, the corrected initial velocity, and the corrected initial pre-integration value to obtain a more accurate pose and other state quantities. The pose optimization unit may be, but is not limited to, a sliding-window optimization unit or a filtering optimization unit. On the one hand, the pose optimization unit can perform pose optimization according to the correction results and output the pose; on the other hand, according to the optimized camera pose, it can convert newly detected 2D points to 3D and update them into the local map.
The camera pose correction system provided in this embodiment corrects, based on visual information, at least one parameter among the initial translation vector, the initial velocity, and the initial pre-integration value corresponding to an image frame, so as to reduce the influence of IMU noise, and optimizes the pose using the corrected parameters to obtain a more accurate optimized camera pose.
Embodiment six
Fig. 8 is a structural schematic diagram of a device provided by Embodiment Six of the present invention. Referring to Fig. 8, the device includes:
one or more processors 810;
a memory 820, configured to store one or more programs;
when the one or more programs are executed by the one or more processors 810, the one or more processors 810 implement the camera pose correction method provided by any of the above embodiments, the method including:
obtaining the initial pre-integration value corresponding to the current image frame of the camera and the initial translation vector in the initial pose, where the initial pre-integration value and the initial translation vector are determined by performing pre-integration on the information collected by the IMU (inertial measurement unit);
calculating the total estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value, and the initial translation vector, correcting the initial translation vector based on the total estimation error, and determining the target translation vector corresponding to the current image frame.
Fig. 8 takes one processor 810 as an example; the processor 810 and the memory 820 in the server may be connected by a bus or in other ways, with a bus connection taken as the example in Fig. 8.
As a computer-readable storage medium, the memory 820 can be used to store software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the camera pose correction method in the embodiments of the present invention (for example, the initial information obtaining module 410 and the initial translation vector correction module 420 in the camera pose correction device). By running the software programs, instructions, and modules stored in the memory 820, the processor 810 executes the various functional applications and data processing of the server, thereby implementing the above camera pose correction method.
The memory 820 mainly includes a program storage area and a data storage area, where the program storage area can store the operating system and the application programs required by at least one function, and the data storage area can store data created according to the use of the server, and the like. In addition, the memory 820 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 820 may further include memory located remotely relative to the processor 810, and these remote memories may be connected to the server through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The server proposed in this embodiment and the camera pose correction method proposed in the above embodiments belong to the same inventive concept; technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the execution of the camera pose correction method.
Embodiment seven
Embodiment Seven provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the camera pose correction method of any embodiment of the present invention, the method including:
obtaining the initial pre-integration value corresponding to the current image frame of the camera and the initial translation vector in the initial pose, where the initial pre-integration value and the initial translation vector are determined by performing pre-integration on the information collected by the IMU (inertial measurement unit);
calculating the total estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value, and the initial translation vector, correcting the initial translation vector based on the total estimation error, and determining the target translation vector corresponding to the current image frame.
The computer storage medium of the embodiments of the present invention may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
Computer program code for carrying out the operations of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed across a network formed by multiple computing devices, and optionally they can be implemented by program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device, or they can be fabricated as individual integrated circuit modules, or multiple modules or steps among them can be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Note that the above are only preferred embodiments of the present invention and the applied technical principles. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments, and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited only to the above embodiments and may include more other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (17)

1. A camera pose correction method, comprising:
obtaining an initial pre-integration value corresponding to a current image frame of a camera and an initial translation vector in an initial pose, wherein the initial pre-integration value and the initial translation vector are determined by performing pre-integration on information collected by an IMU (inertial measurement unit);
calculating a total estimation error corresponding to the initial translation vector according to image frame processing information of the camera, the initial pre-integration value, and the initial translation vector, correcting the initial translation vector based on the total estimation error, and determining a target translation vector corresponding to the current image frame.
2. The method according to claim 1, wherein after determining the target translation vector corresponding to the current image frame, the method further comprises:
correcting the direction of an initial velocity according to the initial velocity of the current image frame, the initial translation vector, and the target translation vector, and determining a target velocity corresponding to the current image frame.
3. The method according to claim 1, wherein after determining the target translation vector corresponding to the current image frame, the method further comprises:
correcting the initial pre-integration value corresponding to the current image frame according to a current target translation vector corresponding to the current image frame and a previous target translation vector and a previous initial velocity corresponding to a previous image frame, and determining a target pre-integration value corresponding to the current image frame.
4. The method according to claim 2, wherein after determining the target velocity corresponding to the current image frame, the method further comprises:
correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector and current target velocity corresponding to the current image frame and a previous target translation vector and a previous target velocity corresponding to a previous image frame, and determining a target pre-integration value corresponding to the current image frame.
5. The method according to claim 2 or 4, wherein calculating the total estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value, and the initial translation vector, correcting the initial translation vector based on the total estimation error, and determining the target translation vector corresponding to the current image frame comprises:
calculating a visual error corresponding to the current image frame according to pixel coordinates of each target feature point in the current image frame, three-dimensional coordinates of each target feature point in a target image frame preceding the current image frame, and a transformation matrix between the current image frame and the target image frame, wherein the transformation matrix is determined according to the initial translation vector and an initial rotation vector corresponding to the current image frame;
calculating an IMU error corresponding to the current image frame according to the initial translation vector corresponding to the current image frame, a time interval between the current image frame and a previous image frame, a previous target translation vector and a previous target velocity corresponding to the previous image frame, and the initial pre-integration value corresponding to the current image frame;
adding the visual error and the IMU error to determine the total estimation error corresponding to the initial translation vector;
adjusting the magnitude of the initial translation vector so that the total estimation error is minimized, and determining the initial translation vector corresponding to the smallest total estimation error as the target translation vector corresponding to the current image frame.
6. The method according to claim 5, wherein before calculating the visual error corresponding to the current image frame, the method further comprises:
for each target feature point, determining the key image frame in a sliding window in which the target feature point first appears as the target image frame corresponding to that target feature point, wherein the sliding window comprises multiple key image frames and the current image frame.
7. The method according to claim 5, wherein the visual error corresponding to the current image frame is calculated based on the following formula:

r_{proj} = \sum_{k \in F} \sum_{i \in C} \rho\left( \left\| \hat{u}_i^j - \pi\left( T_k^j \, \lambda_i P_i \right) \right\| \right)

wherein r_{proj} is the visual error corresponding to the current image frame j; \hat{u}_i^j is the pixel coordinate of the i-th target feature point in the current image frame j; P_i is the normalized three-dimensional coordinate of the i-th target feature point in the target image frame k preceding the current image frame j; \lambda_i is the depth value corresponding to the i-th target feature point; T_k^j is the transformation matrix from the target image frame k to the current image frame j; \pi denotes projecting a three-dimensional coordinate onto the two-dimensional image plane of the current image frame; \rho is the Huber loss function; C is the set of target feature points in the current image frame j; and F is the set of target image frames.
8. The method according to claim 5, wherein the IMU error corresponding to the current image frame is calculated based on the following formula:

r_{IMU} = R_w^{j-1}\left( p_j^w - p_{j-1}^w - v_{j-1}^w \Delta t + \tfrac{1}{2} g^w \Delta t^2 \right) - \hat{\alpha}_{j-1}^{j}

wherein r_{IMU} is the IMU error corresponding to the current image frame j; R_w^{j-1} is the first rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j-1; p_j^w is the initial translation vector corresponding to the current image frame j, i.e. the displacement of the camera coordinate system corresponding to the current image frame j relative to the world coordinate system; p_{j-1}^w is the previous target translation vector corresponding to the previous image frame j-1; \Delta t is the time interval between the current image frame j and the previous image frame j-1; v_{j-1}^w is the previous target velocity of the previous image frame in the world coordinate system; g^w is the gravitational acceleration in the world coordinate system; and \hat{\alpha}_{j-1}^{j} is the initial pre-integration value between the previous image frame j-1 and the current image frame.
9. The method according to claim 2, wherein correcting the direction of the initial velocity according to the initial velocity of the current image frame, the initial translation vector, and the target translation vector and determining the target velocity corresponding to the current image frame comprises:
calculating a rotation vector and an angle between the initial velocity corresponding to the current image frame and the initial translation vector;
calculating a second rotation matrix between the initial velocity and the initial translation vector according to the rotation vector and the angle;
determining a target direction according to the second rotation matrix and the target translation vector corresponding to the current image frame;
determining the target velocity corresponding to the current image frame according to the magnitude of the initial velocity and the target direction.
10. The method according to claim 4, wherein the initial pre-integration value comprises a first initial pre-integration value and a second initial pre-integration value, the first initial pre-integration value being the initial relative displacement change between the current image frame and the previous image frame, and the second initial pre-integration value being the initial relative velocity change between the current image frame and the previous image frame;
correspondingly, correcting the initial pre-integration value corresponding to the current image frame according to the current target translation vector and current target velocity corresponding to the current image frame and the previous target translation vector and previous target velocity corresponding to the previous image frame, and determining the target pre-integration value corresponding to the current image frame comprises:
correcting the first initial pre-integration value corresponding to the current image frame according to the current target translation vector corresponding to the current image frame, the previous target translation vector and previous target velocity corresponding to the previous image frame, and the time interval between the current image frame and the previous image frame, and determining a first target pre-integration value corresponding to the current image frame;
correcting the second initial pre-integration value corresponding to the current image frame according to the current target velocity corresponding to the current image frame, the previous target velocity, and the time interval between the current image frame and the previous image frame, and determining a second target pre-integration value corresponding to the current image frame.
11. The method according to claim 10, wherein the first target pre-integration value corresponding to the current image frame is determined based on the following formula:
α̂_j = R_w^{c_{j-1}} (p_j^w − p_{j−1}^w − v_{j−1}^w Δt + (1/2) g_w Δt²)
Wherein, α̂_j is the first target pre-integration value corresponding to the current image frame j; R_w^{c_{j-1}} is the first rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j−1; p_j^w is the current target translation vector corresponding to the current image frame j; p_{j−1}^w is the previous target translation vector corresponding to the previous image frame j−1; v_{j−1}^w is the previous target velocity corresponding to the previous image frame j−1; Δt is the time interval between the current image frame j and the previous image frame j−1; g_w is the gravitational acceleration in the world coordinate system.
12. The method according to claim 10, wherein the second target pre-integration value corresponding to the current image frame is determined based on the following formula:
β̂_j = R_w^{c_{j-1}} (v_j^w − v_{j−1}^w + g_w Δt)
Wherein, β̂_j is the second target pre-integration value corresponding to the current image frame j; R_w^{c_{j-1}} is the first rotation matrix from the world coordinate system to the camera coordinate system corresponding to the previous image frame j−1; v_j^w is the current target velocity corresponding to the current image frame j; v_{j−1}^w is the previous target velocity corresponding to the previous image frame j−1; Δt is the time interval between the current image frame j and the previous image frame j−1; g_w is the gravitational acceleration in the world coordinate system.
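Under the term definitions of claims 11 and 12, the two corrected pre-integration values follow the standard visual-inertial (VINS-Mono-style) kinematic relations. A minimal NumPy sketch follows; the function name, the gravity magnitude, and the sign convention g_w = (0, 0, 9.81) in the world frame are assumptions for illustration.

```python
import numpy as np

G_W = np.array([0.0, 0.0, 9.81])  # gravity in the world frame (assumed convention)

def target_preintegration(R_w_cjm1, p_j, p_jm1, v_j, v_jm1, dt):
    """Corrected pre-integration values between frames j-1 and j:
    alpha is the relative displacement change, beta the relative velocity
    change, both rotated into the camera frame of the previous image frame."""
    alpha = R_w_cjm1 @ (p_j - p_jm1 - v_jm1 * dt + 0.5 * G_W * dt ** 2)
    beta = R_w_cjm1 @ (v_j - v_jm1 + G_W * dt)
    return alpha, beta
```

A useful sanity check of the sign convention: a body in free fall between the two frames yields zero for both values, matching the fact that an ideal IMU in free fall measures no specific force.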
13. The method according to any one of claims 3, 4 and 10-12, wherein after determining the target pre-integration value corresponding to the current image frame, the method further comprises:
Optimizing the target translation vector and the initial rotation matrix in the initial pose by using the target pre-integration value as constraint information, and determining the optimized camera pose.
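Because the displacement pre-integration constraint of claim 11 is linear in the translation, using it as constraint information to refine the translation admits a closed-form solution. The sketch below illustrates only that special case; the names and the gravity convention are illustrative assumptions, and a full implementation would jointly optimize the rotation as well.

```python
import numpy as np

G_W = np.array([0.0, 0.0, 9.81])  # assumed gravity convention in the world frame

def refine_translation(alpha_target, R_w_cjm1, p_jm1, v_jm1, dt):
    """Solve alpha = R (p_j - p_jm1 - v_jm1*dt + 0.5*g*dt^2) for p_j.
    The residual is linear in p_j, so a single Gauss-Newton step is exact."""
    return p_jm1 + v_jm1 * dt - 0.5 * G_W * dt ** 2 + R_w_cjm1.T @ alpha_target
```

Round-tripping a translation through the claim-11 formula and this solver recovers it exactly, which is why the pre-integration value acts as a hard constraint on the pose in this simplified setting.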
14. A camera pose correcting device, comprising:
An initial information acquisition module, configured to acquire the initial pre-integration value corresponding to the current image frame and the initial translation vector in the initial pose of the camera, wherein the initial pre-integration value and the initial translation vector are determined by pre-integrating information acquired by an inertial measurement unit (IMU);
An initial translation vector correction module, configured to calculate the overall estimation error corresponding to the initial translation vector according to the image frame processing information of the camera, the initial pre-integration value and the initial translation vector, to correct the initial translation vector based on the overall estimation error, and to determine the target translation vector corresponding to the current image frame.
15. A camera pose correcting system, comprising: a preprocessing module, an initialization module and a pose correction module; wherein,
The preprocessing module is configured to perform detection processing on image information captured by the camera to determine image frame processing information, and to perform pre-integration on information acquired by the inertial measurement unit (IMU) to determine the initial pre-integration value and the initial pose corresponding to each image frame, wherein the initial pose comprises the initial translation vector;
The initialization module is configured to perform system initialization according to the image frame processing information, the initial pre-integration value and the initial pose;
The pose correction module is configured to implement the camera pose correcting method according to any one of claims 1-13.
16. A device, comprising:
One or more processors; and
A memory for storing one or more programs;
Wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the camera pose correcting method according to any one of claims 1-13.
17. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the camera pose correcting method according to any one of claims 1-13.
CN201910338855.8A 2019-04-25 2019-04-25 Method, device, system, equipment and storage medium for correcting camera pose Active CN110084832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910338855.8A CN110084832B (en) 2019-04-25 2019-04-25 Method, device, system, equipment and storage medium for correcting camera pose

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910338855.8A CN110084832B (en) 2019-04-25 2019-04-25 Method, device, system, equipment and storage medium for correcting camera pose

Publications (2)

Publication Number Publication Date
CN110084832A true CN110084832A (en) 2019-08-02
CN110084832B CN110084832B (en) 2021-03-23

Family

ID=67416750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910338855.8A Active CN110084832B (en) 2019-04-25 2019-04-25 Method, device, system, equipment and storage medium for correcting camera pose

Country Status (1)

Country Link
CN (1) CN110084832B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075609A1 (en) * 2016-09-12 2018-03-15 DunAn Precision, Inc. Method of Estimating Relative Motion Using a Visual-Inertial Sensor
CN107193279A (en) * 2017-05-09 2017-09-22 复旦大学 Robot localization and map structuring system based on monocular vision and IMU information
CN108051002A (en) * 2017-12-04 2018-05-18 上海文什数据科技有限公司 Transport vehicle space-location method and system based on inertia measurement auxiliary vision
CN108629793A (en) * 2018-03-22 2018-10-09 中国科学院自动化研究所 The vision inertia odometry and equipment demarcated using line duration
CN108981693A (en) * 2018-03-22 2018-12-11 东南大学 VIO fast joint initial method based on monocular camera
CN108613675A (en) * 2018-06-12 2018-10-02 武汉大学 Low-cost unmanned aircraft traverse measurement method and system
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TONG QIN et al.: "VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator", arXiv:1708.03852v1 [cs.RO] *
ZIKANG YUAN et al.: "Visual-Inertial State Estimation with Pre-integration Correction for Robust Mobile Augmented Reality", MM '19 *
ZHAO TIANYANG: "Research on a multi-sensor spatial pose calculation method fusing inertial and visual information", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112414400A (en) * 2019-08-21 2021-02-26 浙江商汤科技开发有限公司 Information processing method and device, electronic equipment and storage medium
CN112414400B (en) * 2019-08-21 2022-07-22 浙江商汤科技开发有限公司 Information processing method and device, electronic equipment and storage medium
CN110648370B (en) * 2019-09-29 2022-06-03 阿波罗智联(北京)科技有限公司 Calibration data screening method and device and electronic equipment
CN110648370A (en) * 2019-09-29 2020-01-03 百度在线网络技术(北京)有限公司 Calibration data screening method and device and electronic equipment
CN110880187B (en) * 2019-10-17 2022-08-12 北京达佳互联信息技术有限公司 Camera position information determining method and device, electronic equipment and storage medium
CN110880187A (en) * 2019-10-17 2020-03-13 北京达佳互联信息技术有限公司 Camera position information determining method and device, electronic equipment and storage medium
CN110910423B (en) * 2019-11-15 2022-08-23 小狗电器互联网科技(北京)股份有限公司 Target tracking method and storage medium
CN110910423A (en) * 2019-11-15 2020-03-24 小狗电器互联网科技(北京)股份有限公司 Target tracking method and storage medium
CN112904359A (en) * 2019-11-19 2021-06-04 沃尔沃汽车公司 Velocity estimation based on remote laser detection and measurement
CN112904359B (en) * 2019-11-19 2024-04-09 沃尔沃汽车公司 Speed estimation based on remote laser detection and measurement
CN113748693A (en) * 2020-03-27 2021-12-03 深圳市速腾聚创科技有限公司 Roadbed sensor and pose correction method and device thereof
CN113748693B (en) * 2020-03-27 2023-09-15 深圳市速腾聚创科技有限公司 Position and pose correction method and device of roadbed sensor and roadbed sensor
CN111539982A (en) * 2020-04-17 2020-08-14 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
CN111539982B (en) * 2020-04-17 2023-09-15 北京维盛泰科科技有限公司 Visual inertial navigation initialization method based on nonlinear optimization in mobile platform
CN111795686A (en) * 2020-06-08 2020-10-20 南京大学 Method for positioning and mapping mobile robot
CN111795686B (en) * 2020-06-08 2024-02-02 南京大学 Mobile robot positioning and mapping method
CN112669381A (en) * 2020-12-28 2021-04-16 北京达佳互联信息技术有限公司 Pose determination method and device, electronic equipment and storage medium
WO2022188334A1 (en) * 2021-03-12 2022-09-15 浙江商汤科技开发有限公司 Positioning initialization method and apparatus, device, storage medium, and program product
WO2022193508A1 (en) * 2021-03-16 2022-09-22 浙江商汤科技开发有限公司 Method and apparatus for posture optimization, electronic device, computer-readable storage medium, computer program, and program product
US11468599B1 (en) * 2021-06-17 2022-10-11 Shenzhen Reolink Technology Co., Ltd. Monocular visual simultaneous localization and mapping data processing method apparatus, terminal, and readable storage medium
CN113409391A (en) * 2021-06-25 2021-09-17 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN113409391B (en) * 2021-06-25 2023-03-03 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN114882023A (en) * 2022-07-07 2022-08-09 苏州小牛自动化设备有限公司 Battery string position and posture correction method, device, control equipment and system

Also Published As

Publication number Publication date
CN110084832B (en) 2021-03-23

Similar Documents

Publication Publication Date Title
CN110084832A (en) Correcting method, device, system, equipment and the storage medium of camera pose
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
JP6896077B2 (en) Vehicle automatic parking system and method
US20240221402A1 (en) Method and apparatus for 3-d auto tagging
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN107990899B (en) Positioning method and system based on SLAM
CN111210463B (en) Virtual wide-view visual odometer method and system based on feature point auxiliary matching
CN110125928A A binocular inertial-navigation SLAM system that performs feature matching between preceding and succeeding frames
Kneip et al. Robust real-time visual odometry with a single camera and an IMU
CN106920259B (en) positioning method and system
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
WO2018159168A1 (en) System and method for virtually-augmented visual simultaneous localization and mapping
CN110246147A Visual-inertial odometry method, visual-inertial odometry device and mobile device
CN109671120A A monocular SLAM initialization method and system based on a wheel encoder
CN107590827A An indoor mobile robot visual SLAM method based on Kinect
US11788845B2 (en) Systems and methods for robust self-relocalization in a visual map
CN107888828A (en) Space-location method and device, electronic equipment and storage medium
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
JP6229041B2 (en) Method for estimating the angular deviation of a moving element relative to a reference direction
KR20210058686A (en) Device and method of implementing simultaneous localization and mapping
JP7138361B2 (en) User Pose Estimation Method and Apparatus Using 3D Virtual Space Model
CN112700486B (en) Method and device for estimating depth of road surface lane line in image
Kunz et al. Stereo self-calibration for seafloor mapping using AUVs
CN112731503B (en) Pose estimation method and system based on front end tight coupling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210923

Address after: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Patentee after: HUAZHONG University OF SCIENCE AND TECHNOLOGY

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20211223

Address after: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Patentee before: Huazhong University of Science and Technology

CP02 Change in the address of a patent holder

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Patentee after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Patentee before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190802

Assignee: SHANGHAI LILITH TECHNOLOGY Corp.

Assignor: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2024980002950

Denomination of invention: Method, device, system, equipment, and storage medium for correcting camera pose

Granted publication date: 20210323

License type: Common License

Record date: 20240319