CN111833402B - Visual odometry rotational motion processing method based on a pause information supplementing mechanism

Visual odometry rotational motion processing method based on a pause information supplementing mechanism

Info

Publication number: CN111833402B
Application number: CN202010621410.3A
Authority: CN (China)
Prior art keywords: rotation, camera, image, frame, information
Legal status: Active (granted)
Other versions: CN111833402A
Original language: Chinese (zh)
Inventors: Liu Shiguang (刘世光), Fan Jiahui (樊家辉)
Assignee (current and original): Tianjin University
Priority and filing date: 2020-06-30
Publication of CN111833402A: 2020-10-27
Grant and publication of CN111833402B: 2023-06-06


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; image sequence
    • G06T2207/30: Subject of image; context of image processing
    • G06T2207/30244: Camera pose
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems


Abstract

The invention discloses a visual odometry rotational motion processing method based on a pause information supplementing mechanism, comprising the following steps: step 1, monitoring image frames and judging their motion category; step 2, switching the data-stream direction according to the judgment of step 1, with the image frames divided into three categories: a rotation initial frame RB, rotation process frames R1, R2, …, Rn, and a rotation end frame RE, where RB inherits the current camera pose information and provides initialization for subsequent frames, R1, R2, …, Rn integrate the rotated images for computation, and RE handles the accumulated error of each rotation and connects back to the ordinary computation flow; step 3, performing image-frame matching and pose calculation to complete the image processing of camera rotation, and applying error-optimization processing to the pose results according to the number of image frames; and step 4, optimizing the back-end data.

Description

Visual odometry rotational motion processing method based on a pause information supplementing mechanism
Technical Field
The invention belongs to the field of image/graphics processing, relates to target positioning, camera pose calculation, and environment map representation, and maintains the robustness of visual odometry under rotational motion.
Background
Simultaneous Localization and Mapping (SLAM) refers to the process by which an unmanned platform, given no prior information about the environment and uncertain about its own pose, estimates its pose from onboard sensors while moving and, at the same time, incrementally perceives the surroundings to construct a map in a specific form. With the progress of computer vision technology and the rapid growth of computing power, SLAM research based on vision sensors has developed quickly, and researchers have proposed a series of representative methods, including feature-point methods [1-2], direct methods [3-5], and hybrid methods [6].
For the rotational-motion problem of the SLAM visual odometry, Gauglitz et al. [7] proposed a SLAM algorithm supporting both general and degenerate camera motions; it uses a geometrically robust information criterion to detect motion-state changes and considers a general model and a rotation model jointly. Pirchheim et al. [8] proposed generating panoramic subgraphs from rotated frames, using their image information as much as possible rather than simply discarding it; the method builds on a keyframe SLAM system and divides features into finite and infinite points. Zhou et al. [9] studied the robustness of rotational motion in a semi-dense SLAM system and built a probabilistic model based on LSD-SLAM [4]. Liu et al. [10] proposed a multi-homography sliding-window method based on feature tracking and pose optimization and implemented an Android program to demonstrate the robustness of the algorithm. Lakshmi et al. [11] proposed a method that resists rotational motion while taking the photometric residuals of both motion types into account. The robustness of SLAM in specific application scenarios still requires intensive study; this patent focuses mainly on the motion degeneracy caused by camera rotation.
In summary, existing algorithms of different categories show excellent performance on precision and stability indexes. Domestic researchers have produced partial results [12-13] on SLAM applications and module improvements, but special motion conditions such as rotation have not been fully studied: the visual odometry modules of existing SLAM algorithms assume stable, non-degenerate camera motion, which is disadvantageous for VR/AR and similar applications. In a general motion state, the camera pose is represented by six degrees of freedom; when the camera rotates, or moves approximately rotationally, the degrees of freedom drop to three because the position parameters do not change. The visual odometry cannot directly handle this change in degrees of freedom, a case most algorithms have not considered.
References:
[1] Klein G, Murray D. Parallel tracking and mapping for small AR workspaces[C]//The 6th IEEE and ACM International Symposium on Mixed and Augmented Reality. IEEE, 2007: 225-234.
[2] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[3] Newcombe R A, Lovegrove S J, Davison A J. DTAM: Dense tracking and mapping in real-time[C]//2011 International Conference on Computer Vision. IEEE, 2011: 2320-2327.
[4] Engel J, Schöps T, Cremers D. LSD-SLAM: Large-scale direct monocular SLAM[C]//European Conference on Computer Vision. Springer, Cham, 2014: 834-849.
[5] Engel J, Koltun V, Cremers D. Direct sparse odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 40(3): 611-625.
[6] Forster C, Pizzoli M, Scaramuzza D. SVO: Fast semi-direct monocular visual odometry[C]//2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014: 15-22.
[7] Gauglitz S, Sweeney C, Ventura J, et al. Live tracking and mapping from both general and rotation-only camera motion[C]//2012 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2012: 13-22.
[8] Pirchheim C, Schmalstieg D, Reitmayr G. Handling pure camera rotation in keyframe-based SLAM[C]//2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2013: 229-238.
[9] Zhou Y, Yan F, Zhou Z. Probabilistic depth map model for rotation-only camera motion in semi-dense monocular SLAM[C]//2016 International Conference on Virtual Reality and Visualization (ICVRV). IEEE, 2016: 8-15.
[10] Liu H, Zhang G, Bao H. Robust keyframe-based monocular SLAM for augmented reality[C]//2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, 2016: 1-10.
[11] Lakshmi A, Faheema A G J, Deodhare D. Robust direct visual odometry estimation for a monocular camera under rotations[J]. IEEE Robotics and Automation Letters, 2017, 3(1): 367-372.
[12] Yang Jiduo, Cheng Yuehua, Xu Guili, Dong Wende, et al. A monocular vision SLAM method and system[P]. CN110189390A, 2019-08-30.
[13] Zhou Jian, Liu Zhongyuan, Li Liang, et al. A SLAM map splicing method and system[P]. CN109887053A, 2019-06-14.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a visual odometry rotational motion processing method based on a pause information supplementing mechanism. The core of the invention is the Pause Supplemental Information Mechanism (PSIM), which extracts particular image frames (such as rotated frames) for independent processing, forms information packets while the system runs and adds them to the constructed map, and then connects the image frames before and after the rotation so that operation continues.
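By way of illustration only, the PSIM control flow can be summarized in the following Python sketch; every identifier in it (classify_motion, general_window, rotation_window, global_map and their methods) is a hypothetical placeholder for the modules described in the steps below, not a name used by the patent.

```python
# Hypothetical skeleton of the PSIM control flow; all names are placeholders.
def run_psim(frames, classify_motion, general_window, rotation_window, global_map):
    """classify_motion(frame) returns "rotation" or "general" (step 1);
    the two sliding windows and the map are duck-typed stand-ins."""
    rotating = False
    for frame in frames:
        if classify_motion(frame) == "rotation":
            if not rotating:                                   # rotation begins: RB
                rotation_window.start(general_window.current_pose())
                rotating = True
            rotation_window.add(frame)                         # R1 ... Rn
        else:
            if rotating:                                       # rotation ends: RE
                packet = rotation_window.finish()              # information packet
                global_map.merge(packet)                       # added to the map
                rotating = False
            general_window.add(frame)                          # ordinary flow resumes
```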
The invention is realized by the following technical scheme:
A visual odometry rotational motion processing method based on a pause information supplementing mechanism comprises the following steps:
step 1, monitoring image frames and judging the motion category of the image frames;
step 2, switching the data-stream direction according to the judgment of step 1, with the image frames divided into three categories: a rotation initial frame RB, rotation process frames R1, R2, …, Rn, and a rotation end frame RE; the rotation initial frame RB inherits the current camera pose information and provides initialization for the subsequent frames; the rotation process frames R1, R2, …, Rn integrate the rotated images for computation; the rotation end frame RE handles the accumulated error of each rotation and connects back to the ordinary computation flow;
step 3, performing image-frame matching and pose calculation to complete the image processing of camera rotation, and applying error-optimization processing to the pose results according to the number of image frames;
step 4, optimizing the back-end data.
Further, step 1 specifically includes: setting a vector N that represents the consistency and difference of the image-point motions; when the motion trend is stable and the camera moves along one axis, the motion vectors are approximately parallel and the value of N is approximately zero; the motion amplitude is judged by setting a fluctuation threshold, and the motion state of the image frame is distinguished according to the change of N.
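For illustration, one plausible construction of the N statistic is the dispersion of the inter-frame motion vectors of matched image points; the sketch below, including the threshold value, is an assumption and not the patent's exact formula.

```python
import numpy as np

def flow_dispersion(pts_prev: np.ndarray, pts_curr: np.ndarray) -> float:
    """Dispersion of the per-point motion vectors between two frames;
    pts_prev and pts_curr are (n, 2) arrays of matched pixel coordinates."""
    flow = pts_curr - pts_prev                           # per-point motion vectors
    units = flow / np.maximum(np.linalg.norm(flow, axis=1, keepdims=True), 1e-9)
    mean_dir = units.mean(axis=0)
    mean_dir /= max(np.linalg.norm(mean_dir), 1e-9)
    return float(np.mean(1.0 - units @ mean_dir))        # ~0 when vectors are parallel

# Hypothetical fluctuation threshold; the patent does not fix its value.
N_THRESHOLD = 0.1

def motion_is_unstable(pts_prev, pts_curr) -> bool:
    return flow_dispersion(pts_prev, pts_curr) > N_THRESHOLD
```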
Further, step 3 specifically includes:
For the ordinary camera motion state, calculation is performed within the DSO algorithm framework; for the rotational camera motion state, calculation is performed using the rotation initial frame RB, the rotation process frames R1, R2, …, Rn, and the rotation end frame RE. The rotation initial frame RB transmits the current pose information of the camera, and the pose information of the first frame is obtained from the accumulated calculation results of the preceding image frames. The camera position is represented in three-dimensional space and the camera rotation by a quaternion; that is, the camera pose information is represented as follows:
$$T = (x, y, z, q_{1}, q_{2}, q_{3}, q_{4}) \qquad (1)$$
The rotation process frames R1, R2, …, Rn form a set S used to calculate the rotation component of the camera pose. The points of the rotation initial frame RB carry 3D information, whereas the image points in the set S carry only 2D information. There are two kinds of image points in S: image points without any depth information, and image points participating in the rotation-pose calculation. Let the pixel coordinates of the projection of a spatial point P be $[u, v]^T$; the position of P is then expressed as follows:

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \exp(\xi^{\wedge}) P \qquad (2)$$
where λ is the scale factor, K the camera intrinsic matrix, and ξ the Lie-algebra representation of the pose;
equation (2) is converted into equation (3) by the least-squares method; if the error can be minimized, the camera pose and the spatial positions of the points at that moment are considered credible:
$$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| u_{i} - \frac{1}{s_{i}} K \exp(\xi^{\wedge}) P_{i} \right\|_{2}^{2} \qquad (3)$$
where K is the intrinsic matrix, $s_i$ the scale factor, $P_i$ a spatial point, $u_i$ an image pixel, and the subscript i indexes the i-th point. The rotation end frame RE is the connection point between different motion states. Through the above formulas and the camera pose data operations, the image processing of camera rotation is completed.
Further, step 4 specifically includes: the point cloud map is obtained from the camera pose calculations, and part of it overlaps the global map; processing such as that of equation (3) is applied to fuse the point cloud maps and eliminate redundancy, yielding the final map point cloud.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
according to the calculation state of the visual odometer and the calculation of the motion vector, whether the camera generates rotation or approximate rotation motion can be identified; if the camera does not rotate, the normal visual odometer pose calculation process is not interfered by the invention; when the camera rotates or moves approximately, the invention can maintain the normal calculation of the visual odometer; the invention can effectively classify and process different image frames and data for the classification of the rotating frames; the invention can avoid the operation stop of the computing system caused by insufficient displacement when the camera rotates; finally, point clouds generated by calculation of different types of motions of the camera are fused together according to back-end optimization.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of image frame classification.
Fig. 3a to 3f are overall views of the point cloud map.
Detailed Description
The invention is described in further detail below with reference to the drawings and the specific examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
This embodiment provides a visual odometry rotational motion processing method based on a pause information supplementing mechanism. The specific test environment is an Intel Core i7 processor at 3.6 GHz, 8 GB of memory, and a Windows 10 system, i.e., an ordinary computer configuration. As shown in Fig. 1, the method comprises the following steps:
1. Monitoring image frames and analyzing image-frame motion categories
As shown in fig. 1, the PSIM mechanism runs in real time while the SLAM system operates, and different PSIM branches are started when different camera motions occur, i.e., the parts indicated by dashed boxes in the figure. The judgment is made from the specific values of the motion state (R, T) and the N vector: if the image exhibits no special motion such as rotation or quasi-rotation, i.e., the displacement in (R, T) is non-zero or N is non-zero, the ordinary-frame sliding window is triggered and the rotated-frame sliding window does not act; if the image exhibits special motion such as rotation, i.e., the displacement T in (R, T) is approximately zero or N is approximately zero, the rotated-frame sliding window is triggered and the ordinary-frame sliding window does not act.
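A minimal sketch of this window-switching test, assuming T is the estimated displacement and N the scalar statistic from the previous step; the tolerance values eps_t and eps_n are assumptions, not values fixed by the patent.

```python
import numpy as np

def select_window(T: np.ndarray, N: float,
                  eps_t: float = 1e-3, eps_n: float = 0.1) -> str:
    """Trigger the rotated-frame window when the displacement is (near) zero
    or N is (near) zero; otherwise trigger the ordinary-frame window."""
    if np.linalg.norm(T) < eps_t or N < eps_n:
        return "rotation_window"
    return "general_window"
```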
2. Switching the data-stream direction according to the analysis result of step 1
As shown in fig. 2, image frames are divided into three categories: the rotation initial frame RB, the rotation process frames R1, R2, …, Rn, and the rotation end frame RE. According to the image category, the PSIM mechanism directs the data stream to the corresponding sliding window for calculation of the motion state, rotation amount, and related quantities; a minimal labelling pass is sketched below.
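The labelling below is an illustrative sketch of this classification, treating the first frame of a detected rotation segment as RB and the first non-rotated frame after it as RE; the per-frame rotation flags are assumed to come from step 1.

```python
def label_frames(rotation_flags):
    """Assign the frame categories of Fig. 2 from per-frame rotation flags."""
    labels, in_rotation = [], False
    for flag in rotation_flags:
        if flag and not in_rotation:
            labels.append("RB")            # rotation initial frame
            in_rotation = True
        elif flag:
            labels.append("Ri")            # rotation process frame (R1 ... Rn)
        elif in_rotation:
            labels.append("RE")            # rotation end frame
            in_rotation = False
        else:
            labels.append("KF")            # ordinary frame
    return labels

# label_frames([0, 1, 1, 1, 0, 0]) -> ['KF', 'RB', 'Ri', 'Ri', 'RE', 'KF']
```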
3. Image frame matching and pose calculation
As shown in fig. 1, for the general camera motion state, this embodiment performs the KF image-frame calculation following the DSO algorithm framework, corresponding to the "D" circle in the figure. For rotational camera motion, calculation is performed using the rotation initial frame RB, the rotation process frames R1, R2, …, Rn, and the rotation end frame RE, corresponding to the "EPnP" and "H" circles in the figure. The task of the rotation initial frame is to convey the current camera pose; the pose information of the first frame is obtained from the accumulated calculation results of the preceding image frames. The camera position is represented in three-dimensional space and the camera rotation by a quaternion; that is, the spatial pose information of the camera is represented as follows:
$$T = (x, y, z, q_{1}, q_{2}, q_{3}, q_{4}) \qquad (1)$$
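The 7-parameter pose of equation (1) can be converted into a homogeneous transform, for example with SciPy, as sketched below; note that SciPy expects the quaternion in (x, y, z, w) order, while the patent does not specify an order for q1 … q4.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(T7: np.ndarray) -> np.ndarray:
    """(x, y, z, q1, q2, q3, q4) -> 4x4 homogeneous camera pose."""
    M = np.eye(4)
    M[:3, :3] = Rotation.from_quat(T7[3:7]).as_matrix()  # quaternion -> rotation
    M[:3, 3] = T7[:3]                                    # translation
    return M
```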
The rotation process frames form a set S for the calculation of the rotation component of the camera pose. In this step, the points of the RB frame have 3D information, whereas the image points in the rotated-frame set S have only 2D information. There are two kinds of image points in S: image points without any depth information, and image points participating in the rotation-pose calculation. A nonlinear optimization method is adopted so that both kinds of image points are used together. Let the pixel coordinates of the projection of a spatial point P be $[u, v]^T$; its spatial position can then be expressed as follows:

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \exp(\xi^{\wedge}) P \qquad (2)$$
where λ is the scale factor, K the camera intrinsic matrix, and ξ the Lie-algebra representation of the pose.
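Since the RB frame contributes 3D points and the rotation frames contribute 2D observations, solving equation (2) for the camera pose is a PnP problem, for which this embodiment names EPnP. A minimal sketch with OpenCV's solver follows; the solver call is OpenCV's, but the surrounding wrapper is hypothetical.

```python
import cv2
import numpy as np

def pose_from_rb_matches(points_3d, points_2d, K):
    """points_3d: (n, 3) points carried by the RB frame; points_2d: (n, 2)
    pixel observations in a rotation frame; K: 3x3 intrinsic matrix."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        np.asarray(K, dtype=np.float64),
        None,                              # no lens distortion assumed
        flags=cv2.SOLVEPNP_EPNP)           # EPnP needs at least 4 points
    return ok, rvec, tvec                  # rvec is an axis-angle rotation
```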
Since the measurement error cannot be removed completely, the least-squares method is chosen to convert equation (2) into equation (3); if the error can be minimized, the camera pose and the spatial positions of the points at that moment are considered credible:
$$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| u_{i} - \frac{1}{s_{i}} K \exp(\xi^{\wedge}) P_{i} \right\|_{2}^{2} \qquad (3)$$
where K is the intrinsic matrix, $s_i$ the scale factor, $P_i$ a spatial point, $u_i$ an image pixel, and the subscript i indexes the i-th point.
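A sketch of minimizing equation (3), parameterizing exp(ξ^) by a rotation vector plus translation and delegating the nonlinear least squares to SciPy; this illustrates the formulation only, not the patent's exact solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_pose(xi0, points_3d, points_2d, K):
    """xi0: initial (rx, ry, rz, tx, ty, tz); points_3d: (n, 3) spatial
    points P_i; points_2d: (n, 2) observed pixels u_i; K: 3x3 intrinsics."""
    def residuals(xi):
        R = Rotation.from_rotvec(xi[:3]).as_matrix()
        Pc = points_3d @ R.T + xi[3:]          # points in the camera frame
        proj = Pc @ K.T
        uv = proj[:, :2] / proj[:, 2:3]        # division by the scale s_i
        return (uv - points_2d).ravel()        # reprojection errors
    return least_squares(residuals, xi0).x     # minimizes the sum of squared errors
```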
the end of rotation frame RE is a connection point for different motion states. And through the formula and the camera pose data operation, the image processing of camera rotation is completed. And carrying out error optimization processing on the pose calculation result according to the number of the RB-RE image frames. The precondition for this error handling operation is to assume that the speed of the moving camera is approximately the same when the user is looking at the surrounding environment.
The part does not need to be set manually, the camera pose information of the previous stage is carried over by utilizing the rotation initial frame RB, mathematical expression is carried out by utilizing the rotation intermediate frame image in the set S, the calculation error is minimized by the least square method (3), the error is further processed by the rotation end frame RE, and finally all data results are obtained.
4. Back-end data optimization
After the SLAM system starts operating, the map point cloud grows gradually; the number of map points generated from rotated image frames is smaller than the number generated from ordinary image frames. The map scale is determined by the image size and the number of key points, generally set as follows: the image size is 1280x1024 and the number of key points is kept within 2000. Back-end data optimization is performed on the overlapping part of the global map point cloud and the rotation map point cloud, through processing such as that of equation (3), to obtain the final global map, as shown in Figs. 3a to 3f.
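As a sketch of this fusion step, the rotation-segment cloud can be concatenated with the global cloud and the redundant points in the overlap removed by voxel downsampling, for example with Open3D; the voxel size here is an assumed value, not one fixed by the patent.

```python
import numpy as np
import open3d as o3d

def merge_clouds(global_pcd, rotation_pcd, voxel_size=0.05):
    """Concatenate two o3d.geometry.PointCloud objects, then thin the
    duplicated overlap region by voxel downsampling."""
    merged = o3d.geometry.PointCloud()
    merged.points = o3d.utility.Vector3dVector(np.vstack(
        (np.asarray(global_pcd.points), np.asarray(rotation_pcd.points))))
    return merged.voxel_down_sample(voxel_size)   # redundancy elimination
```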
The invention is not limited to the embodiments described above. The above description of specific embodiments is intended to describe and illustrate the technical solution of the invention and is illustrative only, not limiting. Those skilled in the art may make numerous specific modifications without departing from the spirit of the invention and the scope of the claims, and all such modifications fall within the protection of the invention.

Claims (3)

1. A visual odometry rotational motion processing method based on a pause information supplementing mechanism, characterized by comprising the following steps:
step 1, monitoring image frames and judging the motion category of the image frames;
step 2, switching the data-stream direction according to the judgment of step 1, with the image frames divided into three categories: a rotation initial frame RB, rotation process frames R1, R2, …, Rn, and a rotation end frame RE; the rotation initial frame RB inherits the current camera pose information and provides initialization for the subsequent frames; the rotation process frames R1, R2, …, Rn integrate the rotated images for computation; the rotation end frame RE handles the accumulated error of each rotation and connects back to the ordinary computation flow;
step 3, performing image-frame matching and pose calculation to complete the image processing of camera rotation, and applying error-optimization processing to the pose results according to the number of image frames; for the ordinary camera motion state, calculation is performed within the DSO algorithm framework; for the rotational camera motion state, calculation is performed using the rotation initial frame RB, the rotation process frames R1, R2, …, Rn, and the rotation end frame RE; the rotation initial frame RB transmits the current pose information of the camera, and the pose information of the first frame is obtained from the accumulated calculation results of the preceding image frames; the camera position is represented in three-dimensional space and the camera rotation by a quaternion, that is, the camera pose information is represented as follows:
$$T = (x, y, z, q_{1}, q_{2}, q_{3}, q_{4}) \qquad (1)$$
the rotation process frames R1, R2, …, Rn form a set S for calculating the rotation component of the camera pose; the points of the rotation initial frame RB have 3D information, and the image points in the set S have only 2D information; there are two kinds of image points in the set S: image points without any depth information, and image points participating in the rotation-pose calculation; let the pixel coordinates of the projection of a spatial point P be $[u, v]^T$; the position of the spatial point P is then expressed as follows:

$$\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \exp(\xi^{\wedge}) P \qquad (2)$$
where λ is the scale factor, K the camera intrinsic matrix, and ξ the Lie-algebra representation of the pose;
equation (2) is converted into equation (3) by the least-squares method; if the error can be minimized, the camera pose and the spatial positions of the points at that moment are considered credible:
$$\xi^{*} = \arg\min_{\xi} \frac{1}{2} \sum_{i=1}^{n} \left\| u_{i} - \frac{1}{s_{i}} K \exp(\xi^{\wedge}) P_{i} \right\|_{2}^{2} \qquad (3)$$
where K is the intrinsic matrix, $s_i$ the scale factor, $P_i$ a spatial point, $u_i$ an image pixel, and the subscript i indexes the i-th point; the rotation end frame RE is the connection point between different motion states; through the above formulas and the camera pose data operations, the image processing of camera rotation is completed;
step 4, optimizing the back-end data.
2. The visual odometry rotational motion processing method based on a pause information supplementing mechanism according to claim 1, characterized in that step 1 specifically includes: setting a vector N that represents the consistency and difference of the image-point motions; when the motion trend is stable and the camera moves along one axis, the motion vectors are approximately parallel and the value of N is approximately zero; the motion amplitude is judged by setting a fluctuation threshold, and the motion state of the image frame is distinguished according to the change of N.
3. The visual odometry rotational motion processing method based on a pause information supplementing mechanism according to claim 1, characterized in that step 4 specifically includes: the point cloud map is obtained from the camera pose calculations, and part of it overlaps the global map; processing such as that of equation (3) is applied to fuse the point cloud maps and eliminate redundancy, yielding the final map point cloud.
CN202010621410.3A (priority and filing date 2020-06-30): Visual odometry rotational motion processing method based on a pause information supplementing mechanism. Status: Active. Granted as CN111833402B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010621410.3A | 2020-06-30 | 2020-06-30 | Visual odometry rotational motion processing method based on a pause information supplementing mechanism
Publications (2)

Publication Number | Publication Date
CN111833402A (en) | 2020-10-27
CN111833402B (en) | 2023-06-06

Family

ID=72899953

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010621410.3A (Active) | Visual odometry rotational motion processing method based on a pause information supplementing mechanism | 2020-06-30 | 2020-06-30

Country Status (1)

Country | Link
CN | CN111833402B (en)

Citations (6)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Title
CN104374395A * | 2014-03-31 | 2015-02-25 | Graph-based visual SLAM (simultaneous localization and mapping) method
CN108615246A * | 2018-04-19 | 2018-10-02 | Method for improving the robustness of a visual odometry system and reducing the computational cost of the algorithm
CN109211241A * | 2018-09-08 | 2019-01-15 | UAV autonomous positioning method based on visual SLAM
CN109493415A * | 2018-09-20 | 2019-03-19 | Global motion initialization method and system for three-dimensional reconstruction from aerial images
CN109544636A * | 2018-10-10 | 2019-03-29 | Fast monocular visual odometry navigation and positioning method fusing the feature-point method and the direct method
CN110108258A * | 2019-04-09 | 2019-08-09 | Monocular visual odometry positioning method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party

Title
Fan Weisi et al. Real-time visual odometry method based on feature cross-checking. Journal of Beijing University of Aeronautics and Astronautics, Vol. 44, No. 11 (full text). *

Also Published As

Publication number | Publication date
CN111833402A (en) | 2020-10-27

Similar Documents

Publication Publication Date Title
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
Jinyu et al. Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality
Forster et al. SVO: Fast semi-direct monocular visual odometry
CN108519102B (en) Binocular vision mileage calculation method based on secondary projection
CN111968228B (en) Augmented reality self-positioning method based on aviation assembly
WO2019241782A1 (en) Deep virtual stereo odometry
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
Meilland et al. Dense visual mapping of large scale environments for real-time localisation
CN112752028B (en) Pose determination method, device and equipment of mobile platform and storage medium
CN112200869B (en) Robot global optimal visual positioning method and device based on dotted line characteristics
CN112085790A (en) Point-line combined multi-camera visual SLAM method, equipment and storage medium
Fang et al. Self-supervised camera self-calibration from video
Sandy et al. Object-based visual-inertial tracking for additive fabrication
Pitzer et al. Automatic reconstruction of textured 3D models
Xiang et al. Vilivo: Virtual lidar-visual odometry for an autonomous vehicle with a multi-camera system
Wang et al. LF-VIO: A visual-inertial-odometry framework for large field-of-view cameras with negative plane
Zhang et al. A visual-inertial dynamic object tracking SLAM tightly coupled system
CN111833402B (en) Visual odometer rotary motion processing method based on pause information supplementing mechanism
Zhao et al. A review of visual SLAM for dynamic objects
Biström Comparative analysis of properties of LiDAR-based point clouds versus camera-based point clouds for 3D reconstruction using SLAM algorithms
Muharom et al. Real-Time 3D Modeling and Visualization Based on RGB-D Camera using RTAB-Map through Loop Closure
Ruf et al. FaSS-MVS: Fast Multi-View Stereo with Surface-Aware Semi-Global Matching from UAV-borne Monocular Imagery
Zhang et al. A visual slam system with laser assisted optimization
Li et al. Look at Robot Base Once: Hand-Eye Calibration with Point Clouds of Robot Base Leveraging Learning-Based 3D Vision

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant