CN110807818A - RGB-D SLAM method and device based on key frame - Google Patents

RGB-D SLAM method and device based on key frame

Info

Publication number
CN110807818A
Authority
CN
China
Prior art keywords
frame
key frame
point cloud
selecting
transformation matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911040190.9A
Other languages
Chinese (zh)
Inventor
吉长江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yingpu Technology Co Ltd
Original Assignee
Beijing Yingpu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yingpu Technology Co Ltd filed Critical Beijing Yingpu Technology Co Ltd
Priority to CN201911040190.9A priority Critical patent/CN110807818A/en
Publication of CN110807818A publication Critical patent/CN110807818A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a keyframe-based RGB-D SLAM method and device, and relates to the field of SLAM. The method comprises the following steps: acquiring RGB-D color and depth images from a dataset; performing pose estimation with FOVIS on the color and depth images to obtain a relative transformation matrix between two poses, and creating a point cloud map from the relative transformation matrix; and, in the loop-closure detection process, selecting keyframes using the g2o framework, creating nodes corresponding to the keyframes in the point cloud map, storing the nodes and performing graph optimization. The device comprises an acquisition module, a creation module and a detection module. By optimizing over existing frames, the method reduces the number of frames that must be stored, is highly robust, can effectively solve the SLAM problem, reduces odometry error, and generates a consistent three-dimensional map.

Description

RGB-D SLAM method and device based on key frame
Technical Field
The present application relates to the field of SLAM, and in particular to a keyframe-based RGB-D SLAM method and apparatus.
Background
RGB-D (color and depth image) cameras are a relatively recent tool in robotics research. In 2012, Henry et al. developed an RGB-D SLAM (Simultaneous Localization and Mapping) system that combines sparse visual features and ICP (Iterative Closest Point) to construct and optimize a location map using the concept of keyframes. In 2011, Huang et al. implemented a visual odometry system named FOVIS (Fast Odometry from VISion) on a micro aerial vehicle equipped with an RGB-D camera, performing localization and mapping with sparse visual features. In 2014, Silva and Goncalves developed a visual odometry and mapping system for RGB-D cameras that requires only a CPU. Another RGB-D SLAM system was developed by Endres et al. in 2012, using sparse visual features and graph optimization. In 2014, Engel et al. proposed the LSD-SLAM algorithm based on keyframes and graph optimization, while in 2017, Mur-Artal and Tardos developed ORB-SLAM2 (ORB: Oriented FAST and Rotated BRIEF, fast feature point extraction and description), a complete monocular, stereo and RGB-D camera SLAM system that uses graph optimization techniques. ORB-SLAM2 has good accuracy and performance; it creates a sparse representation of the environment, and a dense map can only be obtained through post-processing. The present work combines keyframe-based loop-closure detection with robust FOVIS visual odometry estimation to obtain a pose map. The map is optimized to create a consistent trajectory, from which an aligned dense global point cloud map is generated.
However, in the prior art, the map is mostly stored in raw point cloud form, which requires a large amount of memory, and the use of keyframes often leads to large errors.
Disclosure of Invention
It is an object of the present application to overcome the above problems or to at least partially solve or mitigate the above problems.
According to an aspect of the present application, there is provided a keyframe-based RGB-D SLAM method, comprising:
acquiring RGB-D color and depth images from a dataset;
performing pose estimation with FOVIS on the color and depth images to obtain a relative transformation matrix between two poses, and creating a point cloud map from the relative transformation matrix;
and, in the loop-closure detection process, selecting a keyframe using the g2o framework, creating a node corresponding to the keyframe in the point cloud map, storing the node and performing graph optimization.
Optionally, creating a point cloud according to the relative transformation matrix includes:
from the relative transformation matrix, a point cloud map is created using the corresponding equations and a grid filter.
Optionally, selecting the key frame includes:
initially selecting the first frame as the keyframe, acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB (Oriented FAST and Rotated BRIEF) fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching.
Optionally, the method further comprises:
and if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
Optionally, the method further comprises:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
According to another aspect of the present application, there is provided a key frame based RGB-D SLAM apparatus, including:
an acquisition module configured to acquire RGB-D color and depth images from a dataset;
a creation module configured to perform pose estimation using FOVIS from the color and depth images, to obtain a relative transformation matrix between the two poses, and to create a point cloud map from the relative transformation matrix;
and the detection module is configured to select a keyframe using the g2o framework in the loop-closure detection process, create a node corresponding to the keyframe in the point cloud map, store the node and perform graph optimization.
Optionally, the creating module is specifically configured to:
from the relative transformation matrix, a point cloud map is created using the corresponding equations and a grid filter.
Optionally, the detection module is specifically configured to:
initially selecting the first frame as the keyframe, acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB (Oriented FAST and Rotated BRIEF) fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching.
Optionally, the detection module is further configured to:
and if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
Optionally, the detection module is further configured to:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
According to yet another aspect of the application, there is provided a computing device comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor implements the method as described above when executing the computer program.
According to yet another aspect of the application, a computer-readable storage medium, preferably a non-volatile readable storage medium, is provided, having stored therein a computer program which, when executed by a processor, implements a method as described above.
According to yet another aspect of the application, there is provided a computer program product comprising computer readable code which, when executed by a computer device, causes the computer device to perform the method described above.
According to the technical solution of the application, RGB-D color and depth images are acquired from a dataset; pose estimation is performed with FOVIS on the color and depth images to obtain the relative transformation matrix between two poses; a point cloud map is created from the relative transformation matrix; and, in the loop-closure detection process, keyframes are selected using the g2o framework, and nodes corresponding to the keyframes are created in the point cloud map, stored, and optimized. By optimizing over existing frames, the scheme reduces the number of frames that must be stored, is highly robust, effectively solves the SLAM problem, reduces odometry error, and generates a consistent three-dimensional map.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a flowchart of a keyframe based RGB-D SLAM method according to one embodiment of the present application;
FIG. 2 is a flowchart of a key frame based RGB-D SLAM method according to another embodiment of the present application;
FIG. 3 is a block diagram of a key frame based RGB-D SLAM apparatus according to another embodiment of the present application;
FIG. 4 is a block diagram of a computing device according to another embodiment of the present application;
fig. 5 is a diagram of a computer-readable storage medium structure according to another embodiment of the present application.
Detailed Description
FIG. 1 is a flowchart of a keyframe based RGB-D SLAM method according to one embodiment of the present application. Referring to fig. 1, the method includes:
101: acquiring RGB-D color and depth images from a dataset;
102: carrying out attitude estimation by using FOVIS according to the color and depth images to obtain a relative transformation matrix between the two poses, and creating a point cloud picture according to the relative transformation matrix;
103: and in the closed loop detection process, selecting a key frame by using a g2o frame, creating a node corresponding to the key frame in the point cloud picture, storing the node and performing optimization calculation.
In this embodiment, optionally, creating a point cloud graph according to the relative transformation matrix includes:
from the relative transformation matrix, a point cloud map is created using the corresponding equations and a grid filter.
In this embodiment, optionally, selecting the key frame includes:
initially selecting the first frame as the keyframe, acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching.
In this embodiment, optionally, the method further includes:
and if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
In this embodiment, optionally, the method further includes:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
In the method provided by this embodiment, RGB-D color and depth images are acquired from a dataset; pose estimation is performed with FOVIS on the color and depth images to obtain the relative transformation matrix between two poses; a point cloud map is created from the relative transformation matrix; and, in the loop-closure detection process, keyframes are selected using the g2o framework, and nodes corresponding to the keyframes are created in the point cloud map, stored, and optimized. By optimizing over existing frames, the method reduces the number of frames that must be stored, is highly robust, effectively solves the SLAM problem, reduces odometry error, and generates a consistent three-dimensional map.
Fig. 2 is a flowchart of a key frame based RGB-D SLAM method according to another embodiment of the present application. Referring to fig. 2, the method includes:
201: acquiring RGB-D color and depth images from a dataset;
In this embodiment, optionally, the TUM RGB-D benchmark dataset of the Technical University of Munich is used, which is an open-source RGB-D SLAM dataset.
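As a non-limiting illustration, the following sketch shows one way to read and time-associate the color and depth image lists of a TUM-style sequence in Python. The rgb.txt and depth.txt index files follow the published TUM RGB-D format, while the sequence directory name and the 20 ms association tolerance are illustrative assumptions rather than values taken from the application.

```python
# Sketch: pair RGB and depth frames of a TUM RGB-D sequence by timestamp.
# Assumes the standard rgb.txt / depth.txt index files of the TUM benchmark;
# the sequence name and the 20 ms tolerance are illustrative choices.
from pathlib import Path

def read_index(path):
    """Return a list of (timestamp, filename) tuples from a TUM index file."""
    pairs = []
    for line in Path(path).read_text().splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comments and blank lines
        stamp, name = line.split()[:2]
        pairs.append((float(stamp), name))
    return pairs

def associate(rgb_list, depth_list, max_dt=0.02):
    """Greedily match each RGB frame to the closest depth frame in time."""
    matches = []
    for t_rgb, rgb_name in rgb_list:
        t_depth, depth_name = min(depth_list, key=lambda p: abs(p[0] - t_rgb))
        if abs(t_depth - t_rgb) <= max_dt:
            matches.append((rgb_name, depth_name))
    return matches

rgb_frames = read_index("rgbd_dataset_freiburg1_room/rgb.txt")
depth_frames = read_index("rgbd_dataset_freiburg1_room/depth.txt")
frame_pairs = associate(rgb_frames, depth_frames)
```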
202: performing pose estimation with FOVIS on the color and depth images to obtain a relative transformation matrix between two poses;
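The relative transformations produced by the visual odometry front end can be chained into absolute camera poses. The sketch below assumes each relative transform is given as a 4x4 homogeneous matrix; the actual output format of a FOVIS-based front end may differ.

```python
# Sketch: chain relative transforms T_{k-1 -> k} into absolute poses T_{world -> k}.
# Assumes 4x4 homogeneous matrices; this is an illustrative convention only.
import numpy as np

def accumulate_poses(relative_transforms):
    """relative_transforms[k] maps frame k coordinates into frame k-1."""
    poses = [np.eye(4)]                  # the first camera defines the world frame
    for T_rel in relative_transforms:
        poses.append(poses[-1] @ T_rel)  # compose with the previous absolute pose
    return poses
```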
203: creating a point cloud map from the relative transformation matrix using the corresponding equations and a grid filter;
Because a point cloud representation requires a large amount of memory, a grid filter may be used to reduce the number of points used to create the point cloud map.
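A minimal sketch of these two operations is given below: the depth image is back-projected with the standard pinhole camera equations, and the resulting points are thinned with a simple voxel-grid filter. The camera intrinsics, depth scale and voxel size are illustrative assumptions and not values specified by the application.

```python
# Sketch: back-project a depth image into a point cloud (pinhole model) and
# reduce it with a voxel-grid filter. Intrinsics, depth scale and voxel size
# are illustrative assumptions.
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5, scale=5000.0):
    """depth: HxW array of raw depth values; returns Nx3 points in metres."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64) / scale             # convert to metres
    valid = z > 0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

def voxel_filter(points, voxel=0.05):
    """Keep one representative point per occupied voxel of side `voxel` metres."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```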
204: in the loop-closure detection process, using the g2o framework, initially selecting the first frame as the keyframe and acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching;
In this embodiment, loop-closure detection is crucial to graph optimization and is used to constrain the accumulated error. Ideally, detecting a loop closure would require comparing the current frame with all past frames, but the computational cost is too high; the problem is therefore addressed by selecting keyframes, which reduces the amount of computation.
When the number of correct matches between the two frames falls below the preset threshold, it indicates that the robot has completed a new motion, so the current frame is selected as a new keyframe. Each new keyframe is then matched against previous keyframes.
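One possible realization of this keyframe test, sketched here with OpenCV's ORB implementation, counts the matches between the current frame and the last keyframe; the match-count threshold of 50 is a placeholder for the preset threshold mentioned above, not a value from the application.

```python
# Sketch: decide whether the current frame becomes a new keyframe by counting
# ORB matches against the last keyframe. The threshold of 50 matches is an
# illustrative placeholder for the "preset threshold" of the application.
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def count_matches(gray_a, gray_b):
    """Count cross-checked ORB descriptor matches between two grayscale images."""
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    if des_a is None or des_b is None:
        return 0
    return len(matcher.match(des_a, des_b))

def is_new_keyframe(current_gray, keyframe_gray, threshold=50):
    """True if the current frame matches the keyframe too weakly to reuse it."""
    return count_matches(current_gray, keyframe_gray) < threshold
```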
205: creating nodes corresponding to the keyframes in the point cloud map, storing the nodes and performing graph optimization;
In this embodiment, each time a new keyframe is determined, a node is added to the point cloud map, so the point cloud map is continuously updated.
206: if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
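A pose-graph sketch using the unofficial g2opy Python bindings of g2o is shown below purely as an illustration: each keyframe becomes a vertex, consecutive keyframes are connected by odometry edges, and a confirmed loop closure adds a further edge before optimization. The class and method names follow those bindings and are assumptions insofar as the application specifies no programming interface.

```python
# Sketch: keyframe pose graph with g2o, assuming the g2opy bindings.
# One VertexSE3 per keyframe, one EdgeSE3 per odometry constraint, and an
# extra EdgeSE3 when a loop closure between two keyframes is confirmed.
import numpy as np
import g2o

optimizer = g2o.SparseOptimizer()
solver = g2o.OptimizationAlgorithmLevenberg(
    g2o.BlockSolverSE3(g2o.LinearSolverCholmodSE3()))
optimizer.set_algorithm(solver)

def add_keyframe(kf_id, pose_4x4, fixed=False):
    """Add one keyframe vertex; fix the first one to anchor the graph."""
    v = g2o.VertexSE3()
    v.set_id(kf_id)
    v.set_estimate(g2o.Isometry3d(pose_4x4[:3, :3], pose_4x4[:3, 3]))
    v.set_fixed(fixed)
    optimizer.add_vertex(v)

def add_constraint(id_a, id_b, T_ab, information=np.eye(6)):
    """Add an odometry or loop-closure edge measuring the relative transform."""
    e = g2o.EdgeSE3()
    e.set_vertex(0, optimizer.vertex(id_a))
    e.set_vertex(1, optimizer.vertex(id_b))
    e.set_measurement(g2o.Isometry3d(T_ab[:3, :3], T_ab[:3, 3]))
    e.set_information(information)
    optimizer.add_edge(e)

# After all keyframes and loop-closure edges have been inserted:
optimizer.initialize_optimization()
optimizer.optimize(20)
```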
In this embodiment, optionally, the method further includes:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
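One common way to realize such a RANSAC-based outlier rejection step, used here only as an illustration, is OpenCV's robust fundamental-matrix estimation, which returns an inlier mask over the tentative ORB matches; the 3-pixel reprojection threshold is an assumed value.

```python
# Sketch: reject mismatched ORB correspondences with RANSAC via OpenCV's
# robust fundamental-matrix estimation. The 3 px reprojection threshold is
# an illustrative choice; the application does not fix these values.
import cv2
import numpy as np

def ransac_inliers(kp_a, kp_b, matches, threshold=3.0):
    """Return only the matches consistent with a RANSAC-estimated model."""
    if len(matches) < 8:
        return matches  # too few correspondences for a robust estimate
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    _, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, threshold, 0.99)
    if mask is None:
        return []
    return [m for m, keep in zip(matches, mask.ravel()) if keep]
```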
In the method provided by this embodiment, RGB-D color and depth images are acquired from a dataset; pose estimation is performed with FOVIS on the color and depth images to obtain the relative transformation matrix between two poses; a point cloud map is created from the relative transformation matrix; and, in the loop-closure detection process, keyframes are selected using the g2o framework, and nodes corresponding to the keyframes are created in the point cloud map, stored, and optimized. By optimizing over existing frames, the method reduces the number of frames that must be stored, is highly robust, effectively solves the SLAM problem, reduces odometry error, and generates a consistent three-dimensional map.
Fig. 3 is a structural diagram of a key frame based RGB-D SLAM device according to another embodiment of the present application. Referring to fig. 3, the apparatus includes:
an acquisition module 301 configured to acquire RGB-D color and depth images from a dataset;
a creation module 302 configured to perform pose estimation with FOVIS on the color and depth images to obtain a relative transformation matrix between two poses, and to create a point cloud map from the relative transformation matrix;
and a detection module 303 configured to select a keyframe using the g2o framework in the loop-closure detection process, create a node corresponding to the keyframe in the point cloud map, store the node, and perform graph optimization.
In this embodiment, optionally, the creating module is specifically configured to:
from the relative transformation matrix, a point cloud map is created using the corresponding equations and a grid filter.
In this embodiment, optionally, the detection module is specifically configured to:
initially selecting the first frame as the keyframe, acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching.
In this embodiment, optionally, the detection module is further configured to:
and if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
In this embodiment, optionally, the detection module is further configured to:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
The apparatus provided in this embodiment may perform the method provided in any of the above method embodiments, and details of the process are described in the method embodiments and are not described herein again.
According to the device provided by this embodiment, RGB-D color and depth images are acquired from a dataset; pose estimation is performed with FOVIS on the color and depth images to obtain the relative transformation matrix between two poses; a point cloud map is created from the relative transformation matrix; and, in the loop-closure detection process, keyframes are selected using the g2o framework, and nodes corresponding to the keyframes are created in the point cloud map, stored, and optimized. By optimizing over existing frames, the device reduces the number of frames that must be stored, is highly robust, effectively solves the SLAM problem, reduces odometry error, and generates a consistent three-dimensional map.
An embodiment of the application also provides a computing device. Referring to fig. 4, the computing device comprises a memory 1120, a processor 1110 and a computer program stored in the memory 1120 and executable by the processor 1110; the computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements the method steps 1131 of any of the methods according to the present application.
The embodiment of the application also provides a computer-readable storage medium. Referring to fig. 5, the computer-readable storage medium comprises a storage unit for program code provided with a program 1131' for performing the steps of the method according to the present application, which program is executed by a processor.
The embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the steps of the method according to the present application.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed by a computer, cause the computer to perform, in whole or in part, the procedures or functions described in accordance with the embodiments of the application. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the method for implementing the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium, such as a random access memory, a read only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape (magnetic tape), a floppy disk (floppy disk), an optical disk (optical disk), and any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A keyframe-based RGB-D SLAM method, comprising:
acquiring RGB-D color and depth images from a dataset;
performing pose estimation with FOVIS on the color and depth images to obtain a relative transformation matrix between two poses, and creating a point cloud map from the relative transformation matrix;
and, in the loop-closure detection process, selecting a keyframe using the g2o framework, creating a node corresponding to the keyframe in the point cloud map, storing the node and performing graph optimization.
2. The method of claim 1, wherein creating a point cloud from the relative transformation matrix comprises:
from the relative transformation matrix, a point cloud map is created using the corresponding equations and a grid filter.
3. The method of claim 1, wherein selecting a key frame comprises:
initially selecting the first frame as the keyframe, acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB (Oriented FAST and Rotated BRIEF) fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching.
4. The method of claim 1, further comprising:
and if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
5. The method according to any one of claims 1-4, further comprising:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
6. A keyframe-based RGB-D SLAM device, comprising:
an acquisition module configured to acquire RGB-D color and depth images from a dataset;
a creation module configured to perform pose estimation using FOVIS from the color and depth images, to obtain a relative transformation matrix between the two poses, and to create a point cloud map from the relative transformation matrix;
and the detection module is configured to select a keyframe using the g2o framework in the loop-closure detection process, create a node corresponding to the keyframe in the point cloud map, store the node and perform graph optimization.
7. The apparatus of claim 6, wherein the creation module is specifically configured to:
from the relative transformation matrix, a point cloud map is created using the corresponding equations and a grid filter.
8. The apparatus of claim 6, wherein the detection module is specifically configured to:
initially selecting the first frame as the keyframe, acquiring the next frame as the current frame, matching the current frame with the keyframe using the ORB (Oriented FAST and Rotated BRIEF) fast feature point extraction and description algorithm, selecting the current frame as a new keyframe when the number of correct matches between the two frames falls below a preset threshold, and otherwise continuing to acquire the next frame as the current frame for matching.
9. The apparatus of claim 6, wherein the detection module is further configured to:
and if the number of correct matches between the currently selected keyframe and a previous keyframe exceeds a preset threshold, confirming that a loop closure exists and adding a corresponding edge in the point cloud map.
10. The apparatus of any one of claims 6-9, wherein the detection module is further configured to:
in the process of matching frames for keyframe selection, an outlier rejection filter and the RANSAC method are used to detect mismatched outliers and select the correct matches.
CN201911040190.9A 2019-10-29 2019-10-29 RGB-DSLAM method and device based on key frame Pending CN110807818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911040190.9A CN110807818A (en) 2019-10-29 2019-10-29 RGB-DSLAM method and device based on key frame

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911040190.9A CN110807818A (en) 2019-10-29 2019-10-29 RGB-DSLAM method and device based on key frame

Publications (1)

Publication Number Publication Date
CN110807818A true CN110807818A (en) 2020-02-18

Family

ID=69489544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911040190.9A Pending CN110807818A (en) 2019-10-29 2019-10-29 RGB-DSLAM method and device based on key frame

Country Status (1)

Country Link
CN (1) CN110807818A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184810A (en) * 2020-09-22 2021-01-05 浙江商汤科技开发有限公司 Relative pose estimation method, device, electronic device and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590827A (en) * 2017-09-15 2018-01-16 重庆邮电大学 A kind of indoor mobile robot vision SLAM methods based on Kinect
CN108133496A (en) * 2017-12-22 2018-06-08 北京工业大学 A kind of dense map creating method based on g2o Yu random fern
CN109636897A (en) * 2018-11-23 2019-04-16 桂林电子科技大学 A kind of Octomap optimization method based on improvement RGB-D SLAM
CN109766758A (en) * 2018-12-12 2019-05-17 北京计算机技术及应用研究所 A kind of vision SLAM method based on ORB feature

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590827A (en) * 2017-09-15 2018-01-16 重庆邮电大学 A kind of indoor mobile robot vision SLAM methods based on Kinect
CN108133496A (en) * 2017-12-22 2018-06-08 北京工业大学 A kind of dense map creating method based on g2o Yu random fern
CN109636897A (en) * 2018-11-23 2019-04-16 桂林电子科技大学 A kind of Octomap optimization method based on improvement RGB-D SLAM
CN109766758A (en) * 2018-12-12 2019-05-17 北京计算机技术及应用研究所 A kind of vision SLAM method based on ORB feature

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J. C. V. SOARES et al.: "Keyframe-based RGB-D SLAM for Mobile Robots with Visual Odometry in Indoor Environments using Graph Optimization", 2018 Latin American Robotic Symposium *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184810A (en) * 2020-09-22 2021-01-05 浙江商汤科技开发有限公司 Relative pose estimation method, device, electronic device and medium

Similar Documents

Publication Publication Date Title
US10984556B2 (en) Method and apparatus for calibrating relative parameters of collector, device and storage medium
CN107990899B (en) Positioning method and system based on SLAM
CN110135455B (en) Image matching method, device and computer readable storage medium
US10636198B2 (en) System and method for monocular simultaneous localization and mapping
CN107909612B (en) Method and system for visual instant positioning and mapping based on 3D point cloud
CN109461208B (en) Three-dimensional map processing method, device, medium and computing equipment
CN107748569B (en) Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
CN110118554A (en) SLAM method, apparatus, storage medium and device based on visual inertia
EP3326156B1 (en) Consistent tessellation via topology-aware surface tracking
US10249058B2 (en) Three-dimensional information restoration device, three-dimensional information restoration system, and three-dimensional information restoration method
US20190080190A1 (en) System and method of selecting a keyframe for iterative closest point
WO2018214086A1 (en) Method and apparatus for three-dimensional reconstruction of scene, and terminal device
GB2566443A (en) Cross-source point cloud registration
CN113822996B (en) Pose estimation method and device for robot, electronic device and storage medium
Han et al. DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving
CN115049731A (en) Visual mapping and positioning method based on binocular camera
CN105339981B (en) Method for using one group of primitive registration data
CN110807818A (en) RGB-DSLAM method and device based on key frame
WO2017003424A1 (en) Metric 3d stitching of rgb-d data
CN115511970B (en) Visual positioning method for autonomous parking
KR101766823B1 (en) Robust visual odometry system and method to irregular illumination changes
CN114742967B (en) Visual positioning method and device based on building digital twin semantic graph
CN115937002A (en) Method, apparatus, electronic device and storage medium for estimating video rotation
CN111489439B (en) Three-dimensional line graph reconstruction method and device and electronic equipment
CN112288817B (en) Three-dimensional reconstruction processing method and device based on image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200218

RJ01 Rejection of invention patent application after publication