CN111415417A - Mobile robot topology experience map construction method integrating sparse point cloud - Google Patents

Mobile robot topology experience map construction method integrating sparse point cloud

Info

Publication number
CN111415417A
CN111415417A (application CN202010291189.XA)
Authority
CN
China
Prior art keywords
mobile robot
map
experience
pose
point cloud
Prior art date
Legal status
Granted
Application number
CN202010291189.XA
Other languages
Chinese (zh)
Other versions
CN111415417B (en)
Inventor
刘冬 (Liu Dong)
丛明 (Cong Ming)
陈飞 (Chen Fei)
Current Assignee
Jiangsu Research Institute Co Ltd of Dalian University of Technology
Original Assignee
Jiangsu Research Institute Co Ltd of Dalian University of Technology
Priority date
Filing date
Publication date
Application filed by Jiangsu Research Institute Co Ltd of Dalian University of Technology filed Critical Jiangsu Research Institute Co Ltd of Dalian University of Technology
Priority to CN202010291189.XA
Publication of CN111415417A
Application granted
Publication of CN111415417B
Active legal status (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/003 Navigation within 3D models or images
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention belongs to the field of robot technology and applications, and relates to a mobile robot topological experience map construction method that integrates a sparse point cloud. The invention addresses the problem that a sparse point cloud map carries insufficient information for autonomous navigation tasks: a topological experience map of the environment is constructed online from the sparse point cloud and can be used directly for navigation of the mobile robot.

Description

Mobile robot topology experience map construction method integrating sparse point cloud
Technical Field
The invention belongs to the field of robot technology and application, and relates to a mobile robot topology experience map construction method integrating sparse point clouds.
Background
In recent years, with the growth of computing power and advances in image processing technology, Visual Simultaneous Localization and Mapping (vSLAM) has become a popular research topic and has developed rapidly. The maps constructed by vSLAM algorithms may be dense or sparse, and in practical applications a sparse map carries insufficient information for the navigation task.
Disclosure of Invention
To address the problems in the prior art, the invention provides a mobile robot topological experience map construction method integrating a sparse point cloud.
In the method for constructing a topological experience map of a mobile robot integrating a sparse point cloud, a topological experience map that can be used directly for path planning and navigation of the mobile robot is constructed incrementally from key frame data obtained from the sparse point cloud and visual images acquired by a camera.
Preferably, the method specifically comprises the following steps:
(1) constructing a sparse point cloud;
(2) modeling the topological experience map: the topological experience map is modeled as a set of experience nodes together with a node transfer network;
(3) constructing the topological experience map: localization is performed against the sparse point cloud, the pose of the mobile robot is computed from the camera pose, and visual experience is collected and packaged into node triples of experience information; finally, the pose and experience information of the mobile robot are applied to the topological experience map model to complete construction of the topological experience map.
Preferably, constructing the sparse point cloud in step (1) comprises a front-end tracking part and a back-end optimization part. Front-end tracking adopts a feature-point-based visual front end: feature points are extracted from each image frame, an initial estimate of the current-frame pose is solved from the feature matching relations, and the initial pose estimate is then refined. Back-end optimization first inserts new key frames and optimizes and updates the local map, performs closed-loop detection on each key frame, and, once a closed loop is detected, corrects the accumulated error of the front-end visual odometry with a global pose graph optimization.
Preferably, the topological experience map G in step (2) is modeled as the combination of an experience node set E and a node transfer network W:
G = {E, W}
The experience node set E is a sequence constructed from the pose of the mobile robot and the experience information of the current scene:
E = {e_1, e_2, …, e_(k-1), e_k}
Each node e is composed of a triple:
e = (id, pose, exp)
In the above formula, id is the node number; pose is the pose of the mobile robot in the world coordinate system, pose_k = (x_k, y_k, yaw_k), comprising the two-dimensional coordinates and the heading angle of the mobile robot in the world coordinate system; exp is the experience information, namely visual features, which are observable and can be learned and updated online.
The node transfer network W is composed of weights between nodes; any weight w_ij represents the transfer cost of the mobile robot from node i to node j and is computed from the Euclidean distance between the nodes and the matching degree of their visual experiences.
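For illustration only, the following minimal Python sketch shows one way the model G = {E, W} above could be represented; the names ExperienceNode, TopologicalExperienceMap, bow_similarity and transfer_weight, and the weighted-sum form of the transfer cost, are assumptions made here and are not prescribed by the description.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class ExperienceNode:
    """One experience node e = (id, pose, exp)."""
    id: int
    pose: tuple   # (x_k, y_k, yaw_k) in the world coordinate system
    exp: dict     # bag-of-words experience: {word id: TF-IDF weight}

@dataclass
class TopologicalExperienceMap:
    """Topological experience map G = {E, W}."""
    nodes: list = field(default_factory=list)    # experience node set E
    weights: dict = field(default_factory=dict)  # transfer network W: (i, j) -> w_ij

def bow_similarity(exp_i, exp_j):
    """Cosine similarity between two TF-IDF bag-of-words vectors."""
    common = set(exp_i) & set(exp_j)
    num = sum(exp_i[w] * exp_j[w] for w in common)
    den = (np.sqrt(sum(v * v for v in exp_i.values())) *
           np.sqrt(sum(v * v for v in exp_j.values())) + 1e-12)
    return num / den

def transfer_weight(node_i, node_j, alpha=1.0, beta=1.0):
    """Transfer cost w_ij combining Euclidean distance and visual-experience matching.

    The alpha/beta weighting is an assumption; the description only states that
    both the Euclidean distance and the matching degree enter the cost."""
    dist = np.hypot(node_i.pose[0] - node_j.pose[0], node_i.pose[1] - node_j.pose[1])
    sim = bow_similarity(node_i.exp, node_j.exp)
    return alpha * dist + beta * (1.0 - sim)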
Preferably, the mobile robot pose in step (3) is obtained as follows:
First, the transformation matrix T_RC from the camera coordinate system to the mobile robot coordinate system is obtained by hand-eye calibration:
T_RC = [[R, t], [0^T, 1]]
where R is a rotation matrix of dimension 3 × 3, t is a translation vector of dimension 3 × 1, and 0^T is the transpose of the 3 × 1 zero vector.
Then, image information is acquired with the camera, and the extracted visual features are matched against the sparse point cloud to obtain the camera pose X_C:
X_C = [timestamp, x, y, z, qx, qy, qz, qw]
where timestamp is the timestamp, x, y, z are the camera position in space, and qx, qy, qz, qw are the components of the rotation quaternion.
Finally, the pose of the mobile robot is computed from the transformation matrix and the camera pose:
X_R = T_RC · X_C
where X_R is the pose of the mobile robot.
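A minimal sketch of this pose computation follows, assuming the camera pose X_C is expanded into a 4 × 4 homogeneous matrix before the product T_RC · X_C is taken (the description gives X_C as a timestamp, position and quaternion tuple); the function names are illustrative only.

import numpy as np

def quat_to_rot(qx, qy, qz, qw):
    """Rotation matrix from a unit quaternion (x, y, z, w order)."""
    q = np.array([qx, qy, qz, qw], dtype=float)
    q /= np.linalg.norm(q)
    x, y, z, w = q
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])

def robot_pose_from_camera(T_RC, x_c):
    """Compute X_R = T_RC · X_C and reduce it to the planar pose (x, y, yaw).

    x_c = [timestamp, x, y, z, qx, qy, qz, qw] as defined above;
    T_RC is the 4 x 4 camera-to-robot transform from hand-eye calibration."""
    _, x, y, z, qx, qy, qz, qw = x_c
    X_C = np.eye(4)
    X_C[:3, :3] = quat_to_rot(qx, qy, qz, qw)
    X_C[:3, 3] = [x, y, z]
    X_R = T_RC @ X_C
    yaw = np.arctan2(X_R[1, 0], X_R[0, 0])   # heading angle
    return X_R[0, 3], X_R[1, 3], yaw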
Preferably, the experience information in step (3) is obtained as follows. To apply experience information efficiently and in real time, a BoW (bag-of-words) model is introduced. A large number of pictures of similar scenes are used and descriptors are extracted from them; descriptors with small differences are grouped into a word by a clustering algorithm, finally yielding a dictionary composed of a large number of words. After a fixed number of descriptors is extracted at each experience node, each descriptor finds its corresponding word in the dictionary and the TF-IDF (Term Frequency-Inverse Document Frequency) weight of that word is computed; finally the word vector is stored as the experience information in the node triple:
v = {(ω_1, η_1), (ω_2, η_2), …, (ω_N, η_N)}
where ω_i denotes the i-th word, η_i is the TF-IDF weight of the word ω_i, and v is the bag-of-words vector description of the image.
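The following sketch illustrates the dictionary construction and TF-IDF weighting, using scikit-learn k-means as a stand-in for the hierarchical vocabularies (e.g. DBoW2) usually paired with BRIEF descriptors; casting the binary descriptors to float for clustering, and the parameter choices, are simplifications assumed here.

import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(descriptor_sets, n_words=500):
    """Cluster descriptors from many similar-scene images into visual words."""
    all_desc = np.vstack(descriptor_sets).astype(np.float32)
    kmeans = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(all_desc)
    # IDF: words that appear in few images are more discriminative
    n_images = len(descriptor_sets)
    df = np.zeros(n_words)
    for desc in descriptor_sets:
        df[np.unique(kmeans.predict(desc.astype(np.float32)))] += 1
    idf = np.log(n_images / (df + 1e-12))
    return kmeans, idf

def bow_vector(descriptors, kmeans, idf):
    """TF-IDF weighted bag-of-words vector {word: eta} for one experience node."""
    words = kmeans.predict(descriptors.astype(np.float32))
    counts = np.bincount(words, minlength=len(idf))
    tf = counts / max(counts.sum(), 1)
    eta = tf * idf
    return {int(w): float(eta[w]) for w in np.unique(words)}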
Preferably, the camera is a vision sensor Kinect 2.
The advantages of the invention are: the problem that a sparse point cloud map carries insufficient information for autonomous navigation tasks is solved; the topological experience map of the environment is constructed online from the sparse point cloud and can be used directly for navigation of the mobile robot.
Drawings
FIG. 1 is a front-end tracking process for constructing a sparse point cloud according to embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a topology empirical map construction process in embodiment 1 of the present invention;
fig. 3 is a schematic view of a coordinate relationship between a camera and a mobile robot according to embodiment 1 of the present invention;
fig. 4 is a key frame and a topological empirical map constructed in embodiment 1 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the examples of the present invention will be described clearly and completely below with reference to the drawings. It is apparent that the described examples are some, but not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Example 1:
a mobile robot topology experience map construction method integrating sparse point clouds is carried out according to the following steps:
(1) constructing a sparse point cloud;
(2) modeling the topological experience map: the topological experience map is modeled as a set of experience nodes together with a node transfer network;
(3) constructing the topological experience map: localization is performed against the sparse point cloud, the pose of the mobile robot is computed from the camera pose, and visual experience is collected and packaged into node triples of experience information; finally, the pose and experience information of the mobile robot are applied to the topological experience map model to complete construction of the topological experience map.
Fig. 1 shows the front-end tracking process used in constructing the sparse point cloud in this embodiment. As shown in Fig. 1, a visual front-end tracking algorithm serves as the visual odometry, tracking the camera pose in real time and deciding whether the current frame becomes a key frame. First, feature detection is performed: FAST key points and BRIEF descriptors are extracted from each image frame, a 3D-2D matching point set is obtained by inter-frame feature matching, the matches are refined with the RANSAC algorithm, and the pose of the current frame is solved with the PnP algorithm. The pose is then adjusted with motion-only BA according to the camera projection constraints, i.e. the map points are fixed and only the pose of the current frame is optimized. If tracking fails, relocalization is performed. During front-end tracking, pose estimates based on the visual odometry accumulate error over time. To obtain more accurate pose estimates, back-end optimization continuously refines the local camera poses and the positions of the spatial map points against the local map; at the same time, closed-loop detection is performed on each image frame, and once a closed loop is detected, a global pose graph optimization is carried out to eliminate the accumulated error in large-scale environments.
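A minimal OpenCV sketch of the front-end step just described (FAST key points and BRIEF descriptors via ORB, feature matching, RANSAC-filtered PnP) follows; the track_frame interface and the local-map arguments are assumptions, and motion-only BA and relocalization are omitted.

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)              # FAST key points + rotated BRIEF descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def track_frame(gray, map_points_3d, map_descriptors, K, dist_coeffs=None):
    """Estimate the current-frame pose against the local map with RANSAC + PnP.

    map_points_3d: N x 3 map-point positions; map_descriptors: N x 32 binary
    descriptors associated with them; K: 3 x 3 camera intrinsic matrix."""
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return None                                # tracking lost -> relocalization
    matches = matcher.match(map_descriptors, descriptors)
    if len(matches) < 10:
        return None
    obj_pts = np.float32([map_points_3d[m.queryIdx] for m in matches])
    img_pts = np.float32([keypoints[m.trainIdx].pt for m in matches])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, dist_coeffs)
    if not ok or inliers is None or len(inliers) < 10:
        return None
    R, _ = cv2.Rodrigues(rvec)                     # world-to-camera rotation
    return R, tvec                                 # initial estimate, refined by motion-only BA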
Fig. 2 is a schematic diagram of the topological experience map construction process in this embodiment. As shown in Fig. 2, the topological experience map is constructed on the basis of localization against the sparse point cloud: the pose of the mobile robot is computed from the camera pose, visual experience is collected, and the result is packaged into a node triple of experience information. The specific process is as follows:
First, the transformation matrix T_RC from the camera coordinate system to the mobile robot coordinate system is obtained by hand-eye calibration. The pose relationship between the mobile robot and the camera is shown in Fig. 3; the transform comprises a rotation R and a translation t:
T_RC = [[R, t], [0^T, 1]]
where R is a rotation matrix of dimension 3 × 3, t is a translation vector of dimension 3 × 1, and 0^T is the transpose of the 3 × 1 zero vector.
Then, image information is collected with the vision sensor Kinect2, and the camera is localized by feature matching to obtain the camera pose:
X_C = [timestamp, x, y, z, qx, qy, qz, qw]
where timestamp is the timestamp, x, y, z are the camera position in space, and qx, qy, qz, qw are the components of the rotation quaternion.
Finally, the pose of the mobile robot is computed using the extrinsic parameter matrix:
X_R = T_RC · X_C
where X_R is the pose of the mobile robot.
To apply experience information efficiently and in real time, a BoW (bag-of-words) model is introduced. A large number of pictures of similar scenes are used, descriptors are extracted from them, and descriptors with small differences are grouped into a word by a clustering algorithm, finally yielding a dictionary composed of a large number of words. After a fixed number of descriptors is extracted at each experience node, each descriptor finds its corresponding word in the dictionary and the TF-IDF (Term Frequency-Inverse Document Frequency) weight of that word is computed; finally the word vector is stored as the experience information in the node triple:
v = {(ω_1, η_1), (ω_2, η_2), …, (ω_N, η_N)}
where ω_i denotes the i-th word, η_i is the TF-IDF weight of the word ω_i, and v is the bag-of-words vector description of the image.
From the obtained robot pose and experience information, the topological experience map is constructed as experience nodes and a node transfer network, G = {E, W}. The sequence of experience nodes E is constructed from the pose of the mobile robot and the visual information of the current scene:
E = {e_1, e_2, …, e_(k-1), e_k}
where each node e is composed of a triple e = (id, pose, exp). The meaning of each element is as follows: id is the node number; pose is the pose of the robot in the world coordinate system, pose_k = (x_k, y_k, yaw_k), comprising the two-dimensional coordinates and the heading angle of the robot in the world coordinate system; exp is the experience information, namely visual features, which are observable and can be learned and updated online.
The node transfer network W is composed of weights between nodes; any weight w_ij represents the transfer cost of the mobile robot from node i to node j, and the connection weight is computed from the Euclidean distance between the nodes and the matching degree of their visual experiences.
Fig. 4 shows the key frame data and the topological experience map constructed in this embodiment. As shown in Fig. 4, the experimental platform is a mobile robot carrying the vision sensor Kinect2, and the experimental setting is a typical office environment. The mobile robot acquires a sequence of environment images from its visual information and reconstructs salient visual landmark points of the environment using the feature-point-method front end and the optimization-based back end, thereby building and storing the sparse point cloud. The pose of each key frame is computed by the PnP algorithm refined with random sample consensus (RANSAC), and the key frame data are recorded, as shown in Fig. 4(a), where each point records the pose information of a key frame camera: X_C = [timestamp, x, y, z, qx, qy, qz, qw]. The pose of the mobile robot, (x, y, yaw), is then computed from the coordinate relationship between the camera and the robot. The experience information exp of the current frame image is obtained with the bag-of-words model and packaged into a node triple to construct an experience node, and the node transfer weights are computed from the Euclidean distance between nodes and the matching degree of their visual experiences, thereby realizing construction of the topological experience map, as shown in Fig. 4(b), where each circle in the map represents an experience node and contains its triple information. From the node triple information and the node transfer network, the mobile robot can plan a path and realize navigation.
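The description states that the node triples and the transfer network support path planning and navigation, but the planner itself is not specified; the following Dijkstra sketch over W, reusing the (i, j) -> w_ij dictionary layout assumed in the earlier sketch, is purely illustrative.

import heapq

def plan_path(weights, start_id, goal_id):
    """Shortest path over the node transfer network W using Dijkstra's algorithm.

    weights: dict mapping (i, j) -> transfer cost w_ij (assumed layout)."""
    graph = {}
    for (i, j), w in weights.items():
        graph.setdefault(i, []).append((j, w))
    dist = {start_id: 0.0}
    prev = {}
    heap = [(0.0, start_id)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal_id:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if goal_id not in dist:
        return None                     # goal not reachable in the experience map
    path, node = [], goal_id
    while node != start_id:
        path.append(node)
        node = prev[node]
    path.append(start_id)
    return list(reversed(path))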
Finally, it should be noted that the above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing example, it will be understood by those skilled in the art that the technical solutions described in the foregoing examples can still be modified, or some technical features can be replaced by equivalents, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (7)

1. A method for constructing a topological experience map of a mobile robot integrating a sparse point cloud, characterized in that a topological experience map that can be used directly for path planning and navigation of the mobile robot is constructed incrementally from key frame data obtained from the sparse point cloud and visual images acquired by a camera.
2. The method for constructing a topological experience map of a mobile robot integrating a sparse point cloud as claimed in claim 1, characterized by comprising the following steps:
(1) constructing a sparse point cloud;
(2) modeling the topological experience map: the topological experience map is modeled as a set of experience nodes together with a node transfer network;
(3) constructing the topological experience map: localization is performed against the sparse point cloud, the pose of the mobile robot is computed from the camera pose, and visual experience is collected and packaged into node triples of experience information; finally, the pose and experience information of the mobile robot are applied to the topological experience map model to complete construction of the topological experience map.
3. The method for constructing a topological experience map of a mobile robot integrating a sparse point cloud as claimed in claim 2, wherein constructing the sparse point cloud in step (1) comprises a front-end tracking part and a back-end optimization part; front-end tracking adopts a feature-point-based visual front end to extract feature points from each image frame, then solves an initial estimate of the current-frame pose from the feature matching relations and refines the initial pose estimate; back-end optimization first inserts new key frames and optimizes and updates the local map, performs closed-loop detection on each key frame, and, once a closed loop is detected, corrects the accumulated error of the front-end visual odometry with a global pose graph optimization.
4. The method for constructing a topological experience map of a mobile robot integrating a sparse point cloud as claimed in claim 3, wherein the topological experience map G in step (2) is modeled as the combination of an experience node set E and a node transfer network W:
G = {E, W}
the experience node set E is a sequence constructed from the pose of the mobile robot and the experience information of the current scene:
E = {e_1, e_2, …, e_(k-1), e_k}
each node e is composed of a triple:
e = (id, pose, exp)
in the above formula, id is the node number; pose is the pose of the mobile robot in the world coordinate system, pose_k = (x_k, y_k, yaw_k), comprising the two-dimensional coordinates and the heading angle of the mobile robot in the world coordinate system; exp is the experience information;
the node transfer network W is composed of weights between nodes; any weight w_ij represents the transfer cost of the mobile robot from node i to node j and is computed from the Euclidean distance between the nodes and the matching degree of their visual experiences.
5. The method for constructing a topological experience map of a mobile robot integrating a sparse point cloud as claimed in claim 4, wherein the mobile robot pose in step (3) is obtained as follows:
first, the transformation matrix T_RC from the camera coordinate system to the mobile robot coordinate system is obtained by hand-eye calibration:
T_RC = [[R, t], [0^T, 1]]
where R is a rotation matrix of dimension 3 × 3, t is a translation vector of dimension 3 × 1, and 0^T is the transpose of the 3 × 1 zero vector;
then, image information is acquired with the camera, and the extracted visual features are matched against the sparse point cloud to obtain the camera pose X_C:
X_C = [timestamp, x, y, z, qx, qy, qz, qw]
where timestamp is the timestamp, x, y, z are the camera position in space, and qx, qy, qz, qw are the components of the rotation quaternion;
finally, the pose of the mobile robot is computed from the transformation matrix and the camera pose:
X_R = T_RC · X_C
where X_R is the pose of the mobile robot.
6. The method for constructing a topological experience map of a mobile robot integrating a sparse point cloud as claimed in claim 5, wherein the experience information in step (3) is obtained as follows: a BoW (bag-of-words) model is introduced; a large number of pictures of similar scenes are used and descriptors are extracted from them, descriptors with small differences are grouped into a word by a clustering algorithm, and a dictionary composed of a large number of words is finally obtained; after a fixed number of descriptors is extracted at each experience node, each descriptor finds its corresponding word in the dictionary and the TF-IDF weight of that word is computed; finally the word vector is stored as the experience information in the node triple:
v = {(ω_1, η_1), (ω_2, η_2), …, (ω_N, η_N)}
where ω_i denotes the i-th word, η_i is the TF-IDF weight of the word ω_i, and v is the bag-of-words vector description of the image.
7. The method for constructing a topological experience map of a mobile robot integrating a sparse point cloud according to any one of claims 1 to 6, wherein the camera is the vision sensor Kinect2.
CN202010291189.XA 2020-04-14 2020-04-14 Mobile robot topology experience map construction method integrating sparse point cloud Active CN111415417B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010291189.XA CN111415417B (en) 2020-04-14 2020-04-14 Mobile robot topology experience map construction method integrating sparse point cloud

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010291189.XA CN111415417B (en) 2020-04-14 2020-04-14 Mobile robot topology experience map construction method integrating sparse point cloud

Publications (2)

Publication Number Publication Date
CN111415417A true CN111415417A (en) 2020-07-14
CN111415417B CN111415417B (en) 2023-09-05

Family

ID=71494857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010291189.XA Active CN111415417B (en) 2020-04-14 2020-04-14 Mobile robot topology experience map construction method integrating sparse point cloud

Country Status (1)

Country Link
CN (1) CN111415417B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116664A (en) * 2020-09-04 2020-12-22 季华实验室 Hand-eye calibration track generation method and device, electronic equipment and storage medium
CN112365537A (en) * 2020-10-13 2021-02-12 天津大学 Active camera repositioning method based on three-dimensional point cloud alignment
CN114199243A (en) * 2020-09-18 2022-03-18 浙江舜宇智能光学技术有限公司 Pose estimation and motion planning method and device for robot and robot
CN114440892A (en) * 2022-01-27 2022-05-06 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN114526739A (en) * 2022-01-25 2022-05-24 中南大学 Mobile robot indoor repositioning method, computer device and product
CN114820751A (en) * 2022-03-10 2022-07-29 中国海洋大学 Method and device for three-dimensional reconstruction of scene and three-dimensional reconstruction system
CN115731287A (en) * 2022-09-07 2023-03-03 滁州学院 Moving target retrieval method based on set and topological space

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN110675307A (en) * 2019-08-19 2020-01-10 杭州电子科技大学 Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063258A (en) * 2017-03-07 2017-08-18 重庆邮电大学 A kind of mobile robot indoor navigation method based on semantic information
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN109583457A (en) * 2018-12-03 2019-04-05 荆门博谦信息科技有限公司 A kind of method and robot of robot localization and map structuring
CN110675307A (en) * 2019-08-19 2020-01-10 杭州电子科技大学 Implementation method of 3D sparse point cloud to 2D grid map based on VSLAM

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张潇 (Zhang Xiao) et al.: "An Improved RatSLAM Bionic Navigation Algorithm" *
邹强 (Zou Qiang) et al.: "Path Planning Method for Mobile Robots Based on Biological Cognition" *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116664B (en) * 2020-09-04 2024-05-28 季华实验室 Method and device for generating hand-eye calibration track, electronic equipment and storage medium
CN112116664A (en) * 2020-09-04 2020-12-22 季华实验室 Hand-eye calibration track generation method and device, electronic equipment and storage medium
CN114199243A (en) * 2020-09-18 2022-03-18 浙江舜宇智能光学技术有限公司 Pose estimation and motion planning method and device for robot and robot
CN114199243B (en) * 2020-09-18 2024-05-24 浙江舜宇智能光学技术有限公司 Pose estimation and motion planning method and device for robot and robot
CN112365537B (en) * 2020-10-13 2023-06-27 天津大学 Active camera repositioning method based on three-dimensional point cloud alignment
CN112365537A (en) * 2020-10-13 2021-02-12 天津大学 Active camera repositioning method based on three-dimensional point cloud alignment
CN114526739A (en) * 2022-01-25 2022-05-24 中南大学 Mobile robot indoor repositioning method, computer device and product
CN114526739B (en) * 2022-01-25 2024-05-07 中南大学 Mobile robot indoor repositioning method, computer device and product
CN114440892B (en) * 2022-01-27 2023-11-03 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN114440892A (en) * 2022-01-27 2022-05-06 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN114820751A (en) * 2022-03-10 2022-07-29 中国海洋大学 Method and device for three-dimensional reconstruction of scene and three-dimensional reconstruction system
CN115731287B (en) * 2022-09-07 2023-06-23 滁州学院 Moving target retrieval method based on aggregation and topological space
CN115731287A (en) * 2022-09-07 2023-03-03 滁州学院 Moving target retrieval method based on set and topological space

Also Published As

Publication number Publication date
CN111415417B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN111415417A (en) Mobile robot topology experience map construction method integrating sparse point cloud
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN111795704B (en) Method and device for constructing visual point cloud map
Cieslewski et al. Data-efficient decentralized visual SLAM
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN105856230B (en) A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN104732518B (en) A kind of PTAM improved methods based on intelligent robot terrain surface specifications
CN106940704B (en) Positioning method and device based on grid map
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
WO2019157925A1 (en) Visual-inertial odometry implementation method and system
CN107657640A (en) Intelligent patrol inspection management method based on ORB SLAM
WO2021035669A1 (en) Pose prediction method, map construction method, movable platform, and storage medium
CN107888828A (en) Space-location method and device, electronic equipment and storage medium
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN112444246B (en) Laser fusion positioning method in high-precision digital twin scene
CN102607532B (en) Quick low-level image matching method by utilizing flight control data
CN112802096A (en) Device and method for realizing real-time positioning and mapping
WO2023087758A1 (en) Positioning method, positioning apparatus, computer-readable storage medium, and computer program product
CN111998862A (en) Dense binocular SLAM method based on BNN
CN112967340A (en) Simultaneous positioning and map construction method and device, electronic equipment and storage medium
WO2023087681A1 (en) Positioning initialization method and apparatus, and computer-readable storage medium and computer program product
CN114255323A (en) Robot, map construction method, map construction device and readable storage medium
CN116007609A (en) Positioning method and computing system for fusion of multispectral image and inertial navigation
Eade Monocular simultaneous localisation and mapping
Dang et al. Perfc: An efficient 2d and 3d perception software-hardware framework for mobile cobot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant