CN114924287A - Map construction method, apparatus and medium - Google Patents

Map construction method, apparatus and medium

Info

Publication number
CN114924287A
CN114924287A
Authority
CN
China
Prior art keywords
pose
closed loop
local map
key frame
current
Prior art date
Legal status
Pending
Application number
CN202210422143.6A
Other languages
Chinese (zh)
Inventor
王雷
陈熙
Current Assignee
Ecoflow Technology Ltd
Original Assignee
Ecoflow Technology Ltd
Priority date
Filing date
Publication date
Application filed by Ecoflow Technology Ltd filed Critical Ecoflow Technology Ltd
Priority to CN202210422143.6A
Publication of CN114924287A

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3837 Data obtained from a single source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a map construction method, device, and medium, relating to the technical field of robots. The method includes: when a closed-loop detection condition is met, acquiring image feature points of a plurality of key frames, and querying a visual dictionary according to the image feature points to obtain the estimated pose of each key frame as a pose to be optimized, yielding a plurality of poses to be optimized; determining a plurality of target poses from a target local map according to the poses to be optimized; taking the target poses as pre-closed-loop points and updating the pose graph according to the pre-closed-loop points; and finally constructing the global map according to the updated pose graph. The visual dictionary includes the image feature points and estimated poses of all historical key frames, the pose graph includes the lidar point cloud data and estimated poses of the historical key frames, and the target local map is generated according to the lidar point cloud data of each historical key frame. The technical scheme provided by the application can improve the accuracy of the constructed map.

Description

Map construction method, apparatus and medium
Technical Field
The present application relates to the field of robotics, and in particular, to a map construction method, apparatus, and medium.
Background
With the continuous progress of artificial intelligence technology, robot functions have become increasingly diversified, so that various robots can complete work well in specific environments, and robots are becoming more and more popular.
In most cases, the working environment of a robot is unknown or uncertain, and the robot's autonomous movement and positioning depend on an environment map. At present, robots mostly construct maps with sensors such as lidar or cameras; in places where feature points are sparse or scenes are similar, this approach easily causes the map construction to fail to close the loop, so the constructed map has low accuracy.
Disclosure of Invention
In view of this, the present application provides a map construction method, apparatus, and medium, so as to improve the robustness of closed-loop map construction when a robot works in a place with insufficient feature points or similar scenes, thereby improving the accuracy of the constructed map.
In order to achieve the above object, in a first aspect, an embodiment of the present application provides a map construction method, including:
when closed-loop detection is carried out, image feature points of a plurality of key frames are obtained;
according to the image feature points, the estimated pose of each key frame is obtained by inquiring in a visual dictionary and is used as a pose to be optimized, and a plurality of poses to be optimized are obtained, wherein the visual dictionary comprises the image feature points and the estimated poses of all historical key frames;
determining a plurality of target poses from a target local map according to the poses to be optimized, wherein the target local map is generated according to the laser radar point cloud data of each historical key frame;
taking a plurality of target poses as a plurality of pre-closed loop points, and updating a pose graph according to the pre-closed loop points, wherein the pose graph comprises laser radar point cloud data and estimated poses of historical key frames;
and constructing a global map according to the updated pose graph.
As an optional implementation manner of this embodiment, before acquiring image feature points of multiple key frames during the closed-loop detection, the method further includes:
acquiring mapping information of the robot, wherein the mapping information comprises laser radar point cloud data and an initial pose of a current key frame;
constructing a current local map according to the lidar point cloud data of the current key frame, and optimizing the initial pose according to the current local map to obtain an estimated pose of the current key frame;
and updating a pose graph according to the laser radar point cloud data of the current key frame and the estimated pose of the current key frame.
As an optional implementation manner of this embodiment of the present application, the determining multiple target poses from a target local map according to the pose to be optimized includes:
and determining the target pose in a branch-and-bound mode in the target local map according to the plurality of poses to be optimized.
As an optional implementation manner of the embodiment of the present application, the updating the pose graph according to the pre-closed loop point includes:
determining a closed loop constraint factor according to the pre-closed loop point, a previous pre-closed loop point of the pre-closed loop point, an estimated pose corresponding to the pre-closed loop point and an estimated pose corresponding to the previous pre-closed loop point;
and if the closed loop constraint factor is smaller than a preset closed loop error threshold value, constructing an error equation according to the closed loop constraint factor to optimize the estimated pose in the pose graph to obtain an updated pose graph.
As an optional implementation manner in this embodiment of the application, determining a closed-loop constraint factor according to the pre-closed loop point, a previous pre-closed loop point of the pre-closed loop point, an estimated pose corresponding to the pre-closed loop point, and an estimated pose corresponding to the previous pre-closed loop point includes:
if the coordinate systems of the target local map where the pre-closed loop point is located and the current local map are different, determining a target estimated pose of the estimated pose in the target local map according to a coordinate mapping relation between the target local map and the current local map;
determining a closed loop constraint factor according to the pre-closed loop point, a previous pre-closed loop point of the pre-closed loop point, the estimated pose of the target and the estimated pose corresponding to the previous pre-closed loop point;
and if the coordinate system of the target local map where the pre-closed loop point is located is the same as that of the current local map, determining a closed loop constraint factor according to the pre-closed loop point, a last pre-closed loop point of the pre-closed loop point, an estimated pose corresponding to the pre-closed loop point and an estimated pose corresponding to the last pre-closed loop point.
As an optional implementation manner of the embodiment of the present application, after the current local map is constructed according to the lidar point cloud data of the current keyframe, before the initial pose is optimized according to the current local map, the method further includes:
projecting the laser radar point cloud data of the current key frame into the current local map;
calculating a projection score according to the projection position of the laser radar point cloud data of the current key frame in the current local map;
if the projection score is smaller than a set score, or the number of the feature points in the current key frame is smaller than the set number of the feature points, a coordinate system of a new local map is established according to the laser radar point cloud data of the current key frame and the pose of the robot in the current key frame, and a coordinate mapping relation between the new local map and the current local map is recorded.
As an optional implementation manner in this embodiment of the present application, the obtaining, according to the image feature points, an estimated pose of each keyframe as a pose to be optimized by querying a visual dictionary to obtain a plurality of poses to be optimized includes:
in the visual dictionary, inquiring image feature points of each historical key frame, the matching degree of which with the image feature points of the plurality of key frames is higher than a matching degree threshold value;
acquiring image feature points of target historical key frames with the highest matching degree with the image feature points of the plurality of key frames from the inquired image feature points of the historical key frames;
determining the estimated pose of the target historical key frame according to the corresponding relation between the image characteristic points of the target historical key frame and the estimated pose;
and determining the estimated pose of the target historical key frame as the pose to be optimized.
As an optional implementation manner of the embodiment of the present application, the updating the pose graph according to the lidar point cloud data of the current key frame and the estimated pose of the current key frame includes:
and if the parallax between the estimated pose and the last estimated pose of the estimated pose is larger than a preset parallax threshold, adding the lidar point cloud data of the current key frame and the estimated pose of the current key frame into the pose graph to obtain an updated pose graph.
In a second aspect, an embodiment of the present application provides a map building apparatus, including:
an acquisition module, configured to acquire image feature points of a plurality of key frames when closed-loop detection is performed;
a query module, configured to query a visual dictionary according to the image feature points to obtain the estimated pose of each key frame as a pose to be optimized, yielding a plurality of poses to be optimized, wherein the visual dictionary includes the image feature points and estimated poses of all historical key frames;
a determination module, configured to determine a plurality of target poses from a target local map according to the poses to be optimized, wherein the target local map is generated according to the lidar point cloud data of each historical key frame;
an update module, configured to take the plurality of target poses as a plurality of pre-closed-loop points and update the pose graph according to the pre-closed-loop points, wherein the pose graph includes the lidar point cloud data and estimated poses of historical key frames;
a construction module, configured to construct a global map according to the updated pose graph.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory for storing a computer program and a processor; the processor is configured to perform the method of the first aspect or any of the embodiments of the first aspect when the computer program is invoked.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method according to the first aspect or any embodiment of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on the electronic device, causes the electronic device to execute the map construction method according to any one of the above first aspects.
According to the map construction scheme provided by the embodiment of the application, when the closed-loop detection condition is met, image feature points of a plurality of key frames are obtained, and the estimated pose of each key frame, obtained by querying the visual dictionary according to the image feature points, is used as a pose to be optimized, so as to obtain a plurality of poses to be optimized; a plurality of target poses are determined from the target local map according to the poses to be optimized, the target poses are taken as pre-closed-loop points, the pose graph is updated according to the pre-closed-loop points, and finally the global map is constructed according to the updated pose graph. The visual dictionary includes the image feature points and estimated poses of all historical key frames, the pose graph includes the lidar point cloud data and estimated poses of the historical key frames, and the target local map is generated according to the lidar point cloud data of each historical key frame. In this scheme, during closed-loop detection, the pose to be optimized is queried in the visual dictionary according to the image feature points in the visual image information, and the target pose is then determined from the historically generated target local map according to the pose to be optimized, yielding the pre-closed-loop points. Because visual information and lidar information are combined in this way, closed loops can still be detected in places with insufficient feature points or similar scenes, which improves the accuracy of the constructed map.
Drawings
Fig. 1 is a schematic flowchart of a pose graph update provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of a new local map construction method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a map building method according to an embodiment of the present application;
fig. 4 is a schematic diagram of a positional relationship between a plurality of pre-closed loop points and corresponding estimated poses provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a map building apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The embodiments of the present application will be described below with reference to the drawings. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments herein only and is not intended to be limiting of the application. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
The map construction method provided by the embodiment of the application can be implemented by a map construction apparatus. The map construction apparatus can be a self-moving device such as a robot, a chip or circuit applied in a robot, or an electronic device (or a chip or circuit applied in an electronic device); for example, a map can be constructed on a computer using the map construction method. The following embodiments take the case where the map construction method is applied to a robot as an example. When the map construction apparatus is an electronic device, it may interact with the robot; for example, the robot may report its various sensor data to the electronic device.
The robot may be a mowing robot, a sweeping robot, a mine clearance robot, a cruise robot, etc., which is not particularly limited in this embodiment.
Closed-loop detection, also called loop closure detection, refers to the ability of a robot to recognize that it has reached a previously visited scene, so that the map can be closed. Because closed-loop detection provides constraint relation information among the lidar point cloud data and estimated pose of the current key frame and the lidar point cloud data and estimated poses of the historical key frames in the pose graph, the pose graph needs to be updated in real time before closed-loop detection is performed. Fig. 1 is a schematic flowchart of a pose graph update provided in an embodiment of the present application; as shown in Fig. 1, the method may include the following steps:
and S110, acquiring the mapping information of the robot.
The robot may be configured with a number of different types of sensors and cameras, including but not limited to code wheels, inertial sensors, lidar and GPS (Global Positioning System) sensors.
The code wheel may be an absolute encoder or an incremental encoder.
The Inertial sensor may be a Micro Electro Mechanical System (MEMS), an Inertial Measurement Unit (IMU), or the like.
The camera may be a monocular camera, a binocular camera, an RGB-D camera, an event camera, etc.
The mapping information of the robot can comprise laser radar point cloud data and an initial pose of a current key frame.
The robot can acquire the laser radar point cloud data of the current key frame through the laser radar sensor.
The initial pose of the robot may include a position (x, y) and an orientation angle theta. The position of the robot and a first orientation angle theta1 can be obtained from the robot track acquired by the code wheel, and a second orientation angle theta2 can be obtained by integrating the angular velocity data acquired by the inertial sensor. An Extended Kalman Filter (EKF) observation equation may then be used to weight theta1 and theta2 to obtain the orientation angle theta of the robot, and a prediction equation, which may be a uniform velocity model, may be used to predict the robot's current position and orientation angle to obtain the initial pose of the robot.
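As an illustration of the fusion described above, the following sketch shows a variance-weighted blend of the two orientation estimates and a constant-velocity prediction step; the function names, the 1-D treatment of the heading, and the variance-based gain are illustrative assumptions, not the patent's exact EKF equations.

```python
import math

def fuse_orientation(theta1, var1, theta2, var2):
    """Blend the code-wheel heading theta1 and the IMU-integrated heading
    theta2, weighting by variance as an EKF-style observation update would.
    Angle wrap-around is ignored for brevity."""
    k = var1 / (var1 + var2)          # higher var1 -> trust theta2 more
    return theta1 + k * (theta2 - theta1)

def predict_pose(x, y, theta, v, omega, dt):
    """Constant-velocity ('uniform velocity model') prediction of the
    robot's next position and orientation angle."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```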
The robot can acquire the visual image in the visual range through the camera, and then acquire the visual image information in the visual image.
In the following, this embodiment is described by taking as an example a code wheel that is an absolute encoder, an inertial sensor that is an IMU, and a camera that is an RGB-D camera.
It can be understood that the initial pose may also be obtained through data of other sensors, which is not limited in the embodiment of the present application.
And S120, constructing a current local map according to the laser radar point cloud data of the current key frame, and optimizing the initial pose according to the current local map to obtain the estimated pose of the current key frame.
When the robot has walked a certain distance, for example 0.5 m, or has rotated by a certain angle, for example 30 degrees, the initial pose corresponding to the current key frame and the lidar point cloud data of the current key frame are recorded. The current local map is formed by splicing the lidar point cloud data corresponding to a preset number of key frames after the current key frame; the preset number can be based on the number of key frames collected within a certain range of the robot's current position.
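A minimal sketch of this keyframe trigger follows; the 0.5 m and 30 degree values come from the example above, while the function name and the (x, y, theta) pose representation are assumptions.

```python
import math

def is_new_keyframe(prev_pose, cur_pose,
                    dist_thresh=0.5, angle_thresh=math.radians(30)):
    """Return True once the robot has travelled far enough or rotated far
    enough since the last recorded key frame."""
    dx = cur_pose[0] - prev_pose[0]
    dy = cur_pose[1] - prev_pose[1]
    dtheta = abs(math.atan2(math.sin(cur_pose[2] - prev_pose[2]),
                            math.cos(cur_pose[2] - prev_pose[2])))
    return math.hypot(dx, dy) >= dist_thresh or dtheta >= angle_thresh
```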
Further, the initial pose of the current key frame of the robot can be optimized according to the current local map using the Gauss-Newton iteration method, obtaining the estimated pose of the current key frame after the initial pose is optimized.
It can be understood that when there are few features in the environment, closed-loop detection matching may fail when the robot performs closed-loop detection according to the current local map. Therefore, when the robot detects that it has entered an area with few features, the lidar point cloud data and initial pose corresponding to the current key frame are re-acquired, and a new local map is constructed from the re-acquired data for closed-loop detection matching.
Fig. 2 is a schematic flowchart of a new local mapping method provided in an embodiment of the present application, and as shown in fig. 2, the method may include the following steps:
and S121, projecting the laser radar point cloud data of the current key frame to a current local map.
Specifically, each point in the lidar point cloud data can be projected onto a current local map according to the coordinate of each point in the lidar point cloud data of the current key frame, the current local map is composed of small grids, and each point in the lidar point cloud data finally falls into different grids.
And S122, calculating a projection score according to the projection position of the laser radar point cloud data of the current key frame in the current local map.
Each point in the lidar point cloud data has a corresponding grid position in the current local map, taking a point A in the lidar point cloud data as an example, the grid of the point A in the current local map is A ', and after projection, if the point A falls into A', the projection score of the point A can be 10 points; if point A falls within the 8-neighborhood grid of A', the projection score for point A may be 8 points; by analogy, the farther the projection position of the point a in the current local map is from a', the lower the projection score.
And adding the projection scores of all points in the laser radar point cloud data to obtain the projection score of the laser radar point cloud data of the current key frame in the current local map.
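The scoring rule can be sketched as follows; the 10-point and 8-point values follow the example above, while the linear decay beyond the 8-neighbourhood and the per-point expected cells are illustrative assumptions.

```python
import numpy as np

def projection_score(points, expected_cells, grid_origin, resolution):
    """Project each lidar point of the current key frame into the grid of
    the local map and score it by how far its cell lands from the expected
    cell: 10 for an exact hit, 8 for the 8-neighbourhood, less further out."""
    total = 0
    for p, expected in zip(points, expected_cells):
        cell = np.floor((np.asarray(p) - grid_origin) / resolution).astype(int)
        d = np.abs(cell - np.asarray(expected)).max()   # Chebyshev distance in cells
        total += max(0, 10 - 2 * d)                     # assumed linear fall-off
    return total
```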
And S123, if the projection score is smaller than the set score, or the number of the image feature points in the current key frame is smaller than the number of the set image feature points, constructing a coordinate system of the new local map according to the laser radar point cloud data of the current key frame and the pose of the robot in the current key frame, and recording the coordinate mapping relation between the new local map and the current local map.
Specifically, the robot may further extract image feature points in the current keyframe, and if the projection score is smaller than the set score, or the number of the image feature points in the current keyframe is smaller than the set number of the image feature points, it indicates that there are fewer features in the current region where the robot is located, and when performing closed-loop detection matching according to the current local map, the corresponding image feature points are likely to be missed, resulting in failure of closed-loop detection. And at the moment, a coordinate system of a new local map can be established according to the laser radar point cloud data and the pose of the robot in the current key frame for closed loop detection and matching, wherein the origin of coordinates of the new local map is the current position of the robot.
The robot can also record the coordinate mapping relation between the new local map and the current local map so as to facilitate the subsequent closed-loop detection.
In addition, if the newly constructed local map still fails in closed loop detection and matching due to insufficient point cloud data of the laser radar, the current position of the robot can be determined by using data acquired by the coded disc and the IMU so as to make up for the defect of insufficient laser information.
And S130, updating a pose graph according to the laser radar point cloud data of the current key frame and the estimated pose of the current key frame.
The pose graph comprises laser radar point cloud data of historical key frames, estimated poses and constraint relation information among the historical key frames, wherein the constraint relation information is a constraint relation formed according to the relative poses between any two adjacent historical key frames in the pose graph.
And if the parallax between the estimated pose and the last estimated pose of the estimated pose is larger than a preset parallax threshold, adding the laser radar point cloud data and the estimated pose of the current key frame into the pose graph.
Specifically, the initial pose in the current key frame is locally optimized to obtain estimated pose1, and the initial pose in the previous key frame is locally optimized to obtain estimated pose2. If the parallax between pose1 and pose2 is larger than the preset parallax threshold, the lidar point cloud data and estimated pose1 of the current key frame can be added into the pose graph to obtain an updated pose graph, which then also includes the lidar point cloud data and estimated pose of the current key frame. It can be understood that the current key frame can also be added into the pose graph when the interval between the generation times of the estimated poses of the current key frame and the previous key frame is longer than a set duration.
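A sketch of this insertion test is given below; the parallax here is taken as the planar distance between the two estimated poses, and the threshold values are illustrative assumptions.

```python
import math

def should_add_to_pose_graph(pose_prev, pose_cur, t_prev, t_cur,
                             parallax_thresh=0.3, max_interval=5.0):
    """Add the current key frame to the pose graph when its estimated pose
    differs enough from the previous estimated pose, or when the interval
    between the two poses' generation times exceeds the set duration."""
    parallax = math.hypot(pose_cur[0] - pose_prev[0],
                          pose_cur[1] - pose_prev[1])
    return parallax > parallax_thresh or (t_cur - t_prev) > max_interval
```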
Fig. 3 is a schematic flowchart of a mapping method provided in an embodiment of the present application, and as shown in fig. 3, the method may include the following steps:
and S140, acquiring image characteristic points of a plurality of key frames when closed-loop detection is carried out.
Specifically, images of a plurality of key frames are acquired, and the image feature points of each key frame are extracted by a feature extraction algorithm, which may include, but is not limited to, traditional feature extraction methods such as the Scale-Invariant Feature Transform (SIFT) algorithm or the Histogram of Oriented Gradients (HOG) algorithm, or a deep learning network, which is not limited herein.
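As one concrete option, SIFT extraction with OpenCV might look as follows (a sketch assuming OpenCV 4.4+, where SIFT ships in the main package):

```python
import cv2

def extract_image_feature_points(image_bgr):
    """Extract SIFT key points and descriptors from a key frame image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```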
S150, according to the image feature points, the estimated pose of each key frame is obtained by inquiring the visual dictionary and serves as the pose to be optimized, and a plurality of poses to be optimized are obtained.
Specifically, the visual dictionary stores the historical key frames, the image feature points and the estimated poses of the historical key frames, and the robot can store the extracted image feature points, the current key frames and the corresponding estimated poses in the current key frames into the visual dictionary to construct a dictionary model, so that matching can be conveniently carried out when subsequent closed-loop detection is needed. In general, the image feature points determined in each key frame may be extracted according to a set fixed object, for example, the fixed object may be an object such as a building, a flower bed, and a tree, which is not limited herein.
Further, closed-loop detection can be performed once every set number of key frames; that is, when the current key frame count reaches the set number, the closed-loop detection step is performed. For example, if the closed-loop detection condition is to perform detection every 10 key frames, the image feature points of the current key frame are extracted when the current key frame is the tenth key frame.
When closed-loop detection is required, image feature points of historical key frames, the matching degree of which with the image feature points of the current key frame is higher than the threshold value of the matching degree, can be inquired in a visual dictionary.
Specifically, the robot may select one or more image feature points from the image feature points of the current keyframe, and then query the image feature points of the historical keyframe for which the matching degree with each of the selected image feature points is higher than a threshold matching degree.
And then determining the image feature points of the target historical key frame with the highest matching degree with the image feature points of the current key frame in the inquired image feature points of the historical key frames.
Specifically, the robot may calculate a sum of matching degrees of each image feature point in each queried history key frame, and use the history key frame with the highest sum of matching degrees as the target history key frame.
Further, the estimated pose of the target history key frame can be determined according to the corresponding relation between the image feature points of the target history key frame and the estimated pose.
Specifically, the robot can determine the estimated pose of the target historical key frame according to the corresponding relation between each image feature point and the estimated pose in the constructed dictionary model.
And finally, determining the estimated pose of the target historical key frame as the pose to be optimized.
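Putting the query steps together, a minimal sketch might look like this; the dictionary layout (frame id mapped to descriptors and estimated pose) and the cosine-similarity matching degree are assumptions for illustration.

```python
import numpy as np

def query_pose_to_optimize(query_desc, visual_dictionary, match_thresh=0.8):
    """For each historical key frame, sum the matching degrees of feature
    pairs above the threshold, then return the estimated pose of the
    best-matching (target) historical key frame as the pose to be optimized."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    best_id, best_score = None, 0.0
    for frame_id, (desc, est_pose) in visual_dictionary.items():
        d = desc / np.linalg.norm(desc, axis=1, keepdims=True)
        sim = q @ d.T                          # pairwise matching degrees
        score = sim[sim > match_thresh].sum()  # sum of matches above threshold
        if score > best_score:
            best_id, best_score = frame_id, score
    if best_id is None:
        return None, None                      # no frame matched above threshold
    return best_id, visual_dictionary[best_id][1]
```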
At the same time, the current pose cur1 of the robot can also be recorded. When the robot creates a new local map, the robot is located on that new local map, so the new local map is used for matching during closed-loop detection; the robot does not switch back to the old local map until the corresponding old local map is found during closed-loop detection. Therefore, if cur1 is on the newly built local map, the pose cur1_ of cur1 on the old map must also be obtained according to the coordinate mapping relation between the new local map and the old local map; if cur1 is on the old local map, cur1_ is equal to cur1.
And S160, determining the target pose from the target local map according to the plurality of poses to be optimized.
And the target local map is generated according to the laser radar point cloud data corresponding to each historical key frame.
Specifically, the poses to be optimized can be used as the input of a branch-and-bound search: the estimated pose of each historical key frame is searched in each target local map, each searched estimated pose is matched against the pose to be optimized, and the historical estimated pose pose1 with the highest matching degree is taken as the target pose. The branch-and-bound algorithm greatly improves the efficiency of matching the estimated poses of historical key frames against the poses to be optimized.
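The search can be pictured with the generic branch-and-bound skeleton below; the node interface (children, leaf flag) and the scoring callbacks are abstractions standing in for the multi-resolution pose search over the target local map.

```python
def branch_and_bound_search(roots, score_fn, bound_fn):
    """Depth-first branch and bound: prune any branch whose upper bound
    cannot beat the best leaf score found so far; at leaves (finest pose
    resolution) evaluate the exact matching score."""
    best_node, best_score = None, float("-inf")
    stack = list(roots)
    while stack:
        node = stack.pop()
        if bound_fn(node) <= best_score:      # bound cannot improve -> prune
            continue
        if node.is_leaf:
            s = score_fn(node)
            if s > best_score:
                best_node, best_score = node, s
        else:
            stack.extend(node.children())     # branch into finer candidates
    return best_node
```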
And S170, taking the plurality of target poses as a plurality of pre-closed loop points, and updating the pose graph according to the plurality of pre-closed loop points.
The robot can determine the closed loop constraint factor according to the pre-closed loop point pose1, the previous pre-closed loop point pose2 of the pre-closed loop point, the estimated pose cur1 corresponding to the pre-closed loop point and the estimated pose cur2 corresponding to the previous pre-closed loop point.
Specifically, if the coordinate systems of the target local map where the pre-closed-loop point is located and the current local map are different, the robot may determine the target estimated pose cur1_ in the target local map according to the coordinate mapping relation between the target local map and the current local map, and then determine the closed-loop constraint factor according to the pre-closed-loop point pose1, the previous pre-closed-loop point pose2, the target estimated pose cur1_, and the estimated pose cur2 corresponding to the previous pre-closed-loop point.
If the coordinate system of the target local map where the pre-closed loop point is located is the same as that of the current local map, the robot can determine the closed loop constraint factor according to the pre-closed loop point pose1, the last pre-closed loop point pose2 of the pre-closed loop point, the estimated pose cur1 corresponding to the pre-closed loop point and the estimated pose cur2 corresponding to the last pre-closed loop point.
Specifically, the closed-loop constraint factor may be calculated according to the following formula (1):
ΔT = T1 · T2 · T3 · T4    (1)
where ΔT is the closed-loop constraint factor, T1 is the relative pose between pose1 and pose2, T2 is the relative pose between pose2 and cur2, T3 is the relative pose between cur2 and cur1, and T4 is the relative pose between cur1 and pose1.
Fig. 4 is a schematic diagram of a position relationship between a plurality of pre-closed loop points and corresponding estimated poses provided in the embodiment of the present application, and as shown in fig. 4, the robot may also determine a closed loop constraint factor according to a plurality of adjacent pre-closed loop points and estimated poses respectively corresponding to the plurality of pre-closed loop points.
Specifically, the closed-loop constraint factor may be calculated according to the following formula (2):
ΔT = T(cur1, pose1) · T(pose1, pose2) · T(pose2, pose3) · … · T(pose_n, cur_n) · T(cur_n, cur_{n-1}) · … · T(cur2, cur1)    (2)
where T(cur1, pose1) denotes the relative pose between cur1 and pose1, T(pose1, pose2) the relative pose between pose1 and pose2, T(pose2, pose3) the relative pose between pose2 and pose3, and so on; T(pose_n, cur_n) denotes the relative pose between pose_n and cur_n, T(cur_n, cur_{n-1}) the relative pose between cur_n and cur_{n-1}, and T(cur2, cur1) the relative pose between cur2 and cur1, where 2 ≤ n ≤ 10.
Further, the relative pose between any two poses can be calculated according to the following equation (3):
T(i, j) = T_i⁻¹ · T_j    (3)
where i and j denote any two corresponding poses, T_i⁻¹ is the inverse of the matrix corresponding to pose i, and T_j is the matrix corresponding to pose j; for example, T(cur1, pose1) = T_cur1⁻¹ · T_pose1.
The closed-loop constraint factor ΔT calculated by equations (2) and (3) may finally be output as ΔT = (Δx, Δy, Δz, Δroll, Δpitch, Δyaw), where Δx, Δy, and Δz represent the spatial coordinate errors after closed-loop filtering, and Δroll, Δpitch, and Δyaw represent the attitude errors after closed-loop filtering, i.e., the roll angle error, pitch angle error, and yaw angle error, respectively.
The closed-loop constraint factor ΔT is then compared with a preset closed-loop error threshold thresh = (thresh_x, thresh_y, thresh_z, thresh_roll, thresh_pitch, thresh_yaw); if ΔT < thresh, a global error equation is constructed according to the closed-loop constraint factor to optimize the poses in the pose graph. Here thresh_x, thresh_y, and thresh_z are the spatial coordinate error thresholds, and thresh_roll, thresh_pitch, and thresh_yaw are the attitude error thresholds.
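A sketch of the constraint check, treating each pose as a 4x4 homogeneous matrix, is shown below; the ZYX Euler-angle extraction used to obtain the attitude errors is a standard convention assumed here, not specified by the text.

```python
import numpy as np

def relative_pose(Ti, Tj):
    """Equation (3): the relative pose between two 4x4 homogeneous poses."""
    return np.linalg.inv(Ti) @ Tj

def closed_loop_constraint_ok(T_chain, thresh):
    """Chain the relative poses around the loop (equations (1)/(2)), split
    the residual into (dx, dy, dz, droll, dpitch, dyaw), and compare each
    component against its closed-loop error threshold."""
    dT = np.eye(4)
    for T in T_chain:
        dT = dT @ T
    dx, dy, dz = dT[:3, 3]
    pitch = np.arcsin(np.clip(-dT[2, 0], -1.0, 1.0))   # ZYX convention
    roll = np.arctan2(dT[2, 1], dT[2, 2])
    yaw = np.arctan2(dT[1, 0], dT[0, 0])
    err = np.abs([dx, dy, dz, roll, pitch, yaw])
    return bool(np.all(err < np.asarray(thresh)))
```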
And S180, constructing a global map according to the updated pose graph.
Specifically, the global map can be re-spliced according to the estimated pose after optimization in the pose map, the laser radar point cloud data corresponding to the estimated pose after optimization and the constraint relation information in the pose map, so that the re-spliced map is closer to the real situation.
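The final assembly step can be sketched as follows, assuming each pose-graph node stores an optimized 4x4 world pose and the key frame's lidar points:

```python
import numpy as np

def stitch_global_map(pose_graph_nodes):
    """Transform each key frame's lidar points into the world frame with
    its optimized pose and concatenate them into the global map cloud."""
    clouds = []
    for T_world, points in pose_graph_nodes:        # points: (N, 3) array
        homo = np.hstack([points, np.ones((len(points), 1))])
        clouds.append((homo @ T_world.T)[:, :3])
    return np.vstack(clouds)
```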
It will be appreciated by those skilled in the art that the above embodiments are exemplary and not intended to limit the present application. Where possible, the order of execution of one or more of the above steps may be adjusted, or steps may be selectively combined, to obtain further embodiments. The skilled person may combine the above steps as needed, and any combination that does not depart from the essence of the scheme of the present application falls within its protection scope.
According to the map construction scheme provided by the embodiment of the application, when the closed-loop detection condition is met, image feature points of a plurality of key frames are obtained, and the estimated pose of each key frame, obtained by querying the visual dictionary according to the image feature points, is used as a pose to be optimized, so as to obtain a plurality of poses to be optimized; a plurality of target poses are determined from the target local map according to the poses to be optimized, the target poses are taken as pre-closed-loop points, the pose graph is updated according to the pre-closed-loop points, and finally the global map is constructed according to the updated pose graph. The visual dictionary includes the image feature points and estimated poses of all historical key frames, the pose graph includes the lidar point cloud data and estimated poses of the historical key frames, and the target local map is generated according to the lidar point cloud data of each historical key frame. In this scheme, during closed-loop detection, the pose to be optimized is queried in the visual dictionary according to the image feature points in the visual image information, and the target pose is then determined from the historically generated target local map according to the pose to be optimized, yielding the pre-closed-loop points. Because visual information and lidar information are combined in this way, closed loops can still be detected in places with insufficient feature points or similar scenes, which improves the accuracy of the constructed map.
Based on the same inventive concept, as an implementation of the foregoing method, an embodiment of the present application provides a map construction apparatus, where an embodiment of the apparatus corresponds to the foregoing method embodiment, and for convenience of reading, details in the foregoing method embodiment are not repeated in this apparatus embodiment one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the contents in the foregoing method embodiment.
Fig. 5 is a schematic structural diagram of a map building apparatus provided in an embodiment of the present application, and as shown in fig. 5, the map building apparatus provided in the embodiment may include: an obtaining module 11, a query module 12, a determining module 13, an updating module 14, and a constructing module 15, wherein:
the acquisition module 11, configured to acquire image feature points of a plurality of key frames when closed-loop detection is performed;
the query module 12, configured to query a visual dictionary according to the image feature points to obtain the estimated pose of each key frame as a pose to be optimized, yielding a plurality of poses to be optimized, wherein the visual dictionary includes the image feature points and estimated poses of all historical key frames;
the determination module 13, configured to determine a plurality of target poses from a target local map according to the poses to be optimized, wherein the target local map is generated according to the lidar point cloud data of each historical key frame;
the update module 14, configured to take the plurality of target poses as a plurality of pre-closed-loop points and update the pose graph according to the pre-closed-loop points, wherein the pose graph includes the lidar point cloud data and estimated poses of historical key frames;
the building module 15, configured to construct a global map according to the updated pose graph.
As an optional implementation manner, the map building apparatus is further configured to:
acquiring mapping information of the robot, wherein the mapping information comprises laser radar point cloud data and an initial pose of a current key frame;
constructing a current local map according to the laser radar point cloud data of the current key frame, and optimizing the initial pose according to the current local map to obtain an estimated pose of the current key frame;
and updating a pose graph according to the laser radar point cloud data of the current key frame and the estimated pose of the current key frame.
As an optional implementation manner, the determining module 13 is specifically configured to:
and determining the target pose in a branch-and-bound mode in the target local map according to the plurality of poses to be optimized.
As an optional implementation manner, the update module 14 is specifically configured to:
determining a closed loop constraint factor according to the pre-closed loop point, the last pre-closed loop point of the pre-closed loop point, the estimated pose corresponding to the pre-closed loop point and the estimated pose corresponding to the last pre-closed loop point;
and if the closed loop constraint factor is smaller than a preset closed loop error threshold value, constructing an error equation according to the closed loop constraint factor to optimize the estimated pose in the pose graph to obtain an updated pose graph.
As an optional implementation manner, the determining module 13 is specifically configured to:
if the coordinate systems of the target local map where the pre-closed loop point is located and the current local map are different, determining a target estimated pose of the estimated pose in the target local map according to the coordinate mapping relation between the target local map and the current local map;
determining a closed loop constraint factor according to the pre-closed loop point, a previous pre-closed loop point of the pre-closed loop point, the estimated pose of the target and the estimated pose corresponding to the previous pre-closed loop point;
and if the coordinate system of the target local map where the pre-closed loop point is located is the same as that of the current local map, determining a closed loop constraint factor according to the pre-closed loop point, a last pre-closed loop point of the pre-closed loop point, an estimated pose corresponding to the pre-closed loop point and an estimated pose corresponding to the last pre-closed loop point.
As an optional implementation manner, the map building apparatus is further configured to:
projecting the laser radar point cloud data of the current key frame into the current local map;
calculating a projection score according to the projection position of the laser radar point cloud data of the current key frame in the current local map;
if the projection score is smaller than a set score, or the number of the feature points in the current key frame is smaller than the number of the set feature points, a coordinate system of a new local map is built according to the laser radar point cloud data of the current key frame and the pose of the robot in the current key frame, and a coordinate mapping relation between the new local map and the current local map is recorded.
As an optional implementation manner, the query module 12 is specifically configured to:
querying feature points of historical key frames, of which the matching degree with the image feature points of the plurality of key frames is higher than a threshold matching degree, in the visual dictionary;
acquiring image feature points of target historical key frames with the highest matching degree with the image feature points of the plurality of key frames from the inquired image feature points of the historical key frames;
determining the estimated pose of the target historical key frame according to the corresponding relation between the image characteristic points of the target historical key frame and the estimated pose;
and determining the estimated pose of the target historical key frame as the pose to be optimized.
As an optional implementation manner, the map building apparatus is further configured to:
and if the parallax between the estimated pose and the last estimated pose of the estimated pose is larger than a preset parallax threshold, adding the laser radar point cloud data of the current key frame and the estimated pose of the current key frame into the pose graph to obtain an updated pose graph.
The map building apparatus provided in this embodiment may perform the above method embodiments, and the implementation principle and technical effect thereof are similar, which are not described herein again.
Based on the same inventive concept, the embodiment of the application also provides the electronic equipment. Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in fig. 6, the electronic device according to the embodiment includes: a memory 210 and a processor 220, the memory 210 being for storing computer programs; the processor 220 is adapted to perform the method according to the above-described method embodiments when invoking the computer program.
The electronic device provided by this embodiment may perform the above method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the method described in the foregoing method embodiments.
The embodiment of the present application further provides a computer program product, which when running on an electronic device, enables the electronic device to implement the method described in the above method embodiment when executed.
It should be clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional units and modules is only used for illustration, and in practical applications, the above function distribution may be performed by different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. For the specific working processes of the units and modules in the system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optics, digital subscriber line) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
One of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by hardware related to instructions of a computer program, which may be stored in a computer-readable storage medium, and when executed, may include the processes of the above method embodiments. And the aforementioned storage medium may include: various media capable of storing program codes, such as ROM or RAM, magnetic or optical disks, etc.
The naming or numbering of the steps appearing in the present application does not mean that the steps in the method flow have to be executed in the chronological/logical order indicated by the naming or numbering, and the named or numbered process steps may be executed in a modified order depending on the technical purpose to be achieved, as long as the same or similar technical effects are achieved.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other ways. For example, the above-described apparatus/device embodiments are merely illustrative, and for example, the division of the modules or units is only one type of logical function division, and there may be another division in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some feature points may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In the description of the present application, a "/" indicates a relationship in which the objects associated before and after are an "or", for example, a/B may indicate a or B; in the present application, "and/or" is only an association relationship describing an association object, and means that there may be three relationships, for example, a and/or B, and may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural.
Also, in the description of the present application, "a plurality" means two or more than two unless otherwise specified. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, or c, may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or multiple.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be implemented in other sequences than those illustrated or described herein.
Reference throughout this specification to "one embodiment" or "some embodiments" or the like, described with reference to "one embodiment" or "some embodiments" or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A map construction method, comprising:
acquiring image feature points of a plurality of key frames when performing closed-loop detection;
querying a visual dictionary according to the image feature points to obtain an estimated pose of each key frame as a pose to be optimized, thereby obtaining a plurality of poses to be optimized, wherein the visual dictionary comprises image feature points and estimated poses of historical key frames;
determining a plurality of target poses from a target local map according to the plurality of poses to be optimized, wherein the target local map is generated from the lidar point cloud data of each historical key frame;
taking the plurality of target poses as a plurality of pre-closed-loop points, and updating a pose graph according to the plurality of pre-closed-loop points, wherein the pose graph comprises the lidar point cloud data and the estimated poses of the historical key frames;
and constructing a global map according to the updated pose graph.
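Purely for illustration (this sketch is not part of the claims; the node layout, names, and the 2D simplification of "pose" are hypothetical), the pose graph of claim 1 can be pictured as a list of nodes, each holding an estimated pose and a lidar scan, with the global map assembled by transforming every scan into the world frame using its optimized pose:

```python
# Illustrative only -- not part of the claims.
import math
from dataclasses import dataclass, field

@dataclass
class PoseGraphNode:
    pose: tuple                                   # estimated pose (x, y, heading)
    points: list = field(default_factory=list)    # lidar points in the key frame's frame

def build_global_map(nodes):
    """Final step of claim 1: transform every key frame's scan into the world
    frame with its (loop-closure-optimized) estimated pose and merge them."""
    global_points = []
    for node in nodes:
        x, y, theta = node.pose
        c, s = math.cos(theta), math.sin(theta)
        for px, py in node.points:
            global_points.append((x + c * px - s * py, y + s * px + c * py))
    return global_points

# usage: two key frames; loop closure would only adjust node.pose values
nodes = [PoseGraphNode((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, 0.5)]),
         PoseGraphNode((1.0, 0.0, math.pi / 2), [(0.0, -1.0)])]
print(build_global_map(nodes))
```

Under this picture, the loop-closure correction of claims 4 and 5 amounts to adjusting the `pose` fields before this final assembly step.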
2. The method of claim 1, wherein, before the acquiring image feature points of a plurality of key frames when performing closed-loop detection, the method further comprises:
acquiring mapping information of a robot, wherein the mapping information comprises lidar point cloud data and an initial pose of a current key frame;
constructing a current local map according to the lidar point cloud data of the current key frame, and optimizing the initial pose according to the current local map to obtain an estimated pose of the current key frame;
and updating the pose graph according to the lidar point cloud data of the current key frame and the estimated pose of the current key frame.
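One way to picture the scan-to-local-map optimization in claim 2 (the claim does not prescribe an optimizer; the greedy hill-climb over an occupancy grid below, along with all names and step sizes, is only a hypothetical stand-in):

```python
# Illustrative stand-in for the claim's scan-to-local-map optimization.
import numpy as np

def scan_score(grid, scan, pose):
    """Average occupancy at the cells the scan hits under `pose` (x, y, theta);
    grid cells are assumed 1 unit wide with values in [0, 1]."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    total = 0.0
    for px, py in scan:
        u = int(round(x + c * px - s * py))
        v = int(round(y + s * px + c * py))
        if 0 <= u < grid.shape[0] and 0 <= v < grid.shape[1]:
            total += grid[u, v]
    return total / max(len(scan), 1)

def refine_pose(grid, scan, initial_pose, max_iters=50):
    """Greedily perturb the initial (odometry) pose until the scan best
    overlaps the current local map; the result plays the role of the
    key frame's estimated pose."""
    best = tuple(initial_pose)
    best_score = scan_score(grid, scan, best)
    deltas = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 0.05), (0, 0, -0.05)]
    for _ in range(max_iters):
        moves = [(best[0] + dx, best[1] + dy, best[2] + dt) for dx, dy, dt in deltas]
        scores = [scan_score(grid, scan, m) for m in moves]
        if max(scores) <= best_score:
            break                      # local optimum reached
        best_score = max(scores)
        best = moves[scores.index(best_score)]
    return best
```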
3. The method of claim 1, wherein the determining a plurality of target poses from a target local map according to the plurality of poses to be optimized comprises:
determining the plurality of target poses in the target local map by a branch-and-bound search according to the plurality of poses to be optimized.
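Branch and bound here can work the way grid-based scan matchers do: precompute max-pooled versions of the local map so that a whole block of candidate shifts can be bounded at once, then subdivide only the blocks whose bound beats the best score found so far. The sketch below is illustrative only: it searches integer translations (a full matcher would also branch over heading), and all names and the power-of-two `window` are hypothetical.

```python
# Illustrative translation-only branch and bound over an occupancy grid.
import numpy as np

def precompute_max_grids(grid, depth):
    """grids[k][i, j] = max of `grid` over the 2^k x 2^k block whose corner is
    (i, j); used as an admissible upper bound for a whole block of shifts."""
    grids = [grid.astype(float)]
    for k in range(depth):
        g, step = grids[-1], 1 << k
        h, w = g.shape
        p = np.pad(g, ((0, step), (0, step)), constant_values=-np.inf)
        grids.append(np.maximum.reduce([
            p[0:h, 0:w], p[step:step + h, 0:w],
            p[0:h, step:step + w], p[step:step + h, step:step + w]]))
    return grids

def block_score(grids, level, pts, di, dj):
    """Upper bound (exact when level == 0) on the score of any shift inside
    the 2^level x 2^level block of shifts starting at (di, dj)."""
    g = grids[level]
    return sum(g[x + di, y + dj] for x, y in pts
               if 0 <= x + di < g.shape[0] and 0 <= y + dj < g.shape[1])

def branch_and_bound(grid, pts, window=16):
    """Best integer shift of the scan points within [0, window)^2."""
    depth = window.bit_length() - 1                  # window must be 2**depth
    grids = precompute_max_grids(grid, depth)
    best_score, best_shift = -np.inf, (0, 0)
    stack = [(depth, 0, 0)]
    while stack:
        level, di, dj = stack.pop()
        b = block_score(grids, level, pts, di, dj)
        if b <= best_score:
            continue                                 # prune the whole block
        if level == 0:
            best_score, best_shift = b, (di, dj)     # exact score at a leaf
        else:
            half = 1 << (level - 1)                  # subdivide into 4 sub-blocks
            stack += [(level - 1, di, dj), (level - 1, di + half, dj),
                      (level - 1, di, dj + half), (level - 1, di + half, dj + half)]
    return best_shift, best_score

# usage: one scan point, one occupied cell -> best shift is (5, 7)
grid = np.zeros((32, 32)); grid[5, 7] = 1.0
print(branch_and_bound(grid, [(0, 0)]))              # ((5, 7), 1.0)
```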
4. The method of claim 1, wherein the updating a pose graph according to the plurality of pre-closed-loop points comprises:
determining a closed-loop constraint factor according to a pre-closed-loop point, a previous pre-closed-loop point of the pre-closed-loop point, an estimated pose corresponding to the pre-closed-loop point, and an estimated pose corresponding to the previous pre-closed-loop point;
and if the closed-loop constraint factor is smaller than a preset closed-loop error threshold, constructing an error equation according to the closed-loop constraint factor to optimize the estimated poses in the pose graph, thereby obtaining an updated pose graph.
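As an illustration of the gate in claim 4 (the claim leaves the error metric unspecified; the 2D pose algebra, the combined translation-plus-heading metric, and the threshold below are hypothetical choices): a loop candidate is accepted only when the relative motion measured between the two pre-closed-loop points agrees with the relative motion implied by their estimated poses.

```python
# Hypothetical 2D pose algebra and error metric; the claim does not fix either.
import math

def relative_pose(a, b):
    """Pose of b expressed in the frame of a; poses are (x, y, heading)."""
    c, s = math.cos(-a[2]), math.sin(-a[2])
    dx, dy = b[0] - a[0], b[1] - a[1]
    return (c * dx - s * dy, s * dx + c * dy, b[2] - a[2])

def closed_loop_constraint_factor(pre_pt, prev_pre_pt, est, prev_est):
    """Disagreement between the relative motion measured by loop matching
    (the two pre-closed-loop points) and the one implied by the estimated
    poses; small values indicate a consistent, trustworthy loop."""
    m = relative_pose(prev_pre_pt, pre_pt)
    e = relative_pose(prev_est, est)
    dtheta = (m[2] - e[2] + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return math.hypot(m[0] - e[0], m[1] - e[1]) + abs(dtheta)

CLOSED_LOOP_ERROR_THRESHOLD = 0.5    # hypothetical units: metres + radians

def accept_loop(pre_pt, prev_pre_pt, est, prev_est):
    """Gate of claim 4: only loops below the threshold enter the error equation."""
    factor = closed_loop_constraint_factor(pre_pt, prev_pre_pt, est, prev_est)
    return factor < CLOSED_LOOP_ERROR_THRESHOLD
```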
5. The method of claim 4, wherein the determining a closed-loop constraint factor according to the pre-closed-loop point, the previous pre-closed-loop point of the pre-closed-loop point, the estimated pose corresponding to the pre-closed-loop point, and the estimated pose corresponding to the previous pre-closed-loop point comprises:
if the target local map where the pre-closed-loop point is located and the current local map have different coordinate systems, determining a target estimated pose of the estimated pose in the target local map according to the coordinate mapping relation between the target local map and the current local map;
determining the closed-loop constraint factor according to the pre-closed-loop point, the previous pre-closed-loop point of the pre-closed-loop point, the target estimated pose, and the estimated pose corresponding to the previous pre-closed-loop point;
and if the target local map where the pre-closed-loop point is located and the current local map share the same coordinate system, determining the closed-loop constraint factor according to the pre-closed-loop point, the previous pre-closed-loop point of the pre-closed-loop point, the estimated pose corresponding to the pre-closed-loop point, and the estimated pose corresponding to the previous pre-closed-loop point.
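The coordinate mapping relation of claim 5 behaves like a rigid transform between submap frames; a minimal 2D sketch (all values hypothetical):

```python
# Hypothetical values; `compose` applies a rigid 2D transform to a pose.
import math

def compose(t, p):
    """Express pose p (given in the current local map) in the target local
    map, where t is the recorded coordinate mapping relation between them."""
    c, s = math.cos(t[2]), math.sin(t[2])
    return (t[0] + c * p[0] - s * p[1], t[1] + s * p[0] + c * p[1], t[2] + p[2])

T_target_from_current = (2.0, -1.0, math.pi / 2)          # hypothetical mapping relation
pose_in_current = (1.0, 0.0, 0.0)
print(compose(T_target_from_current, pose_in_current))    # (2.0, 0.0, 1.5707...)
```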
6. The method of claim 2, wherein, after the constructing a current local map according to the lidar point cloud data of the current key frame and before the optimizing the initial pose according to the current local map, the method further comprises:
projecting the lidar point cloud data of the current key frame into the current local map;
calculating a projection score according to the projection positions of the lidar point cloud data of the current key frame in the current local map;
and if the projection score is smaller than a set score, or the number of feature points in the current key frame is smaller than a set number of feature points, establishing a coordinate system of a new local map according to the lidar point cloud data of the current key frame and the pose of the robot at the current key frame, and recording a coordinate mapping relation between the new local map and the current local map.
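A minimal reading of claim 6's projection score (illustrative only; the scoring rule, thresholds, and names are hypothetical): average the occupancy of the cells the scan lands in, and spawn a new submap anchored at the robot pose when the score or the feature count drops too low.

```python
# Hypothetical scoring rule and thresholds; scan points are assumed to be
# already projected to integer grid indices of the current local map.
import numpy as np

def projection_score(local_map, scan_cells):
    """Mean occupancy of the cells the scan projects into; cells falling
    outside the submap contribute zero, so the score drops as the robot
    leaves the area the current local map covers."""
    total = 0.0
    for u, v in scan_cells:
        if 0 <= u < local_map.shape[0] and 0 <= v < local_map.shape[1]:
            total += local_map[u, v]
    return total / max(len(scan_cells), 1)

SCORE_THRESHOLD, MIN_FEATURES = 0.4, 50            # hypothetical tuning values

def maybe_start_new_submap(local_map, scan_cells, n_features, robot_pose, submap_origins):
    """Claim 6's branch: start a new local map anchored at the robot pose and
    record the mapping between the new and current coordinate systems."""
    if projection_score(local_map, scan_cells) < SCORE_THRESHOLD or n_features < MIN_FEATURES:
        submap_origins.append(robot_pose)          # the recorded mapping relation
        return True                                # caller initialises a fresh grid
    return False
```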
7. The method of claim 1, wherein the querying a visual dictionary according to the image feature points to obtain an estimated pose of each key frame as a pose to be optimized comprises:
querying, in the visual dictionary, the image feature points of each historical key frame whose matching degree with the image feature points of the plurality of key frames is higher than a matching-degree threshold;
acquiring, from the queried image feature points of the historical key frames, the image feature points of a target historical key frame having the highest matching degree with the image feature points of the plurality of key frames;
determining the estimated pose of the target historical key frame according to the correspondence between the image feature points of the target historical key frame and the estimated pose;
and determining the estimated pose of the target historical key frame as the pose to be optimized.
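A toy version of the dictionary query in claim 7 (the matching-degree measure is unspecified in the claim; Jaccard overlap of descriptor sets is used here purely as a stand-in, and all names are hypothetical):

```python
# Toy dictionary query; Jaccard overlap stands in for the unspecified
# matching-degree measure.
def matching_degree(query, candidate):
    """Overlap of two feature-descriptor sets, in [0, 1]."""
    if not query or not candidate:
        return 0.0
    return len(query & candidate) / len(query | candidate)

def query_visual_dictionary(query_features, dictionary, threshold=0.3):
    """dictionary: frozenset of a historical key frame's features -> its
    estimated pose. Returns the pose of the best match above the threshold
    (the pose to be optimized), or None when nothing matches well enough."""
    best_pose, best_deg = None, threshold
    for hist_features, est_pose in dictionary.items():
        deg = matching_degree(query_features, hist_features)
        if deg > best_deg:
            best_pose, best_deg = est_pose, deg
    return best_pose

# usage
dictionary = {frozenset({"f1", "f2", "f3"}): (1.0, 2.0, 0.1)}
print(query_visual_dictionary(frozenset({"f1", "f2"}), dictionary))   # (1.0, 2.0, 0.1)
```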
8. The method of claim 2, wherein the updating the pose graph according to the lidar point cloud data of the current key frame and the estimated pose of the current key frame comprises:
if the parallax between the estimated pose and the previous estimated pose is larger than a preset parallax threshold, adding the lidar point cloud data of the current key frame and the estimated pose of the current key frame to the pose graph to obtain an updated pose graph.
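The parallax gate of claim 8 keeps the pose graph sparse; a minimal sketch (the metric and threshold are hypothetical):

```python
# Hypothetical parallax metric and threshold; poses are (x, y, heading).
import math

PARALLAX_THRESHOLD = 0.2             # metres of translation + radians of rotation

def parallax(curr, prev):
    dtheta = (curr[2] - prev[2] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(curr[0] - prev[0], curr[1] - prev[1]) + abs(dtheta)

def maybe_add_node(pose_graph, est_pose, scan):
    """Claim 8's gate: append the key frame only when it has moved enough
    since the last node, keeping the pose graph sparse."""
    if not pose_graph or parallax(est_pose, pose_graph[-1][0]) > PARALLAX_THRESHOLD:
        pose_graph.append((est_pose, scan))
```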
9. An electronic device, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to perform the method of any one of claims 1-8 when invoking the computer program.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any one of claims 1-8.
CN202210422143.6A 2022-04-21 2022-04-21 Map construction method, apparatus and medium Pending CN114924287A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210422143.6A CN114924287A (en) 2022-04-21 2022-04-21 Map construction method, apparatus and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210422143.6A CN114924287A (en) 2022-04-21 2022-04-21 Map construction method, apparatus and medium

Publications (1)

Publication Number Publication Date
CN114924287A (en) 2022-08-19

Family

ID=82805755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210422143.6A Pending CN114924287A (en) 2022-04-21 2022-04-21 Map construction method, apparatus and medium

Country Status (1)

Country Link
CN (1) CN114924287A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115164887A (en) * 2022-08-30 2022-10-11 中国人民解放军国防科技大学 Pedestrian navigation positioning method and device based on laser radar and inertia combination
CN116539026A (en) * 2023-07-06 2023-08-04 杭州华橙软件技术有限公司 Map construction method, device, equipment and storage medium
CN116539026B (en) * 2023-07-06 2023-09-29 杭州华橙软件技术有限公司 Map construction method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Dai et al. RGB-D SLAM in dynamic environments using point correlations
Dubé et al. An online multi-robot SLAM system for 3D LiDARs
CN112634451B (en) Outdoor large-scene three-dimensional mapping method integrating multiple sensors
CN107160395B (en) Map construction method and robot control system
CN112179330B (en) Pose determination method and device of mobile equipment
US10939791B2 (en) Mobile robot and mobile robot control method
CN109186606B (en) Robot composition and navigation method based on SLAM and image information
CN111489393A (en) VSLAM method, controller and mobile device
CN112219087A (en) Pose prediction method, map construction method, movable platform and storage medium
EP3532869A1 (en) Vision-inertial navigation with variable contrast tracking residual
CN112734852A (en) Robot mapping method and device and computing equipment
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN114924287A (en) Map construction method, apparatus and medium
CN110717927A (en) Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110555901A (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
Li et al. Review of vision-based Simultaneous Localization and Mapping
CN111161412A (en) Three-dimensional laser mapping method and system
CN112652001B (en) Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering
Liu et al. Visual slam based on dynamic object removal
JP2022523312A (en) VSLAM methods, controllers and mobile devices
CN114593735B (en) Pose prediction method and device
CN115962773A (en) Method, device and equipment for synchronous positioning and map construction of mobile robot
CN112733971B (en) Pose determination method, device and equipment of scanning equipment and storage medium
Sleaman et al. Indoor mobile robot navigation using deep convolutional neural network
Üzer et al. Vision-based hybrid map building for mobile robot navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination