CN113899357B - Incremental mapping method and device for visual SLAM, robot and readable storage medium - Google Patents


Info

Publication number
CN113899357B
Authority
CN
China
Prior art keywords
map
positioning information
points
current frame
frame
Prior art date
Legal status
Active
Application number
CN202111153715.7A
Other languages
Chinese (zh)
Other versions
CN113899357A
Inventor
李一龙
范圣印
刘志励
王璀
张煜东
Current Assignee
Beijing Yihang Yuanzhi Technology Co Ltd
Original Assignee
Beijing Yihang Yuanzhi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yihang Yuanzhi Technology Co Ltd
Priority to CN202111153715.7A
Publication of CN113899357A
Application granted
Publication of CN113899357B


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38 Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804 Creation or updating of map data
    • G01C21/3833 Creation or updating of map data characterised by the source of data
    • G01C21/3841 Data obtained from two or more sources, e.g. probe vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to an incremental mapping method and device for visual SLAM, a robot, and a readable storage medium, wherein the method comprises the following steps: acquiring visual features and odometry data based on the sensor data; acquiring a first map, and initializing a second map based on the first map, the visual features and the odometry data; respectively carrying out cooperative positioning in the first map and the initialized second map by utilizing the current frame to obtain first positioning information and second positioning information; carrying out fusion mapping on the initialized second map based on the second positioning information to obtain third positioning information; and correcting the second map after the fusion mapping based on the first positioning information and the third positioning information to obtain an incremental map. The application has the effect of mitigating both the scale change of the original map and the non-convergence of the newly built map during incremental mapping.

Description

Incremental mapping method and device for visual SLAM, robot and readable storage medium
Technical Field
The application relates to the technical field of computer image processing, and in particular to an incremental mapping method and device for visual SLAM, a robot, and a readable storage medium.
Background
In recent years, SLAM technology has been widely applied in fields such as autonomous driving and autonomous robotics to solve the problems of self-localization and mapping in unknown scenes. Compared with laser SLAM, visual SLAM offers low sensor cost and rich information, but it cannot recover the true scale of the real space and exhibits uneven scale distribution.
To address the uneven scale distribution produced by visual SLAM, visual incremental mapping techniques generally select external observations such as a wheel encoder, an IMU (inertial measurement unit), or RTK (real-time kinematic) as the basis for scale recovery, and incorporate these observations into the pose estimation and map optimization processes to reduce the scale differences of the visual map as much as possible.
However, existing visual incremental mapping techniques change the scale of the original map, so a navigation map generated from the visual map must be rebuilt, wasting computational resources. Moreover, when external observations are used to repair the scale of the visual map's poses, or when incremental mapping is performed on a visual map with poor localization performance, newly created key frames lose their association with the original map owing to changes in scene, illumination, and observation; the newly built map then fails to converge, producing the map bifurcation phenomenon.
Disclosure of Invention
In order to solve the problems that the original map scale changes and the newly built map fails to converge during incremental mapping, the application provides an incremental mapping method and device for visual SLAM, a robot, and a readable storage medium.
In a first aspect, the present application provides an incremental mapping method for visual SLAM, which adopts the following technical scheme:
an incremental mapping method of a visual SLAM, comprising:
acquiring visual features and odometry data based on the sensor data;
acquiring a first map, and initializing a second map based on the first map, the visual features and the odometry data;
respectively carrying out cooperative positioning in the first map and the initialized second map by utilizing the current frame to obtain first positioning information and second positioning information;
carrying out fusion mapping on the initialized second map based on the second positioning information to obtain third positioning information;
and correcting the second map after the fusion map building based on the first positioning information and the third positioning information to obtain an incremental map.
Optionally, the initializing a second map based on the first map, the visual features, and the odometry data includes:
Constructing a fixed area of a second map based on the key frames and map points in the first map;
establishing a temporary map based on the visual features and the odometry data;
repositioning in the second map based on the visual features;
if the positioning confidence of the current frame in the second map exceeds a first preset threshold τ, aligning the second map with the temporary map;
and fusing and replacing the aligned second map with the temporary map.
Optionally, the establishing a temporary map based on the visual features and the odometry data includes:
carrying out semantic matching and geometric constraint inspection on image feature points of a current frame and a previous frame to obtain a first matching relationship;
calculating the relative pose provided by the odometer corresponding to the current frame and the previous frame based on the first matching relation, and initializing the temporary map by using a triangulation method;
and establishing a key frame and map points of the newly added area of the second map by utilizing the matching relation between the current frame and the reference key frame and the relative pose provided by the odometer corresponding to the current frame and the reference key frame.
Optionally, the repositioning in the second map based on the visual feature includes:
Searching a key frame similar to the current frame as a candidate key frame, and carrying out semantic matching and geometric constraint inspection on image feature points of the current frame and the candidate key frame to obtain a second matching relation;
calculating the relative pose provided by the odometer corresponding to the current frame and the candidate key frame based on the second matching relation;
and acquiring a local map of the second map based on the relative pose provided by the odometer corresponding to the current frame and the candidate key frame, and optimizing the pose of the current frame based on the local map.
Optionally, if the positioning confidence of the current frame in the second map exceeds a first preset threshold τ, the aligning the second map with the temporary map includes:
if the number of map points observed by the current frame in the second map exceeds the first preset threshold τ, calculating the relative pose $T_{rel}$ of the temporary map and the second map based on the positioning pose $T_{map}$ of the current frame in the second map and the pose $T_{tmp}$ of the current frame in the temporary map;
and rotating the temporary map to the coordinate system of the second map based on the relative pose $T_{rel}$.
Optionally, the performing co-location in the first map and the initialized second map by using the current frame, and obtaining the first location information and the second location information includes:
positioning on the first map by utilizing the visual features to obtain positioning information $T_1$ of the current frame in the first map;
if the confidence $c_1$ of the positioning information $T_1$ is greater than a second preset threshold $\tau_C$, taking $T_1$ as initial positioning information of the current frame in the second map, and optimizing the initial positioning information with the local map of the second map to obtain positioning information $T_2$ of the current frame in the second map;
if the confidence $c_1$ of the positioning information $T_1$ is not greater than the second preset threshold $\tau_C$, obtaining an initial pose of the current frame in the second map based on the visual features and the odometry data, and positioning on the basis of the initial pose by utilizing the matching relation between the local map of the second map and the current frame to obtain the positioning information $T_2$ of the current frame in the second map;
taking the positioning information $T_2$ as initial positioning information of the current frame in the first map, and optimizing the initial positioning information with the local map of the first map to obtain positioning information $\tilde{T}_1$ of the current frame in the first map;
if the confidence $\tilde{c}_1$ of the positioning information $\tilde{T}_1$ is greater than the confidence $c_1$, taking $\tilde{T}_1$ as the first positioning information and $T_2$ as the second positioning information.
Optionally, the fusing and mapping the initialized second map based on the second positioning information, and obtaining third positioning information includes:
establishing map points of the newly added area based on the second positioning information and key frames in the fixed area;
establishing map points of the newly added area based on the second positioning information and key frames of the newly added area;
if the map points and/or the key frames of the newly added area meet the preset redundancy conditions, deleting the map points and/or the key frames meeting the preset redundancy conditions;
and carrying out local optimization on the pose of the newly added region to obtain the third positioning information.
Optionally, the establishing the map point of the added area based on the second positioning information and the key frame in the fixed area includes:
matching the current key frame with the image feature points of the adjacent key frames in the fixed area based on semantic conditions and geometric constraint conditions;
for two image feature points successfully matched: if the map points corresponding to both image feature points are empty, acquiring a map point p1 corresponding to the two image feature points based on the positions of the two image feature points in the two frames of images and the poses of the two frames of images, assigning the map point p1 to the two image feature points, and adding the map point p1 to the newly added area; if exactly one of the map points corresponding to the two image feature points is empty, assigning the non-empty map point to the empty one; if the map points corresponding to the two image feature points are both non-empty, judging whether a map point belonging to the fixed area exists among them, and if so, replacing the other map point with the map point belonging to the fixed area, and if not, replacing the other map point with either one of the map points.
Optionally, the establishing the map point of the newly added area based on the second positioning information and the key frame of the newly added area includes:
matching the image feature points of the current key frame and the adjacent key frames in the newly added area based on semantic conditions and geometric constraint conditions;
for two image feature points successfully matched: if the map points corresponding to both image feature points are empty, acquiring a map point p2 corresponding to the two image feature points based on the positions of the two image feature points in the two frames of images and the poses of the two frames of images, assigning the map point p2 to the two image feature points, and adding the map point p2 to the newly added area; if exactly one of the map points corresponding to the two image feature points is empty, assigning the non-empty map point to the empty one; if the map points corresponding to the two image feature points are both non-empty, judging whether a map point belonging to the newly added area exists among them, and if so, replacing the other map point with the map point belonging to the newly added area, and if not, replacing the other map point with either one of the map points.
Optionally, if the map points and/or keyframes of the newly added area meet the preset redundancy condition, deleting the map points and/or keyframes that meet the preset redundancy condition includes:
traversing the map points of the newly added area, and deleting the map points if the visibility rate of the map points is lower than a third preset threshold value; and/or the number of the groups of groups,
traversing the key frames of the newly added area, and removing a key frame if its visible map points overlap those of other key frames by a preset proportion.
Optionally, the correcting the second map after the fusion map creation based on the first positioning information and the third positioning information, to obtain an incremental map includes:
if the confidence of the current key frame in the first positioning information is not smaller than a fourth preset threshold, judging whether the difference between the first positioning information and the third positioning information of the current key frame meets a preset difference condition;
if yes, setting the positioning pose of the current key frame in the first map as the pose of the current key frame in the second map, and completing pose alignment;
updating the visual relationship of the second map based on the visual relationship of the current key frame and the visual map points in the first map;
and performing pose graph optimization and global optimization on the newly added region.
Optionally, the preset difference condition includes at least one of the following conditions:
the translation difference between the pose in the first positioning information and the pose in the third positioning information exceeds a fourth preset threshold;
the rotation angle difference of the pose in the first positioning information and the third positioning information exceeds a preset angle;
the ratio of the number of map points observed by the current key frame in the fixed area of the second map to the absolute value of the number of map points observed by the current key frame in the first map is smaller than a fifth preset threshold.
In a second aspect, the present application provides an incremental mapping apparatus for visual SLAM, which adopts the following technical scheme:
an incremental mapping apparatus for visual SLAM, comprising:
the acquisition module is used for acquiring visual characteristics and odometry data based on the sensor data;
the initialization module is used for acquiring a first map and initializing a second map based on the first map, the visual features and the odometer data;
the co-location module is used for respectively performing co-location in the first map and the initialized second map by utilizing the current frame to obtain first location information and second location information;
The fusion map building module is used for carrying out fusion map building on the initialized second map based on the second positioning information to obtain third positioning information;
and the correction module is used for correcting the second map after the fusion map construction based on the first positioning information and the third positioning information to obtain an incremental map.
In a third aspect, the present application provides a robot, which adopts the following technical scheme:
a robot comprising a memory, a processor, a communication bus, a communication interface, and a sensor device;
the memory, the processor and the communication interface are connected through the communication bus;
the sensor device is connected with the communication interface and is used for collecting sensor data;
the memory has stored thereon a computer program capable of being loaded by the processor and performing the method of any of the first aspects.
In a fourth aspect, the present application provides a computer readable storage medium, which adopts the following technical scheme:
a computer readable storage medium storing a computer program capable of being loaded by a processor and executing the method of any one of the first aspects.
By adopting the above technical scheme, incremental mapping is achieved without changing the scale distribution of the original visual map: collaborative positioning and fusion mapping provide incremental-mapping support for data that localizes strongly on the original map, while for data that localizes weakly on the original map or loses positioning, the correction module aligns the scale distribution of the newly added map to that of the original map, avoiding phenomena such as non-convergence and bifurcation of the incremental map caused by uneven scale distribution, and improving the quality of visual incremental mapping.
Drawings
Fig. 1 is a flow chart of an incremental mapping method of a visual SLAM according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a sub-step of step S200 in the incremental mapping method of the visual SLAM according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating a sub-step of step S400 in the incremental mapping method of the visual SLAM according to an embodiment of the present application.
Fig. 4 is a block diagram of an incremental mapping apparatus for visual SLAM according to an embodiment of the present application.
Fig. 5 is a block diagram of a robot according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application.
Fig. 1 is a flow chart of an incremental mapping method of a visual SLAM according to the present embodiment. As shown in fig. 1, the main flow of the method is described as follows (steps S100 to S500):
step S100, visual characteristics and odometry data are acquired based on the sensor data;
In this embodiment, the sensor data include image data and scale observation data. The image data may be monocular, binocular, or multi-camera, and may be collected by an image acquisition device such as a standard camera or a fisheye camera.
Visual features and descriptors are extracted from the image data: the features may be FAST corners, Harris corners, SIFT, SURF, or variants thereof, and the descriptors may be HOG descriptors, BRIEF descriptors, and the like; deep-learning descriptors may also be used. To ensure descriptor extraction efficiency, the combination of FAST corners and BRIEF descriptors may be selected to describe the visual features.
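As an illustrative sketch (not part of the patent), the FAST + BRIEF combination can be exercised with OpenCV's ORB, which couples a FAST detector with rotation-aware BRIEF descriptors; the function and parameter names below are OpenCV's, everything else is an assumption:

```python
# Sketch: FAST corners + BRIEF-style binary descriptors via OpenCV's ORB.
import cv2

def extract_visual_features(image_bgr, n_features=1000):
    """Detect keypoints and compute binary descriptors for one frame."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)  # FAST detector + rotated BRIEF
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors  # descriptors: N x 32 uint8 (256-bit binary)
```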
The scale observation data are acquired by a scale-observation odometer, which may be composed of one or more of a wheel encoder, a wheel encoder plus an inertial measurement unit, an RTK, or a GPS; any configuration that yields an odometer with true scale information suffices, and this embodiment is not specifically limited in this regard.
Taking a wheel encoder plus an inertial measurement unit as an example, displacement is acquired through the wheel encoder and rotation through the inertial measurement unit, and together they form the odometer. As an effective sensor for the relative positioning of a mobile robot, the odometry data provides real-time pose information for the robot and is used for subsequent scale recovery.
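A minimal dead-reckoning sketch of such a wheel-encoder-plus-IMU odometer, under the assumption of planar motion (the class and its update interface are illustrative, not the patent's):

```python
# Sketch: planar dead reckoning from wheel-encoder displacement and IMU rotation.
import numpy as np

class WheelImuOdometry:
    def __init__(self):
        self.x = self.y = self.yaw = 0.0  # pose in the odometry frame

    def update(self, encoder_distance, imu_yaw_rate, dt):
        """encoder_distance: metres travelled since the last update;
        imu_yaw_rate: angular velocity (rad/s) from the IMU gyroscope."""
        self.yaw += imu_yaw_rate * dt                  # rotation from the IMU
        self.x += encoder_distance * np.cos(self.yaw)  # displacement from the encoder
        self.y += encoder_distance * np.sin(self.yaw)
        return np.array([self.x, self.y, self.yaw])    # real-scale pose estimate
```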
Step S200, a first map is obtained, and a second map is initialized based on the first map, the visual features and the odometry data;
In this embodiment, the original map serving as the reference for incremental mapping is referred to as the first map, and the incremental map as the second map. The first map is provided by another mapping module and is constructed from mapping data such as images and wheel speeds.
The initialization of the second map is the starting procedure of incremental mapping; the only requirement is that the current frame obtains a high-confidence positioning on the first map.
As an alternative implementation of the present embodiment, as shown in fig. 2, step S200 includes:
step S210, constructing a fixed area of a second map based on the key frames and map points in the first map;
The first map is used to initialize the second map: the key frames and map points in the first map are copied into the second map to form the fixed area of the second map, so the elements of the first map and the elements of the fixed area of the second map are in one-to-one correspondence.
Step S220, a temporary map is established based on the visual characteristics and the odometry data;
Semantic matching and geometric constraint checking are carried out on the image feature points of the current frame and the previous frame to obtain a first matching relationship; the relative pose provided by the odometer for the current frame and the previous frame is calculated based on the first matching relationship, and the temporary map is initialized by triangulation; key frames and map points of the newly added region of the second map are then established using the matching relation between the current frame and the reference key frame and the relative pose provided by the odometer for the current frame and the reference key frame.
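To make the triangulation step concrete, the following sketch triangulates matched feature points given the odometer-provided relative pose; cv2.triangulatePoints is OpenCV's linear triangulation, while the argument layout here is an assumption about how the poses are stored:

```python
# Sketch: triangulating matched features between the current and previous frame.
import cv2
import numpy as np

def triangulate_matches(K, T_prev, T_curr, pts_prev, pts_curr):
    """K: 3x3 camera intrinsics; T_prev, T_curr: 3x4 world-to-camera [R|t]
    matrices (relative motion supplied by the odometer); pts_*: 2xN arrays
    of matched pixel coordinates."""
    P_prev = K @ T_prev                  # 3x4 projection matrices
    P_curr = K @ T_curr
    pts_h = cv2.triangulatePoints(P_prev, P_curr, pts_prev, pts_curr)  # 4xN
    return (pts_h[:3] / pts_h[3]).T      # N x 3 map points in world coordinates
```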
In this embodiment, a frame denotes the data structure generated from each image, and a key frame is a frame that is stored in the map after screening. Although an ordinary frame does not exist in the map, it can still be localized against the map. For a frame, the key frame with which it shares the greatest covisibility is called its reference key frame.
Step S230, repositioning in the second map based on the visual features;
First, key frames similar to the current frame are retrieved as candidate key frames, and semantic matching and geometric constraint checking are performed on the image feature points of the current frame and the candidate key frames to obtain a second matching relationship. Candidate key frames may be screened with a bag-of-words model, the VLAD family of methods, and the like; this embodiment is not specifically limited.
Then, the relative pose provided by the odometer for the current frame and the candidate key frame is calculated based on the second matching relationship; the relative pose may be computed by RANSAC+PnP or a similar method, and this embodiment is not specifically limited.
A local map of the second map is then obtained based on the relative pose provided by the odometer for the current frame and the candidate key frame, and the pose of the current frame is optimized against the local map to complete the repositioning operation.
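A sketch of the RANSAC+PnP step mentioned above, using OpenCV's solvePnPRansac on the 2D-3D matches between the current frame and the candidate key frame's map points (the reprojection threshold is an assumed value):

```python
# Sketch: current-frame pose from 2D-3D matches via RANSAC + PnP.
import cv2
import numpy as np

def relocalize_pnp(K, map_points_3d, image_points_2d):
    """map_points_3d: Nx3 map points of the candidate key frame matched to the
    current frame; image_points_2d: Nx2 pixel observations in the current frame."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(map_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        K, distCoeffs=None, reprojectionError=3.0)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)       # rotation vector -> rotation matrix
    return R, tvec, inliers          # world-to-camera pose plus the inlier set
```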
Step S240, if the positioning confidence of the current frame in the second map exceeds a first preset threshold τ, aligning the second map with the temporary map;
In this embodiment, the number of map points observed by the current frame in the second map may be used as the measure of positioning confidence: if the number of map points observed by the current frame in the second map exceeds the first preset threshold τ, the positioning is considered to have high confidence. The relative pose $T_{rel}$ of the temporary map and the second map is then calculated from the positioning pose $T_{map}$ of the current frame in the second map and the pose $T_{tmp}$ of the current frame in the temporary map (with camera-to-map pose conventions, $T_{rel} = T_{map} T_{tmp}^{-1}$), and the temporary map is rotated into the coordinate system of the second map based on $T_{rel}$.
Optionally, the first preset threshold τ is calculated as follows:
positioning is performed on the first map using the mapping data of the first map, and the number of map points observed by each frame is recorded to generate a set $N$; the first preset threshold τ is then calculated from the mean $\mu_N$ and standard deviation $\sigma_N$ of the set $N$ as $\tau = \mu_N - 3\sigma_N$. Since only the positioning operation is performed on the first map, only the image data among the mapping data is required.
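A small sketch of this three-sigma rule over the recorded observation counts (the names are illustrative):

```python
# Sketch: tau = mu_N - 3 * sigma_N over per-frame observed-map-point counts.
import numpy as np

def first_preset_threshold(observed_point_counts):
    """observed_point_counts: for each mapping frame re-localized on the first
    map, the number of map points it observed (the set N in the text)."""
    counts = np.asarray(observed_point_counts, dtype=np.float64)
    return counts.mean() - 3.0 * counts.std()
```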
When the positioning confidence does not meet the condition, that is, when the number of map points observed by the current frame in the second map does not exceed the first preset threshold τ, the above operation is repeated until a high-confidence positioning is obtained, after which the alignment is carried out.
And step S250, fusing and replacing the second map after the alignment processing with the temporary map.
The map point set $\{p^{map}_i\}_{i=1}^{N}$ observed by the current frame in the second map and the map point set $\{p^{tmp}_i\}_{i=1}^{N}$ observed by the current frame in the temporary map are fused and replaced, and local optimization is performed to complete the initialization of the second map.
Here $N$ is the number of feature points extracted from the current frame, and $p^{map}_i$ and $p^{tmp}_i$ denote the map points in the second map and the temporary map associated with the $i$-th feature point of the current frame; $p^{tmp}_i$ is empty when the $i$-th feature point of the current frame has no corresponding map point in the temporary map.
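A sketch of this per-feature fusion and replacement, assuming a map-point object with an observation record (all names are stand-ins for the patent's data structures):

```python
# Sketch: fuse the per-feature map points seen in the second map with those
# seen in the temporary map; where both exist, the second map's point wins.
def fuse_and_replace(second_pts, temp_pts):
    """One entry per current-frame feature (length N); None marks a feature
    with no associated map point in that map."""
    fused = []
    for p_map, p_tmp in zip(second_pts, temp_pts):
        if p_map is not None and p_tmp is not None:
            p_map.observations.update(p_tmp.observations)  # merge observations
            fused.append(p_map)                            # replace temp point
        else:
            fused.append(p_map if p_map is not None else p_tmp)
    return fused
```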
Step S300, respectively carrying out cooperative positioning in a first map and an initialized second map by utilizing the current frame to obtain first positioning information and second positioning information;
This step gives an initial pose for the second map that conforms to the scale of the first map, facilitating the subsequent mapping operation, and provides the reference pose in the first map for the subsequent correction operation.
First, positioning is performed on the first map using the visual features to obtain the positioning information $T_1$ of the current frame in the first map. In a common positioning method, the relative pose between the current frame and a reference key frame is obtained by RANSAC+PnP or a similar method, the initial value of the current frame is determined from the pose of the reference key frame, and positioning is then completed by optimization using the matching relation between the points in the local map and the current frame.
If the confidence $c_1$ of the positioning information $T_1$ is greater than a second preset threshold $\tau_C$, $T_1$ is taken as the initial positioning information of the current frame in the second map, and the initial positioning information is optimized with the local map of the second map to obtain the positioning information $T_2$ of the current frame in the second map.
Since the second map contains all the information of the first map, that is, the elements of the first map and those of the fixed area of the second map are in one-to-one correspondence, the positioning result of the current frame on the first map can serve as the initial value for positioning on the second map, and this operation always succeeds.
If the confidence $c_1$ is not greater than the second preset threshold $\tau_C$, the initial pose of the current frame in the second map is obtained from the visual features and the odometry data, and positioning is carried out on this basis using the matching relation between the local map of the second map and the current frame, yielding the positioning information $T_2$ of the current frame in the second map. Since the second map contains the newly added area, the confidence $c_2$ of $T_2$ is always high, and the positioning result can be trusted.
The positioning information $T_2$ is then taken as the initial positioning information of the current frame in the first map and optimized with the local map of the first map to obtain the positioning information $\tilde{T}_1$ of the current frame in the first map. If the confidence $\tilde{c}_1$ of $\tilde{T}_1$ is greater than the confidence $c_1$, $\tilde{T}_1$ is taken as the first positioning information and $T_2$ as the second positioning information.
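The branching of this co-localization step can be summarized in the following sketch; the injected callables stand in for the localization primitives described above (assumed interfaces, not the patent's API):

```python
# Sketch: co-localization on the first map and the initialized second map.
def co_localize(frame, first_map, second_map, tau_c,
                localize, optimize_local, predict_pose):
    T1, c1 = localize(first_map, frame)      # pose + confidence on the first map
    if c1 > tau_c:
        T2, _ = optimize_local(second_map, frame, T1)    # strong localization
    else:
        init = predict_pose(frame)                       # visual features + odometry
        T2, _ = optimize_local(second_map, frame, init)  # weak localization
    # feed the second-map result back into the first map as an initial value
    T1_ref, c1_ref = optimize_local(first_map, frame, T2)
    first_info = T1_ref if c1_ref > c1 else T1
    return first_info, T2                    # first / second positioning information
```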
Step S400, merging and mapping the initialized second map based on the second positioning information to obtain third positioning information;
as an alternative implementation of the present embodiment, as shown in fig. 3, step S400 includes:
step S410, establishing map points of the newly added area based on the second positioning information and the key frames in the fixed area;
The image feature points of the current key frame and its adjacent key frames in the fixed area are matched based on semantic conditions and geometric constraint conditions. For the current key frame $K_i$ and an adjacent key frame $K_j$, let $f^i_m$ denote the $m$-th feature of $K_i$ and $f^j_n$ the $n$-th feature of $K_j$. When the Hamming distance between the two feature descriptors is below a given threshold, the two features are determined to satisfy the semantic condition.
In the geometric constraint discrimination, the positions of the feature points in the images, the camera intrinsics, and the relative pose of the two frames are known, and the Sampson error is used to determine whether features $f^i_m$ and $f^j_n$ satisfy the geometric constraint. When both the semantic condition and the geometric constraint are satisfied, features $f^i_m$ and $f^j_n$ are considered matched.
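For reference, the Sampson error of a putative match under a fundamental matrix F (composed from the camera intrinsics and the odometer-provided relative pose) can be sketched as follows; the thresholding policy is an assumption:

```python
# Sketch: Sampson error of a putative feature match under fundamental matrix F.
import numpy as np

def sampson_error(F, x1, x2):
    """x1, x2: homogeneous pixel coordinates (3-vectors) of the matched features."""
    Fx1 = F @ x1                     # epipolar line of x1 in the second image
    Ftx2 = F.T @ x2                  # epipolar line of x2 in the first image
    num = float(x2 @ F @ x1) ** 2
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return num / den                 # small value => epipolar constraint holds
```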
In this embodiment, an adjacent key frame refers to a key frame that shares a certain number of visible map points with the current key frame.
For two successfully matched image feature points: if the map points corresponding to both image feature points are empty, a map point p1 corresponding to the two feature points is obtained from the positions of the two feature points in the two images and the poses of the two images, p1 is assigned to both image feature points, and p1 is added to the newly added area; if exactly one of the two corresponding map points is empty, the non-empty map point is assigned to the empty one; if both corresponding map points are non-empty, it is judged whether one of them belongs to the fixed area: if so, the map point belonging to the fixed area replaces the other, and if not, either map point may replace the other.
Step S420, establishing map points of the newly added area based on the second positioning information and the key frames of the newly added area;
Map points are established in two stages: screening the matched feature points and triangulating the matched feature points.
First, image feature points of the current key frame and adjacent key frames in the newly added area are matched based on semantic conditions and geometric constraint conditions.
For two successfully matched image feature points: if the map points corresponding to both image feature points are empty, a map point p2 corresponding to the two feature points is obtained from the positions of the two feature points in the two images and the poses of the two images, p2 is assigned to both image feature points, and p2 is added to the newly added area; if exactly one of the two corresponding map points is empty, the non-empty map point is assigned to the empty one; if both corresponding map points are non-empty, it is judged whether one of them belongs to the newly added area: if so, the map point belonging to the newly added area replaces the other, and if not, either map point may replace the other.
Step S430, if the map points and/or the key frames of the newly added area meet the preset redundancy conditions, deleting the map points and/or the key frames meeting the preset redundancy conditions;
Specifically, the map points of the newly added area are traversed; if the visibility rate of a map point is below a third preset threshold, the map point is considered to carry little information and is deleted. And/or the key frames of the newly added area are traversed; if the map points visible to a key frame overlap those of other key frames beyond a preset proportion, e.g. 95%, the key frame is removed to reduce map redundancy.
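A sketch of this culling pass; the 95% overlap figure follows the text, while the visibility threshold and the accessors on map points and key frames are assumptions:

```python
# Sketch: cull low-information map points and redundant key frames.
def cull_redundancy(new_area, visibility_thresh=0.25, overlap_thresh=0.95):
    for p in list(new_area.map_points):
        # fraction of frames that should see the point and actually matched it
        if p.found_count / max(p.visible_count, 1) < visibility_thresh:
            new_area.remove_map_point(p)          # carries too little information
    for kf in list(new_area.keyframes):
        for other in new_area.keyframes:
            if other is kf:
                continue
            shared = len(kf.visible_points & other.visible_points)
            if shared / max(len(kf.visible_points), 1) >= overlap_thresh:
                new_area.remove_keyframe(kf)      # redundant with another key frame
                break
```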
Step S440, the pose of the newly added region is locally optimized, and third positioning information is obtained.
During local optimization, the fixed area in the second map is locked: it participates in the pose optimization computation, but its poses are not updated. The poses of the newly added area in the second map are updated by local optimization, and the third positioning information is output.
Through step S400, the current key frame is triangulated with key frames of different priorities, ensuring that the newly added key frames and map points adhere more closely to the scale of the first map.
And S500, correcting the second map after the fusion map building based on the first positioning information and the third positioning information to obtain an incremental map.
Map correction can be divided into three parts: pose alignment, visual relationship updating, and pose graph optimization together with global optimization. The key frames involved in the alignment and visual relationship adjustment are those of the current and neighboring frames that have high-confidence positioning in the first map; both come from the newly added portion of the second map, and the set of these key frames is denoted $\mathcal{K}_{align}$.
First, it is judged whether the confidence of the current key frame in the first positioning information is not smaller than a fourth preset threshold; if so, it is judged whether the difference between the first positioning information and the third positioning information of the current key frame meets a preset difference condition.
Optionally, the preset difference condition includes at least one of the following conditions (a sketch of these checks follows the list):
(1) the translation difference between the poses in the first positioning information and the third positioning information exceeds a fourth preset threshold; for example, the fourth preset threshold may be set to 0.5;
(2) the rotation angle difference between the poses in the first positioning information and the third positioning information exceeds a preset angle; for example, the preset angle may be set to 3 degrees;
(3) the ratio of the number of map points observed by the current key frame in the fixed area of the second map to the absolute value of the number of map points observed by the current key frame in the first map is smaller than a fifth preset threshold.
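A sketch of the three checks on 4x4 homogeneous poses; the 0.5 m and 3-degree values follow the examples above, while the ratio threshold is an assumed placeholder:

```python
# Sketch: does the first/third positioning information diverge beyond the limits?
import numpy as np

def poses_diverge(T_first, T_third, n_fixed_obs, n_first_obs,
                  trans_thresh=0.5, angle_thresh_deg=3.0, ratio_thresh=0.8):
    dT = np.linalg.inv(T_first) @ T_third
    trans_diff = np.linalg.norm(dT[:3, 3])                        # condition (1)
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_diff = np.degrees(np.arccos(cos_angle))                 # condition (2)
    ratio = n_fixed_obs / max(abs(n_first_obs), 1)                # condition (3)
    return (trans_diff > trans_thresh
            or angle_diff > angle_thresh_deg
            or ratio < ratio_thresh)
```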
If the preset difference condition is met, the positioning pose of the current key frame in the first map is set as the pose of that key frame in the second map, completing the pose alignment. Specifically, for key frame $K_i$, its positioning pose $T^{1}_i$ in the first map is set as its pose $T^{2}_i$ in the second map, i.e., $T^{2}_i \leftarrow T^{1}_i$.
Then, the visual relationship of the second map is updated based on the visual relationship of the current key frame with the visual map points in the first map.
Specifically, the visible map points of key frame $K_i$ in the first map, $\{p^{1}_{i,j}\}_{j=1}^{N_i}$, are fused with its visible map points in the second map, $\{p^{2}_{i,j}\}_{j=1}^{N_i}$, where $N_i$ is the number of feature points of key frame $K_i$, and $p^{1}_{i,j}$ and $p^{2}_{i,j}$ denote the visible map points in the first map and the second map corresponding to the $j$-th feature point of $K_i$.
If $p^{1}_{i,j}$ exists, let $q$ denote its corresponding map point in the second map; if $p^{2}_{i,j}$ comes from the newly added area of the second map and differs from $q$, then $q$ replaces $p^{2}_{i,j}$, and $p^{2}_{i,j}$ is removed from the map.
Finally, pose graph optimization and global optimization are performed on the newly added region.
In the pose graph optimization, the key frame set of the second map and its fixed part are obtained, and the poses of the fixed key frame set and of the current key frame $K_c$ are held fixed. The optimization uses the pre-update poses as observations and the updated poses as targets; the objective function is as follows:
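A standard pose-graph objective consistent with this description, with $\bar{T}_i$ denoting the pre-update poses, $T_i$ the updated poses, and $\mathcal{E}$ the covisibility edge set (all notation assumed here, as the patent's original formula is rendered as an image), would be:

$$\min_{\{T_i\}} \sum_{(i,j)\in\mathcal{E}} \left\| \log\!\left( \hat{T}_{ij}^{-1}\, T_i^{-1} T_j \right)^{\vee} \right\|^2, \qquad \hat{T}_{ij} = \bar{T}_i^{-1}\, \bar{T}_j ,$$

with the poses of the fixed key frame set held constant during the optimization.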
In the global optimization, the poses of the key frames of the second map and the map points are taken as optimization variables, the visual relationships between the key frames and the map points are taken as constraints, and the reprojection error is used as the objective function for local map optimization. During the optimization, the poses of the key frames and the positions of the map points of the fixed part of the second map are held fixed. The objective function is as follows:

$$\min_{\{T_i\},\{p_j\}} \sum_{i=1}^{N_K} \sum_{j=1}^{N_P} \left\| z_{ij} - g(T_i, p_j) \right\|^2$$

where $N_K$ denotes the number of key frames in the second map, $N_P$ the number of map points in the second map, $z_{ij}$ the measurement of the $j$-th map point observed by the $i$-th key frame, $T_i$ the pose of the $i$-th key frame, $p_j$ the $j$-th map point, and $g(T_i, p_j)$ the observation of map point $p_j$ under pose $T_i$.
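A direct sketch of this objective in code, evaluating the summed squared reprojection error with $g(T_i, p_j)$ as a pinhole projection (the data layout is an assumption):

```python
# Sketch: sum of squared reprojection errors z_ij - g(T_i, p_j).
import numpy as np

def reprojection_cost(K, keyframe_poses, map_points, observations):
    """observations: dict {(i, j): z_ij} with z_ij a 2D measurement of map
    point j in key frame i; keyframe_poses: 3x4 world-to-camera [R|t] matrices."""
    cost = 0.0
    for (i, j), z_ij in observations.items():
        p_h = np.append(map_points[j], 1.0)        # homogeneous 3D point
        proj = K @ (keyframe_poses[i] @ p_h)       # g(T_i, p_j) before normalization
        residual = z_ij - proj[:2] / proj[2]       # pixel-space reprojection error
        cost += float(residual @ residual)
    return cost
```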
The pose graph optimization and global optimization are generic methods; in practice they optimize the local map corresponding to the key frames from the current key frame back to the last new key frame with high-confidence positioning, and the poses and positions of the fixed part of the local map are held fixed during the optimization.
This correction method handles scale changes of the newly built map caused by poor positioning on the first map due to illumination and environment differences, by short-term positioning loss, or by long-term positioning loss as the mapped area expands, improving the robustness of the system.
It should be noted that the method of calculating the confidence threshold varies with the confidence calculation method, and even for a single confidence calculation method there may be multiple residual-based schemes, such as the three-sigma criterion under a Gaussian distribution or hypothesis testing. The main visual SLAM framework used in the embodiment of the application may be ORB_SLAM, VINS, or another such framework; the embodiment is not specifically limited, as long as the SLAM framework used for incremental mapping is consistent with the framework used to construct the original map.
Fig. 4 is a block diagram illustrating a configuration of an incremental mapping apparatus 600 for visual SLAM according to an embodiment of the present application. As shown in fig. 4, the incremental mapping apparatus 600 of the visual SLAM mainly includes:
an acquisition module 610 for acquiring visual features and odometry data based on the sensor data;
an initialization module 620, configured to acquire a first map and initialize a second map based on the first map, the visual features, and the odometry data;
the co-location module 630 is configured to perform co-location on the first map and the initialized second map by using the current frame, so as to obtain first location information and second location information;
the fusion mapping module 640 is configured to perform fusion mapping on the initialized second map based on the second positioning information, so as to obtain third positioning information;
the correction module 650 is configured to correct the second map after the map is merged based on the first positioning information and the third positioning information, so as to obtain an incremental map.
The functional modules in the embodiments of the present application may be integrated together to form an independent part, for example integrated into one processing unit; or each module may exist physically alone; or two or more modules may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit. If implemented in the form of software functional modules and sold or used as a stand-alone product, the functions may be stored on a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the prior art, or in part, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.
The various modifications and specific examples of the method provided in the foregoing embodiment also apply to the incremental mapping device for visual SLAM provided in this embodiment; from the detailed description above of the incremental mapping method for visual SLAM, those skilled in the art can clearly know the implementation of the incremental mapping device, so for brevity it is not detailed here.
Fig. 5 is a block diagram of a robot 700 according to an embodiment of the present application. As shown in fig. 5, the robot 700 includes a memory 701, a processor 702, a communication bus 703, a communication interface 704, a sensor device 705; the memory 701, the processor 702, and the communication interface 704 are connected by a communication bus 703.
The sensor device 705 is connected to the communication interface 704 for collecting sensor data. The sensor device 705 may include, but is not limited to, an image acquisition device, which may be a conventional camera, a fisheye camera, etc., and a scale observation odometer, which may be comprised of one or more of a wheel encoder, a wheel encoder + inertial measurement unit, an RTK, a GPS.
Memory 701 may be used to store instructions, programs, code, sets of codes, or instruction sets. The memory 701 may include a storage program area and a storage data area, wherein the storage program area may store instructions for implementing an operating system, instructions for at least one function, instructions for implementing the incremental mapping method of the visual SLAM provided by the above-described embodiments, and the like; the storage data area may store data and the like involved in the incremental mapping method of the visual SLAM provided in the above embodiment.
The processor 702 may include one or more processing cores. The processor 702 performs the various functions of the application and processes data by executing or executing instructions, programs, code sets, or instruction sets stored in the memory 701, calling data stored in the memory 701. The processor 702 may be at least one of an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a digital signal processing device (Digital Signal Processing Device, DSPD), a programmable logic device (Programmable Logic Device, PLD), a field programmable gate array (Field Programmable Gate Array, FPGA), a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronics for implementing the functions of the processor 702 described above may be other for different devices, and embodiments of the present application are not particularly limited.
Communication bus 703 may include a path to transfer information between the aforementioned components. The communication bus 703 may be a PCI (Peripheral Component Interconnect, peripheral component interconnect standard) bus or an EISA (Extended Industry Standard Architecture ) bus, or the like. The communication bus 703 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one double arrow is shown in FIG. 5, but not only one bus or one type of bus.
Among them, robots include, but are not limited to: an autopilot vehicle, a logistics robot, a sweeping robot, a mobile robot, and an AR robot. The robot shown in fig. 5 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present application.
An embodiment of the present application provides a computer-readable storage medium storing a computer program capable of being loaded by a processor and executing the incremental mapping method of visual SLAM as provided in the above embodiment.
In this embodiment, the computer-readable storage medium may be a tangible device that holds and stores instructions for use by an instruction execution device. The computer-readable storage medium may be, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any combination of the foregoing. In particular, the computer-readable storage medium may be a portable computer disk, a hard disk, a USB flash disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, an optical disk, a magnetic disk, a mechanical coding device, or any combination of the foregoing.
The computer program in this embodiment contains program code for executing the method shown in fig. 1, and the program code may include instructions corresponding to the execution of the steps of the method provided in the above embodiment. The computer program may be downloaded from a computer readable storage medium to the respective computing/processing device or to an external computer or external storage device via a network (e.g., the internet, a local area network, a wide area network, and/or a wireless network). The computer program may execute entirely on the user's computer and as a stand-alone software package.
In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
In addition, it is to be understood that relational terms such as first and second are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An incremental mapping method of a visual SLAM, comprising:
acquiring visual features and odometry data based on the sensor data;
Acquiring a first map, and initializing a second map based on the first map, the visual features and the odometry data;
respectively carrying out cooperative positioning in the first map and the initialized second map by utilizing the current frame to obtain first positioning information and second positioning information;
carrying out fusion mapping on the initialized second map based on the second positioning information to obtain third positioning information;
correcting the second map after the fusion map building based on the first positioning information and the third positioning information to obtain an incremental map;
the initializing a second map based on the first map, the visual features, and the odometry data includes:
constructing a fixed area of a second map based on the key frames and map points in the first map;
establishing a temporary map based on the visual features and the odometry data;
repositioning in the second map based on the visual features;
if the positioning confidence of the current frame in the second map exceeds a first preset threshold τ, aligning the second map with the temporary map;
fusing and replacing the aligned second map and the temporary map;
The establishing a temporary map based on the visual features and the odometry data includes:
carrying out semantic matching and geometric constraint inspection on image feature points of a current frame and a previous frame to obtain a first matching relationship;
calculating the relative pose provided by the odometer corresponding to the current frame and the previous frame based on the first matching relation, and initializing the temporary map by using a triangulation method;
establishing a key frame and map points of a new region of the second map by utilizing the matching relation between the current frame and the reference key frame and the relative pose provided by the odometer corresponding to the current frame and the reference key frame;
the step of respectively performing co-location in the first map and the initialized second map by using the current frame to obtain first location information and second location information includes:
positioning on the first map by utilizing the visual features to obtain positioning information $T_1$ of the current frame in the first map;
if the confidence $c_1$ of the positioning information $T_1$ is greater than a second preset threshold $\tau_C$, taking $T_1$ as initial positioning information of the current frame in the second map, and optimizing the initial positioning information with the local map of the second map to obtain positioning information $T_2$ of the current frame in the second map;
if the confidence $c_1$ of the positioning information $T_1$ is not greater than the second preset threshold $\tau_C$, obtaining an initial pose of the current frame in the second map based on the visual features and the odometry data, and positioning on the basis of the initial pose by utilizing the matching relation between the local map of the second map and the current frame to obtain the positioning information $T_2$ of the current frame in the second map;
taking the positioning information $T_2$ as initial positioning information of the current frame in the first map, and optimizing the initial positioning information with the local map of the first map to obtain positioning information $\tilde{T}_1$ of the current frame in the first map;
if the confidence $\tilde{c}_1$ of the positioning information $\tilde{T}_1$ is greater than the confidence $c_1$, taking $\tilde{T}_1$ as the first positioning information and $T_2$ as the second positioning information;
the merging and mapping the initialized second map based on the second positioning information to obtain third positioning information includes:
establishing map points of the newly added area based on the second positioning information and key frames in the fixed area;
establishing map points of the newly added area based on the second positioning information and key frames of the newly added area;
If the map points and/or the key frames of the newly added area meet the preset redundancy conditions, deleting the map points and/or the key frames meeting the preset redundancy conditions;
locally optimizing the pose of the newly added region to obtain the third positioning information;
correcting the second map after the fusion map building based on the first positioning information and the third positioning information, wherein obtaining the incremental map comprises the following steps:
if the confidence of the current key frame in the first positioning information is not smaller than a fourth preset threshold, judging whether the difference between the first positioning information and the third positioning information of the current key frame meets a preset difference condition;
if yes, setting the positioning pose of the current key frame in the first map as the pose of the current key frame in the second map, and completing pose alignment;
updating the covisibility relationship of the second map based on the covisibility between the current key frame and the visible map points in the first map;
and performing pose graph optimization and global optimization on the newly added region (a condensed sketch of this correction pass follows below).
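A hedged rendering of the whole correction pass; the difference predicate, covisibility update, and both optimizers are injected as placeholders because the claim names the steps but not their implementations:

```python
from dataclasses import dataclass

@dataclass
class Localization:
    pose: object          # e.g. a 4x4 SE(3) matrix
    confidence: float

def correct_incremental_map(new_keyframes, first_info, third_info, tau_conf,
                            pose_differs, update_covisibility,
                            optimize_pose_graph, optimize_globally):
    # Walk the keyframes of the newly added region and re-anchor those that
    # localized confidently in the first map but drifted in the second.
    for kf in new_keyframes:
        loc1 = first_info[kf.id]      # first positioning information
        loc3 = third_info[kf.id]      # third positioning information
        if loc1.confidence < tau_conf:
            continue                  # below the fourth preset threshold
        if pose_differs(loc1.pose, loc3.pose):
            kf.pose = loc1.pose       # pose alignment: adopt first-map pose
            update_covisibility(kf)   # relink covisible first-map points
    optimize_pose_graph(new_keyframes)   # pose graph optimization
    optimize_globally(new_keyframes)     # followed by global optimization
```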
2. The method of claim 1, wherein the repositioning in the second map based on the visual features comprises:
searching for key frames similar to the current frame as candidate key frames, and carrying out semantic matching and geometric constraint checking on the image feature points of the current frame and the candidate key frames to obtain a second matching relation;
based on the second matching relation, calculating the relative pose provided by the odometer between the current frame and the candidate key frame;
and acquiring a local map of the second map based on that relative pose, and optimizing the pose of the current frame against the local map (an illustrative candidate search follows below).
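The claim leaves the similarity search open; as a purely illustrative stand-in for, e.g., a bag-of-words lookup, candidate key frames can be ranked by mean-descriptor distance:

```python
import numpy as np

def find_candidate_keyframes(curr_descriptors, keyframe_descriptors, top_k=3):
    """Rank keyframes by similarity to the current frame.

    curr_descriptors: (M, D) feature descriptors of the current frame.
    keyframe_descriptors: dict mapping keyframe id -> (Mi, D) descriptors.
    """
    query = curr_descriptors.mean(axis=0)
    scored = sorted(
        keyframe_descriptors.items(),
        key=lambda kv: np.linalg.norm(kv[1].mean(axis=0) - query),
    )
    return [kf_id for kf_id, _ in scored[:top_k]]
```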
3. The method according to claim 1, wherein the aligning the second map with the temporary map if the positioning confidence of the current frame in the second map exceeds a first preset threshold τ comprises:
if the number of map points observed by the current frame in the second map exceeds the first preset threshold τ, calculating the relative pose T_rel between the temporary map and the second map based on the positioning pose T2 of the current frame in the second map and the pose Tt of the current frame in the temporary map;
and, based on the relative pose T_rel, rotating the temporary map into the coordinate system of the second map (a worked sketch follows below).
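A worked sketch of the alignment, assuming both poses are camera-to-world 4x4 homogeneous matrices (a convention the claim does not fix):

```python
import numpy as np

def align_temporary_map(T_cam_in_map2, T_cam_in_tmp, tmp_points):
    """Compute T_rel taking temporary-map coordinates into the second map's
    frame, then apply it to the temporary-map points.

    T_cam_in_map2, T_cam_in_tmp: 4x4 camera-to-world poses of the same
        current frame in the two maps.
    tmp_points: (N, 3) temporary-map points.
    """
    T_rel = T_cam_in_map2 @ np.linalg.inv(T_cam_in_tmp)
    pts_h = np.hstack([tmp_points, np.ones((len(tmp_points), 1))])
    return T_rel, (T_rel @ pts_h.T).T[:, :3]
```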
4. The method of claim 1, wherein the establishing map points of the newly added area based on the second positioning information and key frames in the fixed area comprises:
matching the current key frame with the image feature points of the adjacent key frames in the fixed area based on semantic conditions and geometric constraint conditions;
for two image feature points successfully matched: if the map points corresponding to both image feature points are empty, acquiring a map point p1 from the positions of the two image feature points in the two frames of images and the poses of those two frames, assigning the map point p1 to both image feature points, and adding the map point p1 to the newly added area; if exactly one of the two corresponding map points is empty, assigning the non-empty map point to the feature point whose map point is empty; if both corresponding map points are non-empty, judging whether either of them belongs to the fixed area, and if so, replacing the other map point with the one belonging to the fixed area, otherwise picking either one of the two to replace the other (the fusion rule is sketched below).
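The three-way fusion rule could be rendered as follows; Feature, triangulate, and the area sets are hypothetical stand-ins, not identifiers from the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(eq=False)
class Feature:
    point: Optional[object] = None   # associated map point, if any

def fuse_matched_pair(fp_a, fp_b, triangulate, fixed_area, new_area):
    """Apply the three-way fusion rule to one successfully matched pair."""
    if fp_a.point is None and fp_b.point is None:
        p = triangulate(fp_a, fp_b)          # new point from both observations
        fp_a.point = fp_b.point = p
        new_area.add(p)
    elif fp_a.point is None or fp_b.point is None:
        # Exactly one side already has a map point: share it with the other.
        shared = fp_a.point if fp_a.point is not None else fp_b.point
        fp_a.point = fp_b.point = shared
    else:
        # Both sides have points: prefer the one anchored in the fixed area,
        # otherwise either one may replace the other.
        keep = fp_a.point if fp_a.point in fixed_area else fp_b.point
        fp_a.point = fp_b.point = keep
```

Claim 5 below applies the same rule within the newly added area, preferring the map point already registered there.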
5. The method of claim 1, wherein the establishing map points of the newly added area based on the second positioning information and key frames of the newly added area comprises:
matching the image feature points of the current key frame and the adjacent key frames in the newly added area based on semantic conditions and geometric constraint conditions;
for two image feature points successfully matched: if the map points corresponding to both image feature points are empty, acquiring a map point p2 from the positions of the two image feature points in the two frames of images and the poses of those two frames, assigning the map point p2 to both image feature points, and adding the map point p2 to the newly added area; if exactly one of the two corresponding map points is empty, assigning the non-empty map point to the feature point whose map point is empty; if both corresponding map points are non-empty, judging whether either of them belongs to the newly added area, and if so, replacing the other map point with the one belonging to the newly added area, otherwise picking either one of the two to replace the other.
6. The method according to claim 1, wherein if the map points and/or key frames of the newly added area meet a preset redundancy condition, deleting the map points and/or key frames that meet the preset redundancy condition includes:
traversing the map points of the newly added area, and deleting a map point if its visibility rate is lower than a third preset threshold; and/or,
traversing the key frames of the newly added area, and removing a key frame if the map points it observes overlap those observed by other key frames to a preset extent (a culling sketch follows below).
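A hedged sketch of the redundancy pass, interpreting the visibility rate as a found/visible ratio (an assumption) and using an illustrative 0.9 overlap ratio that is not a value from the patent:

```python
from dataclasses import dataclass, field

@dataclass(eq=False)
class MapPoint:
    found: int = 0      # frames that actually matched the point
    visible: int = 0    # frames predicted to observe the point

@dataclass(eq=False)
class KeyFrame:
    points: set = field(default_factory=set)

def cull_redundant(map_points, keyframes, tau_vis, overlap_ratio=0.9):
    """Delete low-visibility map points, then key frames whose visible map
    points are almost entirely covered by the remaining key frames.

    map_points, keyframes: mutable lists, culled in place.
    tau_vis: third preset threshold on the visibility rate.
    """
    map_points[:] = [p for p in map_points
                     if p.visible and p.found / p.visible >= tau_vis]
    for kf in list(keyframes):
        others = set().union(*(k.points for k in keyframes if k is not kf))
        if kf.points and len(kf.points & others) / len(kf.points) >= overlap_ratio:
            keyframes.remove(kf)   # sequential removal, as in the traversal
```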
7. The method of claim 1, wherein the preset difference condition comprises at least one of the following:
the translation difference between the pose in the first positioning information and the pose in the third positioning information exceeds a fourth preset threshold;
the rotation angle difference between the pose in the first positioning information and the pose in the third positioning information exceeds a preset angle;
the absolute value of the ratio of the number of map points observed by the current key frame in the fixed area of the second map to the number of map points observed by the current key frame in the first map is smaller than a fifth preset threshold (the first two conditions are worked below).
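The first two difference conditions admit a direct computation for 4x4 homogeneous poses (the pose convention and threshold names are assumptions):

```python
import numpy as np

def pose_differs(T1, T3, trans_thresh, angle_thresh_deg):
    """Translation and rotation-angle tests on two 4x4 homogeneous poses."""
    dT = np.linalg.inv(T1) @ T3
    trans_diff = np.linalg.norm(dT[:3, 3])
    # Rotation angle recovered from the trace of the relative rotation.
    cos_a = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_diff = np.degrees(np.arccos(cos_a))
    return trans_diff > trans_thresh or angle_diff > angle_thresh_deg
```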
8. An incremental mapping apparatus for visual SLAM, comprising:
the acquisition module is used for acquiring visual features and odometer data based on the sensor data;
the initialization module is used for acquiring a first map and initializing a second map based on the first map, the visual features and the odometer data;
the co-location module is used for respectively performing co-location in the first map and the initialized second map by utilizing the current frame to obtain first location information and second location information;
the fusion map building module is used for carrying out fusion map building on the initialized second map based on the second positioning information to obtain third positioning information;
the correction module is used for correcting the second map after the fusion map building based on the first positioning information and the third positioning information to obtain an incremental map;
the initialization module is specifically configured to: construct a fixed area of the second map based on the key frames and map points in the first map; establish a temporary map based on the visual features and the odometer data; reposition in the second map based on the visual features; if the positioning confidence of the current frame in the second map exceeds a first preset threshold τ, align the second map with the temporary map; and fuse and replace the aligned second map and the temporary map; wherein establishing the temporary map based on the visual features and the odometer data comprises: carrying out semantic matching and geometric constraint checking on the image feature points of the current frame and the previous frame to obtain a first matching relation; based on the first matching relation and the relative pose provided by the odometer between the current frame and the previous frame, initializing the temporary map by triangulation; and establishing key frames and map points of the newly added region of the second map by using the matching relation between the current frame and a reference key frame together with the relative pose provided by the odometer between the current frame and the reference key frame;
the co-location module is specifically configured to: position on the first map by using the visual features to obtain positioning information T1 of the current frame in the first map; if the confidence c1 of the positioning information T1 is greater than a second preset threshold τ_C, take the positioning information T1 as initial positioning information of the current frame in the second map, and optimize the initial positioning information with a local map of the second map to obtain positioning information T2 of the current frame in the second map; if the confidence c1 is not greater than the second preset threshold τ_C, obtain an initial pose of the current frame in the second map based on the visual features and the odometer data, and, on the basis of the initial pose, position with the matching relation between a local map of the second map and the current frame to obtain the positioning information T2; take the positioning information T2 as initial positioning information of the current frame in the first map, and optimize it with a local map of the first map to obtain positioning information T1' of the current frame in the first map; and if the confidence c1' of the positioning information T1' is greater than the confidence c1, take the positioning information T1' as the first positioning information and the positioning information T2 as the second positioning information;
the fusion mapping module is specifically configured to build map points of the newly added area based on the second positioning information and key frames in the fixed area; establishing map points of the newly added area based on the second positioning information and key frames of the newly added area; if the map points and/or the key frames of the newly added area meet the preset redundancy conditions, deleting the map points and/or the key frames meeting the preset redundancy conditions; locally optimizing the pose of the newly added region to obtain the third positioning information;
the correction module is specifically configured to: if the confidence of the current key frame in the first positioning information is not smaller than a fourth preset threshold, judge whether the difference between the first positioning information and the third positioning information of the current key frame meets the preset difference condition; if yes, set the positioning pose of the current key frame in the first map as the pose of the current key frame in the second map to complete pose alignment; update the covisibility relationship of the second map based on the covisibility between the current key frame and the visible map points in the first map; and perform pose graph optimization and global optimization on the newly added region.
9. A robot comprising a memory, a processor, a communication bus, a communication interface, and a sensor device;
the memory, the processor and the communication interface are connected through the communication bus;
the sensor device is connected with the communication interface and is used for collecting sensor data;
the memory has stored thereon a computer program that can be loaded by the processor and that performs the method according to any of claims 1 to 7.
10. A computer readable storage medium, characterized in that a computer program is stored which can be loaded by a processor and which performs the method according to any one of claims 1 to 7.
CN202111153715.7A 2021-09-29 2021-09-29 Incremental mapping method and device for visual SLAM, robot and readable storage medium Active CN113899357B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111153715.7A CN113899357B (en) 2021-09-29 2021-09-29 Incremental mapping method and device for visual SLAM, robot and readable storage medium


Publications (2)

Publication Number Publication Date
CN113899357A (en) 2022-01-07
CN113899357B (en) 2023-10-31

Family

ID=79189423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111153715.7A Active CN113899357B (en) 2021-09-29 2021-09-29 Incremental mapping method and device for visual SLAM, robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN113899357B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107104250B (en) * 2017-04-25 2019-08-16 北京小米移动软件有限公司 The charging method and device of sweeping robot
US11164326B2 (en) * 2018-12-18 2021-11-02 Samsung Electronics Co., Ltd. Method and apparatus for calculating depth map

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109074638A (en) * 2018-07-23 2018-12-21 深圳前海达闼云端智能科技有限公司 Fusion graph building method, related device and computer readable storage medium
CN110000786A (en) * 2019-04-12 2019-07-12 珠海市一微半导体有限公司 A kind of historical map or atlas of view-based access control model robot utilizes method
CN112197764A (en) * 2020-12-07 2021-01-08 广州极飞科技有限公司 Real-time pose determining method and device and electronic equipment
CN112595323A (en) * 2020-12-08 2021-04-02 深圳市优必选科技股份有限公司 Robot and drawing establishing method and device thereof
CN112683273A (en) * 2020-12-21 2021-04-20 广州慧扬健康科技有限公司 Adaptive incremental mapping method, system, computer equipment and storage medium
CN113345032A (en) * 2021-07-07 2021-09-03 北京易航远智科技有限公司 Wide-angle camera large-distortion image based initial image construction method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fast updating of map data based on incremental information technology; Jin Xinyi; Zhao Mengying; Xu Zeshuang; Geomatics & Spatial Information Technology (No. 11); full text *


Similar Documents

Publication Publication Date Title
Cvišić et al. SOFT‐SLAM: Computationally efficient stereo visual simultaneous localization and mapping for autonomous unmanned aerial vehicles
CN109211251B (en) Instant positioning and map construction method based on laser and two-dimensional code fusion
CN112734852B (en) Robot mapping method and device and computing equipment
CN112240768A (en) Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration
WO2021035669A1 (en) Pose prediction method, map construction method, movable platform, and storage medium
CN111402339B (en) Real-time positioning method, device, system and storage medium
CN109443348B (en) Underground garage position tracking method based on fusion of look-around vision and inertial navigation
CN111561923A (en) SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
Cui et al. Efficient large-scale structure from motion by fusing auxiliary imaging information
CN113506342B (en) SLAM omni-directional loop correction method based on multi-camera panoramic vision
CN113256719A (en) Parking navigation positioning method and device, electronic equipment and storage medium
CN112198878B (en) Instant map construction method and device, robot and storage medium
Corke et al. Sensor influence in the performance of simultaneous mobile robot localization and map building
CN113137968B (en) Repositioning method and repositioning device based on multi-sensor fusion and electronic equipment
CN112950710A (en) Pose determination method and device, electronic equipment and computer readable storage medium
CN113822996B (en) Pose estimation method and device for robot, electronic device and storage medium
CN114136315A (en) Monocular vision-based auxiliary inertial integrated navigation method and system
CN113761647B (en) Simulation method and system of unmanned cluster system
CN108827287B (en) Robust visual SLAM system in complex environment
CN111783611B (en) Unmanned vehicle positioning method and device, unmanned vehicle and storage medium
Hong et al. Visual inertial odometry using coupled nonlinear optimization
CN113899357B (en) Incremental mapping method and device for visual SLAM, robot and readable storage medium
CN112348854A (en) Visual inertial mileage detection method based on deep learning
CN115984417A (en) Semantic mapping method and device and storage medium
CN114299192B (en) Method, device, equipment and medium for positioning and mapping

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant