WO2021232470A1 - SLAM mapping method and system based on multi-sensor fusion - Google Patents

SLAM mapping method and system based on multi-sensor fusion Download PDF

Info

Publication number
WO2021232470A1
WO2021232470A1 PCT/CN2020/092921 CN2020092921W WO2021232470A1 WO 2021232470 A1 WO2021232470 A1 WO 2021232470A1 CN 2020092921 W CN2020092921 W CN 2020092921W WO 2021232470 A1 WO2021232470 A1 WO 2021232470A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
positioning information
gnss
imu
map
Prior art date
Application number
PCT/CN2020/092921
Other languages
English (en)
French (fr)
Inventor
刘继廷
Original Assignee
北京数字绿土科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京数字绿土科技有限公司 filed Critical 北京数字绿土科技有限公司
Priority to EP20936432.2A priority Critical patent/EP4155669A4/en
Priority to US17/791,505 priority patent/US20230194306A1/en
Publication of WO2021232470A1 publication Critical patent/WO2021232470A1/zh

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3848Data obtained from both position sensors and additional sensors
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1652Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3837Data obtained from a single source
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C25/00Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/45Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
    • G01S19/47Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/48Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/485Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an optical system or imaging system
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • G01S19/48Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
    • G01S19/49Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system whereby the further system is an inertial position system, e.g. loosely-coupled
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Definitions

  • The invention relates to the technical field of multi-sensor fusion for navigation, and in particular to a SLAM mapping method and system based on multi-sensor fusion.
  • SLAM: Simultaneous Localization And Mapping
  • IMU: Inertial Measurement Unit
  • GNSS: Global Navigation Satellite System
  • Laser SLAM refers to the use of laser sensors to acquire data for simultaneous positioning and mapping.
  • The laser does not depend on the illumination of the surrounding environment, can scan high-precision three-dimensional information of the surroundings, and its algorithms are relatively robust.
  • As the cost of lasers has fallen, laser SLAM has gradually become one of the more popular research directions in the SLAM field.
  • Visual SLAM mainly uses camera sensors to obtain image data of the surrounding environment, and uses the captured image information for positioning and mapping.
  • For both, the cumulative error gradually increases over time, which reduces the accuracy of positioning and mapping.
  • The currently popular remedy is to apply corrections through closed-loop (loop-closure) methods, but in large-scale mapping, restricted by the surrounding environment, the accuracy often fails to reach that required for current map production.
  • The purpose of the present invention is to provide a SLAM mapping method and system based on multi-sensor fusion, so as to alleviate the technical problems in the prior art of low accuracy and large error caused by being easily restricted by the surrounding environment.
  • In a first aspect, an embodiment of the present invention provides a SLAM mapping method based on multi-sensor fusion, which is applied to a server and includes: acquiring multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data including point cloud data, image data, IMU data and GNSS data; hierarchically processing the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; obtaining target positioning information of the mobile platform based on the multiple pieces of positioning information; generating a local high-precision map based on the target positioning information; and performing a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
  • Further, acquiring multiple sensor data about the surrounding environment of the mobile platform includes: calibrating, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the mobile platform; synchronizing, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system; and synchronously collecting the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data about the surrounding environment of the mobile platform, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
  • The multiple pieces of positioning information include initial positioning information, first positioning information and second positioning information.
  • Further, performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information includes: generating the initial positioning information based on the IMU data, the GNSS data and the calibration information; generating the first positioning information by visual SLAM based on the initial positioning information and the image data; and generating the second positioning information by laser SLAM based on the first positioning information and the point cloud data.
  • Further, obtaining the target positioning information of the mobile platform based on the multiple pieces of positioning information includes: extracting a key-frame matching point set of the image data and a matching point set of the point cloud data; generating a comprehensive positioning information database based on the second positioning information, the IMU data, the GNSS data, the key-frame matching point set and the point cloud data matching point set; jointly optimizing the data sets in the comprehensive positioning information database to obtain a high-precision trajectory of the mobile platform; and taking the high-precision trajectory as the target positioning information.
  • Further, the local high-precision map includes an image local map and a point-cloud three-dimensional scene local map, and generating the local high-precision map based on the target positioning information includes: solving for the position and attitude information of the key frames of the image data based on the high-precision trajectory to generate the image local map; and solving for the position and attitude information of the point cloud data based on the high-precision trajectory to generate the point-cloud three-dimensional scene local map.
  • Further, performing the closed-loop detection operation on the local high-precision map to obtain the high-precision global map of the mobile platform includes: performing the closed-loop detection operation on the local high-precision map to obtain a local-map rotation-translation matrix; constructing pose constraints in graph optimization based on the local-map rotation-translation matrix; correcting the high-precision trajectory using the pose constraints in the graph optimization to obtain a corrected high-precision trajectory; and obtaining the high-precision global map of the mobile platform based on the corrected high-precision trajectory.
  • the embodiment of the present invention also provides a SLAM mapping system based on multi-sensor fusion, which is applied to a server and includes: an acquisition module, a hierarchical processing module, a positioning module, a first generation module and a second generation module, wherein,
  • the acquisition module is used to acquire multiple sensor data about the surrounding environment of the mobile platform;
  • the multiple sensor data includes: point cloud data, image data, IMU data, and GNSS data;
  • the hierarchical processing module is used to perform hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information;
  • the positioning module is configured to obtain target positioning information of the mobile platform based on the multiple pieces of positioning information;
  • the first generation module is used to generate a local high-precision map based on the target positioning information;
  • the second generation module is used to perform a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
  • Further, the acquisition module includes a calibration unit, a synchronization unit and a collection unit, wherein the calibration unit is used to calibrate, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain the calibration information, the laser, the camera, the IMU and the GNSS all being sensors on the mobile platform; the synchronization unit is used to synchronize, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system; and the collection unit is used to synchronously collect the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data about the surrounding environment of the mobile platform, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
  • an embodiment of the present invention also provides an electronic device, including a memory, a processor, and a computer program stored on the memory and running on the processor, and the processor executes the computer program When realizing the steps of the method described in the first aspect above.
  • embodiments of the present invention also provide a computer-readable medium having non-volatile program code executable by a processor, and the program code causes the processor to execute the method described in the first aspect.
  • The present invention provides a SLAM mapping method and system based on multi-sensor fusion, applied to a server, including: acquiring multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data including point cloud data, image data, IMU data and GNSS data; hierarchically processing the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; obtaining target positioning information of the mobile platform based on the multiple pieces of positioning information; generating a local high-precision map based on the target positioning information; and performing a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
  • The present invention alleviates the technical problems in the prior art of low accuracy and large error caused by being easily restricted by the surrounding environment.
  • FIG. 1 is a flowchart of a SLAM mapping method based on multi-sensor fusion according to an embodiment of the present invention
  • Fig. 2 is a flowchart of a method for acquiring data of multiple sensors on the surrounding environment of a mobile platform according to an embodiment of the present invention
  • FIG. 3 is a flowchart of a method for hierarchically processing multiple sensor data according to an embodiment of the present invention
  • FIG. 4 is a flowchart of a method for jointly optimizing multiple pieces of positioning information according to an embodiment of the present invention
  • FIG. 5 is a flowchart of a method for obtaining a high-precision global map of a mobile platform according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a SLAM mapping system based on multi-sensor fusion according to an embodiment of the present invention
  • FIG. 7 is a schematic diagram of another SLAM mapping system based on multi-sensor fusion provided by an embodiment of the present invention.
  • Fig. 1 is a flowchart of a SLAM mapping method based on multi-sensor fusion provided according to an embodiment of the present invention, and the method is applied to a server. As shown in Figure 1, the method specifically includes the following steps:
  • Step S102 Obtain multiple sensor data about the surrounding environment of the mobile platform; multiple sensor data includes: point cloud data, image data, IMU data, and GNSS data.
  • point cloud information of the surrounding environment is collected by a laser to obtain point cloud data; image information is collected by a camera to obtain image data; the angular velocity and acceleration of the mobile platform are obtained by IMU to obtain IMU data; and the absolute latitude and longitude of each moment is obtained by GNSS Coordinates to get GNSS data.
  • Step S104 Perform hierarchical processing on multiple sensor data to generate multiple positioning information; where one sensor data corresponds to one positioning information.
  • Step S106 Obtain target positioning information of the mobile platform based on multiple positioning information.
  • step S108 a local high-precision map is generated based on the target location information.
  • step S110 a closed-loop detection operation is performed on the local high-precision map to obtain a high-precision global map of the mobile platform.
  • The present invention provides a SLAM mapping method based on multi-sensor fusion: multiple sensor data of the mobile platform about the surrounding environment are acquired, the multiple sensor data including point cloud data, image data, IMU data and GNSS data; the multiple sensor data are processed hierarchically to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; target positioning information of the mobile platform is obtained based on the multiple pieces of positioning information; a local high-precision map is generated based on the target positioning information; and a closed-loop detection operation is performed on the local high-precision map to obtain a high-precision global map of the mobile platform.
  • The present invention alleviates the technical problems in the prior art of low accuracy and large error caused by being easily restricted by the surrounding environment.
  • step S102 includes the following steps:
  • Step S1021 Using the laser as a reference, calibrate the positional relationship between the camera, IMU, GNSS and the laser to obtain calibration information; where the laser, camera, IMU and GNSS are all sensors on the mobile platform.
  • Specifically, for the calibration between the laser and the camera, the laser and the camera mounted on the same rigid body collect data of a calibration board; plane features are fitted in the point cloud data, corner points and other information are extracted from the images, and the extrinsic parameters between the camera and the laser are solved by optimizing the point-to-plane distances.
  • For the calibration between the laser and the IMU, the trajectory of the mobile device can be obtained from the point cloud data collected by the laser and can also be obtained from the IMU, and the extrinsic parameters are obtained by aligning the two trajectories.
  • Because the accelerometer of the IMU drifts easily, the extrinsic parameters between the two can only be roughly estimated (or measured with a ruler).
  • Step S1022 synchronizing the time of the laser, camera, and IMU to the current GNSS time system based on the time of the GNSS.
  • Step S1023 synchronously collecting the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data about the surrounding environment of the mobile platform; wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
  • the multiple positioning information includes: initial positioning information, first positioning information, and second positioning information.
  • step S104 includes the following steps:
  • Step S1041 Generate initial positioning information based on the IMU data, GNSS data and calibration information.
  • Step S1042 Based on the initial positioning information and the image data, visual SLAM is used to generate first positioning information.
  • a) calculate the feature points for each key frame image, and the specific feature points include ORB feature points and Harris corner points.
  • Outliers are removed by RANSAC, and the feature points P_t and P_t-1 on the normalized plane of the camera are obtained by further optimization.
  • Since the matched pairs lie in the same feature environment, the transformation between them can be expressed as P_i^t = R·P_i^(t-1) + T, where R is the robot pose rotation transformation matrix, T is the robot displacement matrix, and P_i^t and P_i^(t-1) form a matched feature-point pair between the adjacent times t-1 and t.
  • The pose R and T are solved by minimizing the reprojection error, i.e. (R, T) = argmin Σ_(i=1..N) || P_i^t - (R·P_i^(t-1) + T) ||^2, where P_t denotes the set of all feature points of image frame F2, P_t-1 denotes the set of all feature points of image frame F1, R is the rotation matrix of the carrier, T is the translation vector of the carrier, and N is the number of feature-point pairs.
  • The rotation-translation matrices of adjacent key frames are computed in turn to obtain the first positioning information P1 of all key frames, and the current optimal feature matching pairs are added to the matching database.
  • Step S1043 based on the first positioning information and the point cloud data, use laser SLAM to generate second positioning information.
  • For the current frame of laser point cloud data P_K, its point features F_K1, line features F_K2 and surface features F_K3 can be calculated from a feature value f computed over each point's local neighborhood.
  • Here i is a point in P_K, X_i is the coordinate of point i, p is the set of points in the neighborhood of point i, j is a point in p, X_j is the coordinate of point j, and f is the feature value; thresholds M_1, M_2, M_3 and M_4 are given in advance, and a point whose feature value f is less than M_1 belongs to feature F_k1, a point whose f is greater than M_2 and less than M_3 belongs to F_k2, and a point whose f is greater than M_4 belongs to F_k3.
  • According to the first positioning information P1, the feature data of each frame are transformed into the coordinate system corresponding to the P1 positioning information.
  • Two adjacent frames of point cloud data P_t and P_t-1 are obtained, a neighborhood search is performed in F_t for all matching pairs of F_t-1, and all candidate feature matching pairs are determined.
  • Based on the matching pairs, the rotation and translation parameters R and T between the two adjacent frames of point cloud data are solved by least squares. Specifically, the parameters can be solved from Y = R·X + T, where Y denotes the features extracted from the later of the two adjacent data frames, X denotes the features extracted from the earlier data frame, R is the rotation matrix of the carrier, and T is the translation vector of the carrier.
  • The matching pairs are then refined according to the obtained result, and the feature set F_t' is recalculated.
  • For the feature points in point cloud P_t-1, feature-point pairs are searched again in F_t', and new rotation and translation matrices R and T are recomputed and updated.
  • The rotation-translation position information R_t-1 and T_t-1 of the two adjacent frames is finally obtained, and the current optimal feature matching pairs are added to the matching database K.
  • Finally, the second positioning information P2 based on the laser point cloud data (that is, the optimal feature matching pairs with respect to the second positioning information P2) is obtained from the transformation matrices of adjacent frames.
  • FIG. 3 above thus involves the following operations: step S1041, generating initial positioning information based on the IMU data, the GNSS data and the calibration information;
  • step S1042, generating first positioning information by visual SLAM based on the initial positioning information and the image data; and step S1043, generating second positioning information by laser SLAM based on the first positioning information and the point cloud data.
  • In the above technical solution, generating the initial positioning information based on the IMU data, the GNSS data and the calibration information adopts a new algorithmic design (the method of minimizing the reprojection error combined with the least-squares method), and applying the above calculation algorithm yields the initial positioning information.
  • FIG. 4 is a flowchart of a method for joint optimization of multiple positioning information according to an embodiment of the present invention. As shown in Figure 4, the method includes the following steps:
  • Step S1061 Extract the key frame matching point set and the point cloud data matching point set of the image data.
  • Step S1062 Generate a comprehensive positioning information database based on the second positioning information, IMU data, GNSS data, key frame matching point set and point cloud data matching point set.
  • step S1063 the data set in the comprehensive positioning information database is jointly optimized to obtain the high-precision trajectory of the mobile platform.
  • step S1064 the high-precision trajectory is used as target positioning information.
  • Specifically, a sliding window of capacity n is first constructed. Each unit of the sliding window contains the original camera key-frame matching-pair information, or laser point-cloud matching-pair information, together with IMU pre-integration information, where the IMU pre-integration information is a single observation formed, through the IMU pre-integration model, from all the IMU data between two consecutive data frames. A factor graph model is then constructed in turn for the data in the sliding window, including the IMU pre-integration constraint Z_imu, the laser point-cloud feature matching constraint Z_laser, the image key-frame matching constraint Z_img and the GNSS position constraint Z_gnss; by solving for the maximum a posteriori probability of the joint probability distribution, each state variable at each time point is obtained.
  • The state variables to be estimated are: E, N, U, the three-dimensional coordinates in the world coordinate system; V_e, V_n, V_u, the east, north and up velocities; roll, pitch and yaw, the attitude angles; the gyroscope biases; and σ_x, σ_y, σ_z, the accelerometer biases.
  • the local high-precision map includes: an image local map and a point cloud three-dimensional scene local map.
  • step S108 further includes the following steps:
  • Step S1081 Calculate the position and posture information of the key frames of the image data based on the high-precision trajectory to generate a local image map.
  • step S1082 the position and posture information of the point cloud data are calculated based on the high-precision trajectory, and a partial map of the point cloud three-dimensional scene is generated.
  • step S110 includes the following steps:
  • step S1101 a closed-loop detection operation is performed on the local high-precision map to obtain the local map rotation and translation matrix.
  • A preliminary check based on the GNSS data first determines whether the current map overlaps a previously scanned map; image closed-loop detection is then performed: the features of each frame of image are searched in a dictionary and the similarity is calculated, and if the similarity is sufficiently high, the mobile platform is considered to have returned to a previously visited position, forming a closed loop.
  • Based on the laser point-cloud information, it is then judged whether the current local point-cloud map overlaps a previously formed point-cloud map, performing point-cloud closed-loop detection.
  • Specifically, for the two candidate point-cloud frames i and j, the registration error E is calculated and the minimum error function is solved.
  • The registration error and the minimum error function are given by E_i,j = X_i - T_i,j·X_j and min Σ ||E_i,j||^2, where i is a point in the data frame of the current map to be registered, X_i is the coordinate of i, j is the point in the data frame in the global coordinate system, X_j is the coordinate of j, T_i,j is the rotation-translation matrix from j to i, E_i,j is the registration error, and ||·|| is the preset norm.
  • The judgment criterion is the point-cloud overlap degree, that is, the proportion of points with the same name within the point-cloud overlap region when the registration error is minimal. If the point-cloud overlap is higher than a certain percentage, it is judged to be a closed loop, and T_i,j is then its rotation-translation matrix.
  • Step S1102 based on the local map rotation and translation matrix to construct the pose constraint in the map optimization.
  • step S1103 the high-precision trajectory is corrected by using the pose constraints in the graph optimization to obtain the corrected high-precision trajectory.
  • step S1104 based on the corrected high-precision trajectory, a high-precision global map of the mobile platform is obtained.
  • Over time, the accumulated error grows larger and larger, reducing accuracy; closed-loop detection can determine whether the scene collected in the current frame is highly similar to some previous frame. If so, a loop is formed, the accumulation of error can be reduced during optimization, and a high-precision global map can be generated.
  • Compared with the prior art, the embodiment of the present invention has at least one of the following advantages: (1) it solves the failure of single-sensor SLAM mapping in challenging feature environments, and the acquisition and processing of the system are stable and robust; (2) by fusing multiple sensors, it solves the problem of large accumulated error in conventional SLAM algorithms and improves the mapping accuracy of the global map.
  • Fig. 6 is a schematic diagram of a SLAM mapping system based on multi-sensor fusion provided according to an embodiment of the present invention, and the system is applied to a server.
  • the system includes: an acquisition module 10, a hierarchical processing module 20, a positioning module 30, a first generation module 40 and a second generation module 50.
  • the acquisition module 10 is used to acquire multiple sensor data of the mobile platform about the surrounding environment; the multiple sensor data includes: point cloud data, image data, IMU data, and GNSS data.
  • the hierarchical processing module 20 is used to perform hierarchical processing on multiple sensor data to generate multiple positioning information; among them, one sensor data corresponds to one positioning information.
  • the positioning module 30 is used to obtain target positioning information of the mobile platform based on multiple positioning information.
  • the first generating module 40 is configured to generate a local high-precision map based on the target location information.
  • the second generating module 50 is used to perform a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
  • FIG. 7 is a schematic diagram of another SLAM mapping system based on multi-sensor fusion according to an embodiment of the present invention.
  • the acquisition module 10 includes: a calibration unit 11, a synchronization unit 12 and an acquisition unit 13.
  • the calibration unit 11 is used to calibrate the position relationship between the camera, IMU, GNSS and the laser based on the laser to obtain calibration information; wherein the laser, camera, IMU and GNSS are all sensors on the mobile platform.
  • the synchronization unit 12 is used to synchronize the time of the laser, camera, and IMU to the current GNSS time system based on the time of the GNSS.
  • the acquisition unit 13 is used to synchronously collect data from the laser, camera, IMU, and GNSS to obtain multiple sensor data about the surrounding environment of the mobile platform; among them, the point cloud data is the data collected by the laser, and the image data is the data collected by the camera , IMU data is data collected by IMU, and GNSS data is data collected by GNSS.
  • An embodiment of the present invention also provides an electronic device including a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
  • the processor implements the steps of the method in the first embodiment when the processor executes the computer program.
  • the embodiment of the present invention also provides a computer-readable medium having non-volatile program code executable by the processor, and the program code causes the processor to execute the method in the first embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Manufacturing & Machinery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

A SLAM mapping method and system based on multi-sensor fusion, applied to a server. The method comprises: acquiring multiple sensor data of a mobile platform about the surrounding environment (S102), the multiple sensor data comprising point cloud data, image data, IMU data and GNSS data; performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information (S104), wherein one piece of sensor data corresponds to one piece of positioning information; obtaining target positioning information of the mobile platform based on the multiple pieces of positioning information (S106); generating a local high-precision map based on the target positioning information (S108); and performing a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform (S110). The method alleviates the technical problem in the prior art of low accuracy caused by being easily restricted by the surrounding environment.

Description

SLAM mapping method and system based on multi-sensor fusion
Technical Field
The invention relates to the technical field of multi-sensor fusion for navigation, and in particular to a SLAM mapping method and system based on multi-sensor fusion.
Background Art
SLAM (Simultaneous Localization And Mapping) refers to simultaneous localization and map construction: by processing the surrounding-environment data collected by sensors, the position of the current moving system in an unknown environment is fed back in real time while a map of the surroundings of the moving system is drawn at the same time; this map may be a 2D planar map or a three-dimensional map of the surrounding environment. SLAM has already been widely applied in robotics, autonomous driving, virtual reality, surveying and mapping, agriculture, forestry, electric power, construction and other industries. The sensor units commonly used at present include lasers, inertial measurement units (Inertial Measurement Unit, IMU), vision cameras and global navigation satellite systems (Global Navigation Satellite System, GNSS).
The relatively mature SLAM algorithms at present can roughly be divided into laser SLAM and visual SLAM. Laser SLAM mainly uses laser sensors to acquire data for simultaneous positioning and mapping. The laser does not depend on the illumination of the surrounding environment and can scan high-precision three-dimensional information of the surroundings, and its algorithms are relatively robust; as the cost of lasers has fallen, laser SLAM has gradually become one of the more popular research directions in the SLAM field. However, in environments without obvious structure, such as flat walls, grassland or narrow corridors, the laser cannot detect effective environmental features, so positioning and mapping easily fail. Visual SLAM mainly uses camera sensors to acquire image data of the surrounding environment and performs positioning and mapping from the captured image information. Being inexpensive and highly visual, it has always been the most popular direction in SLAM research. However, the vision camera depends heavily on the illumination and texture of the surroundings; once the illumination changes too much, or the texture is monotonously repetitive, mapping easily fails.
For both laser SLAM and visual SLAM, the cumulative error gradually increases over time, so the accuracy of positioning and mapping deteriorates. The currently popular remedy is to apply corrections through loop closure and similar means, but in large-scale mapping, restricted by the surrounding environment, the accuracy often fails to reach that required for current map production.
Summary of the Invention
In view of this, the purpose of the present invention is to provide a SLAM mapping method and system based on multi-sensor fusion, so as to alleviate the technical problems of low accuracy and large error in the prior art caused by being easily restricted by the surrounding environment.
In a first aspect, an embodiment of the present invention provides a SLAM mapping method based on multi-sensor fusion, applied to a server, comprising: acquiring multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data comprising point cloud data, image data, IMU data and GNSS data; performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; obtaining target positioning information of the mobile platform based on the multiple pieces of positioning information; generating a local high-precision map based on the target positioning information; and performing a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
Further, acquiring multiple sensor data of the mobile platform about the surrounding environment comprises: calibrating, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the mobile platform; synchronizing, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system; and synchronously collecting the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data of the mobile platform about the surrounding environment, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
Further, the multiple pieces of positioning information comprise initial positioning information, first positioning information and second positioning information, and performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information comprises: generating the initial positioning information based on the IMU data, the GNSS data and the calibration information; generating the first positioning information by visual SLAM based on the initial positioning information and the image data; and generating the second positioning information by laser SLAM based on the first positioning information and the point cloud data.
Further, obtaining the target positioning information of the mobile platform based on the multiple pieces of positioning information comprises: extracting a key-frame matching point set of the image data and a matching point set of the point cloud data; generating a comprehensive positioning information database based on the second positioning information, the IMU data, the GNSS data, the key-frame matching point set and the point cloud data matching point set; jointly optimizing the data sets in the comprehensive positioning information database to obtain a high-precision trajectory of the mobile platform; and taking the high-precision trajectory as the target positioning information.
Further, the local high-precision map comprises an image local map and a point-cloud three-dimensional scene local map, and generating the local high-precision map based on the target positioning information comprises: solving for the position and attitude information of the key frames of the image data based on the high-precision trajectory to generate the image local map; and solving for the position and attitude information of the point cloud data based on the high-precision trajectory to generate the point-cloud three-dimensional scene local map.
Further, performing the closed-loop detection operation on the local high-precision map to obtain the high-precision global map of the mobile platform comprises: performing the closed-loop detection operation on the local high-precision map to obtain a local-map rotation-translation matrix; constructing pose constraints in graph optimization based on the local-map rotation-translation matrix; correcting the high-precision trajectory using the pose constraints in the graph optimization to obtain a corrected high-precision trajectory; and obtaining the high-precision global map of the mobile platform based on the corrected high-precision trajectory.
In a second aspect, an embodiment of the present invention further provides a SLAM mapping system based on multi-sensor fusion, applied to a server, comprising an acquisition module, a hierarchical processing module, a positioning module, a first generation module and a second generation module, wherein the acquisition module is configured to acquire multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data comprising point cloud data, image data, IMU data and GNSS data; the hierarchical processing module is configured to perform hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; the positioning module is configured to obtain target positioning information of the mobile platform based on the multiple pieces of positioning information; the first generation module is configured to generate a local high-precision map based on the target positioning information; and the second generation module is configured to perform a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
Further, the acquisition module comprises a calibration unit, a synchronization unit and a collection unit, wherein the calibration unit is configured to calibrate, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, the laser, the camera, the IMU and the GNSS all being sensors on the mobile platform; the synchronization unit is configured to synchronize, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system; and the collection unit is configured to synchronously collect the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data of the mobile platform about the surrounding environment, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
In a third aspect, an embodiment of the present invention further provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method described in the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to execute the method described in the first aspect.
The present invention provides a SLAM mapping method and system based on multi-sensor fusion, applied to a server, comprising: acquiring multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data comprising point cloud data, image data, IMU data and GNSS data; performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; obtaining target positioning information of the mobile platform based on the multiple pieces of positioning information; generating a local high-precision map based on the target positioning information; and performing a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform. The present invention alleviates the technical problems of low accuracy and large error in the prior art caused by being easily restricted by the surrounding environment.
Brief Description of the Drawings
In order to more clearly illustrate the specific embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a SLAM mapping method based on multi-sensor fusion provided by an embodiment of the present invention;
FIG. 2 is a flowchart of a method for acquiring multiple sensor data of a mobile platform about the surrounding environment provided by an embodiment of the present invention;
FIG. 3 is a flowchart of a method for hierarchically processing multiple sensor data provided by an embodiment of the present invention;
FIG. 4 is a flowchart of a method for jointly optimizing multiple pieces of positioning information provided by an embodiment of the present invention;
FIG. 5 is a flowchart of a method for obtaining a high-precision global map of a mobile platform provided by an embodiment of the present invention;
FIG. 6 is a schematic diagram of a SLAM mapping system based on multi-sensor fusion provided by an embodiment of the present invention;
FIG. 7 is a schematic diagram of another SLAM mapping system based on multi-sensor fusion provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Embodiment 1:
FIG. 1 is a flowchart of a SLAM mapping method based on multi-sensor fusion provided according to an embodiment of the present invention, and the method is applied to a server. As shown in FIG. 1, the method specifically includes the following steps:
Step S102: acquire multiple sensor data of the mobile platform about the surrounding environment; the multiple sensor data include point cloud data, image data, IMU data and GNSS data.
Specifically, point cloud information of the surrounding environment is collected by the laser to obtain the point cloud data; image information is collected by the camera to obtain the image data; the angular velocity and acceleration of the mobile platform are obtained by the IMU to obtain the IMU data; and the absolute latitude and longitude coordinates at each moment are obtained by the GNSS to obtain the GNSS data.
Step S104: perform hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information.
Step S106: obtain target positioning information of the mobile platform based on the multiple pieces of positioning information.
Step S108: generate a local high-precision map based on the target positioning information.
Step S110: perform a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
The present invention provides a SLAM mapping method based on multi-sensor fusion: multiple sensor data of the mobile platform about the surrounding environment are acquired, the multiple sensor data including point cloud data, image data, IMU data and GNSS data; the multiple sensor data are processed hierarchically to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information; target positioning information of the mobile platform is obtained based on the multiple pieces of positioning information; a local high-precision map is generated based on the target positioning information; and a closed-loop detection operation is performed on the local high-precision map to obtain a high-precision global map of the mobile platform. The present invention alleviates the technical problems of low accuracy and large error in the prior art caused by being easily restricted by the surrounding environment.
Optionally, as shown in FIG. 2, step S102 includes the following steps:
Step S1021: with the laser as the reference, calibrate the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the mobile platform.
Specifically, for the calibration between the laser and the camera, the laser and the camera mounted on the same rigid body collect data of a calibration board; plane features are fitted in the point cloud data, corner points and other information are extracted from the images, and the extrinsic parameters between the camera and the laser are solved by optimizing the point-to-plane distances. For the calibration between the laser and the IMU, the trajectory of the mobile device can be obtained from the point cloud data collected by the laser and can also be obtained from the IMU; aligning the two trajectories yields the extrinsic parameters. Because the accelerometer of the IMU drifts easily, the extrinsic parameters between the two can only be roughly estimated (or measured with a ruler).
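As a minimal illustration of the point-to-plane idea described above (a sketch under assumed inputs, not the patent's actual calibration routine), the camera-to-laser extrinsics can be refined by minimizing the distance between calibration-board corner points expressed in the camera frame and the board plane fitted in the laser point cloud; all data and variable names below are hypothetical.

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def point_to_plane_residuals(params, pts_cam, plane_normals, plane_ds):
        # Signed distance of camera-frame points, mapped into the laser frame by
        # the candidate extrinsics, to the planes fitted in the laser point cloud
        # (each plane is n . x + d = 0).
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:6]
        pts_laser = pts_cam @ R.T + t
        return np.sum(plane_normals * pts_laser, axis=1) + plane_ds

    # Hypothetical observations: board corner points in the camera frame, and the
    # board plane (unit normal, offset) fitted in the laser point cloud.
    pts_cam = np.random.rand(40, 3)
    plane_normals = np.tile(np.array([0.0, 0.0, 1.0]), (40, 1))
    plane_ds = -np.ones(40)

    x0 = np.zeros(6)  # initial guess: identity extrinsics (rotation vector + translation)
    sol = least_squares(point_to_plane_residuals, x0,
                        args=(pts_cam, plane_normals, plane_ds))
    print("estimated laser-camera extrinsics (rotvec, t):", sol.x)

The laser-to-IMU extrinsics would analogously be obtained by aligning the two trajectories mentioned above rather than by this board-based residual.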
Step S1022: with the GNSS time as the reference, synchronize the time of the laser, the camera and the IMU to the current GNSS time system.
Step S1023: synchronously collect the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data of the mobile platform about the surrounding environment, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
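The time synchronization in steps S1022 and S1023 amounts to re-stamping every measurement in the GNSS time system. The following sketch only shows the bookkeeping; the per-sensor clock offsets and field names are illustrative assumptions (in practice they would come from a PPS pulse or a logged GNSS/sensor time pair).

    from dataclasses import dataclass

    @dataclass
    class Measurement:
        sensor: str        # "lidar", "camera" or "imu"
        local_time: float  # time stamp in the sensor's own clock (seconds)
        payload: object    # point cloud, image or IMU sample

    def to_gnss_time(m: Measurement, clock_offsets: dict) -> float:
        # Convert a sensor-local time stamp into the GNSS time system.
        return m.local_time + clock_offsets[m.sensor]

    clock_offsets = {"lidar": 0.0123, "camera": -0.0040, "imu": 0.0018}
    m = Measurement("camera", local_time=105.250, payload=None)
    print("GNSS time:", to_gnss_time(m, clock_offsets))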
In the embodiment of the present invention, the multiple pieces of positioning information include initial positioning information, first positioning information and second positioning information.
Optionally, as shown in FIG. 3, step S104 includes the following steps:
Step S1041: generate initial positioning information based on the IMU data, the GNSS data and the calibration information.
Specifically, the initialization and alignment of the navigation system are first completed on the mobile platform and the initial parameters of the Kalman filter are set; then, using the GNSS satellite positioning information together with the inertial navigation system (INS), the posterior state estimate P0_t at each time t is solved according to the Kalman filtering principle.
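The following is a deliberately simplified, one-dimensional sketch of the loosely coupled GNSS/INS idea behind step S1041: IMU acceleration drives the prediction and each GNSS position fix is used as the measurement update, yielding a posterior state estimate at every epoch. The patent's filter estimates a much richer state (position, velocity, attitude and sensor biases); all numbers below are illustrative.

    import numpy as np

    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for [position, velocity]
    B = np.array([[0.5 * dt * dt], [dt]])   # control matrix for the IMU acceleration
    H = np.array([[1.0, 0.0]])              # GNSS observes position only
    Q = np.diag([1e-4, 1e-3])               # process noise
    R = np.array([[0.5]])                   # GNSS measurement noise

    x = np.zeros((2, 1))                    # state estimate
    P = np.eye(2)                           # state covariance

    def predict(acc):
        global x, P
        x = F @ x + B * acc
        P = F @ P @ F.T + Q

    def update(gnss_pos):
        global x, P
        y = np.array([[gnss_pos]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    for k in range(100):
        predict(acc=0.2)                         # IMU sample every step
        if k % 10 == 0:                          # GNSS fix at a lower rate
            update(gnss_pos=0.1 * (k * dt) ** 2)
    print("posterior state estimate:", x.ravel())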
Step S1042: generate first positioning information by visual SLAM based on the initial positioning information and the image data.
Specifically, a) the feature points of each key-frame image are calculated; the specific feature points include ORB feature points, Harris corner points and the like.
b) For two adjacent image key frames, the position-corrected feature point sets of frames F1 and F2 are obtained according to the initial positioning information P0, where P_(t-1) denotes the set of all feature points of image frame F1 and P_t denotes the set of all feature points of image frame F2.
Outliers are removed by RANSAC, and the feature points P_t and P_(t-1) on the normalized plane of the camera are obtained by further optimization.
Since they lie in the same feature environment, the transformation relationship of these matched feature-point pairs can be expressed as:
    P_i^t = R · P_i^(t-1) + T
where R is the robot pose rotation transformation matrix, T is the robot displacement matrix, and P_i^t and P_i^(t-1) are a matched pair of feature points between the adjacent times t-1 and t. The pose R and T are solved by minimizing the reprojection error:
    (R, T) = argmin_(R,T) Σ_(i=1..N) || P_i^t - (R · P_i^(t-1) + T) ||^2
where P_t denotes the set of all feature points of image frame F2, P_(t-1) denotes the set of all feature points of image frame F1, R is the rotation matrix of the carrier, T is the translation vector of the carrier, and N is the number of feature-point pairs.
The rotation-translation matrices of adjacent key frames are thus calculated, the first positioning information P1 of all key frames is obtained in turn, and the current optimal feature matching pairs (that is, the optimal feature matching pairs with respect to the first positioning information P1) are added to the matching database.
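A compact sketch of the visual step above using OpenCV: ORB features are extracted on two adjacent key frames, matched, outliers are rejected by RANSAC while estimating the essential matrix, and the relative pose R, T is recovered. The image paths and the intrinsic matrix K are placeholders, and this illustrates the general procedure rather than the exact implementation.

    import cv2
    import numpy as np

    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])          # placeholder camera intrinsics

    img1 = cv2.imread("keyframe_t_minus_1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("keyframe_t.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects mismatched pairs while estimating the essential matrix,
    # then the relative rotation and translation are recovered from it.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, T, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    print("relative rotation:", R)
    print("relative translation:", T.ravel())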
Step S1043: generate second positioning information by laser SLAM based on the first positioning information and the point cloud data.
Specifically, for the current frame of laser point cloud data P_K, its point features F_K1, line features F_K2 and surface features F_K3 can be calculated from a feature value f computed over each point's local neighborhood,
where i is a point in P_K, X_i is the coordinate of point i, p is the set of points in the neighborhood of point i, j is a point in p, X_j is the coordinate of point j, and f is the feature value. Thresholds M_1, M_2, M_3 and M_4 are given in advance; a point whose feature value f is less than M_1 belongs to feature F_k1, a point whose f is greater than M_2 and less than M_3 belongs to F_k2, and a point whose f is greater than M_4 belongs to F_k3.
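A sketch of this feature-classification step: a neighborhood feature value f is computed for each scan point and the thresholds M1..M4 assign points to F_k1, F_k2 or F_k3. The exact feature formula is not reproduced in the published text, so the smoothness-style measure used below, and the threshold values, are assumptions for illustration only.

    import numpy as np
    from scipy.spatial import cKDTree

    def feature_values(points, k=10):
        # Assumed measure: f_i = || sum over neighborhood of (X_i - X_j) || / (k * ||X_i||)
        tree = cKDTree(points)
        f = np.empty(len(points))
        for i, x in enumerate(points):
            _, idx = tree.query(x, k=k + 1)      # neighborhood of point i (first hit is i itself)
            diff = x - points[idx[1:]]
            f[i] = np.linalg.norm(diff.sum(axis=0)) / (k * np.linalg.norm(x) + 1e-9)
        return f

    def classify(points, M1=0.02, M2=0.05, M3=0.2, M4=0.5):
        f = feature_values(points)
        F_k1 = points[f < M1]                    # feature F_k1 (f < M1)
        F_k2 = points[(f > M2) & (f < M3)]       # feature F_k2 (M2 < f < M3)
        F_k3 = points[f > M4]                    # feature F_k3 (f > M4)
        return F_k1, F_k2, F_k3

    cloud = np.random.rand(500, 3) * 10.0
    F_k1, F_k2, F_k3 = classify(cloud)
    print(len(F_k1), len(F_k2), len(F_k3))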
According to the first positioning information P1, the feature data of each frame are transformed into the coordinate system corresponding to the P1 positioning information. Two adjacent frames of point cloud data P_t and P_(t-1) are obtained, a neighborhood search is performed in F_t for all matching pairs of F_(t-1), and all candidate feature matching pairs are determined. Based on the matching pairs, the rotation and translation parameters R and T between the two adjacent frames of point cloud data are solved by least squares. Specifically, the parameters can be solved from the following formula:
    Y = R · X + T
where Y denotes the features extracted from the later of the two adjacent data frames, X denotes the features extracted from the earlier data frame, R is the rotation matrix of the carrier, and T is the translation vector of the carrier.
The matching pairs are then refined according to the obtained result, and the feature set F_t' is recalculated. For the feature points in point cloud P_(t-1), feature-point pairs are searched again in F_t', and new rotation and translation matrices R and T are recomputed and updated. Finally, the rotation-translation position information R_(t-1) and T_(t-1) of the two adjacent frames is obtained, and the current optimal feature matching pairs are added to the matching database K.
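Solving Y = R·X + T for a set of matched features in the least-squares sense has the classical closed-form SVD (Kabsch) solution sketched below; the matched arrays X and Y stand for the features of the earlier and later frame respectively and are synthetic here.

    import numpy as np

    def estimate_rt(X, Y):
        # Least-squares rigid transform such that Y is approximately X @ R.T + T (i.e. Y = R·X + T).
        cx, cy = X.mean(axis=0), Y.mean(axis=0)
        H = (X - cx).T @ (Y - cy)                # cross-covariance of the centred sets
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        T = cy - R @ cx
        return R, T

    X = np.random.rand(100, 3)
    R_true = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])
    Y = X @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, T = estimate_rt(X, Y)
    print(np.allclose(R, R_true), T)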
Finally, the second positioning information P2 based on the laser point cloud data (that is, the optimal feature matching pairs with respect to the second positioning information P2) is obtained from the transformation matrices of adjacent frames. FIG. 3 above thus involves the following operations: step S1041, generating initial positioning information based on the IMU data, the GNSS data and the calibration information; step S1042, generating first positioning information by visual SLAM based on the initial positioning information and the image data; and step S1043, generating second positioning information by laser SLAM based on the first positioning information and the point cloud data. In the above technical solution, generating the initial positioning information based on the IMU data, the GNSS data and the calibration information adopts a completely new algorithmic design (the method of minimizing the reprojection error combined with the least-squares method), and applying the above calculation algorithm yields the initial positioning information.
Optionally, FIG. 4 is a flowchart of a method for jointly optimizing multiple pieces of positioning information provided according to an embodiment of the present invention. As shown in FIG. 4, the method includes the following steps:
Step S1061: extract the key-frame matching point set of the image data and the matching point set of the point cloud data.
Step S1062: generate a comprehensive positioning information database based on the second positioning information, the IMU data, the GNSS data, the key-frame matching point set and the point cloud data matching point set.
Step S1063: jointly optimize the data sets in the comprehensive positioning information database to obtain the high-precision trajectory of the mobile platform.
Step S1064: take the high-precision trajectory as the target positioning information.
Specifically, a sliding window of capacity n is first constructed. Each unit of the sliding window contains the original camera key-frame matching-pair information, or laser point-cloud matching-pair information, together with IMU pre-integration information, where the IMU pre-integration information is a single observation formed, through the IMU pre-integration model, from all the IMU data between two consecutive data frames. A factor graph model is then constructed in turn for the data in the sliding window, including the IMU pre-integration constraint Z_imu, the laser point-cloud feature matching constraint Z_laser, the image key-frame matching constraint Z_img and the GNSS position constraint Z_gnss; by solving for the maximum a posteriori probability of the joint probability distribution, each state variable at each time point is obtained.
The state variables to be estimated are: E, N, U, the three-dimensional coordinates in the world coordinate system; V_e, V_n, V_u, the east, north and up velocities; roll, pitch and yaw, the attitude angles; the gyroscope biases; and σ_x, σ_y, σ_z, the accelerometer biases.
For the state set S_k = {S_1, S_2, ..., S_k} at each moment and the measurement set Z = {Z_imu, Z_laser, Z_img, Z_gnss} constructed above, the maximum a posteriori probability of the joint probability distribution p(S_k | Z_k) is solved:
    S'_k = argmax_(S_k) p(S_k | Z_k)
where S'_k = {S'_1, S'_2, ..., S'_k} denotes the optimal estimate of S_k. Solving for the optimal state quantities yields the high-precision trajectory T.
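In the Gaussian case, maximizing the posterior p(S_k | Z_k) over the sliding window is equivalent to a nonlinear least-squares problem over the stacked factors. The toy sketch below estimates 2D positions only, with relative-motion residuals standing in for the IMU/laser/image factors and absolute residuals for the GNSS factor Z_gnss; it illustrates the structure of the joint optimization, not the actual state parameterization.

    import numpy as np
    from scipy.optimize import least_squares

    n = 8
    odometry = [np.array([1.0, 0.1])] * (n - 1)                 # relative-motion measurements
    gnss = {0: np.array([0.0, 0.0]), 7: np.array([7.0, 0.7])}   # sparse absolute fixes

    def residuals(flat_states):
        S = flat_states.reshape(n, 2)
        res = []
        for k, dz in enumerate(odometry):        # relative (odometry-like) factors
            res.append(S[k + 1] - S[k] - dz)
        for k, z in gnss.items():                # absolute GNSS factors
            res.append(S[k] - z)
        return np.concatenate(res)

    sol = least_squares(residuals, np.zeros(2 * n))
    print(sol.x.reshape(n, 2))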
In the embodiment of the present invention, the local high-precision map includes an image local map and a point-cloud three-dimensional scene local map. Optionally, step S108 further includes the following steps:
Step S1081: solve for the position and attitude information of the key frames of the image data based on the high-precision trajectory, and generate the image local map.
Step S1082: solve for the position and attitude information of the point cloud data based on the high-precision trajectory, and generate the point-cloud three-dimensional scene local map.
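A minimal sketch of assembling the point-cloud local map of step S1082: each scan is transformed into the map frame with the position and attitude taken from the high-precision trajectory at the scan time and then accumulated. The scans and poses below are placeholders.

    import numpy as np

    def build_local_map(scans, poses):
        # scans: list of (N_k x 3) point arrays; poses: list of (R, t) per scan.
        parts = []
        for pts, (R, t) in zip(scans, poses):
            parts.append(pts @ R.T + t)          # map_point = R · scan_point + t
        return np.vstack(parts)

    scans = [np.random.rand(100, 3) for _ in range(5)]
    poses = [(np.eye(3), np.array([float(k), 0.0, 0.0])) for k in range(5)]
    local_map = build_local_map(scans, poses)
    print(local_map.shape)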
Optionally, as shown in FIG. 5, step S110 includes the following steps:
Step S1101: perform a closed-loop detection operation on the local high-precision map to obtain the local-map rotation-translation matrix.
Specifically, it is first preliminarily judged from the GNSS data whether the current map overlaps a previously scanned map. If the difference in latitude and longitude is within a certain threshold, the platform is considered to be at the same place, and the two frames form a closed loop.
Then, from the feature-point information of the images, it is judged whether the current image local map overlaps a previously formed image map, performing image closed-loop detection. Specifically, the features of each frame of image are searched in a dictionary and the similarity is calculated; if the similarity is sufficiently high, the mobile platform is considered to have returned to a previously visited position, forming a closed loop.
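A sketch of the dictionary-based image loop check: each frame is summarized as a normalized histogram over visual words (a bag-of-words vector) and compared with earlier frames by cosine similarity; a score above the threshold marks a loop-closure candidate. The vocabulary size, descriptors and threshold are illustrative.

    import numpy as np

    def bow_vector(word_ids, vocab_size):
        v = np.bincount(word_ids, minlength=vocab_size).astype(float)
        return v / (np.linalg.norm(v) + 1e-9)

    def loop_candidates(history, query, threshold=0.8):
        # Cosine similarity of normalized vectors is just their dot product.
        return [k for k, h in enumerate(history) if float(h @ query) > threshold]

    vocab_size = 1000
    history = [bow_vector(np.random.randint(0, vocab_size, 300), vocab_size)
               for _ in range(50)]
    query = history[12] + 0.01 * np.random.rand(vocab_size)   # revisiting frame 12
    query /= np.linalg.norm(query)
    print("loop-closure candidates:", loop_candidates(history, query))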
Afterwards, from the laser point-cloud information, it is judged whether the current point-cloud local map overlaps a previously formed point-cloud map, performing point-cloud closed-loop detection.
Specifically, for the two candidate point-cloud frames i and j obtained for judgment, the registration error E is calculated and the minimum error function is solved. The formulas for the registration error and the minimum error function are:
    E_i,j = X_i - T_i,j · X_j
    min_(T_i,j) Σ || E_i,j ||^2
where i is a point in the data frame of the current map to be registered, X_i is the coordinate of i, j is the point in the data frame in the global coordinate system, X_j is the coordinate of j, T_i,j is the rotation-translation matrix from j to i, E_i,j is the registration error, and || · || is the preset norm.
The judgment criterion is the point-cloud overlap degree, that is, the proportion, within the point-cloud overlap region, of points with the same name when the registration error is minimal. If the point-cloud overlap is higher than a certain percentage, it is judged to be a closed loop, and T_i,j is then its rotation-translation matrix.
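A sketch of the overlap-based verification just described: the candidate rotation-translation T_i,j is applied to frame j, the per-point registration error against frame i is evaluated, and the loop is accepted if the fraction of points whose error is below a distance threshold (the overlap) is high enough. The distance and overlap thresholds are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def overlap_ratio(cloud_i, cloud_j, R_ij, t_ij, max_dist=0.1):
        transformed = cloud_j @ R_ij.T + t_ij        # X_i is approximately T_ij · X_j
        dists, _ = cKDTree(cloud_i).query(transformed)
        return float(np.mean(dists < max_dist))

    cloud_i = np.random.rand(2000, 3)
    R_ij = np.eye(3)
    t_ij = np.array([0.01, 0.0, 0.0])
    cloud_j = (cloud_i - t_ij) @ R_ij                # the same scene as seen from frame j
    ratio = overlap_ratio(cloud_i, cloud_j, R_ij, t_ij)
    print("overlap:", ratio, "closed loop:", ratio > 0.6)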
Step S1102: construct the pose constraints in graph optimization based on the local-map rotation-translation matrix.
Step S1103: correct the high-precision trajectory using the pose constraints in the graph optimization to obtain a corrected high-precision trajectory.
Step S1104: obtain the high-precision global map of the mobile platform based on the corrected high-precision trajectory.
In the embodiment of the present invention, as time passes the accumulated error grows larger and larger, reducing accuracy; closed-loop detection can judge whether the scene collected in the current frame is highly similar to some previous frame. If so, a loop is formed, the accumulation of error can be reduced during optimization, and a high-precision global map can be generated.
Compared with the prior art, the embodiment of the present invention has at least one of the following advantages:
(1) it solves the failure of single-sensor SLAM mapping in challenging feature environments, and the acquisition and processing of the system are highly stable and robust;
(2) by fusing multiple sensors, it solves the problem of large accumulated error in conventional SLAM algorithms and improves the mapping accuracy of the global map.
Embodiment 2:
FIG. 6 is a schematic diagram of a SLAM mapping system based on multi-sensor fusion provided according to an embodiment of the present invention, and the system is applied to a server. As shown in FIG. 6, the system includes: an acquisition module 10, a hierarchical processing module 20, a positioning module 30, a first generation module 40 and a second generation module 50.
Specifically, the acquisition module 10 is configured to acquire multiple sensor data of the mobile platform about the surrounding environment; the multiple sensor data include point cloud data, image data, IMU data and GNSS data.
The hierarchical processing module 20 is configured to perform hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information.
The positioning module 30 is configured to obtain the target positioning information of the mobile platform based on the multiple pieces of positioning information.
The first generation module 40 is configured to generate a local high-precision map based on the target positioning information.
The second generation module 50 is configured to perform a closed-loop detection operation on the local high-precision map to obtain the high-precision global map of the mobile platform.
Optionally, FIG. 7 is a schematic diagram of another SLAM mapping system based on multi-sensor fusion provided according to an embodiment of the present invention. As shown in FIG. 7, the acquisition module 10 includes: a calibration unit 11, a synchronization unit 12 and a collection unit 13.
Specifically, the calibration unit 11 is configured to calibrate, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the mobile platform.
The synchronization unit 12 is configured to synchronize, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system.
The collection unit 13 is configured to synchronously collect the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data of the mobile platform about the surrounding environment, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
An embodiment of the present invention further provides an electronic device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method in Embodiment 1 above when executing the computer program.
An embodiment of the present invention further provides a computer-readable medium having non-volatile program code executable by a processor, the program code causing the processor to execute the method in Embodiment 1 above.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention rather than to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some or all of the technical features therein, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

  1. A SLAM mapping method based on multi-sensor fusion, characterized in that it is applied to a server and comprises:
    acquiring multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data comprising point cloud data, image data, IMU data and GNSS data;
    performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information;
    obtaining target positioning information of the mobile platform based on the multiple pieces of positioning information;
    generating a local high-precision map based on the target positioning information; and
    performing a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
  2. The method according to claim 1, characterized in that acquiring multiple sensor data of the mobile platform about the surrounding environment comprises:
    calibrating, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the mobile platform;
    synchronizing, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system; and
    synchronously collecting the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data of the mobile platform about the surrounding environment, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
  3. The method according to claim 2, wherein the multiple pieces of positioning information comprise initial positioning information, first positioning information and second positioning information, characterized in that performing hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information comprises:
    generating the initial positioning information based on the IMU data, the GNSS data and the calibration information;
    generating the first positioning information by visual SLAM based on the initial positioning information and the image data; and
    generating the second positioning information by laser SLAM based on the first positioning information and the point cloud data.
  4. The method according to claim 3, characterized in that obtaining the target positioning information of the mobile platform based on the multiple pieces of positioning information comprises:
    extracting a key-frame matching point set of the image data and a matching point set of the point cloud data;
    generating a comprehensive positioning information database based on the second positioning information, the IMU data, the GNSS data, the key-frame matching point set and the point cloud data matching point set;
    jointly optimizing the data sets in the comprehensive positioning information database to obtain a high-precision trajectory of the mobile platform; and
    taking the high-precision trajectory as the target positioning information.
  5. The method according to claim 4, wherein the local high-precision map comprises an image local map and a point-cloud three-dimensional scene local map, characterized in that generating the local high-precision map based on the target positioning information comprises:
    solving for the position and attitude information of the key frames of the image data based on the high-precision trajectory to generate the image local map; and
    solving for the position and attitude information of the point cloud data based on the high-precision trajectory to generate the point-cloud three-dimensional scene local map.
  6. The method according to claim 5, characterized in that performing the closed-loop detection operation on the local high-precision map to obtain the high-precision global map of the mobile platform comprises:
    performing the closed-loop detection operation on the local high-precision map to obtain a local-map rotation-translation matrix;
    constructing pose constraints in graph optimization based on the local-map rotation-translation matrix;
    correcting the high-precision trajectory using the pose constraints in the graph optimization to obtain a corrected high-precision trajectory; and
    obtaining the high-precision global map of the mobile platform based on the corrected high-precision trajectory.
  7. A SLAM mapping system based on multi-sensor fusion, characterized in that it is applied to a server and comprises an acquisition module, a hierarchical processing module, a positioning module, a first generation module and a second generation module, wherein
    the acquisition module is configured to acquire multiple sensor data of a mobile platform about the surrounding environment, the multiple sensor data comprising point cloud data, image data, IMU data and GNSS data;
    the hierarchical processing module is configured to perform hierarchical processing on the multiple sensor data to generate multiple pieces of positioning information, wherein one piece of sensor data corresponds to one piece of positioning information;
    the positioning module is configured to obtain target positioning information of the mobile platform based on the multiple pieces of positioning information;
    the first generation module is configured to generate a local high-precision map based on the target positioning information; and
    the second generation module is configured to perform a closed-loop detection operation on the local high-precision map to obtain a high-precision global map of the mobile platform.
  8. The system according to claim 7, characterized in that the acquisition module comprises a calibration unit, a synchronization unit and a collection unit, wherein
    the calibration unit is configured to calibrate, with the laser as the reference, the positional relationship between the camera, the IMU, the GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the mobile platform;
    the synchronization unit is configured to synchronize, with the GNSS time as the reference, the time of the laser, the camera and the IMU to the current GNSS time system; and
    the collection unit is configured to synchronously collect the data of the laser, the camera, the IMU and the GNSS to obtain the multiple sensor data of the mobile platform about the surrounding environment, wherein the point cloud data is the data collected by the laser, the image data is the data collected by the camera, the IMU data is the data collected by the IMU, and the GNSS data is the data collected by the GNSS.
  9. An electronic device, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
  10. A computer-readable medium having non-volatile program code executable by a processor, characterized in that the program code causes the processor to execute the method according to any one of claims 1 to 6.
PCT/CN2020/092921 2020-05-19 2020-05-28 SLAM mapping method and system based on multi-sensor fusion WO2021232470A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20936432.2A EP4155669A4 (en) 2020-05-19 2020-05-28 SLAM PROCESS AND SYSTEM BASED ON MULTISENSORFUSION
US17/791,505 US20230194306A1 (en) 2020-05-19 2020-05-28 Multi-sensor fusion-based slam method and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010422626.7 2020-05-19
CN202010422626.7A CN111561923B (zh) 2020-05-19 2020-05-19 SLAM mapping method and system based on multi-sensor fusion

Publications (1)

Publication Number Publication Date
WO2021232470A1 true WO2021232470A1 (zh) 2021-11-25

Family

ID=72069489

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/092921 WO2021232470A1 (zh) 2020-05-19 2020-05-28 SLAM mapping method and system based on multi-sensor fusion

Country Status (4)

Country Link
US (1) US20230194306A1 (zh)
EP (1) EP4155669A4 (zh)
CN (1) CN111561923B (zh)
WO (1) WO2021232470A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114488164A (zh) * 2022-01-17 2022-05-13 清华大学深圳国际研究生院 水下航行器同步定位与建图方法及水下航行器
CN114593724A (zh) * 2022-01-21 2022-06-07 北京邮电大学 集群融合定位方法和装置
CN114689037A (zh) * 2022-03-31 2022-07-01 山东优宝特智能机器人有限公司 用于非结构化环境的多源信息融合机器人定位方法及***
CN114777775A (zh) * 2022-05-06 2022-07-22 浙江师范大学 一种多传感器融合的定位方法及***
CN114913246A (zh) * 2022-07-15 2022-08-16 齐鲁空天信息研究院 相机标定方法、装置、电子设备及存储介质
CN115276761A (zh) * 2022-06-23 2022-11-01 中国空间技术研究院 基于高精高频姿态数据的卫星辅助数据生成方法及***
CN115685292A (zh) * 2023-01-05 2023-02-03 中国人民解放军国防科技大学 一种多源融合导航***的导航方法和装置
CN116067379A (zh) * 2023-03-07 2023-05-05 青岛慧拓智能机器有限公司 一种基于动态点云的长隧道场景下多传感器融合定位方法
CN116222543A (zh) * 2023-04-26 2023-06-06 齐鲁工业大学(山东省科学院) 用于机器人环境感知的多传感器融合地图构建方法及***
CN117782227A (zh) * 2024-02-26 2024-03-29 中国铁路设计集团有限公司 一种多源航空遥感数据采集装置、***及控制方法

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111983639B (zh) * 2020-08-25 2023-06-02 浙江光珀智能科技有限公司 一种基于Multi-Camera/Lidar/IMU的多传感器SLAM方法
CN112414415B (zh) * 2020-09-22 2023-05-23 重庆兰德适普信息科技有限公司 高精度点云地图构建方法
CN112129282B (zh) * 2020-09-30 2021-06-18 杭州海康机器人技术有限公司 一种不同导航方式之间定位结果的转换方法、转换装置
CN112254729A (zh) * 2020-10-09 2021-01-22 北京理工大学 一种基于多传感器融合的移动机器人定位方法
CN112268559B (zh) * 2020-10-22 2023-03-28 中国人民解放军战略支援部队信息工程大学 复杂环境下融合slam技术的移动测量方法
CN112486197B (zh) * 2020-12-05 2022-10-21 青岛民航凯亚***集成有限公司 基于多源图像自适应选权的融合定位跟踪控制方法
CN112446905B (zh) * 2021-01-29 2021-05-11 中国科学院自动化研究所 基于多自由度传感关联的三维实时全景监控方法
US20220309767A1 (en) * 2021-03-26 2022-09-29 Teledyne Flir Detection, Inc. Object tracking in local and global maps systems and methods
CN113108780A (zh) * 2021-03-30 2021-07-13 沈奥 一种基于视觉惯导slam算法的无人船自主导航方法
CN113484843A (zh) * 2021-06-02 2021-10-08 福瑞泰克智能***有限公司 一种激光雷达与组合导航间外参数的确定方法及装置
CN114061596B (zh) * 2021-11-19 2024-03-22 北京国家新能源汽车技术创新中心有限公司 自动驾驶定位方法、***、测试方法、设备及存储介质
CN114758001B (zh) * 2022-05-11 2023-01-24 北京国泰星云科技有限公司 基于pnt的轮胎吊自动行走方法
CN114742884B (zh) * 2022-06-09 2022-11-22 杭州迦智科技有限公司 一种基于纹理的建图、里程计算、定位方法及***
CN116423515B (zh) * 2023-04-28 2023-10-03 燕山大学 一种多机器人的数字孪生控制***及其定位与建图的方法
CN116881846B (zh) * 2023-07-17 2024-04-05 无锡北微传感科技有限公司 一种基于多传感器信息融合的多模态通信铁塔监测方法
CN117128951B (zh) * 2023-10-27 2024-02-02 中国科学院国家授时中心 适用于自动驾驶农机的多传感器融合导航定位***及方法
CN117990104A (zh) * 2023-11-09 2024-05-07 南京晓庄学院 局部地图下共享平衡车高精自主导航***

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106153048A (zh) * 2016-08-11 2016-11-23 广东技术师范学院 一种基于多传感器的机器人室内定位及制图***
US20180364045A1 (en) * 2015-01-06 2018-12-20 Discovery Robotics Robotic platform with mapping facility
CN109556615A (zh) * 2018-10-10 2019-04-02 吉林大学 基于自动驾驶的多传感器融合认知的驾驶地图生成方法
CN110057373A (zh) * 2019-04-22 2019-07-26 上海蔚来汽车有限公司 用于生成高精细语义地图的方法、装置和计算机存储介质

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9910441B2 (en) * 2015-11-04 2018-03-06 Zoox, Inc. Adaptive autonomous vehicle planner logic
US9612123B1 (en) * 2015-11-04 2017-04-04 Zoox, Inc. Adaptive mapping to navigate autonomous vehicles responsive to physical environment changes
CN107246876B (zh) * 2017-07-31 2020-07-07 中北润良新能源汽车(徐州)股份有限公司 一种无人驾驶汽车自主定位与地图构建的方法及***
CN108303721B (zh) * 2018-02-12 2020-04-03 北京经纬恒润科技有限公司 一种车辆定位方法及***
CN108665540A (zh) * 2018-03-16 2018-10-16 浙江工业大学 基于双目视觉特征和imu信息的机器人定位与地图构建***
US11277956B2 (en) * 2018-07-26 2022-03-22 Bear Flag Robotics, Inc. Vehicle controllers for agricultural and industrial applications
CN109341706B (zh) * 2018-10-17 2020-07-03 张亮 一种面向无人驾驶汽车的多特征融合地图的制作方法
CN109887057B (zh) * 2019-01-30 2023-03-24 杭州飞步科技有限公司 生成高精度地图的方法和装置
CN109900265A (zh) * 2019-03-15 2019-06-18 武汉大学 一种camera/mems辅助北斗的机器人定位算法
KR102615685B1 (ko) * 2019-05-30 2023-12-18 엘지전자 주식회사 멀티 센서를 동기화시켜 위치를 추정하는 방법 및 이를 구현하는 로봇

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180364045A1 (en) * 2015-01-06 2018-12-20 Discovery Robotics Robotic platform with mapping facility
CN106153048A (zh) * 2016-08-11 2016-11-23 广东技术师范学院 一种基于多传感器的机器人室内定位及制图***
CN109556615A (zh) * 2018-10-10 2019-04-02 吉林大学 基于自动驾驶的多传感器融合认知的驾驶地图生成方法
CN110057373A (zh) * 2019-04-22 2019-07-26 上海蔚来汽车有限公司 用于生成高精细语义地图的方法、装置和计算机存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4155669A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114488164A (zh) * 2022-01-17 2022-05-13 清华大学深圳国际研究生院 水下航行器同步定位与建图方法及水下航行器
CN114488164B (zh) * 2022-01-17 2024-05-10 清华大学深圳国际研究生院 水下航行器同步定位与建图方法及水下航行器
CN114593724A (zh) * 2022-01-21 2022-06-07 北京邮电大学 集群融合定位方法和装置
CN114689037A (zh) * 2022-03-31 2022-07-01 山东优宝特智能机器人有限公司 用于非结构化环境的多源信息融合机器人定位方法及***
CN114777775A (zh) * 2022-05-06 2022-07-22 浙江师范大学 一种多传感器融合的定位方法及***
CN115276761B (zh) * 2022-06-23 2024-04-09 中国空间技术研究院 基于高精高频姿态数据的卫星辅助数据生成方法及***
CN115276761A (zh) * 2022-06-23 2022-11-01 中国空间技术研究院 基于高精高频姿态数据的卫星辅助数据生成方法及***
CN114913246A (zh) * 2022-07-15 2022-08-16 齐鲁空天信息研究院 相机标定方法、装置、电子设备及存储介质
CN115685292A (zh) * 2023-01-05 2023-02-03 中国人民解放军国防科技大学 一种多源融合导航***的导航方法和装置
CN116067379A (zh) * 2023-03-07 2023-05-05 青岛慧拓智能机器有限公司 一种基于动态点云的长隧道场景下多传感器融合定位方法
CN116222543A (zh) * 2023-04-26 2023-06-06 齐鲁工业大学(山东省科学院) 用于机器人环境感知的多传感器融合地图构建方法及***
CN117782227A (zh) * 2024-02-26 2024-03-29 中国铁路设计集团有限公司 一种多源航空遥感数据采集装置、***及控制方法
CN117782227B (zh) * 2024-02-26 2024-05-10 中国铁路设计集团有限公司 一种多源航空遥感数据采集装置、***及控制方法

Also Published As

Publication number Publication date
CN111561923B (zh) 2022-04-15
EP4155669A4 (en) 2024-07-17
EP4155669A1 (en) 2023-03-29
CN111561923A (zh) 2020-08-21
US20230194306A1 (en) 2023-06-22

Similar Documents

Publication Publication Date Title
WO2021232470A1 (zh) SLAM mapping method and system based on multi-sensor fusion
CN112634451B (zh) 一种融合多传感器的室外大场景三维建图方法
WO2021035669A1 (zh) 位姿预测方法、地图构建方法、可移动平台及存储介质
CN112734852B (zh) 一种机器人建图方法、装置及计算设备
CN110033489B (zh) 一种车辆定位准确性的评估方法、装置及设备
WO2022188094A1 (zh) 一种点云匹配方法及装置、导航方法及设备、定位方法、激光雷达
CN113781582A (zh) 基于激光雷达和惯导联合标定的同步定位与地图创建方法
CN110726406A (zh) 一种改进的非线性优化单目惯导slam的方法
CN116222543B (zh) 用于机器人环境感知的多传感器融合地图构建方法及***
CN112802096A (zh) 实时定位和建图的实现装置和方法
CN115272596A (zh) 一种面向单调无纹理大场景的多传感器融合slam方法
CN111998862A (zh) 一种基于bnn的稠密双目slam方法
Zhang LILO: A novel LiDAR–IMU SLAM system with loop optimization
CN115574816A (zh) 仿生视觉多源信息智能感知无人平台
CN116429116A (zh) 一种机器人定位方法及设备
CN118010024A (zh) 基于点线面解耦6-DoF位姿估计优化方法及***
Song et al. Bundledslam: An accurate visual slam system using multiple cameras
CN112731503A (zh) 一种基于前端紧耦合的位姿估计方法及***
Hu et al. Efficient Visual-Inertial navigation with point-plane map
Sun et al. A multisensor-based tightly coupled integrated navigation system
CN115930948A (zh) 一种果园机器人融合定位方法
WO2020019116A1 (zh) 多源数据建图方法、相关装置及计算机可读存储介质
CN112767482B (zh) 一种多传感器融合的室内外定位方法及***
Wang et al. SW-LIO: A Sliding Window Based Tightly Coupled LiDAR-Inertial Odometry
CN114842224A (zh) 一种基于地理底图的单目无人机绝对视觉匹配定位方案

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20936432

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020936432

Country of ref document: EP

Effective date: 20221219