CN114429432A - Multi-source information layered fusion method and device and storage medium - Google Patents

Multi-source information layered fusion method and device and storage medium

Info

Publication number
CN114429432A
CN114429432A (application CN202210356985.6A)
Authority
CN
China
Prior art keywords
point cloud
cloud data
source information
factor
monocular image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210356985.6A
Other languages
Chinese (zh)
Other versions
CN114429432B (en)
Inventor
张超
张波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Innotitan Intelligent Equipment Technology Tianjin Co Ltd
Original Assignee
Innotitan Intelligent Equipment Technology Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innotitan Intelligent Equipment Technology Tianjin Co Ltd
Priority to CN202210356985.6A
Publication of CN114429432A
Application granted
Publication of CN114429432B
Legal status: Active

Classifications

    • G06T 5/70 Denoising; smoothing (image enhancement or restoration)
    • G01C 21/3841 Electronic maps for navigation; creation or updating of map data from two or more sources, e.g. probe vehicles
    • G01S 17/86 Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/89 Lidar systems specially adapted for mapping or imaging
    • G06F 18/25 Pattern recognition; fusion techniques
    • G06T 3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. tracking of corners or segments
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 2207/30241 Indexing scheme for image analysis; trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The application relates to a multi-source information layered fusion method, a device and a storage medium. The method comprises: acquiring point cloud data, monocular image data and pose information; preprocessing the acquired point cloud data to remove noise points and motion distortion; denoising the acquired monocular image data and extracting its SIFT feature points; associating depth values of the point cloud data with the monocular image; fusing the depth-associated monocular image with the preprocessed point cloud data to obtain a local odometry estimate and converting it into an odometry factor; pre-integrating the pose information to obtain a pre-integration factor; and optimizing the odometry factor, the pre-integration factor and a loop detection factor with a factor graph model to obtain the motion trajectory and map of the unmanned mobile platform. The method improves the utilization of multi-source information and reduces accumulated error.

Description

Multi-source information layered fusion method and device and storage medium
Technical Field
The application relates to the technical field of robots, in particular to a multi-source information layered fusion method.
Background
Simultaneous Localization and Mapping (SLAM) is one of the key problems that a mobile robot needs to solve, and several decades of development have produced numerous achievements. A mobile robot is typically equipped with various sensors; the collected data are processed to obtain the robot's position and attitude as well as scene map information.
The most common SLAM approaches at present are laser SLAM and visual SLAM algorithms. As scene tasks become increasingly complex, a single sensor is often unable to meet the requirements, so multi-sensor fusion algorithms are necessary. How to make full use of the information from multiple sensors remains one of the current difficulties.
Disclosure of Invention
In order to solve, or at least partially solve, the technical problems mentioned in the background, the present application provides a multi-source information layered fusion method, apparatus and storage medium, so as to improve the utilization of multi-source information and reduce accumulated error.
In a first aspect, the application provides a multi-source information layered fusion method applied to an unmanned mobile platform, wherein the unmanned mobile platform is provided with a monocular camera, a point cloud scanning device and an inertial measurement unit, and the method comprises the following steps:
acquiring point cloud data obtained by the point cloud scanning device, monocular image data obtained by the monocular camera and pose information of the unmanned mobile platform obtained by the inertial measurement unit, wherein the pose information comprises acceleration and angular velocity;
preprocessing the acquired point cloud data to remove noise points in the point cloud data and remove motion distortion;
denoising the acquired monocular image data and extracting SIFT feature points of the monocular image data;
associating depth values of the point cloud data with the monocular image;
fusing the depth-associated monocular image with the preprocessed point cloud data to obtain a local odometry estimate, and converting the local odometry estimate into an odometry factor;
performing pre-integration processing on the pose information to obtain a pre-integration factor;
and optimizing the odometry factor, the pre-integration factor and the loop detection factor by using a factor graph model to obtain a motion trajectory and a map of the unmanned mobile platform.
In this scheme, loop detection is used to judge whether the carrier has returned to a position it has already passed through; its purpose is to add historical-frame constraints and increase data richness. During carrier operation, scan frames are continuously stored to form a set of historical frames, and the current frame is matched against all laser frames in the historical set (i.e., physical distance residuals are calculated between all laser points of the current frame and all laser points of a historical frame, converted into a least-squares problem, and the minimum residual sum is obtained through iterative solution).
In this scheme, a preset loop detection algorithm may be adopted to perform loop detection and obtain the loop detection factor; the formation and calculation of the loop detection factor belong to the prior art and are not described here.
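For illustration only, the following Python sketch checks the current scan against each stored historical frame with point-to-point ICP using Open3D; the 1.0 m correspondence threshold and the fitness test are assumed values, not taken from the patent:

```python
import numpy as np
import open3d as o3d

def loop_closure_check(current, history, threshold=1.0, min_fitness=0.8):
    """Match the current frame against every historical frame and return the
    best alignment (index, 4x4 transform, rmse) if it is good enough, else None.
    `current` and the items of `history` are open3d.geometry.PointCloud objects."""
    best = None
    for idx, past in enumerate(history):
        reg = o3d.pipelines.registration.registration_icp(
            current, past, threshold, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # Keep the candidate with the smallest residual that overlaps enough.
        if reg.fitness >= min_fitness and (best is None or reg.inlier_rmse < best[2]):
            best = (idx, reg.transformation, reg.inlier_rmse)
    return best
```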
In this scheme, the poses of the unmanned mobile platform (e.g., a mobile robot) are constructed as variable nodes; loop detection factors, odometry factors and pre-integration factors are added to the factor graph for joint optimization, forming factor nodes between the related variable nodes. A globally consistent pose trajectory is obtained, and a globally consistent map is obtained after map stitching.
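The patent does not name an optimization backend; as one possible illustration, the sketch below builds such a factor graph with GTSAM, with placeholder poses and noise values (the odometry/pre-integration increments and the single loop closure are dummies):

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-4))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))   # odometry / pre-integration factors
loop_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))   # loop detection factors

# Variable nodes: one pose per keyframe, anchored by a prior on the first pose.
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))
initial.insert(0, gtsam.Pose3())

relative_poses = [gtsam.Pose3()] * 5          # placeholder relative-motion increments
for k, rel in enumerate(relative_poses):
    graph.add(gtsam.BetweenFactorPose3(k, k + 1, rel, odom_noise))
    initial.insert(k + 1, gtsam.Pose3())

# A loop-detection factor between the last keyframe and keyframe 0 (placeholder).
graph.add(gtsam.BetweenFactorPose3(5, 0, gtsam.Pose3(), loop_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(5))   # globally consistent pose of the last keyframe
```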
Preferably, the multi-source information layered fusion method further includes:
determining the pose information at a plurality of moments that exist between adjacent point cloud data, and extracting key frames from the point cloud data, wherein the key frames are point cloud data at different moments;
and pre-integrating the pose information between a first moment and a second moment corresponding to any two key frames to obtain the relative variation of the first moment with respect to the second moment.
Preferably, associating depth values of the point cloud data with the monocular image specifically includes:
performing data alignment according to the timestamps of the point cloud frame and the visual frame; projecting the SIFT feature points and the point cloud points onto a unit sphere centered on the monocular camera; down-sampling the points on the unit sphere and storing them in polar coordinates; and determining, using the polar coordinates, the three point cloud points closest to a SIFT feature point, wherein the length of the line connecting the center of the unit sphere and the SIFT feature point is the depth value of that SIFT feature point.
Preferably, the multi-source information hierarchical fusion method further includes: and constructing a factor graph model.
Preferably, a radius filter is used to remove noise from the point cloud data, and a linear interpolation method is used to remove motion distortion.
Preferably, the monocular image with associated depth values and the preprocessed point cloud data are fused by using an extended Kalman filter algorithm.
Preferably, a median filtering algorithm is used to remove noise from the monocular image.
Preferably, the point cloud scanning device is a laser radar, and the inertial measurement unit is an inertial sensor.
In a second aspect, the present application provides a multi-source information layered fusion apparatus, including:
a memory for storing program instructions;
a processor, configured to invoke the program instructions stored in the memory to implement the multi-source information layered fusion method according to any technical solution of the first aspect.
In a third aspect, the present application provides a computer-readable storage medium, wherein the computer-readable storage medium stores program code for implementing the multi-source information layered fusion method according to any one of the technical solutions of the first aspect.
Compared with the prior art, the technical solution provided by the embodiments of the present application has the following advantages: the multi-source information layered fusion method adopts a layered approach to fuse, in stages, the point cloud data obtained by the point cloud scanning device, the monocular image data obtained by the monocular camera and the pose information of the unmanned mobile platform obtained by the inertial measurement unit:
the depth values of the point cloud data are first associated with the monocular image data (primary fusion); the depth-associated monocular image and the point cloud data are then fused a second time through an extended Kalman filter algorithm; and the final fusion is performed through a factor graph model.
The data preliminarily fused through depth-value association are thus optimized twice in succession, by the extended Kalman filter algorithm and by the factor graph model; multi-source information can be fully utilized, accumulated error is reduced, and a globally consistent trajectory and map of better quality are established.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that, for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a multi-source information layered fusion method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram illustrating the principle of depth association between SIFT feature points and point cloud data;
Fig. 3 is a schematic diagram of the multi-source information layered fusion method;
Fig. 4 is a schematic diagram illustrating the principle of key frame extraction from the lidar point cloud data.
Reference numerals:
100. unit sphere; 1. center point; 2. first hollow circle; 3. first solid point; 20. second hollow circle; 30. second solid point.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For ease of understanding, the multi-source information layered fusion method provided in the embodiments of the present application is described in detail below. Referring to Fig. 1, the multi-source information layered fusion method is applied to an unmanned mobile platform provided with a monocular camera, a point cloud scanning device and an inertial measurement unit, and comprises the following steps:
step S1, point cloud data obtained by a point cloud scanning device, monocular image data obtained by a monocular camera and pose information of the unmanned mobile platform obtained by an inertial measurement unit are obtained, wherein the pose information comprises acceleration and angular velocity;
step S2, preprocessing the acquired point cloud data to remove noise points in the point cloud data and remove motion distortion;
step S3, carrying out denoising processing on the acquired monocular image data and extracting SIFT feature points of the monocular image data;
step S4, associating depth values of the point cloud data with the monocular image;
step S5, fusing the depth-associated monocular image with the preprocessed point cloud data to obtain a local odometry estimate, and converting the local odometry estimate into an odometry factor;
step S6, pre-integration processing is carried out on the pose information to obtain a pre-integration factor;
and step S7, optimizing the odometry factor, the pre-integration factor and the loop detection factor by using the factor graph model to obtain the motion trajectory and map of the unmanned mobile platform.
In some embodiments of the present application, the unmanned mobile platform is typically a mobile robot. A monocular camera for image acquisition, a point cloud scanning device (such as a lidar scanner) for acquiring point clouds, and an Inertial Measurement Unit (IMU) for measuring the pose of the platform are mounted on the unmanned mobile platform.
A radius filter is adopted to remove noise from the point cloud data. When laser points are aligned, the laser points of each frame are assumed to be collected in a static state; in a real environment, however, the carrier is usually moving, so motion distortion (here, translational distortion) occurs: the reference coordinate systems of the first and last laser points differ, which causes an alignment error. The de-distorted value of each laser point is the position of the first laser point plus the product of the displacement change between adjacent laser points and the laser point index. The position of each laser point after motion distortion removal is given by formula (1):
t_i = t_start + i * (t_end - t_start) / n    (1)

where i denotes the index of the current laser point counted from the first laser point, t_i denotes the position of the i-th laser point in the world coordinate system after motion distortion removal, t_end denotes the position of the last laser point in the world coordinate system, t_start denotes the position of the first laser point in the world coordinate system, n denotes the total number of laser points between the first and last laser points, and (t_end - t_start)/n is the displacement variation between adjacent laser points within one laser frame.
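For illustration, a minimal Python sketch of this linear-interpolation de-skew, assuming purely translational distortion and laser points ordered by acquisition time (all names are placeholders; a radius-outlier filter, for example Open3D's remove_radius_outlier, would be applied beforehand):

```python
import numpy as np

def deskew_frame(points, t_start, t_end):
    """Remove translational motion distortion from one lidar frame.
    points  : (n, 3) laser points, ordered by acquisition time.
    t_start : (3,) carrier position (world frame) at the first laser point.
    t_end   : (3,) carrier position (world frame) at the last laser point.
    """
    n = points.shape[0]
    step = (t_end - t_start) / n                      # displacement between adjacent points
    # Formula (1): carrier position at the i-th laser point.
    t_i = t_start + np.arange(n)[:, None] * step
    # Shift every point back into the reference frame of the first point.
    return points - (t_i - t_start)
```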
A median filtering algorithm is used to remove noise from the monocular camera's image data, and SIFT feature points are extracted.
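A minimal OpenCV sketch of this image pre-processing step (the file name and the 5x5 kernel size are illustrative assumptions):

```python
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)           # placeholder image path
denoised = cv2.medianBlur(img, 5)                              # median filter, 5x5 kernel
sift = cv2.SIFT_create()                                       # SIFT detector/descriptor
keypoints, descriptors = sift.detectAndCompute(denoised, None)
```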
Associating depth values of the point cloud data with the monocular image data may include the following specific steps: the adjacent data are aligned according to the timestamps of the laser frame and the visual frame, and the SIFT feature points and the lidar point cloud points are projected onto a unit sphere 100 centered on the monocular camera. Referring to Fig. 2, the black point at the sphere center is the center point 1 of the unit sphere; the first hollow circle 2 on the sphere represents a SIFT feature point, and the black first solid points 3 around the first hollow circle 2 represent the lidar point cloud points of the frame corresponding to that SIFT feature point. The real points in Cartesian space corresponding to the first hollow circle 2 and the surrounding first solid points 3 are denoted as the second hollow circle 20 and the second solid points 30 around it, respectively. The points on the unit sphere 100 are down-sampled and stored in polar coordinates, which keeps the density of the sampled points constant; the three point cloud points nearest to a SIFT feature point are found with a two-dimensional KD-tree over the polar coordinates, and the depth of the SIFT feature point is then calculated with respect to the camera center, i.e., the length of the line connecting the sphere center point 1 and the SIFT feature point. A schematic diagram of the depth association between SIFT feature points (also referred to as visual feature points) and the lidar point cloud data is shown in Fig. 2.
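A rough Python sketch of this association, assuming the visual feature directions and lidar points are already expressed in the camera frame; the final depth recovery from the three neighbours (here simply their mean range) is an illustrative assumption, whereas the patent defines the depth as the length of the line from the sphere center to the feature point:

```python
import numpy as np
from scipy.spatial import cKDTree

def to_polar_on_unit_sphere(pts_cam):
    """Project camera-frame 3-D points onto the unit sphere; return (azimuth,
    elevation) polar coordinates and the original ranges."""
    rng = np.linalg.norm(pts_cam, axis=1)
    unit = pts_cam / rng[:, None]
    az = np.arctan2(unit[:, 1], unit[:, 0])
    el = np.arcsin(np.clip(unit[:, 2], -1.0, 1.0))
    return np.stack([az, el], axis=1), rng

def associate_depth(feature_dirs_cam, lidar_pts_cam):
    """For each SIFT feature direction, find the 3 nearest lidar points on the
    unit sphere with a 2-D KD-tree and estimate the feature depth."""
    feat_polar, _ = to_polar_on_unit_sphere(feature_dirs_cam)
    lidar_polar, lidar_rng = to_polar_on_unit_sphere(lidar_pts_cam)
    tree = cKDTree(lidar_polar)              # 2-D KD-tree in polar coordinates
    _, idx = tree.query(feat_polar, k=3)     # three closest point cloud points
    return lidar_rng[idx].mean(axis=1)       # assumed depth estimate per feature
```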
The depth-associated monocular image data and the laser point cloud are fused with an extended Kalman filter algorithm: the laser point cloud data serve as the state information, the monocular camera's image data serve as the observation information, and the weights are adjusted through continuous iteration to obtain a local odometry estimate, which is then converted into an odometry factor.
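As a generic illustration of this secondary fusion (the state, models and noise below are placeholders, not the patent's exact formulation), one extended-Kalman-filter predict/update step looks like:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, Q, h, H, R):
    """One EKF iteration: predict with the lidar-derived increment u,
    then update with the visual observation z.
    f, h are the (nonlinear) motion and observation models; F, H their Jacobians."""
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    y = z - h(x_pred)                              # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)            # Kalman gain (the adjusted "weight")
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```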
Referring to Fig. 3, which shows a schematic diagram of the multi-source information layered fusion method, the method uses a layered approach to fuse, in stages, the point cloud data obtained by the point cloud scanning device, the monocular image data obtained by the monocular camera and the pose information of the unmanned mobile platform obtained by the inertial measurement unit:
the depth values of the point cloud data are first associated with the monocular image data (primary fusion); the depth-associated monocular image and the point cloud data are then fused a second time through an extended Kalman filter algorithm; and the final fusion is performed through a factor graph model.
The data preliminarily fused through depth-value association are thus optimized twice in succession, by the extended Kalman filter algorithm and by the factor graph model; multi-source information can be fully utilized, accumulated error is reduced, and a globally consistent trajectory and map of better quality are established.
In order to reduce the computation of redundant information, in some embodiments of the present application, the multi-source information layered fusion method further comprises:
determining the pose information at a plurality of moments that exist between adjacent point cloud data, and extracting key frames from the point cloud data, wherein the key frames are point cloud data at different moments;
and pre-integrating the pose information between a first moment and a second moment corresponding to any two key frames to obtain the relative variation of the first moment with respect to the second moment.
The pose information is pre-integrated to form a pre-integration factor. In multi-sensor fusion SLAM algorithms, a common way of handling the IMU is pre-integration. Suppose a segment of sensor information exists between the i-th moment and the j-th moment, and the IMU is updated every Δt. Because the IMU operates at a relatively high frequency, IMU information from multiple moments exists between adjacent lidar point cloud frames. To reduce the computation of redundant information, referring to Fig. 4, key frames are extracted from the lidar point cloud data; suppose the key frames are the point cloud data at the i-th moment and the j-th moment, respectively. To use the pose information within this interval effectively, the pose information between the i-th moment and the j-th moment is pre-integrated to obtain the relative variation of the j-th moment with respect to the i-th moment. In this way, integral constraint information between the laser point clouds over the relative time interval is obtained, and the time cost of the iterative computation is reduced.
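A simplified Python sketch of IMU pre-integration between keyframes i and j, assuming bias-free measurements, small rotation steps, and omitting gravity compensation for brevity (all names are illustrative):

```python
import numpy as np

def preintegrate(accels, gyros, dt):
    """accels, gyros : (m, 3) IMU samples collected between keyframe i and keyframe j.
    dt : IMU sampling period.
    Returns the relative rotation, velocity change and position change,
    expressed in the frame of keyframe i."""
    R = np.eye(3)                 # accumulated relative rotation
    v = np.zeros(3)               # accumulated velocity change
    p = np.zeros(3)               # accumulated position change
    for a, w in zip(accels, gyros):
        p = p + v * dt + 0.5 * (R @ a) * dt**2
        v = v + (R @ a) * dt
        # First-order rotation update with the skew-symmetric matrix of w.
        wx = np.array([[0.0, -w[2], w[1]],
                       [w[2], 0.0, -w[0]],
                       [-w[1], w[0], 0.0]])
        R = R @ (np.eye(3) + wx * dt)
    return R, v, p
```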
In some embodiments of the present application, associating depth values of the point cloud data with the monocular image specifically comprises:
performing data alignment according to the timestamps of the point cloud frame and the visual frame; projecting the SIFT feature points and the point cloud points onto a unit sphere centered on the monocular camera; down-sampling the points on the unit sphere and storing them in polar coordinates; and determining, using the polar coordinates, the three point cloud points closest to a SIFT feature point, wherein the length of the line connecting the center of the unit sphere and the SIFT feature point is the depth value of that SIFT feature point.
In some embodiments of the present application, the multi-source information layered fusion method further comprises: constructing the factor graph model.
In some embodiments of the present application, a radius filter is used to perform noise removal on the point cloud data, and a linear interpolation method is used to remove motion distortion.
In some embodiments of the present application, the monocular image with associated depth values and the preprocessed point cloud data are fused by using an extended Kalman filter algorithm.
In some embodiments of the present application, a median filtering algorithm is used to remove noise from the monocular image.
In some embodiments of the present application, the point cloud scanning device is a laser radar and the inertial measurement unit is an inertial sensor.
In some embodiments of the present application, there is further provided a multi-source information layered fusion apparatus, comprising:
a memory for storing program instructions;
and a processor, configured to invoke the program instructions stored in the memory to implement the multi-source information layered fusion method according to any one of the above embodiments.
In some embodiments of the present application, there is further provided a computer-readable storage medium storing program code for implementing the multi-source information layered fusion method according to any one of the above embodiments.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A multi-source information layered fusion method applied to an unmanned mobile platform, the unmanned mobile platform being provided with a monocular camera, a point cloud scanning device and an inertial measurement unit, characterized by comprising the following steps:
acquiring point cloud data obtained by the point cloud scanning device, monocular image data obtained by the monocular camera and pose information of the unmanned mobile platform obtained by the inertial measurement unit, wherein the pose information comprises acceleration and angular velocity;
preprocessing the acquired point cloud data to remove noise points in the point cloud data and remove motion distortion;
denoising the acquired monocular image data and extracting SIFT feature points of the monocular image data;
associating depth values of the point cloud data with the monocular image;
fusing the depth-associated monocular image with the preprocessed point cloud data to obtain a local odometry estimate, and converting the local odometry estimate into an odometry factor;
performing pre-integration processing on the pose information to obtain a pre-integration factor;
and optimizing the odometry factor, the pre-integration factor and the loop detection factor by using a factor graph model to obtain a motion trajectory and a map of the unmanned mobile platform.
2. The multi-source information layered fusion method according to claim 1, further comprising:
determining the pose information at a plurality of moments that exist between adjacent point cloud data, and extracting key frames from the point cloud data, wherein the key frames are point cloud data at different moments;
and pre-integrating the pose information between a first moment and a second moment corresponding to any two key frames to obtain the relative variation of the first moment with respect to the second moment.
3. The multi-source information layered fusion method according to claim 1, characterized in that associating depth values of the point cloud data with the monocular image specifically comprises:
performing data alignment according to the timestamps of the point cloud frame and the visual frame; projecting the SIFT feature points and the point cloud points onto a unit sphere centered on the monocular camera; down-sampling the points on the unit sphere and storing them in polar coordinates; and determining, using the polar coordinates, the three point cloud points closest to a SIFT feature point, wherein the length of the line connecting the center of the unit sphere and the SIFT feature point is the depth value of that SIFT feature point.
4. The multi-source information layered fusion method according to claim 1, further comprising: constructing a factor graph model.
5. The multi-source information layered fusion method according to claim 1, characterized in that a radius filter is used to remove noise from the point cloud data, and a linear interpolation method is used to remove motion distortion.
6. The multi-source information layered fusion method according to claim 1, characterized in that the monocular image with associated depth values and the preprocessed point cloud data are fused by using an extended Kalman filter algorithm.
7. The multi-source information layered fusion method according to claim 1, characterized in that a median filtering algorithm is used to remove noise from the monocular image.
8. The multi-source information layered fusion method according to any one of claims 1 to 7, wherein the point cloud scanning device is a laser radar, and the inertial measurement unit is an inertial sensor.
9. A multi-source information layered fusion device, comprising:
a memory for storing program instructions;
a processor for invoking the program instructions stored in the memory to implement the multi-source information layered fusion method of any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores program code for implementing the multi-source information layered fusion method according to any one of claims 1 to 8.
CN202210356985.6A 2022-04-07 2022-04-07 Multi-source information layered fusion method and device and storage medium Active CN114429432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210356985.6A CN114429432B (en) 2022-04-07 2022-04-07 Multi-source information layered fusion method and device and storage medium


Publications (2)

Publication Number Publication Date
CN114429432A 2022-05-03
CN114429432B CN114429432B (en) 2022-06-21

Family

ID=81314305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210356985.6A Active CN114429432B (en) 2022-04-07 2022-04-07 Multi-source information layered fusion method and device and storage medium

Country Status (1)

Country Link
CN (1) CN114429432B (en)


Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056664A (en) * 2016-05-23 2016-10-26 武汉盈力科技有限公司 Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision
CN108052103A (en) * 2017-12-13 2018-05-18 中国矿业大学 The crusing robot underground space based on depth inertia odometer positions simultaneously and map constructing method
CN109099901A (en) * 2018-06-26 2018-12-28 苏州路特工智能科技有限公司 Full-automatic road roller localization method based on multisource data fusion
CN109934871A (en) * 2019-02-18 2019-06-25 武汉大学 A kind of system and method for the Intelligent unattended machine crawl target towards high-risk environment
CN111665826A (en) * 2019-03-06 2020-09-15 北京奇虎科技有限公司 Depth map acquisition method based on laser radar and monocular camera and sweeping robot
CN110084272A (en) * 2019-03-26 2019-08-02 哈尔滨工业大学(深圳) A kind of cluster map creating method and based on cluster map and the matched method for relocating of location expression
CN110297491A (en) * 2019-07-02 2019-10-01 湖南海森格诺信息技术有限公司 Semantic navigation method and its system based on multiple structured light binocular IR cameras
CN111045017A (en) * 2019-12-20 2020-04-21 成都理工大学 Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111798505A (en) * 2020-05-27 2020-10-20 大连理工大学 Monocular vision-based dense point cloud reconstruction method and system for triangularized measurement depth
CN111812649A (en) * 2020-07-15 2020-10-23 西北工业大学 Obstacle identification and positioning method based on fusion of monocular camera and millimeter wave radar
CN112347840A (en) * 2020-08-25 2021-02-09 天津大学 Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
CN112435325A (en) * 2020-09-29 2021-03-02 北京航空航天大学 VI-SLAM and depth estimation network-based unmanned aerial vehicle scene density reconstruction method
CN113379910A (en) * 2021-06-09 2021-09-10 山东大学 Mobile robot mine scene reconstruction method and system based on SLAM
CN113837277A (en) * 2021-09-24 2021-12-24 东南大学 Multisource fusion SLAM system based on visual point-line feature optimization
CN114013449A (en) * 2021-11-02 2022-02-08 阿波罗智能技术(北京)有限公司 Data processing method and device for automatic driving vehicle and automatic driving vehicle
CN114120075A (en) * 2021-11-25 2022-03-01 武汉市众向科技有限公司 Three-dimensional target detection method integrating monocular camera and laser radar

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115218804A (en) * 2022-07-13 2022-10-21 长春理工大学中山研究院 Fusion measurement method for multi-source system of large-scale component
CN115655264A (en) * 2022-09-23 2023-01-31 智己汽车科技有限公司 Pose estimation method and device

Also Published As

Publication number Publication date
CN114429432B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
CN112014857B (en) Three-dimensional laser radar positioning and navigation method for intelligent inspection and inspection robot
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN112197770B (en) Robot positioning method and positioning device thereof
CN114429432B (en) Multi-source information layered fusion method and device and storage medium
CN112219087A (en) Pose prediction method, map construction method, movable platform and storage medium
CN112734852B (en) Robot mapping method and device and computing equipment
CN109974712A (en) It is a kind of that drawing method is built based on the Intelligent Mobile Robot for scheming optimization
JP5987823B2 (en) Method and system for fusing data originating from image sensors and motion or position sensors
CN113781582A (en) Synchronous positioning and map creating method based on laser radar and inertial navigation combined calibration
CN111260751B (en) Mapping method based on multi-sensor mobile robot
CN114623817B (en) Self-calibration-contained visual inertial odometer method based on key frame sliding window filtering
CN113674412B (en) Pose fusion optimization-based indoor map construction method, system and storage medium
CN112652001B (en) Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering
CN115479598A (en) Positioning and mapping method based on multi-sensor fusion and tight coupling system
CN115272596A (en) Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene
CN115880364A (en) Robot pose estimation method based on laser point cloud and visual SLAM
CN114255323A (en) Robot, map construction method, map construction device and readable storage medium
CN112444246A (en) Laser fusion positioning method in high-precision digital twin scene
CN116295412A (en) Depth camera-based indoor mobile robot dense map building and autonomous navigation integrated method
CN114648584B (en) Robustness control method and system for multi-source fusion positioning
CN117824667B (en) Fusion positioning method and medium based on two-dimensional code and laser
CN113744308B (en) Pose optimization method, pose optimization device, electronic equipment, medium and program product
CN112731503A (en) Pose estimation method and system based on front-end tight coupling
CN116698014A (en) Map fusion and splicing method based on multi-robot laser SLAM and visual SLAM
CN116380039A (en) Mobile robot navigation system based on solid-state laser radar and point cloud map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant