CN111595333A - Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion - Google Patents
- Publication number
- CN111595333A CN111595333A CN202010340571.5A CN202010340571A CN111595333A CN 111595333 A CN111595333 A CN 111595333A CN 202010340571 A CN202010340571 A CN 202010340571A CN 111595333 A CN111595333 A CN 111595333A
- Authority
- CN
- China
- Prior art keywords
- pose
- imu
- camera
- laser
- coordinate system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01C21/165: Navigation by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
- G01C11/04: Photogrammetry or videogrammetry; interpretation of pictures
- G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
- G06F18/25: Pattern recognition; fusion techniques
- G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
- G06V10/443: Local feature extraction by matching or filtering
- G06T2207/10016: Image acquisition modality: video; image sequence
- G06T2207/10028: Image acquisition modality: range image; depth image; 3D point clouds
- Y02T10/40: Engine management systems
Abstract
The invention discloses a modular unmanned-vehicle positioning method and system based on visual-inertial-laser data fusion. The method comprises the following steps: 1) an acquisition module collects the current information of the unmanned vehicle through a monocular camera, a lidar and an IMU; 2) from the information collected in step 1), a pose estimation module estimates the vehicle pose through a monocular visual odometer and an IMU pre-integration model, obtaining the pose information of the unmanned vehicle; 3) a pose optimization module establishes a multi-sensor fusion optimization model from the pose estimate of step 2), adjusts the optimization weight of each sensor through a dynamic weight-coefficient adjustment module to improve environmental adaptability, solves for the optimal vehicle pose, and transforms it into the world coordinate system to obtain the real-time vehicle pose. The method meets the accuracy and robustness requirements of unmanned-vehicle positioning in complex environments and is suitable for positioning unmanned vehicles in such environments.
Description
Technical Field
The invention relates to intelligent vehicle positioning technology, and in particular to a modular unmanned-vehicle positioning method and system based on visual-inertial-laser data fusion.
Background
With the rapid development of the economy and of science and technology, travel demand keeps growing, and the automobile has become an indispensable means of transportation in daily life. To meet increasingly complex and changeable future demands, the traditional automotive industry has, after years of development, turned to intensive research on autonomous driving. Domestic companies such as Baidu, Huawei and Yuji, foreign technology companies such as Google and Tesla, and chip manufacturers such as NVIDIA have all joined the research and development of autonomous-driving technology, and many universities in China are likewise devoting attention to it. Autonomous driving will become a primary direction of the automotive industry.
Positioning based on multiple sensors is a key technology in autonomous driving: the precise position of the vehicle in its environment is computed from sensor measurements of the vehicle and its surroundings. Schemes built on one or two sensors, such as IMU dead reckoning, visual-inertial odometry or laser-inertial odometry, struggle to localize accurately in complex conditions (high-speed motion, low-light environments, etc.) owing to the limitations of the individual sensors, so their robustness and accuracy cannot meet application requirements; moreover, how to reasonably exploit the advantages of each sensor in a multi-sensor fusion scheme remains a challenge. Against this background, a modular unmanned-vehicle positioning method and system based on visual-inertial-laser data fusion is proposed herein.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a modular unmanned-vehicle positioning method and system based on visual-inertial-laser data fusion that overcomes the defects in the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows: a modularized unmanned vehicle positioning method based on visual inertial laser data fusion comprises the following steps:
1) acquiring current information of the unmanned vehicle, including image information acquired by a monocular camera, three-dimensional point cloud information acquired by a laser radar and acceleration and angular velocity data acquired by an IMU;
2) estimating the pose of the automobile according to the current information of the unmanned automobile acquired in the step 1) to obtain the pose information of the unmanned automobile;
3) establishing a multi-sensor fusion optimization model from the estimated pose information, solving the model for the optimal vehicle pose, and transforming the optimal pose into the world coordinate system to obtain the real-time vehicle pose; the optimization model is built from an error function comprising the IMU measurement error, the laser measurement error and the camera measurement error of the respective sensor models, which are combined to form the multi-sensor fusion optimization model.
According to the scheme, the pose information of the unmanned vehicle is obtained in step 2) as follows:
calculating the inter-frame motion (x_v, y_v, z_v, α_v, β_v, γ_v) of the monocular camera from the essential matrix under epipolar geometric constraint (the six quantities denote the translations along and rotations about the x, y and z axes of a Euclidean coordinate system, and likewise below); screening and marking keyframes of the monocular images with a landmark-comparison algorithm; pre-integrating the IMU measurements; then jointly initializing the IMU and the monocular camera to recover the scale of the monocular camera and estimate the system velocity, the gyroscope zero bias and the gravity direction, yielding the scale-recovered pose transformation (x_vi, y_vi, z_vi, α_vi, β_vi, γ_vi), i.e. the pose information of the unmanned vehicle.
According to the scheme, the pose information of the unmanned vehicle in step 2) is obtained by the following specific steps:
2.1) calculating the inter-frame motion (x_v, y_v, z_v, α_v, β_v, γ_v) of the monocular camera from the essential matrix under epipolar geometric constraint:
extracting the ORB feature points of the images with the ORB algorithm, matching the feature points of two adjacent image frames I and I+1 with the FLANN algorithm, estimating the essential matrix of the two images under the epipolar geometric constraint with the RANSAC algorithm, performing singular value decomposition on the essential matrix to obtain the rotation matrix R and translation vector t between the two frames, and recovering the inter-frame relative pose (x_v, y_v, z_v, α_v, β_v, γ_v);
2.2) screening key frames in the image by using a common-view landmark algorithm, and marking;
2.3) using the IMU measurement at time i, comprising â_i, the acceleration in the IMU coordinate system at time i, and ω̂_i, the angular velocity in the IMU coordinate system at time i, establishing the pre-integration model of the IMU, whose terms are α_ij, the translation pre-integral from time i to time j; β_ij, the velocity pre-integral from time i to time j; and γ_ij, the rotation pre-integral from time i to time j. The unknown scale of the rotation matrix R and translation vector t of the monocular image is recovered through the rotation matrix and translation vector obtained by IMU pre-integration, giving a scale-recovered estimated pose, which is transformed into the world coordinate system as (x_vi, y_vi, z_vi, α_vi, β_vi, γ_vi).
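The pre-integration bookkeeping described above can be sketched as follows; this is a simplified planar (2-D) Euler-integration illustration, not the patent's full quaternion-based model, and the function name and discretization are assumptions:

```python
import math

def preintegrate(accels, gyros, dt):
    """Accumulate IMU pre-integration terms between keyframes i and j
    (planar sketch: body-frame accelerations in m/s^2, yaw rates in rad/s).
    Returns alpha (translation pre-integral), beta (velocity pre-integral)
    and gamma (rotation pre-integral, as a yaw angle)."""
    alpha = [0.0, 0.0]   # translation pre-integral
    beta = [0.0, 0.0]    # velocity pre-integral
    gamma = 0.0          # rotation pre-integral (yaw)
    for a_body, w in zip(accels, gyros):
        c, s = math.cos(gamma), math.sin(gamma)
        # rotate body-frame acceleration into the frame of keyframe i
        a_i = (c * a_body[0] - s * a_body[1], s * a_body[0] + c * a_body[1])
        alpha[0] += beta[0] * dt + 0.5 * a_i[0] * dt * dt
        alpha[1] += beta[1] * dt + 0.5 * a_i[1] * dt * dt
        beta[0] += a_i[0] * dt
        beta[1] += a_i[1] * dt
        gamma += w * dt
    return alpha, beta, gamma
```

For a constant body-frame acceleration of 1 m/s² over one second with no rotation, the velocity pre-integral accumulates to 1 m/s and the translation pre-integral to 0.5 m.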
According to the scheme, the step 2.2) utilizes a common-view landmark algorithm to screen the key frames in the image, and specifically comprises the following steps:
storing the A common-view matched feature points of the prior keyframes as the landmark point set; acquiring the newest image I transmitted by the monocular camera and detecting B feature points in I, of which W match landmark points; and judging whether the following condition holds: W/A < σ_WA and W/B < σ_WB,
where σ_WA and σ_WB are preset thresholds; if the condition is met, I is recorded as a new keyframe and the oldest of the prior keyframes is deleted.
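A minimal sketch of this common-view keyframe test; the direction of the threshold comparison, the threshold values and the names are assumptions for illustration:

```python
def is_new_keyframe(landmarks, frame_features, matched, sigma_wa=0.4, sigma_wb=0.3):
    """Common-view landmark keyframe test (sketch).
    landmarks:      the A landmark points stored from prior keyframes
    frame_features: the B feature points detected in the newest image I
    matched:        the W features of I matched against the landmark set
    The frame is promoted to a keyframe when its overlap with the prior
    landmark set drops below the preset ratio thresholds."""
    A, B, W = len(landmarks), len(frame_features), matched
    return (W / A) < sigma_wa and (W / B) < sigma_wb
```

When a frame is promoted, the oldest prior keyframe would be dropped from the sliding set, as the text describes.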
According to the scheme, the error function of the optimization model in step 3) comprises three terms:
the IMU measurement error, built from α̂, the IMU position recovered through the estimated relative motion; β̂, the IMU velocity recovered through the estimated relative motion; γ̂, the IMU rotation recovered through the estimated relative motion; and the estimated accelerometer and gyroscope zero biases;
the camera measurement error, which is the difference between the normalized predicted coordinate of landmark point l in the camera coordinate system and the coordinate of landmark l observed by the camera on the normalized plane;
the laser measurement error, built from the rotation matrix and translation vector between the camera and the lidar and from the matched points of two laser point-cloud frames: a matched coordinate is transformed from camera coordinate system C_i into camera coordinate system C_j with the inter-frame rotation matrix and translation vector and compared with its corresponding coordinate in C_j.
According to the scheme, the optimal vehicle pose is obtained from the optimization model of step 3) by formulating the pose optimization as a least-squares problem: a residual function is constructed and iteratively minimized with the Levenberg-Marquardt (L-M) method to obtain the optimal vehicle pose. The residual function sums the IMU measurement error, the visual measurement error and the lidar scan error, with P the covariance matrix corresponding to each residual term; σ is the weight coefficient of the visual measurement error and τ is the weight coefficient of the laser measurement error.
According to the scheme, the weight coefficient of the visual measurement error and the weight coefficient of the laser measurement error in step 3) are determined as follows:
the method comprises the steps of regulating and controlling the weight occupied by a monocular camera and a laser radar in optimization according to environment information, determining a weight coefficient according to the number of key frame co-vision road signs of the monocular camera and the matching feature number of laser radar point cloud data, setting the number of the optimal co-vision road signs of two frames of image key frames as A and the number of the optimal matching feature points of two frames of laser point cloud data as B, calculating the ratio a/A of the number a of the current key frame co-vision road signs to the number A of the optimal co-vision road signs before an optimization model is established each time, calculating the ratio B/B of the number B of the current laser point cloud matching feature points to the number B of the optimal matching feature points, wherein the weight coefficient sigma of a vision measurement error is 2 x (a/A)/(B/B), and the weight coefficient tau of a laser measurement error is 2 x [1- (a/A)/(B/B) ].
A visual inertial laser data-fused modular unmanned vehicle positioning system, comprising:
the acquisition module is used for acquiring the current information of the unmanned vehicle, including image information from a monocular camera, three-dimensional point-cloud information from a laser radar, and acceleration and angular velocity data from an IMU;
the pose estimation module is used for estimating the pose of the automobile according to the current information of the unmanned automobile to obtain the pose information of the unmanned automobile;
the pose optimization module is used for establishing a multi-sensor fusion optimization model from the pose information of the pose estimation module, solving the model for the optimal vehicle pose, and transforming the optimal pose into the world coordinate system to obtain the real-time vehicle pose; the optimization model is built from an error function comprising the IMU measurement error, the laser measurement error and the camera measurement error of the respective sensor models, which are combined to form the multi-sensor fusion optimization model.
According to the scheme, the pose estimation module obtains the pose information of the unmanned automobile as follows:
1) calculating the inter-frame motion (x_v, y_v, z_v, α_v, β_v, γ_v) of the monocular camera from the essential matrix under epipolar geometric constraint:
extracting the ORB feature points of the images with the ORB algorithm, matching the feature points of two adjacent image frames I and I+1 with the FLANN algorithm, estimating the essential matrix of the two images under the epipolar geometric constraint with the RANSAC algorithm, performing singular value decomposition on the essential matrix to obtain the rotation matrix R and translation vector t between the two frames, and recovering the inter-frame relative pose (x_v, y_v, z_v, α_v, β_v, γ_v);
2) Screening key frames in the image by using a common-view landmark algorithm, and marking;
3) using the IMU measurement at time i, comprising â_i, the acceleration in the IMU coordinate system at time i, and ω̂_i, the angular velocity in the IMU coordinate system at time i, establishing the pre-integration model of the IMU, whose terms are α_ij, the translation pre-integral from time i to time j; β_ij, the velocity pre-integral from time i to time j; and γ_ij, the rotation pre-integral from time i to time j. The unknown scale of the rotation matrix R and translation vector t of the monocular image is recovered through the rotation matrix and translation vector obtained by IMU pre-integration, giving a scale-recovered estimated pose, which is transformed into the world coordinate system as (x_vi, y_vi, z_vi, α_vi, β_vi, γ_vi).
According to the scheme, the error function of the optimization model in the pose optimization module comprises three terms:
the IMU measurement error, built from α̂, the IMU position recovered through the estimated relative motion; β̂, the IMU velocity recovered through the estimated relative motion; γ̂, the IMU rotation recovered through the estimated relative motion; and the estimated accelerometer and gyroscope zero biases;
the camera measurement error, which is the difference between the normalized predicted coordinate of landmark point l in the camera coordinate system and the coordinate of landmark l observed by the camera on the normalized plane;
the laser measurement error, built from the rotation matrix and translation vector between the camera and the lidar and from the matched points of two laser point-cloud frames: a matched coordinate is transformed from camera coordinate system C_i into camera coordinate system C_j with the inter-frame rotation matrix and translation vector and compared with its corresponding coordinate in C_j.
According to the scheme, the pose optimization module obtains the optimal vehicle pose from the optimization model by formulating the pose optimization as a least-squares problem: a residual function is constructed and iteratively minimized with the L-M method to obtain the optimal vehicle pose. The residual function sums the IMU measurement error, the visual measurement error and the lidar scan error, with P the covariance matrix corresponding to each residual term; σ is the weight coefficient of the visual measurement error and τ is the weight coefficient of the laser measurement error.
According to the scheme, the weight coefficient of the visual measurement error and the weight coefficient of the laser measurement error in the pose optimization module are determined as follows:
the weights of the monocular camera and the lidar in the optimization are regulated according to the environment: the weight coefficients are determined from the number of common-view landmarks of the monocular camera keyframes and the number of matched features of the lidar point-cloud data. Let A be the best-case number of common-view landmarks between two image keyframes and B the best-case number of matched feature points between two laser point-cloud frames. Before each optimization model is built, the ratio a/A of the current number a of keyframe common-view landmarks to A is computed, as is the ratio b/B of the current number b of laser point-cloud matched feature points to B. The weight coefficient of the visual measurement error is then σ = 2 × (a/A)/(b/B), and the weight coefficient of the laser measurement error is τ = 2 × [1 − (a/A)/(b/B)].
The invention has the following beneficial effects:
1) the invention compensates for the unknown scale of the monocular visual odometer in pose estimation and corrects the zero-bias defect of the IMU;
2) an error function is established for each sensor, and the estimated pose undergoes a second optimization in the pose optimization stage;
3) through a new error-weight-coefficient adjustment scheme, the weight coefficients of the monocular camera and lidar measurement errors are adjusted dynamically according to the environmental scene, and real-time, accurate vehicle positioning is finally obtained with a sliding window and the L-M method;
4) the method improves the environmental adaptability of traditional positioning methods, meets the accuracy and robustness requirements of unmanned-vehicle positioning in complex environments, and is suitable for positioning unmanned vehicles in such environments.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic diagram of the system architecture of an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a monocular visual odometer according to an embodiment of the present invention;
FIG. 3 is a diagram of the four candidate (R, t) pairs obtained from the singular value decomposition of the essential matrix in the monocular visual odometer of an embodiment of the present invention;
FIG. 4 is a flowchart of a method for jointly initializing IMU pre-integration and a monocular camera according to an embodiment of the present invention;
fig. 5 is a flowchart of a method for dynamically adjusting the weighting coefficients of the error function according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, in the modular unmanned-vehicle positioning system based on visual-inertial-laser data fusion, the sensors of the acquisition module comprise an IMU, a lidar and a monocular camera, whose real-time measurements are transmitted to the processing module. The system is designed as separate modules: the pose estimation module performs joint initialization of the IMU and the monocular camera to solve the scale problem of the monocular visual odometer; the pose optimization module receives the pose information from the pose estimation module, establishes the multi-sensor fusion optimization model, dynamically adjusts the weights of the camera and the lidar in the optimization according to the environment, and obtains the optimal positioning of the vehicle in the world coordinate system with a sliding-window method.
As shown in fig. 1, the specific implementation process is as follows:
the system comprises an acquisition module, a pose estimation module and a system, wherein when an unmanned vehicle runs in the environment, the acquisition module detects the surrounding environment of the vehicle in real time through three sensors, a laser radar provides three-dimensional point cloud data of the surrounding environment, a monocular camera provides a video stream formed by continuous image frames, an IMU provides acceleration and angular speed information of the vehicle, the data of the sensors are uploaded to the pose estimation module of the system, and the processing steps are as follows:
As shown in fig. 2:
1) first, the pose estimation module receives the video stream of the monocular camera and computes the monocular visual odometry, in the following steps:
(1) the system takes two adjacent image frames I and I+1 from the video stream, obtains the intrinsic and distortion parameters of the camera with Zhang's calibration method, and undistorts the images with the distortion parameters;
(2) the ORB feature points of images I and I+1 are detected with the ORB algorithm; if the number of feature points does not reach the expectation, detection is repeated;
(3) the Hamming distances of the binary BRIEF descriptors (the Hamming distance being the number of differing bits) are compared and the feature points of I and I+1 are matched with the FLANN algorithm; the number of matches is checked, and if it is below a threshold, the feature points are re-detected;
(4) the essential matrix E of the two images is estimated with the RANSAC algorithm and decomposed by singular value decomposition as E = UΣV^T, whence:
as shown in fig. 3, four results are obtained for the rotation matrix R and the translation vector t at this time, only fig. 3(d) is the motion to be solved, the depth of the point P under the two cameras is detected, when the two depths are positive, R, t is obtained as the needed solution, finally the depth is restored by triangulation, and the relative pose (x) of the visual odometer with scale is outputv,yv,zv,αv,βv,γv);
While the monocular visual odometry is computed, the front-end pose estimation module synchronously screens keyframes in the image sequence: the A common-view matched feature points of the prior keyframes are stored as the landmark point set; the newest image I transmitted by the monocular camera is acquired and B feature points are detected in I, of which W match landmark points; and it is judged whether W/A < σ_WA and W/B < σ_WB, where σ_WA and σ_WB are set thresholds. If the condition is met, I is recorded as a new keyframe and the oldest of the prior keyframes is deleted.
2) scale recovery is performed on the result of the monocular visual odometer through the established IMU pre-integration and the joint initialization with the monocular camera; the specific steps are shown in fig. 4:
(1) establishing the rotation constraint relating q^{c1}_{bk}, the quaternion rotation from the IMU coordinate system b_k at time k to the initial camera coordinate system c1; q^{c1}_{ck}, the rotation from the camera coordinate system c_k at time k to c1; and q_bc, the fixed rotation from the camera coordinate system to the IMU coordinate system at the same instant; and establishing the translation constraint relating the unknown scale s; p^{c1}_{bk}, the translation from the IMU coordinate system at time k to c1; p^{c1}_{ck}, the translation from the camera coordinate system at time k to c1; and p_bc, the fixed translation from the camera coordinate system to the IMU coordinate system at the same instant.
(2) establishing the parameter vector to be estimated, comprising v^{bk}_k, the velocity of the IMU at time k expressed in the IMU coordinate system; g^{c1}, the gravity acceleration vector expressed in the camera coordinate system c1 of the initial image frame; and the scale factor s.
(3) establishing a least-squares equation over the gyroscope zero bias, in which γ_{bk,bk+1} is the IMU pre-integration rotation from time k to time k+1, and solving it with the L-M method; then establishing a least-squares equation over the parameter vector of step (2) and optimizing the scale of the visual odometer and the velocity and gravity direction of the system with the L-M method.
3) the pose of the visual odometer is restored to the world coordinate system with the recovered scale information, giving (x_vi, y_vi, z_vi, α_vi, β_vi, γ_vi) and greatly improving the accuracy of the subsequent pose optimization.
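As a much-reduced illustration of the scale-recovery idea, the closed-form least-squares scale between matched visual and IMU translation increments can be sketched as follows; the patent's joint initialization also estimates velocity, gravity direction and gyroscope bias, which this sketch omits, and the names are assumptions:

```python
def recover_scale(visual_translations, imu_translations):
    """Closed-form least-squares scale factor s minimizing
    sum || s * p_visual - p_imu ||^2 over matched 3-D translation
    increments: s = <p_visual, p_imu> / <p_visual, p_visual>."""
    num = sum(v * u for pv, pu in zip(visual_translations, imu_translations)
              for v, u in zip(pv, pu))
    den = sum(v * v for pv in visual_translations for v in pv)
    return num / den
```

Given up-to-scale visual increments and metric IMU increments that are 2.5 times larger, the recovered scale is 2.5.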
A pose optimization module:
After the pose estimation module for visual-inertial-laser data fusion finishes pose estimation, the pose optimization module further optimizes the pose estimated by the pose estimation module. The specific process is as follows:
1) Establish the error function of the optimization model of the pose optimization module, which comprises the data of the three sensors. First, the IMU (inertial measurement unit) measurement error:
where the first terms are the IMU position, velocity and rotation recovered through the estimated relative motion, together with the estimated accelerometer and gyroscope zero biases; the IMU measurement error is obtained by subtracting the pre-integrated quantities from these measured values.
2) Then establish the camera measurement error optimized by the pose optimization module:
where the first quantity is the predicted coordinate of landmark point l in the camera coordinate system after the system's normalization processing, the second is the coordinate of landmark l observed by the camera on the normalized plane, and the difference between the two is the camera measurement error.
3) In order to reduce the complexity of the error model of the pose optimization module, the application provides a point-to-point matching algorithm for lidar point cloud data. The specific steps are as follows: point-to-line and point-to-plane feature matches between two adjacent frames of laser point clouds are found by a kd-tree method; the matching feature line of a point is the straight line determined by two points in the previous frame, and the matching feature plane of a point is the plane determined by three points; constraint equations are then established for the points and their corresponding matching points:
Solving the constraint equations of the matching points yields the point-to-point matches.
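The point-to-line case of this conversion can be sketched as follows, assuming (a detail not spelled out in the source) that the matching point is taken as the foot of the perpendicular from the query point onto the feature line formed by its two kd-tree nearest neighbours in the previous frame.

```python
# Point-to-point conversion of a point-to-line match, sketched with
# scipy's kd-tree. For a point p in frame k, the two nearest neighbours
# a, b in frame k-1 define the feature line; the "matching point" is
# ASSUMED here to be the foot of the perpendicular from p onto that line.
import numpy as np
from scipy.spatial import cKDTree

def point_on_line_match(p, cloud_prev):
    tree = cKDTree(cloud_prev)
    _, idx = tree.query(p, k=2)        # two closest points a, b
    a, b = cloud_prev[idx[0]], cloud_prev[idx[1]]
    d = b - a
    t = np.dot(p - a, d) / np.dot(d, d)
    return a + t * d                   # foot of perpendicular on line ab

cloud_prev = np.array([[0., 0., 0.], [1., 0., 0.], [5., 5., 5.]])
p = np.array([0.5, 1.0, 0.0])
m = point_on_line_match(p, cloud_prev)
```

The point-to-plane case is analogous with three neighbours and a projection onto their plane.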
A laser measurement error function is then established and converted into the same form as the visual measurement error. First, the point-to-point matching of the laser features is computed to obtain the matching points of the two frames of laser point clouds, and the matching points are converted to the camera coordinate system:
where R_cl and p_cl are respectively the rotation matrix and translation vector between the camera and the lidar; the system then converts the coordinates from camera coordinate system C_i to C_j using the rotation matrix and translation vector between C_i and C_j, and projects the result onto the normalized plane. The reconstructed laser measurement error is then calculated as follows:
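The chain of transforms described here, lidar frame → camera frame C_i → camera frame C_j → normalized plane, can be sketched as below; the extrinsics and relative pose used are illustrative placeholders, not calibrated values.

```python
# Reconstructed-laser-error sketch: a matched laser point is moved into
# the camera frame with the extrinsics (R_cl, p_cl), carried from camera
# C_i to C_j with the estimated relative motion (R_ij, p_ij), and
# projected onto the normalized plane, where it can be compared like a
# visual measurement. All numeric values below are illustrative.
import numpy as np

def laser_point_to_normalized(p_l, R_cl, p_cl, R_ij, p_ij):
    p_ci = R_cl @ p_l + p_cl          # lidar frame -> camera frame C_i
    p_cj = R_ij @ p_ci + p_ij         # C_i -> C_j via estimated motion
    return p_cj[:2] / p_cj[2]         # projection onto normalized plane

R_id = np.eye(3)                      # identity placeholders for brevity
uv = laser_point_to_normalized(np.array([1.0, 2.0, 4.0]),
                               R_id, np.zeros(3), R_id, np.zeros(3))
```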
the method comprises the steps of fusing error functions of three sensors, establishing a weight dynamic adjusting module, adjusting and controlling the weight occupied by a monocular camera and a laser radar in optimization according to environmental information, and adjusting the weight coefficient according to the number of keyframe common-view landmark points of the monocular camera and the matching characteristic number of point cloud data of the laser radar, wherein the specific steps are as shown in FIG. 5:
The system sets the optimal number of common-view landmark points of two image key frames to A and the optimal number of matched feature points of two frames of laser point cloud data to B. Before establishing the optimization model each time, the system computes the ratio a/A of the current number of key-frame common-view landmark points a to A, and the ratio b/B of the current number of laser point cloud matched feature points b to B; the weight coefficient σ of the visual measurement error and the weight coefficient τ of the laser measurement error are then calculated as σ = 2×(a/A)/(b/B) and τ = 2×[1 − (a/A)/(b/B)].
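The weight computation can be written directly from the formulas as given in the text, σ = 2×(a/A)/(b/B) and τ = 2×[1 − (a/A)/(b/B)]; note that σ + τ = 2 by construction and that the ratio is not clipped here.

```python
# Dynamic error-weighting sketch, using the formulas exactly as stated
# in the text. a/A: current vs optimal common-view landmark counts;
# b/B: current vs optimal laser matched-feature counts.
def error_weights(a, A, b, B):
    r = (a / A) / (b / B)
    sigma = 2.0 * r          # visual-error weight
    tau = 2.0 * (1.0 - r)    # laser-error weight
    return sigma, tau
```

For example, with half the optimal landmark count and the full optimal laser match count the two sensors are weighted equally.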
The system establishes the optimization model of the pose optimization module of the visual-inertial-laser data fusion modular unmanned vehicle positioning method, and transmits the pose estimated by the pose estimation module into the pose optimization module for sliding-window optimization. The specific steps are as follows:
The poses estimated by the algorithm's pose estimation module, the poses of the key frames screened in the pose estimation module, the system velocity vw1 and the inverse depth di of the i-th landmark point are the variables to be optimized. All variables to be optimized are integrated into a vector χ, the least squares problem of the pose optimization module is established, and the residual function of the system is constructed for least squares iterative optimization:
where the three residual terms are respectively the measurement error of the IMU, the visual measurement error and the scanning error of the lidar, and P is the covariance matrix corresponding to each residual term.
The least squares problem is optimized by the L-M method to obtain the optimal pose, which is converted into the world coordinate system to obtain the real-time pose of the vehicle Pvil=(xvil,yvil,zvil,αvil,βvil,γvil).
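The sliding-window step can be illustrated with a toy residual stack minimized by the L-M method, as the text prescribes. A sketch under strong simplifications: a scalar pose observed by three pseudo-sensors stands in for the real IMU, visual and laser error functions, and the weights play the roles of σ, τ and the covariance whitening; all names and the residual models are illustrative.

```python
# Toy sliding-window residual: stack one IMU, one visual and one laser
# residual on a scalar pose x and minimize with Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

def make_residual(z_imu, z_cam, z_laser, sigma, tau):
    # Each pseudo-sensor "measures" the same scalar pose x; sigma and
    # tau weight the visual and laser terms as in the optimization model.
    def residual(x):
        return np.array([
            x[0] - z_imu,            # IMU term
            sigma * (x[0] - z_cam),  # weighted visual term
            tau * (x[0] - z_laser),  # weighted laser term
        ])
    return residual

res = least_squares(make_residual(1.0, 1.2, 0.8, sigma=1.0, tau=1.0),
                    x0=[0.0], method='lm')
x_opt = res.x[0]
```

With equal weights, the minimizer of the three quadratic terms is their mean.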
According to the above system, a corresponding positioning method is readily obtained, which includes the following steps:
1) acquiring current information of the unmanned vehicle, including image information acquired by a monocular camera, three-dimensional point cloud information acquired by a laser radar and acceleration and angular velocity data acquired by an IMU;
2) estimating the pose of the automobile according to the current information of the unmanned automobile acquired in the step 1) to obtain the pose information of the unmanned automobile;
The inter-frame motion information (xv,yv,zv,αv,βv,γv) of the monocular camera is calculated from the essential matrix by means of epipolar geometric constraints, key frames of the monocular image are screened and marked using a landmark comparison algorithm, the measurement data of the IMU are pre-integrated and jointly initialized with the monocular camera, the scale of the monocular camera is recovered, and the velocity of the system, the zero bias of the gyroscope and the gravity direction are estimated, obtaining the pose transformation with recovered scale (xvi,yvi,zvi,αvi,βvi,γvi), namely the pose information of the unmanned vehicle.
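The epipolar relation that this step inverts can be checked numerically: with the essential matrix E = [t]×R built from a known relative motion, corresponding normalized points satisfy x2ᵀ·E·x1 = 0, and the front end recovers R and t by SVD of an E estimated from such correspondences. A small self-contained check with synthetic data (the motion and point below are illustrative):

```python
# Epipolar-constraint check: build E = [t]_x R from a known relative
# pose and verify x2^T E x1 = 0 for a synthetic correspondence.
import numpy as np

def skew(t):
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

def essential(R, t):
    return skew(t) @ R

th = 0.1                                 # small rotation about z
R = np.array([[np.cos(th), -np.sin(th), 0.],
              [np.sin(th),  np.cos(th), 0.],
              [0., 0., 1.]])
t = np.array([1.0, 0.0, 0.0])            # translation along x
E = essential(R, t)

X = np.array([0.3, -0.2, 5.0])           # world point in camera-1 frame
x1 = X / X[2]                            # normalized projection, view 1
Xc2 = R @ X + t                          # same point in camera-2 frame
x2 = Xc2 / Xc2[2]                        # normalized projection, view 2
epipolar = float(x2 @ E @ x1)            # should vanish
```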
3) A multi-sensor fusion optimization model is established according to the estimated pose information, the optimal pose of the vehicle is obtained from the optimization model, and the optimal pose is converted into the world coordinate system to obtain the real-time pose of the vehicle. The optimization model is a set of error functions based on the IMU measurement error, the laser measurement error and the camera measurement error of the sensor models, and the multi-sensor fusion optimization model is established jointly from these error functions.
3.1) The estimated poses and the poses of the screened key frames, the system velocity vw1 and the inverse depth di of the i-th landmark point are the variables to be optimized, and all variables to be optimized are integrated into a vector χ.
Establishing a visual measurement error of an optimization model:
establishing IMU measurement error of an optimization model:
Point-to-line and point-to-plane laser point cloud feature matches are converted into point-to-point feature matches by the idea of approximate matching, and the following constraint equations are solved:
Point-to-line and point-to-plane feature matches between two adjacent frames of laser point clouds are found by the kd-tree method. The matching feature line of a point in the k-th frame point cloud is the straight line determined by two points in the (k−1)-th frame point cloud; solving the constraint equation of the line-matching point then yields the matching point:
Likewise, the matching feature plane of a point is the plane determined by three points in the (k−1)-th frame point cloud; solving the constraint equation of the plane-matching point yields the matching point:
The laser measurement error of the optimization model takes the same form as the visual measurement error. The specific steps are as follows:
The point-to-point matches of the laser features are calculated using the idea of approximate matching points to obtain the matching points of the two frames of laser point clouds;
converting the matching points to the camera coordinate system:
The coordinates are then converted from camera coordinate system C_i to C_j using the rotation matrix and translation vector between C_i and C_j, and the result is projected onto the normalized plane:
calculating the reconstructed laser measurement error as follows:
3.2) The weight coefficients of the error functions in the pose optimization model are dynamically adjusted;
The weights of the monocular camera and the lidar in the optimization are regulated according to the environment information, and the weight coefficients are dynamically adjusted according to the number of key-frame common-view landmark points of the monocular camera and the number of matched features of the lidar point cloud data. The system sets the optimal number of common-view landmark points of two image key frames to A and the optimal number of matched feature points of two frames of laser point cloud data to B. Before establishing the optimization model each time, the system computes the ratio a/A of the current number of key-frame common-view landmark points a to A, and the ratio b/B of the current number of laser point cloud matched feature points b to B, so that the weight coefficient σ of the visual measurement error is 2×(a/A)/(b/B) and the weight coefficient τ of the laser measurement error is 2×[1 − (a/A)/(b/B)].
3.3) transmitting the estimated pose into a pose optimization model for sliding window optimization, and specifically comprising the following steps:
The poses from the algorithm's pose estimation and the poses of the key frames screened therein, the system velocity vw1 and the inverse depth di of the i-th landmark point are the variables to be optimized, and all variables to be optimized are integrated into a vector χ.
Establishing a least square problem of optimization model optimization, and constructing a residual function of a system to carry out least square iterative optimization:
where the three residual terms are respectively the measurement error of the IMU, the visual measurement error and the scanning error of the lidar, and the P matrix is the covariance matrix corresponding to each residual term.
3.4) The least squares problem is optimized by the L-M method to obtain the optimal pose, which is converted into the world coordinate system to obtain the real-time pose of the vehicle Pvil=(xvil,yvil,zvil,αvil,βvil,γvil).
The invention uses the IMU pre-integration model and the monocular camera for joint initialization, which makes up for the unknown scale of the monocular visual odometry and corrects the zero-bias defect of the IMU. The pose optimization module performs a secondary optimization on the estimated pose by a sliding-window method, establishes an error function for each sensor, fuses the data of the three sensors together, and establishes a new error weight coefficient adjustment scheme. The method improves the environmental adaptability of traditional positioning methods, can meet the accuracy and robustness requirements of unmanned vehicle positioning in complex environments, and is suitable for positioning unmanned vehicles in complex environments.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.
Claims (10)
1. A modularized unmanned vehicle positioning method based on visual inertial laser data fusion is characterized by comprising the following steps:
1) acquiring current information of the unmanned vehicle, including image information acquired by a monocular camera, three-dimensional point cloud information acquired by a laser radar and acceleration and angular velocity data acquired by an IMU;
2) estimating the pose of the automobile according to the current information of the unmanned automobile acquired in the step 1) to obtain the pose information of the unmanned automobile;
3) establishing a multi-sensor fusion optimization model according to the estimated pose information, obtaining the optimal pose of the vehicle from the optimization model, and converting the optimal pose into the world coordinate system to obtain the real-time pose of the vehicle; the optimization model is a set of error functions based on the IMU measurement error, the laser measurement error and the camera measurement error of the sensor models, and the multi-sensor fusion optimization model is established jointly from these error functions.
2. The visual inertial laser data fusion modular unmanned vehicle positioning method according to claim 1, wherein the pose information of the unmanned vehicle obtained in step 2) is as follows:
calculating the inter-frame motion information (xv,yv,zv,αv,βv,γv) of the monocular camera from the essential matrix by epipolar geometric constraints, screening and marking key frames of the monocular image with a landmark comparison algorithm, pre-integrating the measurement data of the IMU and jointly initializing it with the monocular camera, recovering the scale of the monocular camera, estimating the velocity of the system, the zero bias of the gyroscope and the gravity direction, and obtaining the pose transformation with recovered scale (xvi,yvi,zvi,αvi,βvi,γvi), namely the pose information of the unmanned vehicle.
3. The visual inertial laser data fusion modular unmanned vehicle positioning method according to claim 1, wherein the pose information of the unmanned vehicle obtained in step 2) is as follows:
2.1) calculating the interframe motion information (x) of the monocular camera from the essential matrix by epipolar geometric constraintv,yv,zv,αv,βv,γv):
ORB feature points in the images are extracted by using an ORB feature point algorithm, feature points of two adjacent frames of images I and I +1 are matched by using a FLANN algorithm, the essential matrixes of the two images are estimated by using epipolar geometric constraint and a RANSAC algorithm, singular value decomposition is carried out on the essential matrixes to obtain a rotation matrix R and a translation vector t between the two frames of images, and the relative pose (x) between the frames is recoveredv,yv,zv,αv,βv,γv);
2.2) screening key frames in the image by using a common-view landmark algorithm, and marking;
2.3) using the IMU measurement at time i, consisting of the acceleration in the IMU coordinate system at time i and the angular velocity in the IMU coordinate system at time i, the pre-integration model of the IMU is established as follows:
wherein the three pre-integrated terms are respectively the amount of translation from time i to time j, the amount of velocity from time i to time j and the amount of rotation from time i to time j; the unknown scale in the rotation matrix R and translation vector t of the monocular image is recovered through the rotation matrix R and translation vector t obtained by IMU pre-integration to obtain the scaled estimated pose, which is converted into (xvi,yvi,zvi,αvi,βvi,γvi) in the world coordinate system.
4. The visual inertial laser data fused modular unmanned vehicle positioning method according to claim 3, wherein the step 2.2) utilizes a co-vision landmark algorithm to screen keyframes in the image, specifically:
storing A common-view matching feature points of a prior keyframe as a landmark point set, acquiring a latest image I transmitted by a monocular camera, detecting that B feature points are shared in the I, wherein W feature points are matched with the landmark points, and judging whether the following formula is met or not:
σWA and σWB are respectively preset thresholds; if the condition is met, I is recorded as a new key frame and the oldest frame among the prior key frames is deleted.
5. The visual inertial laser data fused modular unmanned vehicle positioning method according to claim 1, wherein the error function of the optimization model in the step 3) is as follows:
wherein R is a rotation matrix, p represents a translation vector and v represents a velocity; the corresponding time, the coordinate system in which a quantity is expressed and its conversion relation are indicated by superscripts and subscripts, w represents the world coordinate system and bi represents the vehicle body coordinate system at time i; the first terms are the IMU position and IMU velocity recovered through the estimated relative motion, gw represents the gravitational acceleration, and the quaternion term represents the IMU rotation recovered through the estimated relative motion; the IMU pre-integration terms represent respectively the amount of translation, the amount of velocity and the amount of rotation, and the remaining terms are the zero biases of the estimated accelerometer and gyroscope;
where the first quantity is the predicted coordinate of landmark point l in the camera coordinate system after the system's normalization processing, the second is the coordinate of landmark l observed by the camera on the normalized plane, and the difference between the two is the camera measurement error;
wherein
wherein the quantities are respectively the rotation matrix and translation vector between the camera and the lidar, and the matching points of the two frames of laser point clouds;
6. The visual inertial laser data fusion modular unmanned vehicle positioning method according to claim 5, wherein the optimal pose of the vehicle obtained in step 3) according to the optimization model is obtained by establishing a least square problem of the pose optimization model, constructing a residual function to perform least square iterative optimization, and iteratively optimizing the least square problem by using an L-M method to obtain the optimal pose of the vehicle, wherein the residual function is:
where the three residual terms are respectively the measurement error of the IMU, the visual measurement error and the scanning error of the lidar, P is the covariance matrix corresponding to each residual term, σ is the weight coefficient of the visual measurement error and τ is the weight coefficient of the laser measurement error.
7. The visual inertial laser data fused modular unmanned vehicle positioning method according to claim 6, wherein the weight coefficient of the visual measurement error and the weight coefficient of the laser measurement error in the step 3) are determined as follows:
the weights of the monocular camera and the lidar in the optimization are regulated according to the environment information, and the weight coefficients are determined according to the number of key-frame common-view landmark points of the monocular camera and the number of matched features of the lidar point cloud data: the optimal number of common-view landmark points of two image key frames is set to A and the optimal number of matched feature points of two frames of laser point cloud data to B; before the optimization model is established each time, the ratio a/A of the current number of key-frame common-view landmark points a to A and the ratio b/B of the current number of laser point cloud matched feature points b to B are calculated, the weight coefficient σ of the visual measurement error being 2×(a/A)/(b/B) and the weight coefficient τ of the laser measurement error being 2×[1 − (a/A)/(b/B)].
8. A modular unmanned vehicle positioning system for visual inertial laser data fusion, comprising:
the acquisition module is used for acquiring and obtaining current information of the unmanned vehicle, including image information acquired by a monocular camera, three-dimensional point cloud information acquired by a laser radar and acceleration and angular velocity data acquired by an IMU;
the pose estimation module is used for estimating the pose of the automobile according to the current information of the unmanned automobile to obtain the pose information of the unmanned automobile;
the pose optimization module is used for establishing a multi-sensor fusion optimization model according to the pose information of the pose estimation module, obtaining the optimal pose of the vehicle according to the optimization model, and converting the optimal pose into a world coordinate system to obtain the real-time pose of the vehicle; the optimization model is an error function set based on IMU measurement errors, laser measurement errors and camera measurement errors of the sensor model, and the optimization model of multi-sensor fusion is established simultaneously according to the error function.
9. The visual inertial laser data fused modular unmanned vehicle positioning system of claim 8, wherein the pose estimation module obtains the pose information of the unmanned vehicle as follows:
1) calculating the frame-to-frame motion information (x) of the monocular camera from the essential matrix by epipolar geometric constraintv,yv,zv,αv,βv,γv):
ORB feature points in the images are extracted by using an ORB feature point algorithm, feature points of two adjacent frames of images I and I +1 are matched by using a FLANN algorithm, the essential matrixes of the two images are estimated by using epipolar geometric constraint and a RANSAC algorithm, singular value decomposition is carried out on the essential matrixes to obtain a rotation matrix R and a translation vector t between the two frames of images, and the relative pose (x) between the frames is recoveredv,yv,zv,αv,βv,γv);
2) Screening key frames in the image by using a common-view landmark algorithm, and marking;
3) using the IMU measurement at time i, consisting of the acceleration in the IMU coordinate system at time i and the angular velocity in the IMU coordinate system at time i, the pre-integration model of the IMU is established as follows:
wherein the three pre-integrated terms are respectively the amount of translation from time i to time j, the amount of velocity from time i to time j and the amount of rotation from time i to time j; the unknown scale in the rotation matrix R and translation vector t of the monocular image is recovered through the rotation matrix R and translation vector t obtained by IMU pre-integration to obtain the scaled estimated pose, which is converted into (xvi,yvi,zvi,αvi,βvi,γvi) in the world coordinate system.
10. The visual inertial laser data-fused modular unmanned vehicle positioning system of claim 8, wherein the error function of the optimization model in the pose optimization module is as follows:
wherein the first terms are the IMU position, IMU velocity and IMU rotation recovered through the estimated relative motion, and the remaining terms are the zero biases of the estimated accelerometer and gyroscope;
where the first quantity is the predicted coordinate of landmark point l in the camera coordinate system after the system's normalization processing, the second is the coordinate of landmark l observed by the camera on the normalized plane, and the difference between the two is the camera measurement error;
wherein
wherein the quantities are respectively the rotation matrix and translation vector between the camera and the lidar, and the matching points of the two frames of laser point clouds;
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010340571.5A CN111595333B (en) | 2020-04-26 | 2020-04-26 | Modularized unmanned vehicle positioning method and system based on visual inertia laser data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111595333A true CN111595333A (en) | 2020-08-28 |
CN111595333B CN111595333B (en) | 2023-07-28 |
- 2020-04-26 CN CN202010340571.5A patent/CN111595333B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105371840A (en) * | 2015-10-30 | 2016-03-02 | 北京自动化控制设备研究所 | Method for combined navigation of inertia/visual odometer/laser radar |
CN109099901A (en) * | 2018-06-26 | 2018-12-28 | 苏州路特工智能科技有限公司 | Full-automatic road roller localization method based on multisource data fusion |
CN109544638A (en) * | 2018-10-29 | 2019-03-29 | 浙江工业大学 | A kind of asynchronous online calibration method for Multi-sensor Fusion |
CN110261870A (en) * | 2019-04-15 | 2019-09-20 | 浙江工业大学 | Simultaneous localization and mapping method based on vision-inertia-laser fusion |
CN110243358A (en) * | 2019-04-29 | 2019-09-17 | 武汉理工大学 | Multi-source fusion indoor and outdoor localization method and system for unmanned vehicles |
CN110345944A (en) * | 2019-05-27 | 2019-10-18 | 浙江工业大学 | Robot localization method fusing visual features and IMU information |
CN110428467A (en) * | 2019-07-30 | 2019-11-08 | 四川大学 | Robot localization method jointly using camera, IMU and laser radar |
Cited By (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112232126B (en) * | 2020-09-14 | 2023-08-25 | 广东工业大学 | Dimension reduction expression method for improving positioning robustness of variable scene |
CN112232126A (en) * | 2020-09-14 | 2021-01-15 | 广东工业大学 | Dimension reduction expression method for improving variable scene positioning robustness |
CN112304307A (en) * | 2020-09-15 | 2021-02-02 | 浙江大华技术股份有限公司 | Positioning method and device based on multi-sensor fusion and storage medium |
CN114585879A (en) * | 2020-09-27 | 2022-06-03 | 深圳市大疆创新科技有限公司 | Pose estimation method and device |
CN114323035A (en) * | 2020-09-30 | 2022-04-12 | 华为技术有限公司 | Positioning method, device and system |
CN112197773A (en) * | 2020-09-30 | 2021-01-08 | 江苏集萃未来城市应用技术研究所有限公司 | Visual and laser positioning mapping method based on plane information |
CN112230211A (en) * | 2020-10-15 | 2021-01-15 | 长城汽车股份有限公司 | Vehicle positioning method and device, storage medium and vehicle |
CN112652001A (en) * | 2020-11-13 | 2021-04-13 | 山东交通学院 | Underwater robot multi-sensor fusion positioning system based on extended Kalman filtering |
CN112097768B (en) * | 2020-11-17 | 2021-03-02 | 深圳市优必选科技股份有限公司 | Robot posture determining method and device, robot and storage medium |
CN112097768A (en) * | 2020-11-17 | 2020-12-18 | 深圳市优必选科技股份有限公司 | Robot posture determining method and device, robot and storage medium |
CN114518108A (en) * | 2020-11-18 | 2022-05-20 | 郑州宇通客车股份有限公司 | Positioning map construction method and device |
CN114518108B (en) * | 2020-11-18 | 2023-09-08 | 宇通客车股份有限公司 | Positioning map construction method and device |
CN112461237B (en) * | 2020-11-26 | 2023-03-14 | 浙江同善人工智能技术有限公司 | Multi-sensor fusion positioning method applied to dynamic change scene |
CN112561841A (en) * | 2020-12-04 | 2021-03-26 | 深兰人工智能(深圳)有限公司 | Point cloud data fusion method and device for laser radar and camera |
CN112669354A (en) * | 2020-12-08 | 2021-04-16 | 重庆邮电大学 | Multi-camera motion state estimation method based on vehicle incomplete constraint |
CN112529965A (en) * | 2020-12-08 | 2021-03-19 | 长沙行深智能科技有限公司 | Calibration method and device for laser radar and monocular camera |
CN112712107A (en) * | 2020-12-10 | 2021-04-27 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN112712107B (en) * | 2020-12-10 | 2022-06-28 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN112697131A (en) * | 2020-12-17 | 2021-04-23 | 中国矿业大学 | Underground mobile equipment positioning method and system based on vision and inertial navigation system |
CN112747749A (en) * | 2020-12-23 | 2021-05-04 | 浙江同筑科技有限公司 | Positioning navigation system based on binocular vision and laser fusion |
CN112712018A (en) * | 2020-12-29 | 2021-04-27 | 东软睿驰汽车技术(沈阳)有限公司 | Identification map establishing method, visual positioning method and device |
CN112634451B (en) * | 2021-01-11 | 2022-08-23 | 福州大学 | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
CN112634451A (en) * | 2021-01-11 | 2021-04-09 | 福州大学 | Outdoor large-scene three-dimensional mapping method integrating multiple sensors |
CN113074726A (en) * | 2021-03-16 | 2021-07-06 | 深圳市慧鲤科技有限公司 | Pose determination method and device, electronic equipment and storage medium |
CN113155121B (en) * | 2021-03-22 | 2024-04-02 | 珠海深圳清华大学研究院创新中心 | Vehicle positioning method and device and electronic equipment |
CN113155121A (en) * | 2021-03-22 | 2021-07-23 | 珠海深圳清华大学研究院创新中心 | Vehicle positioning method and device and electronic equipment |
CN113074754A (en) * | 2021-03-27 | 2021-07-06 | 上海智能新能源汽车科创功能平台有限公司 | Visual inertia SLAM system initialization method based on vehicle kinematic constraint |
CN112729294A (en) * | 2021-04-02 | 2021-04-30 | 北京科技大学 | Pose estimation method and system suitable for vision and inertia fusion of robot |
CN113052908B (en) * | 2021-04-16 | 2023-08-04 | 南京工业大学 | Mobile robot pose estimation algorithm based on multi-sensor data fusion |
CN113052908A (en) * | 2021-04-16 | 2021-06-29 | 南京工业大学 | Mobile robot pose estimation method based on multi-sensor data fusion |
CN113420590A (en) * | 2021-05-13 | 2021-09-21 | 北京航空航天大学 | Robot positioning method, device, equipment and medium in weak texture environment |
CN113420590B (en) * | 2021-05-13 | 2022-12-06 | 北京航空航天大学 | Robot positioning method, device, equipment and medium in weak texture environment |
CN113324542B (en) * | 2021-06-07 | 2024-04-12 | 北京京东乾石科技有限公司 | Positioning method, device, equipment and storage medium |
CN113324542A (en) * | 2021-06-07 | 2021-08-31 | 北京京东乾石科技有限公司 | Positioning method, device, equipment and storage medium |
CN113390422A (en) * | 2021-06-10 | 2021-09-14 | 奇瑞汽车股份有限公司 | Automobile positioning method and device and computer storage medium |
CN113390422B (en) * | 2021-06-10 | 2022-06-10 | 奇瑞汽车股份有限公司 | Automobile positioning method and device and computer storage medium |
CN113432602A (en) * | 2021-06-23 | 2021-09-24 | 西安电子科技大学 | Unmanned aerial vehicle pose estimation method based on multi-sensor fusion |
CN113432602B (en) * | 2021-06-23 | 2022-12-02 | 西安电子科技大学 | Unmanned aerial vehicle pose estimation method based on multi-sensor fusion |
CN113551666A (en) * | 2021-07-06 | 2021-10-26 | 杭州鸿泉物联网技术股份有限公司 | Automatic driving multi-sensor fusion positioning method and device, equipment and medium |
CN113432595A (en) * | 2021-07-07 | 2021-09-24 | 北京三快在线科技有限公司 | Equipment state acquisition method and device, computer equipment and storage medium |
US11835341B2 (en) | 2021-07-20 | 2023-12-05 | Honeywell International Inc. | Integrity monitoring of vehicle kinematic states using map-based, vision navigation systems |
CN113566833A (en) * | 2021-07-28 | 2021-10-29 | 上海工程技术大学 | Multi-sensor fusion vehicle positioning method and system |
CN113658337A (en) * | 2021-08-24 | 2021-11-16 | 哈尔滨工业大学 | Multi-mode odometer method based on rut lines |
WO2023050632A1 (en) * | 2021-09-29 | 2023-04-06 | 深圳市慧鲤科技有限公司 | Positioning method and apparatus, and device, storage medium and computer program product |
CN113936120A (en) * | 2021-10-12 | 2022-01-14 | 北京邮电大学 | Marker-free lightweight Web AR method and system |
CN113946151A (en) * | 2021-10-20 | 2022-01-18 | 北京百度网讯科技有限公司 | Data processing method and device for automatic driving vehicle and automatic driving vehicle |
CN114170320A (en) * | 2021-10-29 | 2022-03-11 | 广西大学 | Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion |
CN114046787A (en) * | 2021-10-29 | 2022-02-15 | 广州文远知行科技有限公司 | Pose optimization method, device and equipment based on sensor and storage medium |
CN114046787B (en) * | 2021-10-29 | 2024-01-30 | 广州文远知行科技有限公司 | Pose optimization method, device and equipment based on sensor and storage medium |
CN114170320B (en) * | 2021-10-29 | 2022-10-28 | 广西大学 | Automatic positioning and working condition self-adaption method of pile driver based on multi-sensor fusion |
CN114013449A (en) * | 2021-11-02 | 2022-02-08 | 阿波罗智能技术(北京)有限公司 | Data processing method and device for automatic driving vehicle and automatic driving vehicle |
CN114013449B (en) * | 2021-11-02 | 2023-11-03 | 阿波罗智能技术(北京)有限公司 | Data processing method and device for automatic driving vehicle and automatic driving vehicle |
WO2023104207A1 (en) * | 2021-12-10 | 2023-06-15 | 深圳先进技术研究院 | Collaborative three-dimensional mapping method and system |
CN114370871A (en) * | 2022-01-13 | 2022-04-19 | 华南理工大学 | Tightly coupled optimization method for visible light positioning and laser radar inertial odometry |
CN114088087A (en) * | 2022-01-21 | 2022-02-25 | 深圳大学 | High-reliability, high-precision navigation and positioning method and system for GPS-denied unmanned aerial vehicles |
CN114088087B (en) * | 2022-01-21 | 2022-04-15 | 深圳大学 | High-reliability, high-precision navigation and positioning method and system for GPS-denied unmanned aerial vehicles |
WO2023138007A1 (en) * | 2022-01-21 | 2023-07-27 | 深圳大学 | High-reliability and high-precision navigation positioning method and system for gps-denied unmanned aerial vehicle |
CN114608554B (en) * | 2022-02-22 | 2024-05-03 | 北京理工大学 | Handheld SLAM device and robot simultaneous localization and mapping method |
CN114608554A (en) * | 2022-02-22 | 2022-06-10 | 北京理工大学 | Handheld SLAM device and robot simultaneous localization and mapping method |
CN114612665A (en) * | 2022-03-15 | 2022-06-10 | 北京航空航天大学 | Pose estimation and dynamic vehicle detection method based on normal vector histogram features |
CN114612665B (en) * | 2022-03-15 | 2022-10-11 | 北京航空航天大学 | Pose estimation and dynamic vehicle detection method based on normal vector histogram features |
CN114739402B (en) * | 2022-04-22 | 2024-06-25 | 合众新能源汽车股份有限公司 | Fusion positioning method, medium and electronic equipment |
CN114648584A (en) * | 2022-05-23 | 2022-06-21 | 北京理工大学前沿技术研究院 | Robustness control method and system for multi-source fusion positioning |
CN114648584B (en) * | 2022-05-23 | 2022-08-30 | 北京理工大学前沿技术研究院 | Robustness control method and system for multi-source fusion positioning |
CN115655264A (en) * | 2022-09-23 | 2023-01-31 | 智己汽车科技有限公司 | Pose estimation method and device |
CN115855117B (en) * | 2023-02-16 | 2023-06-02 | 深圳佑驾创新科技有限公司 | Combined calibration method for the mounting pose of a camera and inertial measurement unit relative to the vehicle body |
CN115855117A (en) * | 2023-02-16 | 2023-03-28 | 深圳佑驾创新科技有限公司 | Combined calibration method for the mounting pose of a camera and inertial measurement unit relative to the vehicle body |
CN116295507B (en) * | 2023-05-26 | 2023-08-15 | 南京师范大学 | Laser inertial odometer optimization method and system based on deep learning |
CN116295507A (en) * | 2023-05-26 | 2023-06-23 | 南京师范大学 | Laser inertial odometer optimization method and system based on deep learning |
CN116660923A (en) * | 2023-08-01 | 2023-08-29 | 齐鲁空天信息研究院 | Unmanned agricultural machinery library positioning method and system integrating vision and laser radar |
CN116660923B (en) * | 2023-08-01 | 2023-09-29 | 齐鲁空天信息研究院 | Unmanned agricultural machinery library positioning method and system integrating vision and laser radar |
Also Published As
Publication number | Publication date |
---|---|
CN111595333B (en) | 2023-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111595333A (en) | Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion | |
CN109029433B (en) | Method for calibrating external parameters and time sequence based on vision and inertial navigation fusion SLAM on mobile platform | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
CN109307508B (en) | Panoramic inertial navigation SLAM method based on multiple key frames | |
CN111156984B (en) | Monocular vision inertia SLAM method oriented to dynamic scene | |
CN110044354A (en) | Binocular vision indoor localization and mapping method and device | |
CN111258313A (en) | Multi-sensor fusion SLAM system and robot | |
CN112734841B (en) | Method for realizing positioning by using wheel type odometer-IMU and monocular camera | |
CN112556719B (en) | Visual inertial odometer implementation method based on CNN-EKF | |
CN111932674A (en) | Optimization method of line laser vision inertial system | |
CN114234967B (en) | Six-foot robot positioning method based on multi-sensor fusion | |
CN113503873B (en) | Visual positioning method for multi-sensor fusion | |
CN115272596A (en) | Multi-sensor fusion SLAM method oriented to monotonous texture-free large scene | |
CN114964276A (en) | Dynamic vision SLAM method fusing inertial navigation | |
CN111890373A (en) | Sensing and positioning method of vehicle-mounted mechanical arm | |
CN114693787A (en) | Parking garage map building and positioning method and system and vehicle | |
CN112101160A (en) | Binocular semantic SLAM method oriented to automatic driving scene | |
CN116007609A (en) | Positioning method and computing system for fusion of multispectral image and inertial navigation | |
CN115147344A (en) | Three-dimensional detection and tracking method for parts in augmented reality assisted automobile maintenance | |
CN115218889A (en) | Multi-sensor indoor positioning method based on dotted line feature fusion | |
CN113345032B (en) | Initialization map building method and system based on wide-angle camera large distortion map | |
CN116182855B (en) | Combined navigation method of compound eye-simulated polarized vision unmanned aerial vehicle under weak light and strong environment | |
CN117367427A (en) | Multi-mode SLAM method fusing vision-assisted laser and IMU for indoor environments | |
CN116804553A (en) | Odometer system and method based on event camera/IMU/natural road sign | |
CN112945233A (en) | Global drift-free autonomous robot simultaneous positioning and map building method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |