CN111121767A - GPS-fused robot vision inertial navigation combined positioning method - Google Patents
- Publication number
- CN111121767A (application No. CN201911312943.7A; granted as CN111121767B)
- Authority
- CN
- China
- Prior art keywords
- image
- pose
- gps
- robot
- inertial navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01C21/165 — Inertial navigation (integrating acceleration or speed aboard the navigated object) combined with non-inertial navigation instruments
- G01C21/005 — Navigation with correlation of navigation data from several sources, e.g. map or contour matching
- G01S19/48 — Determining position by combining or switching between position solutions derived from the satellite radio beacon positioning system and position solutions derived from a further system
- G01S19/49 — Combined satellite/further-system positioning wherein the further system is an inertial position system, e.g. loosely-coupled
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/20024 — Special algorithmic details: filtering details
- Y02T10/40 — Engine management systems
Abstract
The invention discloses a GPS-fused robot visual-inertial combined positioning method. Feature points are extracted and matched between the left and right images and between successive frames of a binocular camera, and the three-dimensional coordinates of the feature points and the relative poses of the image frames are calculated. Key frames are selected from the image stream, a sliding window is created, and the key frames are added to the sliding window. The visual re-projection error, the IMU pre-integration residual, and the zero-bias residual are calculated and combined into a joint pose-estimation residual, which is nonlinearly optimized by the L-M (Levenberg-Marquardt) method to obtain the optimized visual-inertial odometry (VIO) pose of the robot. If GPS data are available at the current moment, adaptive robust Kalman filtering is applied to the GPS position data and the VIO pose estimate to obtain the final robot pose; if no GPS data are available, the VIO pose data are used as the final pose. The invention improves positioning accuracy, reduces computational cost, and meets the requirements of large-range, long-duration inspection.
Description
Technical Field
The invention belongs to the technical field of automatic inspection, and particularly relates to a robot vision inertial navigation combined positioning method integrating a GPS.
Background
For an autonomous inspection robot, pose estimation is of great importance: it is the basis on which the robot completes its inspection tasks. A traditional approach is differential-GPS pose estimation, in which satellite signals received by two GPS antennas are used to compute the position, attitude, velocity, and so on of the inspection robot, with accuracy reaching the centimeter level. However, existing high-precision differential-GPS solutions are expensive, must avoid occlusion by large obstacles, and are only suitable for robot positioning in open areas. Simultaneous localization and mapping (SLAM) is currently popular: the surrounding environment is fully sensed by sensors such as laser or vision to build a local map, and the robot is localized by matching against that local map. The main implementations are filtering and graph optimization; the filtering method considers only the optimization of the current pose and nearby poses of the inspection robot and cannot achieve global optimization, so graph-optimization-based SLAM has become mainstream. However, for the long-duration, large-range tasks required by airport inspection, SLAM is limited by the computing power of the onboard industrial PC and cannot achieve high-precision mapping and positioning over large scenes and long periods.
Disclosure of Invention
The invention aims to provide a robot vision inertial navigation combined positioning method fused with a GPS.
The technical scheme for realizing the purpose of the invention is as follows: a robot vision inertial navigation combined positioning method fused with a GPS comprises the following steps:
step 1, extracting and matching feature points of left and right images and front and rear images of a binocular camera, and calculating three-dimensional coordinates of the feature points and relative poses of image frames;
step 2, selecting a key frame in the image stream, creating a sliding window, and adding the key frame into the sliding window;
step 3, calculating a visual re-projection error, an IMU pre-integration residual error and a zero-offset residual error and combining the visual re-projection error, the IMU pre-integration residual error and the zero-offset residual error into a combined pose estimation residual error;
step 4, carrying out nonlinear optimization on the combined pose estimation residual error by using an L-M method to obtain an optimized visual inertial navigation (VIO) robot pose;
step 5, if GPS data exists at the current moment, performing adaptive robust Kalman filtering on the GPS position data and the VIO pose estimation data to obtain a final robot pose; and if no GPS data exists, replacing the final pose data with the VIO pose data.
Compared with the prior art, the invention has the following remarkable advantages:
1) The method estimates the local relative pose of the robot with a tightly coupled visual-inertial approach, reducing the estimation error that a single sensor, such as binocular vision alone or an IMU (inertial measurement unit) alone, would incur, and can obtain an accurate local relative pose of the robot.
2) The poses estimated by the GPS and by binocular visual-inertial navigation are fused by filtering. Because GPS data carry no accumulated error, they provide a global constraint on the robot pose estimate; this avoids the computational cost of global parameter optimization and enables application to large-scale scenes.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a flow chart of binocular vision estimation of robot pose and space point three-dimensional coordinates.
FIG. 3 is a flowchart illustrating key frame determination and sliding window establishment.
Fig. 4 is a schematic diagram of sliding window optimization.
Detailed Description
As shown in fig. 1, a robot vision inertial navigation combined positioning method with GPS fusion includes the following steps:
step 1, extract feature points of the left-camera image at the current moment from the feature points of the left-camera image at the previous moment and match them; extract feature points of the right-camera image at the current moment from the feature points of the current left-camera image and match them; and use the matched feature points to calculate the three-dimensional coordinates of the feature points and the relative poses of the image frames, as shown in fig. 2. The specific steps are as follows:
step 1-1, extracting Shi-Tomasi corner points from the first left-camera image frame with the goodFeaturesToTrack() function in OpenCV, and tracking them with the LK (Lucas-Kanade) optical-flow method to obtain the feature points of subsequent left-camera images and of the right-camera images at the corresponding moments;
step 1-2, judging whether the number of feature points obtained by LK optical-flow tracking in a subsequent left-camera image falls below the threshold number; if so, extracting additional Shi-Tomasi corner points from the current left-camera image to make up the difference;
step 1-3, triangulating the matched feature points of the left-camera and right-camera images, and solving the three-dimensional coordinates of the spatial points corresponding to the feature points by Singular Value Decomposition (SVD);
and step 1-4, calculating the relative pose of the image frame, i.e. the rotation R and translation t, from the feature-point coordinates in the left-camera image and their corresponding three-dimensional coordinates.
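For illustration, the feature tracking of steps 1-1 and 1-2 relies on OpenCV's goodFeaturesToTrack() and LK optical flow; the SVD triangulation of step 1-3 can be sketched in NumPy as below. The intrinsic matrix K and the 12 cm baseline are hypothetical values, not taken from the patent:

```python
import numpy as np

def triangulate_svd(P1, P2, uv1, uv2):
    """Linear triangulation (step 1-3): recover the 3-D point behind one
    stereo feature match by Singular Value Decomposition of the 4x4
    system built from both 3x4 projection matrices."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # right singular vector of the smallest singular value
    return X[:3] / X[3]     # homogeneous -> Euclidean

# Hypothetical stereo rig: shared intrinsics K, right camera offset by baseline b
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
b = 0.12
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-b], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.4, -0.2, 3.0])
X_hat = triangulate_svd(P_left, P_right, project(P_left, X_true), project(P_right, X_true))
print(np.allclose(X_hat, X_true, atol=1e-6))  # True (noiseless synthetic match)
```

With noiseless matches the linear solution is exact; with real optical-flow matches the SVD solution is the least-squares 3-D point, which the pose estimation of step 1-4 then consumes.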
Step 2, selecting a key frame in the image stream, creating a sliding window, and adding the key frame into the sliding window, as shown in fig. 3, the specific steps are as follows:
step 2-1, if the current image is the first frame image, the current image is used as a key frame, a sliding window is created, the key frame is added into the sliding window, and the step 3 is carried out; otherwise, executing the step 2-2;
step 2-2, calculating the parallax between the current image and the first frame image in the sliding window by using the formula (1), if the parallax is larger than a threshold value, taking the image as a key frame, re-establishing a new sliding window, adding the key frame into the sliding window, and performing step 3; otherwise, executing the step 2-3;
Parallax = (1/n) Σ_{i=1}^{n} √((u_{1i} − u_{2i})² + (v_{1i} − v_{2i})²)   (1)

where Parallax is the parallax between the two images, n is the number of matched feature points between the two frames, and (u_{1i}, v_{1i}) are the pixel coordinates of the i-th feature point in the previous frame image (with (u_{2i}, v_{2i}) the corresponding coordinates in the current frame).
Step 2-3, calculating the number N_track of feature points in the current image tracked from the previous image, and the number N_longtrack of feature points in the current image that can be traced through 4 consecutive key frames. If equation (2) is satisfied, the frame is added to the sliding window as a key frame; otherwise it is discarded:

where N is the number of image feature points.
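The key-frame test of steps 2-1 to 2-3 can be sketched as follows. The parallax threshold, the tracked-feature ratio, and the exact form of equation (2) are illustrative assumptions, since the threshold values are not disclosed here:

```python
import numpy as np

# Hypothetical thresholds; the patent leaves the actual parallax threshold
# and the form of equation (2) unspecified.
PARALLAX_THRESH = 10.0   # pixels
MIN_TRACK_RATIO = 0.3    # fraction of the N image feature points

def mean_parallax(uv_a, uv_b):
    """Average pixel displacement of matched features between two frames
    (the Parallax quantity of equation (1))."""
    return float(np.mean(np.linalg.norm(uv_b - uv_a, axis=1)))

def is_keyframe(uv_first, uv_cur, n_track, n_longtrack, n_features):
    # Step 2-2: large parallax w.r.t. the first frame of the sliding window
    if mean_parallax(uv_first, uv_cur) > PARALLAX_THRESH:
        return True
    # Step 2-3: keep frames whose tracking is degrading
    # (illustrative stand-in for equation (2))
    return (n_track < MIN_TRACK_RATIO * n_features
            or n_longtrack < MIN_TRACK_RATIO * n_features)

uv_first = np.zeros((6, 2))
uv_cur = np.full((6, 2), 8.0)   # every feature moved 8*sqrt(2) ~ 11.3 px
print(is_keyframe(uv_first, uv_cur, n_track=50, n_longtrack=40, n_features=100))  # True
```

A frame that neither moves enough nor loses tracking is discarded, which keeps the sliding window short and bounds the optimization cost of step 4.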
Step 3, calculating a visual re-projection error, an IMU pre-integration residual error and a zero-offset residual error and combining the errors into a combined pose estimation residual error, which specifically comprises the following steps:
Using the IMU pre-integration formulas, the IMU pre-integration residuals — the rotation residual, velocity residual, and position residual — and the IMU zero-bias residuals — the gyroscope bias residual and the accelerometer bias residual — are obtained from the IMU measurements between key frames.
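The pre-integrated quantities whose residuals are formed above can be illustrated with a minimal, bias-free Euler integration; the patent's formulation additionally tracks the gyroscope/accelerometer biases and their Jacobians, and the sampling rate and trajectory below are made up:

```python
import numpy as np

def preintegrate(acc, gyro, dt):
    """Bias-free Euler pre-integration of IMU samples between two key
    frames: accumulates relative position dp, velocity dv, and rotation
    dR in the frame of the first key frame."""
    dp, dv, dR = np.zeros(3), np.zeros(3), np.eye(3)
    for a, w in zip(acc, gyro):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        # first-order rotation update via the skew-symmetric matrix of w*dt
        wx, wy, wz = w * dt
        dR = dR @ (np.eye(3) + np.array([[0, -wz, wy],
                                         [wz, 0, -wx],
                                         [-wy, wx, 0]]))
    return dp, dv, dR

# Constant acceleration a = [1, 0, 0] m/s^2, no rotation, 100 samples at 100 Hz
acc = [np.array([1.0, 0.0, 0.0])] * 100
gyro = [np.zeros(3)] * 100
dp, dv, dR = preintegrate(acc, gyro, 0.01)
print(np.isclose(dv[0], 1.0) and np.isclose(dp[0], 0.5))  # True
```

The pre-integration residual of the method is the difference between such integrated quantities and the relative motion predicted by the key-frame states, so each term can be recomputed cheaply when a bias estimate changes.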
Using the relative image-frame poses obtained in step 1-4, the spatial-point coordinates calculated in step 1-3 are re-projected through the pinhole camera model to obtain the visual re-projection error:

r(k, i) = z_{k,i} − h(X_k, P_i)   (3)

where z_{k,i} is the feature-point coordinate corresponding to the i-th spatial point in the k-th frame image, h(·) is the pinhole-camera projection function, X_k is the projection matrix (pose) of the k-th frame image, and P_i is the coordinate of the i-th spatial point.
The invention designs the joint pose-estimation residual of equation (4) and solves for the state increment Δx that minimizes it:

min_x ‖f(x)‖²   (4)

where f(x) is the stacked residual vector formed from the IMU pre-integration and zero-bias residuals between consecutive key frames and the visual re-projection residuals, N is the number of key frames in the sliding window, and M is the number of three-dimensional spatial points of the first key frame in the sliding window.
Step 4, carrying out nonlinear optimization on the combined pose estimation residual error by using an L-M method to obtain an optimized visual inertial navigation (VIO) robot pose, specifically:
Setting a trust-region matrix D to constrain the range of Δx, the first-order expansion of equation (4) is

min_{Δx} ‖f(x) + JΔx‖²,  subject to ‖DΔx‖² ≤ u   (5)

where J is the Jacobian matrix of f(x) and u is the trust-region radius. The Lagrange multiplier λ converts the constrained optimization problem into an unconstrained one, as shown in the following formula:

min_{Δx} ‖f(x) + JΔx‖² + λ‖DΔx‖²   (6)
expanding equation (6) and making the derivative equal to zero, one can obtain:
Δx = −(H + λI)^{−1} J^T f(x)   (7)
where H = J^T J and I is the identity matrix. The iteration count is set; the Δx computed from equation (7) is repeatedly added to x and equation (7) is recomputed until the iteration count is reached or Δx is smaller than the set threshold, thereby obtaining the optimized relative pose of each image in the sliding window and the three-dimensional coordinates of the spatial points.
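The damped update Δx = −(H + λI)^{−1} J^T f(x) of step 4 can be exercised on a toy least-squares problem; the exponential residual model and the λ adaptation schedule below are illustrative assumptions, while the patent applies the same update to the joint pose residual:

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, lam=1e-3, iters=100, tol=1e-10):
    """Minimal L-M loop: dx = -(H + lam*I)^{-1} J^T f(x) with H = J^T J;
    lam is raised on a rejected step and lowered on an accepted one."""
    x = x0.copy()
    for _ in range(iters):
        r, J = f(x), jac(x)
        H = J.T @ J
        dx = -np.linalg.solve(H + lam * np.eye(len(x)), J.T @ r)
        if np.linalg.norm(f(x + dx)) < np.linalg.norm(r):
            x, lam = x + dx, lam * 0.5   # accept: behave more like Gauss-Newton
        else:
            lam *= 2.0                    # reject: behave more like gradient descent
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy residual: fit y = a*exp(b*t) to noiseless samples generated with a=2, b=0.5
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(0.5 * t)
f = lambda x: x[0] * np.exp(x[1] * t) - y
jac = lambda x: np.stack([np.exp(x[1] * t), x[0] * t * np.exp(x[1] * t)], axis=1)
x_opt = levenberg_marquardt(f, jac, np.array([1.0, 0.0]))
print(np.allclose(x_opt, [2.0, 0.5], atol=1e-4))  # True
```

The damping term λI interpolates between Gauss-Newton (small λ) and gradient descent (large λ), which is what makes the method robust to a poor initial pose guess.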
Step 5, if GPS data are available at the current moment, adaptive robust Kalman filtering is applied to the GPS position data and the pose data of the visual-inertial odometry (VIO) system to obtain the final robot pose; if no GPS data are available, the VIO pose data are used as the final pose. The steps are as follows.
Step 5-1, judging whether GPS data exists, and if so, turning to step 5-2; otherwise, turning to the step 5-3;
step 5-2, fusing the GPS data and the pose data of the visual-inertial (VIO) system by adaptive robust Kalman filtering, which estimates the measurement noise online; the specific steps are as follows:
step 5-2-1, obtaining the state-prediction equation and the covariance-matrix prediction from the system state equation and measurement equation:

x̂_{k,k−1} = A x̂_{k−1}   (8)
P_{k,k−1} = A P_{k−1} A^T + Q   (9)

The system state equation and measurement equation in the invention are:

x_k = A x_{k−1} + w_{k−1}   (10)
z_k = H x_k + v_k   (11)

where x_k = [x_k y_k z_k v_x v_y v_z]^T, and the velocity output by the VIO visual-inertial odometer and the position data of the GPS are used as the observations. A = [I_{3×3} Δt·I_{3×3}; 0 I_{3×3}] is the state-transition matrix and I_{3×3} is the 3×3 identity matrix. H is the unit measurement matrix; the system noise w_{k−1} and measurement noise v_k are treated as Gaussian white noise with covariance matrices Q and R, respectively.
Step 5-2-2, testing the innovation statistic

T_k = ε_k^T (H P_{k,k−1} H^T + R_{k−1})^{−1} ε_k   (12)

where ε_k = z_k − H x̂_{k,k−1} is the innovation. When the statistic T_k is smaller than the threshold, step 5-2-3 is executed; when T_k is larger than the threshold, the system is anomalous, the robot state-estimation process at the current moment is cancelled, and step 5-3 is executed.
Step 5-2-3, designing the statistical-property update formula of the measurement noise as

R_k = (1 − d_k) R_{k−1} + d_k (ε_k ε_k^T − H P_{k,k−1} H^T)   (13)

where d_k = (1 − b)/(1 − b^{k+1}) and b is the forgetting factor, generally 0.95 ≤ b ≤ 0.99. The Kalman gain matrix is then

K_k = P_{k,k−1} H^T (H P_{k,k−1} H^T + R_k)^{−1}   (14)

The state estimate and covariance matrix of the inspection robot at the current moment are, respectively,

x̂_k = x̂_{k,k−1} + K_k ε_k   (15)
P_k = (I − K_k H_k) P_{k,k−1}   (16)
and 5-3, using the pose data of the current visual inertial navigation (VIO) robot as the final robot pose.
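One filtering cycle of steps 5-2-1 to 5-2-3 can be sketched as a single function. The gate value, the forgetting factor b = 0.97, and the Δt = 0.1 s constant-velocity model are illustrative assumptions; note that the Sage-Husa-style update of equation (13) can transiently yield an indefinite R, which production implementations typically clamp to stay positive semi-definite:

```python
import numpy as np

def adaptive_kf_step(x, P, z, A, H, Q, R, k, b=0.97, gate=50.0):
    """One adaptive robust KF cycle (step 5-2): predict, innovation
    gating (eq. 12), forgetting-factor update of the measurement noise
    (eq. 13), then the Kalman gain and update (eqs. 14-16)."""
    x_pred = A @ x                          # state prediction (eq. 8)
    P_pred = A @ P @ A.T + Q                # covariance prediction (eq. 9)
    eps = z - H @ x_pred                    # innovation
    C = H @ P_pred @ H.T + R
    T = float(eps @ np.linalg.inv(C) @ eps)
    if T > gate:                            # anomalous GPS fix: skip the update
        return x_pred, P_pred, R, False
    d = (1.0 - b) / (1.0 - b ** (k + 1))    # forgetting weight of eq. (13)
    R = (1.0 - d) * R + d * (np.outer(eps, eps) - H @ P_pred @ H.T)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ eps
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, R, True

dt = 0.1                                    # hypothetical filter period
A = np.block([[np.eye(3), dt * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
H = np.hstack([np.eye(3), np.zeros((3, 3))])   # GPS observes position only
x0, P0 = np.zeros(6), np.eye(6)
Q, R0 = 0.01 * np.eye(6), 0.1 * np.eye(3)
z = np.array([1.0, 0.0, 0.0])               # one GPS fix, 1 m ahead of the prediction
x1, P1, R1, ok = adaptive_kf_step(x0, P0, z, A, H, Q, R0, k=1)
print(ok, 0.9 < x1[0] < 1.0)                # accepted; position pulled toward the fix
```

When the gate rejects a fix (or no GPS data arrive), the filter falls through to step 5-3 and the VIO pose stands in for the final pose.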
Claims (10)
1. A robot vision inertial navigation combined positioning method fused with a GPS is characterized by comprising the following specific steps:
step 1, extracting characteristic points of a left camera and a right camera at the current moment, matching the characteristic points, and calculating the three-dimensional coordinates of the characteristic points and the relative pose of an image frame by using the matched characteristic points;
step 2, selecting a key frame in the image stream, creating a sliding window, and adding the key frame into the sliding window;
step 3, calculating a visual re-projection error, an IMU pre-integration residual error and a zero-offset residual error and combining the errors into a combined pose estimation residual error;
step 4, carrying out nonlinear optimization on the combined pose estimation residual error by using an L-M method to obtain an optimized pose of the visual inertial navigation robot;
step 5, if GPS data exists at the current moment, performing adaptive robust Kalman filtering on the GPS position data and VIO pose estimation data to obtain a final robot pose; and if no GPS data exists, replacing the final pose data with the VIO pose data.
2. The GPS-fused robot vision-inertial navigation combined positioning method according to claim 1, wherein the specific steps of extracting the camera feature points of the left camera and the right camera at the current moment, matching the feature points, and calculating the three-dimensional coordinates of the feature points and the relative pose of the image frame by using the matched feature points are as follows:
step 1-1, extracting the corner points of the first frame image of the left camera, and tracking by an LK optical flow method to obtain subsequent image feature points of the left camera and right camera image feature points at corresponding moments;
step 1-2, judging whether the number of feature points obtained by tracking in a subsequent left-camera image is smaller than the threshold number, and if so, extracting additional corner points from the current left-camera image to make up the difference;
step 1-3, triangularizing matched feature points of the left camera image and the right camera image, and solving a three-dimensional coordinate of a space point corresponding to the feature points through singular value decomposition;
and 1-4, calculating the relative pose of the image frame according to the feature point coordinates in the left camera image and the corresponding three-dimensional coordinates thereof, namely rotation R and displacement t.
3. The GPS-fused robot vision-inertial navigation combined positioning method according to claim 1, wherein the method for selecting key frames in the image stream specifically comprises:
step 2-1, if the current image is the first frame image, the current image is used as a key frame, a sliding window is created, the key frame is added into the sliding window, and the step 3 is carried out; otherwise, executing the step 2-2;
step 2-2, calculating the parallax between the current image and the first frame image in the sliding window, if the parallax is larger than a threshold value, taking the image as a key frame, re-establishing a new sliding window, adding the key frame into the sliding window, and performing step 3; otherwise, executing the step 2-3;
step 2-3, calculating the number N_track of feature points in the current image tracked from the previous image and the number N_longtrack of feature points in the current image that can be traced through 4 consecutive key frames; if the following formula is satisfied, the frame is added to the sliding window as a key frame, otherwise it is discarded:

where N is the number of image feature points.
4. The GPS-fused robot vision-inertial navigation combined positioning method according to claim 3, wherein the calculation formula of the parallax between the current image and the first frame image in the sliding window is as follows:
Parallax = (1/n) Σ_{i=1}^{n} √((u_{1i} − u_{2i})² + (v_{1i} − v_{2i})²)

where Parallax is the parallax between the two images, n is the number of matched feature points between the two frames, and (u_{1i}, v_{1i}) are the pixel coordinates of the i-th feature point in the previous frame image (with (u_{2i}, v_{2i}) the corresponding coordinates in the current frame).
5. The GPS-fused robot vision-inertial navigation combined positioning method according to claim 1, characterized in that the spatial point coordinates are re-projected to a projection plane by using a pinhole camera model according to the relative pose of the image frame, and the obtained vision re-projection error formula is as follows:
r(k, i) = z_{k,i} − h(X_k, P_i)

where z_{k,i} is the feature-point coordinate corresponding to the i-th spatial point in the k-th frame image, h(·) is the pinhole-camera projection function, X_k is the projection matrix (pose) of the k-th frame image, and P_i is the coordinate of the i-th spatial point.
6. The GPS-fused robot vision inertial navigation combined positioning method according to claim 1, wherein the joint pose-estimation residual f(x) stacks the IMU pre-integration and zero-bias residuals between consecutive key frames in the sliding window together with the visual re-projection residuals, and the state increment Δx is sought to minimize ‖f(x)‖².
7. The GPS-fused robot vision inertial navigation combined positioning method according to claim 1, wherein the specific method for performing nonlinear optimization on the combined pose estimation residual by using an L-M method to obtain the optimized vision inertial navigation robot pose is as follows:
setting a trust-region matrix D to constrain the range of Δx, and taking the first-order expansion of the joint pose-estimation residual:

min_{Δx} ‖f(x) + JΔx‖²,  subject to ‖DΔx‖² ≤ u

where J is the Jacobian matrix of f(x) and u is the trust-region radius;

converting the constrained optimization problem into an unconstrained one with the Lagrange multiplier λ, as shown in the following formula:

min_{Δx} ‖f(x) + JΔx‖² + λ‖DΔx‖²
developing the above equation and making the derivative equal to zero, one obtains:
Δx = −(H + λI)^{−1} J^T f(x)
wherein H = J^T J and I is the identity matrix;
and adding Δx to x and recalculating Δx until the iteration count is reached or Δx is smaller than the set threshold, thereby obtaining the optimized visual-inertial robot pose.
8. The combined positioning method for the visual inertial navigation of the robot fusing the GPS according to claim 1, wherein the specific method for obtaining the final robot pose by performing the adaptive robust Kalman filtering on the GPS position data and the position and pose data of the visual inertial navigation (VIO) robot is as follows:
the state-prediction equation and the covariance-matrix prediction of the system are obtained from the system state equation and measurement equation:

x̂_{k,k−1} = A x̂_{k−1}
P_{k,k−1} = A P_{k−1} A^T + Q

where A = [I_{3×3} Δt·I_{3×3}; 0 I_{3×3}] is the state-transition matrix and I_{3×3} is the 3×3 identity matrix; H is the unit measurement matrix, and the system noise w_{k−1} and measurement noise v_k are treated as Gaussian white noise with covariance matrices Q and R, respectively;
Calculating innovation statistic when TkWhen the threshold value is smaller than the threshold value, executing the next step; when T iskWhen the current time is greater than the threshold value, the system is abnormal and cancels the current timeThe robot state estimation process replaces final pose data with VIO pose data;
the statistical-property update formula of the measurement noise is determined as:

R_k = (1 − d_k) R_{k−1} + d_k (ε_k ε_k^T − H P_{k,k−1} H^T)

where d_k = (1 − b)/(1 − b^{k+1}) and b is the forgetting factor;

the Kalman gain matrix is:

K_k = P_{k,k−1} H^T (H P_{k,k−1} H^T + R_k)^{−1}

and the state estimate and covariance matrix of the inspection robot at the current moment are obtained, respectively, as:

x̂_k = x̂_{k,k−1} + K_k ε_k
P_k = (I − K_k H_k) P_{k,k−1}.
9. the combined positioning method for visual inertial navigation of a robot fusing GPS according to claim 8, wherein the system state equation and the prediction equation are as follows:
xk=Axk-1+wk-1
zk=Hxk+vk
wherein x_k = [x_k y_k z_k v_x v_y v_z]^T, and the velocity output by the VIO visual-inertial odometer and the position data of the GPS are used as the observations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911312943.7A CN111121767B (en) | 2019-12-18 | 2019-12-18 | GPS-fused robot vision inertial navigation combined positioning method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111121767A true CN111121767A (en) | 2020-05-08 |
CN111121767B CN111121767B (en) | 2023-06-30 |
Family
ID=70499734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911312943.7A Active CN111121767B (en) | 2019-12-18 | 2019-12-18 | GPS-fused robot vision inertial navigation combined positioning method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111121767B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111739063A (en) * | 2020-06-23 | 2020-10-02 | 郑州大学 | Electric power inspection robot positioning method based on multi-sensor fusion |
CN111750855A (en) * | 2020-08-03 | 2020-10-09 | 长安大学 | Intelligent vibratory roller of independent operation of vision leading |
CN111880207A (en) * | 2020-07-09 | 2020-11-03 | 南京航空航天大学 | Visual inertial satellite tight coupling positioning method based on wavelet neural network |
CN111895989A (en) * | 2020-06-24 | 2020-11-06 | 浙江大华技术股份有限公司 | Robot positioning method and device and electronic equipment |
Application Events
- 2019-12-18: Application CN201911312943.7A filed in China; granted as CN111121767B (status: Active)
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140333741A1 (en) * | 2013-05-08 | 2014-11-13 | Regents Of The University Of Minnesota | Constrained key frame localization and mapping for vision-aided inertial navigation |
CN103983263A (en) * | 2014-05-30 | 2014-08-13 | 东南大学 | Inertia/visual integrated navigation method adopting iterated extended Kalman filter and neural network |
US20160305784A1 (en) * | 2015-04-17 | 2016-10-20 | Regents Of The University Of Minnesota | Iterative kalman smoother for robust 3d localization for vision-aided inertial navigation |
CN105865452A (en) * | 2016-04-29 | 2016-08-17 | 浙江国自机器人技术有限公司 | Mobile platform pose estimation method based on indirect Kalman filtering |
CN107909614A (en) * | 2017-11-13 | 2018-04-13 | 中国矿业大学 | Inspection robot localization method in GPS-denied environments |
CN108489482A (en) * | 2018-02-13 | 2018-09-04 | 视辰信息科技(上海)有限公司 | Implementation method and system of a visual-inertial odometer |
CN109993113A (en) * | 2019-03-29 | 2019-07-09 | 东北大学 | Pose estimation method based on fusion of RGB-D and IMU information |
CN110296702A (en) * | 2019-07-30 | 2019-10-01 | 清华大学 | Pose estimation method and device with tightly coupled visual sensor and inertial navigation |
CN110472585A (en) * | 2019-08-16 | 2019-11-19 | 中南大学 | VI-SLAM loop closure detection method aided by inertial navigation attitude trajectory information |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113701766A (en) * | 2020-05-20 | 2021-11-26 | 浙江欣奕华智能科技有限公司 | Robot map construction method, robot positioning method and device |
CN111739063B (en) * | 2020-06-23 | 2023-08-18 | 郑州大学 | Positioning method of power inspection robot based on multi-sensor fusion |
CN111739063A (en) * | 2020-06-23 | 2020-10-02 | 郑州大学 | Electric power inspection robot positioning method based on multi-sensor fusion |
CN111895989A (en) * | 2020-06-24 | 2020-11-06 | 浙江大华技术股份有限公司 | Robot positioning method and device and electronic equipment |
CN111880207A (en) * | 2020-07-09 | 2020-11-03 | 南京航空航天大学 | Visual inertial satellite tight coupling positioning method based on wavelet neural network |
CN111750855A (en) * | 2020-08-03 | 2020-10-09 | 长安大学 | Vision-guided autonomously operating intelligent vibratory roller |
CN111750855B (en) * | 2020-08-03 | 2022-02-15 | 长安大学 | Vision-guided autonomously operating intelligent vibratory roller |
CN112240768A (en) * | 2020-09-10 | 2021-01-19 | 西安电子科技大学 | Visual inertial navigation fusion SLAM method based on Runge-Kutta4 improved pre-integration |
CN112233177A (en) * | 2020-10-10 | 2021-01-15 | 中国安全生产科学研究院 | Unmanned aerial vehicle pose estimation method and system |
CN112305576A (en) * | 2020-10-31 | 2021-02-02 | 中环曼普科技(南京)有限公司 | Multi-sensor fusion SLAM algorithm and system thereof |
CN112525197A (en) * | 2020-11-23 | 2021-03-19 | 中国科学院空天信息创新研究院 | Ultra-wideband inertial navigation fusion pose estimation method based on graph optimization algorithm |
CN112525197B (en) * | 2020-11-23 | 2022-10-28 | 中国科学院空天信息创新研究院 | Ultra-wideband inertial navigation fusion pose estimation method based on graph optimization algorithm |
CN112712107B (en) * | 2020-12-10 | 2022-06-28 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN112712107A (en) * | 2020-12-10 | 2021-04-27 | 浙江大学 | Optimization-based vision and laser SLAM fusion positioning method |
CN113031040A (en) * | 2021-03-01 | 2021-06-25 | 宁夏大学 | Positioning method and system for airport ground service vehicles |
CN113110446A (en) * | 2021-04-13 | 2021-07-13 | 深圳市千乘机器人有限公司 | Dynamic inspection method for autonomous mobile robot |
CN113203418A (en) * | 2021-04-20 | 2021-08-03 | 同济大学 | GNSS/INS visual fusion positioning method and system based on sequential Kalman filtering |
CN113432602A (en) * | 2021-06-23 | 2021-09-24 | 西安电子科技大学 | Unmanned aerial vehicle pose estimation method based on multi-sensor fusion |
CN113465596A (en) * | 2021-06-25 | 2021-10-01 | 电子科技大学 | Four-rotor unmanned aerial vehicle positioning method based on multi-sensor fusion |
CN113516714A (en) * | 2021-07-15 | 2021-10-19 | 北京理工大学 | Visual SLAM method based on IMU pre-integration information acceleration feature matching |
CN113838129A (en) * | 2021-08-12 | 2021-12-24 | 高德软件有限公司 | Method, device and system for obtaining pose information |
CN113838129B (en) * | 2021-08-12 | 2024-03-15 | 高德软件有限公司 | Method, device and system for obtaining pose information |
CN114719843A (en) * | 2022-06-09 | 2022-07-08 | 长沙金维信息技术有限公司 | High-precision positioning method in complex environment |
CN114719843B (en) * | 2022-06-09 | 2022-09-30 | 长沙金维信息技术有限公司 | High-precision positioning method in complex environment |
CN115793001A (en) * | 2023-02-07 | 2023-03-14 | 立得空间信息技术股份有限公司 | Vision, inertial navigation and satellite navigation fusion positioning method based on inertial navigation multiplexing |
CN117765084A (en) * | 2024-02-21 | 2024-03-26 | 电子科技大学 | Visual positioning method using iterative solution based on dynamic branch prediction |
CN117765084B (en) * | 2024-02-21 | 2024-05-03 | 电子科技大学 | Visual positioning method using iterative solution based on dynamic branch prediction |
Also Published As
Publication number | Publication date |
---|---|
CN111121767B (en) | 2023-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111121767B (en) | GPS-fused robot vision inertial navigation combined positioning method | |
Lin et al. | R²LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping | |
CN109885080B (en) | Autonomous control system and autonomous control method | |
US10295365B2 (en) | State estimation for aerial vehicles using multi-sensor fusion | |
CN111156998B (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
US10907971B2 (en) | Square root inverse Schmidt-Kalman filters for vision-aided inertial navigation and mapping | |
CN112556719B (en) | Visual inertial odometer implementation method based on CNN-EKF | |
CN113376669B (en) | Monocular VIO-GNSS fusion positioning algorithm based on dotted line characteristics | |
CN113551665B (en) | High-dynamic motion state sensing system and sensing method for motion carrier | |
Liu | A robust and efficient lidar-inertial-visual fused simultaneous localization and mapping system with loop closure | |
Caruso et al. | An inverse square root filter for robust indoor/outdoor magneto-visual-inertial odometry | |
CN112444245A (en) * | Insect-inspired integrated vision navigation method based on polarized light, optical flow vectors, and a binocular vision sensor | |
Pan et al. | Tightly-coupled multi-sensor fusion for localization with LiDAR feature maps | |
Qayyum et al. | IMU aided RGB-D SLAM | |
Ligocki et al. | Fusing the RGBD SLAM with wheel odometry | |
Liu et al. | Semi-dense visual-inertial odometry and mapping for computationally constrained platforms | |
CN113503872B (en) | Low-speed unmanned aerial vehicle positioning method based on fusion of camera and consumption-level IMU | |
Huai | Collaborative SLAM with crowdsourced data | |
CN114295127A (en) | RONIN and 6DOF positioning fusion method and hardware system framework | |
Bulunseechart et al. | A method for UAV multi-sensor fusion 3D-localization under degraded or denied GPS situation | |
Conway et al. | Vision-based Velocimetry over Unknown Terrain with a Low-Noise IMU | |
Nguyen et al. | Likelihood-based iterated cubature multi-state-constraint Kalman filter for visual inertial navigation system | |
Ronen et al. | Development challenges and performance analysis of drone visual/inertial slam in a global reference system | |
CN117058430B (en) | Method, apparatus, electronic device and storage medium for field of view matching | |
CN117268373B (en) | Autonomous navigation method and system for multi-sensor information fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||