CN114543786A - Wall-climbing robot positioning method based on visual inertial odometer - Google Patents
Wall-climbing robot positioning method based on visual inertial odometer
- Publication number
- CN114543786A (application CN202210337210.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01C21/005 — Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/16 — Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165 — Inertial navigation combined with non-inertial navigation instruments
- G01C21/1656 — Inertial navigation combined with passive imaging devices, e.g. cameras
- G01C21/20 — Instruments for performing navigational calculations
- Y02T10/40 — Engine management systems
Abstract
The invention belongs to the field of wall-climbing robot positioning and discloses a wall-climbing robot positioning method based on a visual inertial odometer, comprising the following steps: acquiring measurement data through a visual inertial odometer mounted on the wall-climbing robot; solving the state variables with a conventional bundle adjustment model in a sliding-window manner according to the acquired measurement data; projecting the robot positions in the sliding window onto the member according to the coordinate-system transformation relation between the robot and the member, and acquiring the projection-point positions and the surface normal directions at the projection points, i.e., the adsorption information; constructing an adsorption constraint term from the adsorption information and adding it to the conventional bundle adjustment model to form an improved bundle adjustment model; and solving the state variables with the improved bundle adjustment model to obtain the optimal pose of the robot, thereby positioning the wall-climbing robot. The invention reduces the accumulated error of the odometer, improves the precision and robustness of the positioning algorithm, and achieves large-range, high-precision positioning of the wall-climbing robot.
Description
Technical Field
The invention belongs to the field of wall-climbing robot positioning, and particularly relates to a wall-climbing robot positioning method based on a visual inertial odometer.
Background
Autonomous positioning is a key technology for mobile robots and an important precondition for the autonomous movement of wall-climbing robots. Visual-inertial odometry (VIO) combines the complementary advantages of a camera and an inertial measurement unit (IMU) and is widely used for the autonomous positioning of mobile robots. However, for some wall-climbing robots the onboard camera can only observe the adsorption surface, so the field of view is small, and the vacuum fan vibrates strongly, which degrades the precision and robustness of the system. A conventional VIO has four unobservable degrees of freedom and requires prior information to determine the initial position and the rotation about the gravity direction (the yaw angle). Errors accumulate during motion, and the yaw-angle error in particular strongly affects the positioning result; when the images degrade or the vibration is large, the VIO system easily diverges and positioning fails.
Therefore, the insufficient robustness of the conventional visual inertial odometer when positioning a wall-climbing robot is a problem that urgently needs to be solved in this field.
Disclosure of Invention
Aiming at the defects or improvement requirements of the prior art, the invention provides a wall-climbing robot positioning method based on a visual inertial odometer, with the goal of improving the robustness and precision of the visual inertial odometry model when positioning a wall-climbing robot and achieving large-range, high-precision positioning of the wall-climbing robot.
To achieve this aim, the invention provides a wall-climbing robot positioning method based on a visual inertial odometer, comprising the following steps:
S1, while the wall-climbing robot moves adsorbed on the surface of the member, acquiring measurement data through the visual inertial odometer on the robot;
S2, solving the state variables with the bundle adjustment model of a conventional visual inertial odometer according to the acquired measurement data, the measurement data and robot state variables of the current time and a preceding period being processed in the form of a sliding window; the state variables comprise the robot positions and attitudes;
S3, projecting the robot positions in the sliding window onto the member according to the coordinate-system transformation relation between the robot and the member, and acquiring the projection-point positions and the surface normal directions at the projection points, i.e., the adsorption information;
S4, constructing adsorption constraint terms from the adsorption information and adding them to the conventional bundle adjustment model to form an improved bundle adjustment model;
S5, solving the state variables with the improved bundle adjustment model to obtain the optimal pose of the robot, thereby positioning the wall-climbing robot.
wherein $p_k$ is the position of the robot at time k, $\hat{p}_k$ is the position of the corresponding projection point, m is the direction vector of the robot's axis, and n is the direction vector of the surface normal at the projection point.
Further preferably, acquiring the measurement data specifically comprises: reading the image data collected by the camera and extracting image features; simultaneously reading the inertial-measurement-unit data and performing pre-integration to obtain the IMU data integrations between image frames.
As a further preference, the conventional bundle adjustment model comprises a prior constraint term, an inertial constraint term and a visual constraint term, wherein the prior constraint term is generated by the marginalization operation of the sliding window, the inertial constraint term is constructed from the IMU data integrations, and the visual constraint term is constructed from the image data and image features.
As a further preference, the improved bundle adjustment model is specifically:

$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k\in B}\left\|r_B\!\left(\hat z_{b_{k+1}}^{b_k},\chi\right)\right\|^2+\sum_{(l,j)\in C}\rho\!\left(\left\|r_C\!\left(\hat z_{c_l}^{c_j},\chi\right)\right\|^2\right)+\sum_{k\in PR}\rho\!\left(\left\|r_{PR}(pr_k,\chi)\right\|^2\right)\right\}$$

wherein $\chi$ is the state variable within the sliding window; $r_p(\chi)$ is the prior constraint term; $r_B(\hat z_{b_{k+1}}^{b_k},\chi)$ is the inertial constraint term, B is the set of IMU data integrations within the sliding window, and $\hat z_{b_{k+1}}^{b_k}$ is the IMU data integration from time k to k+1; $r_C(\hat z_{c_l}^{c_j},\chi)$ is the visual constraint term, C is the set of visual measurements within the sliding window, (l, j) is a combination of any two different image frames in the set, and $\hat z_{c_l}^{c_j}$ is the visual reprojection from $c_l$ to $c_j$, where $c_l$ and $c_j$ are an image feature in frames l and j, respectively; $\rho(\cdot)$ denotes the robust kernel function; $r_{PR}(pr_k,\chi)$ is the adsorption constraint term, PR is the set of adsorption information within the sliding window, and $pr_k$ is the adsorption information at time k.
As a further preference, ρ(·) adopts the Cauchy robust kernel function.
Preferably, when the state variables are solved according to the improved bundle adjustment model, a nonlinear optimization method is adopted.
As a further preference, the nonlinear optimization method is the Levenberg-Marquardt algorithm.
Further preferably, the initial coordinate-system transformation relation between the robot and the adsorption member is acquired in advance, before the robot moves, either by measurement with an external sensor or by setting in advance a specific position on the member at which the robot is placed.
As a further preference, a nearest-neighbor search is used when projecting the robot positions in the sliding window onto the member.
Generally, compared with the prior art, the technical solution conceived by the invention mainly has the following technical advantages:
1. The method combines the surface data of the object to which the wall-climbing robot adsorbs and adds the adsorption-surface constraint to the conventional visual-inertial odometry model. Under large vibration or a limited camera field of view, this reduces the accumulated error of the odometer and improves the precision and robustness of the visual inertial odometer, thereby improving the positioning precision of the wall-climbing robot. The invention greatly improves the estimation precision of the robot's relative attitude without introducing new equipment or sensors, providing an effective new method for large-range, high-precision positioning of wall-climbing robots.
2. The invention processes data in a sliding-window form: each time the robot pose is solved, a batch of fixed size is processed, comprising the sensor measurements and robot state variables of the most recent period; when a new sensor measurement arrives, the oldest data are marginalized so that the size of the optimization problem stays constant. The current measurement is thus related to the measurements and state variables of a preceding period, which yields better precision, while the problem size remains fixed rather than growing without bound, preserving computational efficiency.
3. The robust kernel function in the adsorption constraint term preferably uses the Cauchy robust kernel to suppress outliers, which reduces the influence of erroneous data and improves the robustness of the algorithm.
4. The Levenberg-Marquardt algorithm is preferably adopted as the nonlinear optimization method for solving the bundle adjustment model, with the advantage of fast optimization.
Drawings
FIG. 1 is a diagram of the state variables participating in optimization within the sliding window according to an embodiment of the present invention;
FIG. 2 is a flowchart of the wall-climbing robot positioning method based on a visual inertial odometer according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment of the invention provides a wall-climbing robot positioning method based on a visual inertial odometer, which, as shown in FIG. 2, comprises the following steps:
before positioning is started, the visual inertial odometer is arranged on the wall-climbing robot, and the visual inertial odometer is initialized; the visual inertial odometer comprises a camera and an Inertial Measurement Unit (IMU), and the initialization method of the visual inertial odometer comprises the steps of calibrating internal and external parameters of the camera and the IMU in advance, and performing dynamic motion to align visual and IMU tracks. The large complex component adsorbed by the wall-climbing robot can acquire the normal direction on any position of the surface of the large complex component, including but not limited to scanning point cloud, CAD model and the like.
Specifically, the camera is a sensor that can acquire external image data in real time, including but not limited to a monocular camera, a depth camera or an RGB-D camera; preferably a depth camera is adopted, which can acquire image depth and grayscale information. OpenCV is used to extract corner features from the image, an optical-flow method tracks the features, and the feature positions and depths are output to the back end of the visual inertial odometer. The inertial measurement unit, comprising an accelerometer and a gyroscope, is a sensor that measures its own acceleration and angular velocity.
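As an illustrative sketch of the corner-extraction step, the minimum-eigenvalue (Shi-Tomasi) response can be computed from the image structure tensor. This pure-NumPy stand-in for the OpenCV call (e.g. `cv2.goodFeaturesToTrack`) is an assumption about the detector, not the patent's exact pipeline; the window size is an assumed parameter:

```python
import numpy as np

def box_sum(a, w):
    """Sum of a over every w-by-w window (integral-image trick)."""
    c = np.pad(np.cumsum(np.cumsum(a, 0), 1), ((1, 0), (1, 0)))
    return c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]

def shi_tomasi_response(img, w=3):
    """Minimum eigenvalue of the structure tensor in each w-by-w window."""
    iy, ix = np.gradient(img.astype(float))
    sxx, syy, sxy = box_sum(ix * ix, w), box_sum(iy * iy, w), box_sum(ix * iy, w)
    tr, root = sxx + syy, np.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)
    return 0.5 * (tr - root)  # lambda_min: large only where the image varies in both directions
```

Pixels with a large response would then be kept as trackable corners and handed to the optical-flow tracker.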
S1, while the wall-climbing robot moves adsorbed on the surface of the member, acquire measurement data through the visual inertial odometer on the robot.
Specifically, acquiring the measurement data comprises: reading the image data collected by the camera, enhancing and correcting it, and extracting image features; simultaneously reading the inertial-measurement-unit data and performing pre-integration to obtain the IMU data integrations between image frames.
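The pre-integration step can be sketched as follows. This is a simplified Euler integration of the IMU samples between two image frames, expressed in the first frame's body coordinates; bias and noise terms are omitted for brevity, so it is an illustration rather than the patent's exact formulation:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (so skew(w) @ v == cross(w, v))."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def preintegrate(accs, gyros, dt):
    """Accumulate relative rotation dR, velocity dv and position dp over
    the IMU samples between two image frames (biases/noise omitted)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(accs, gyros):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt ** 2
        dv = dv + (dR @ a) * dt
        dR = dR @ (np.eye(3) + skew(w * dt))  # first-order exponential map
    return dR, dv, dp
```

The resulting (dR, dv, dp) plays the role of one IMU data integration $\hat z_{b_{k+1}}^{b_k}$ feeding the inertial constraint term.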
S2, process the measurement data and robot state variables of the current time and a preceding period in the form of a sliding window. Specifically, as shown in FIG. 1, the sliding window processes a batch of data of fixed size each time the robot pose is solved, comprising the sensor measurements and robot state variables of the most recent period; when a new sensor measurement arrives, the oldest data are marginalized so that the size of the optimization problem stays constant.
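The sliding-window bookkeeping described above can be sketched as follows (the class and callback names are illustrative, not from the patent):

```python
from collections import deque

class SlidingWindow:
    """Fixed-size window of (measurement, state) pairs; when full, the
    oldest frame is handed to a marginalization callback so the size of
    the optimization problem stays constant."""

    def __init__(self, size, marginalize):
        self.size = size
        self.marginalize = marginalize  # would fold the oldest frame into the prior term
        self.frames = deque()

    def push(self, measurement, state):
        if len(self.frames) == self.size:
            self.marginalize(self.frames.popleft())
        self.frames.append((measurement, state))
```

Each call to `push` corresponds to a new image frame arriving; the marginalization callback is where the prior constraint term $r_p(\chi)$ of the bundle adjustment model would be updated.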
The state variables are solved with a conventional bundle adjustment model according to the acquired measurement data; the state variable at each moment contains the robot pose, i.e., the position and attitude of the robot.
Specifically, the conventional bundle adjustment model is:

$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k\in B}\left\|r_B\!\left(\hat z_{b_{k+1}}^{b_k},\chi\right)\right\|^2+\sum_{(l,j)\in C}\rho\!\left(\left\|r_C\!\left(\hat z_{c_l}^{c_j},\chi\right)\right\|^2\right)\right\}$$

wherein $\chi$ is the state variable within the sliding window; $r_p(\chi)$ is the prior constraint term generated by the marginalization operation of the sliding window; $r_B(\hat z_{b_{k+1}}^{b_k},\chi)$ is the inertial constraint term, B is the set of IMU data integrations within the sliding window, and $\hat z_{b_{k+1}}^{b_k}$ is the IMU data integration from time k to k+1; $r_C(\hat z_{c_l}^{c_j},\chi)$ is the visual constraint term, C is the set of visual measurements within the sliding window, (l, j) is a combination of any two different image frames in the set, and $\hat z_{c_l}^{c_j}$ is the visual reprojection from $c_l$ to $c_j$, where $c_l$ and $c_j$ are an image feature in frames l and j, respectively; $\rho(\cdot)$ denotes the robust kernel function.
The conventional bundle adjustment model is the constraint model commonly used in conventional visual inertial odometry; see Qin, T., P. Li and S. Shen, "VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator," IEEE Transactions on Robotics, 2018, 34(4): p. 1004-1020.
S3, project the robot positions in the sliding window onto the surface of the adsorption member according to the coordinate-system transformation relation between the robot and the member, and acquire the projection-point positions and the surface normal directions at the projection points, i.e., the adsorption information.
Specifically, before the wall-climbing robot works for the first time, the coordinate-system transformation relation between the robot and the adsorption member is calibrated in advance, i.e., the initial global position of the robot relative to the member is obtained. The initial global position refers to the robot's position on the member at initialization and does not need to be updated during the whole positioning process. Acquisition methods include, but are not limited to, measurement by an external sensor (such as a laser tracker) or setting in advance a specific position on the member at which the robot is placed.
Preferably, a nearest-neighbor search is used when projecting the robot position onto the member: the point of the member model closest to the trajectory point is taken as the projection point, and the projection-point position and surface normal direction are obtained.
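A minimal sketch of this projection step: with the member's surface sampled as a point cloud carrying per-point normals (e.g. from a scan or a CAD model), the nearest-neighbor search returns the projection point and its normal. Brute force is used here for clarity; a KD-tree would replace the `argmin` in practice, and the function names are illustrative:

```python
import numpy as np

def project_to_surface(p, surface_pts, surface_normals):
    """Nearest model point to the robot position p, and the surface normal there."""
    i = int(np.argmin(np.linalg.norm(surface_pts - p, axis=1)))
    return surface_pts[i], surface_normals[i]
```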
S4, construct the adsorption constraint terms from the adsorption information within the sliding window and add them to the conventional bundle adjustment model to form the improved bundle adjustment model.
Specifically, the improved bundle adjustment model is:

$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k\in B}\left\|r_B\!\left(\hat z_{b_{k+1}}^{b_k},\chi\right)\right\|^2+\sum_{(l,j)\in C}\rho\!\left(\left\|r_C\!\left(\hat z_{c_l}^{c_j},\chi\right)\right\|^2\right)+\sum_{k\in PR}\rho\!\left(\left\|r_{PR}(pr_k,\chi)\right\|^2\right)\right\}$$

wherein $r_{PR}(pr_k,\chi)$ is the adsorption constraint term and $\rho(\cdot)$ denotes the robust kernel function, preferably the Cauchy robust kernel; PR is the set of adsorption information within the sliding window, and $pr_k$ is the adsorption information at time k.
More specifically, the adsorption constraint term $r_{PR}(pr_k,\chi)$ is constructed from the adsorption information, wherein $p_k$ is the position of the robot at time k, $\hat{p}_k$ is the position of the corresponding projection point, m is the direction vector of the robot's axis, and n is the direction vector of the surface normal at the projection point.
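The patent's equation images are not reproduced in this text. One plausible residual consistent with the listed symbols — an assumption for illustration, not the patent's exact formula — penalizes the robot's normal-direction distance from the projection point and the misalignment between the robot axis m and the surface normal n:

```python
import numpy as np

def adsorption_residual(p_k, p_hat, m, n):
    """Hypothetical adsorption residual: [point-to-plane distance, axis/normal misalignment]."""
    n = n / np.linalg.norm(n)
    m = m / np.linalg.norm(m)
    return np.array([
        n @ (p_k - p_hat),  # robot should lie on the adsorption surface
        1.0 - m @ n,        # robot axis should align with the surface normal
    ])
```

Both components vanish when the estimated pose is consistent with the adsorption information, which is the behavior the constraint term requires.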
S5, solve the state variables of the improved bundle adjustment model with a nonlinear optimization method to obtain the optimal pose of the robot, thereby positioning the wall-climbing robot.
Preferably, the nonlinear optimization method is the Levenberg-Marquardt algorithm.
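A compact sketch of the Levenberg-Marquardt iteration, illustrating the damped Gauss-Newton update such solvers perform; a production back end would use an analytic Jacobian inside an optimization library, so this toy version with a numeric Jacobian is an assumption for illustration only:

```python
import numpy as np

def levenberg_marquardt(r, x0, iters=50, lam=1e-3, eps=1e-6):
    """Minimize sum(r(x)^2) via damped Gauss-Newton:
    solve (J^T J + lam*I) dx = -J^T r and adapt the damping lam."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        res = r(x)
        # forward-difference numeric Jacobian, one column per parameter
        J = np.stack([(r(x + eps * e) - res) / eps for e in np.eye(len(x))], axis=1)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ res)
        if np.sum(r(x + dx) ** 2) < np.sum(res ** 2):
            x, lam = x + dx, lam * 0.5   # step accepted: trust the model more
        else:
            lam *= 10.0                  # step rejected: increase damping
    return x
```

Small damping makes the step Gauss-Newton-like (fast near the optimum); large damping makes it a short gradient step (safe far from it), which is why the method converges quickly on bundle adjustment problems.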
S6, upon receiving new sensor information, update the sliding window and repeat steps S2-S5 to update the robot pose.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (10)
1. A wall-climbing robot positioning method based on a visual inertial odometer, characterized by comprising the following steps:
S1, while the wall-climbing robot moves adsorbed on the surface of the member, acquiring measurement data through the visual inertial odometer on the robot;
S2, solving the state variables with the bundle adjustment model of a conventional visual inertial odometer according to the acquired measurement data, the measurement data and robot state variables of the current time and a preceding period being processed in the form of a sliding window; the state variables comprise the robot positions and attitudes;
S3, projecting the robot positions in the sliding window onto the member according to the coordinate-system transformation relation between the robot and the member, and acquiring the projection-point positions and the surface normal directions at the projection points, i.e., the adsorption information;
S4, constructing adsorption constraint terms from the adsorption information and adding them to the conventional bundle adjustment model to form an improved bundle adjustment model;
S5, solving the state variables with the improved bundle adjustment model to obtain the optimal pose of the robot, thereby positioning the wall-climbing robot.
2. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 1, wherein the adsorption constraint term $r_{PR}(pr_k,\chi)$ of the improved bundle adjustment model is constructed as follows:
3. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 2, wherein acquiring the measurement data specifically comprises: reading the image data collected by the camera and extracting image features; simultaneously reading the inertial-measurement-unit data and performing pre-integration to obtain the IMU data integrations between image frames.
4. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 3, wherein the conventional bundle adjustment model comprises a prior constraint term, an inertial constraint term and a visual constraint term, the prior constraint term being generated by the marginalization operation of the sliding window, the inertial constraint term being constructed from the IMU data integrations, and the visual constraint term being constructed from the image data and image features.
5. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 4, wherein the improved bundle adjustment model is specifically:

$$\min_{\chi}\left\{\left\|r_p(\chi)\right\|^2+\sum_{k\in B}\left\|r_B\!\left(\hat z_{b_{k+1}}^{b_k},\chi\right)\right\|^2+\sum_{(l,j)\in C}\rho\!\left(\left\|r_C\!\left(\hat z_{c_l}^{c_j},\chi\right)\right\|^2\right)+\sum_{k\in PR}\rho\!\left(\left\|r_{PR}(pr_k,\chi)\right\|^2\right)\right\}$$

wherein $\chi$ is the state variable within the sliding window; $r_p(\chi)$ is the prior constraint term; $r_B(\hat z_{b_{k+1}}^{b_k},\chi)$ is the inertial constraint term, B is the set of IMU data integrations within the sliding window, and $\hat z_{b_{k+1}}^{b_k}$ is the IMU data integration from time k to k+1; $r_C(\hat z_{c_l}^{c_j},\chi)$ is the visual constraint term, C is the set of visual measurements within the sliding window, (l, j) is a combination of any two different image frames in the set, and $\hat z_{c_l}^{c_j}$ is the visual reprojection from $c_l$ to $c_j$, where $c_l$ and $c_j$ are an image feature in frames l and j, respectively; $\rho(\cdot)$ denotes the robust kernel function; PR is the set of adsorption information within the sliding window, and $pr_k$ is the adsorption information at time k.
6. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 5, wherein ρ(·) is the robust kernel function and the Cauchy robust kernel function is adopted.
7. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 5, wherein, when the state variables are solved according to the improved bundle adjustment model, a nonlinear optimization method is adopted.
8. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 7, wherein the nonlinear optimization method is the Levenberg-Marquardt algorithm.
9. The wall-climbing robot positioning method based on a visual inertial odometer according to claim 1, wherein the initial coordinate-system transformation relation between the robot and the adsorption member is acquired in advance, before the robot moves, either by measurement with an external sensor or by setting in advance a specific position on the member at which the robot is placed.
10. The wall-climbing robot positioning method based on a visual inertial odometer according to any one of claims 1-9, wherein a nearest-neighbor search is used when projecting the robot positions in the sliding window onto the member.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210337210.4A CN114543786B (en) | 2022-03-31 | 2022-03-31 | Wall climbing robot positioning method based on visual inertial odometer |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114543786A true CN114543786A (en) | 2022-05-27 |
CN114543786B CN114543786B (en) | 2024-02-02 |
Family
ID=81665675
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210337210.4A Active CN114543786B (en) | 2022-03-31 | 2022-03-31 | Wall climbing robot positioning method based on visual inertial odometer |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114543786B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100030378A1 (en) * | 2006-09-29 | 2010-02-04 | Samsung Heavy Ind. Co., Ltd. | Multi-function robot for moving on wall using indoor global positioning system |
CN108489482A (en) * | 2018-02-13 | 2018-09-04 | 视辰信息科技(上海)有限公司 | The realization method and system of vision inertia odometer |
WO2019157925A1 (en) * | 2018-02-13 | 2019-08-22 | 视辰信息科技(上海)有限公司 | Visual-inertial odometry implementation method and system |
CN110345944A (en) * | 2019-05-27 | 2019-10-18 | 浙江工业大学 | Merge the robot localization method of visual signature and IMU information |
CN110375738A (en) * | 2019-06-21 | 2019-10-25 | 西安电子科技大学 | A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method |
CN110717927A (en) * | 2019-10-10 | 2020-01-21 | 桂林电子科技大学 | Indoor robot motion estimation method based on deep learning and visual inertial fusion |
US20220011779A1 (en) * | 2020-07-09 | 2022-01-13 | Brookhurst Garage, Inc. | Autonomous Robotic Navigation In Storage Site |
CN113358117A (en) * | 2021-03-09 | 2021-09-07 | 北京工业大学 | Visual inertial indoor positioning method using map |
CN113091738A (en) * | 2021-04-09 | 2021-07-09 | 安徽工程大学 | Mobile robot map construction method based on visual inertial navigation fusion and related equipment |
CN113432593A (en) * | 2021-06-25 | 2021-09-24 | 北京华捷艾米科技有限公司 | Centralized synchronous positioning and map construction method, device and system |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116242366A (en) * | 2023-03-23 | 2023-06-09 | 广东省特种设备检测研究院东莞检测院 | Spherical tank inner wall climbing robot walking space tracking and navigation method |
CN116242366B (en) * | 2023-03-23 | 2023-09-12 | 广东省特种设备检测研究院东莞检测院 | Spherical tank inner wall climbing robot walking space tracking and navigation method |
Also Published As
Publication number | Publication date |
---|---|
CN114543786B (en) | 2024-02-02 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |