CN107356252B - Indoor robot positioning method integrating visual odometer and physical odometer - Google Patents


Info

Publication number
CN107356252B
CN107356252B (application CN201710408258.9A)
Authority
CN
China
Prior art keywords: robot, odometer, physical, pose, closed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201710408258.9A
Other languages
Chinese (zh)
Other versions
CN107356252A (en)
Inventor
周唐恺
江济良
王运志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Krund Robot Co ltd
Original Assignee
Qingdao Krund Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Krund Robot Co ltd filed Critical Qingdao Krund Robot Co ltd
Priority: CN201710408258.9A
Publication of application: CN107356252A
Application granted
Publication of grant: CN107356252B


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses an indoor robot positioning method fusing a visual odometer and a physical odometer. The invention adds a visual sensor that performs closed-loop detection on the robot in a known environment, eliminating the global accumulated error of the particle-filter-based physical odometer so that the odometer's global error only accumulates within each stage, and constructs a closed map on this basis. By fusing the visual odometer, the disclosed method effectively solves the error-accumulation problem of the physical odometer; it enables the robot to self-position and accurately relocalize in a known environment, adds little computation, guarantees efficiency and real-time performance, and meets indoor navigation accuracy requirements, making it an effective remedy, at the present stage, for inaccurate robot positioning in large environments.

Description

Indoor robot positioning method integrating visual odometer and physical odometer
Technical Field
The invention relates to accurate autonomous positioning of indoor mobile robots, and in particular to an indoor robot positioning method fusing a visual odometer and a physical odometer.
Background
In research on intelligent navigation for autonomous mobile robots, simultaneous localization and mapping (SLAM) in an unknown environment is a key technology, valuable both in engineering and academically, and it has been a research hotspot in the field for the last two decades. Researchers have proposed a variety of methods for solving the SLAM problem and have applied many kinds of sensors to the environment-perception problem within SLAM.
The problem SLAM technology must solve is selecting an appropriate sensor system to realize real-time robot positioning. In practice, lidar, with high accuracy in both range and bearing, is the preferred sensor, while infrared, ultrasonic, IMU, visual sensors and odometers are also needed to assist positioning and improve accuracy. However, multi-sensor fusion has always been a technical difficulty in the SLAM field, and at present there is essentially no commercially deployed SLAM method that fuses sensors effectively. For indoor mobile robots, considering actual use scenarios and the current state of development, adding a visual odometer alongside the lidar and physical odometer to improve positioning accuracy is the most practical route for indoor-robot SLAM at the production stage.
The prior art, an improved Monte Carlo particle-filter positioning method based on a physical odometer, is adequate when the robot operates in a structurally simple, small indoor environment. However, because the physical odometer computes only the displacement increment between two instants, it considers only local motion; its errors therefore keep superimposing and accumulating until the drift grows too large to eliminate, and positioning errors are especially large when the wheels slip or tilt.
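A minimal dead-reckoning sketch illustrates why these errors compound (this is not from the patent; the 0.1 m step and the 0.002 rad/step heading bias are hypothetical numbers standing in for wheel slip): each pose is the previous pose plus a displacement increment, so a small systematic bias grows without bound.

```python
import math

def integrate_odometry(increments):
    """Dead-reckon a 2D pose (x, y, theta) from per-step
    (forward distance, heading change) increments."""
    x = y = theta = 0.0
    for d, dtheta in increments:
        theta += dtheta
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y, theta

# Drive a 10 m straight line in 100 steps of 0.1 m; the "real" wheels
# add a tiny systematic heading bias of 0.002 rad per step.
true_steps = [(0.1, 0.0)] * 100
biased_steps = [(0.1, 0.002)] * 100

tx, ty, _ = integrate_odometry(true_steps)
bx, by, _ = integrate_odometry(biased_steps)
drift = math.hypot(bx - tx, by - ty)  # ~1 m of drift over a 10 m run
```

Because the bias enters the heading, the positional drift grows roughly quadratically with distance traveled, which is why the method resets it with closed-loop detection rather than trying to model it away.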
Disclosure of Invention
In view of the above, the invention provides an indoor robot positioning method fusing a visual odometer and a physical odometer, which accurately positions the robot by extracting ORB (Oriented FAST and Rotated BRIEF) features from collected images for image matching, camera pose estimation and closed-loop detection.
An indoor robot positioning method integrating a visual odometer and a physical odometer comprises the following implementation steps:
step 1, acquiring color and depth images by using a camera;
step 2, extracting ORB features from two consecutive images, computing a descriptor for each ORB feature point, and estimating the camera pose change through feature matching between adjacent images;
step 3, during the robot's motion, selecting among adjacent frames the image with the most shared feature points and the best matches as a key frame, while storing the robot trajectory and the laser data corresponding to each key frame;
step 4, when the robot moves into a known environment, first searching an offline-trained BoW dictionary for the feature points matching the current frame to relocalize the robot, then computing the robot's current pose through TF, and finally publishing the robot pose information for closed-loop relocalization via the ROS message mechanism;
step 5, using an extended Kalman filter that subscribes to the closed-loop-detection visual odometry information and the real-time AMCL particle-filter pose estimate, fusing them into an accurate real-time robot pose so as to eliminate the global error accumulated by the physical odometer; each local closed-loop detection cancels the odometer's accumulated error, so the global error only ever accumulates within a stage;
step 6, finally, when the robot returns to the initial position, global closed-loop detection optimizes the entire motion trajectory and the poses of all key frames, and a grid map is constructed from the stored laser data, completing the whole process of simultaneous localization and mapping.
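Step 4's dictionary lookup can be caricatured as a nearest-neighbor search over bag-of-words histograms. The sketch below is only illustrative: a production system quantizes ORB descriptors against a large offline-trained vocabulary tree (e.g. DBoW2), whereas the eight-word vocabulary and the frame contents here are invented.

```python
import numpy as np

def bow_histogram(word_ids, vocab_size):
    """Turn a frame's visual-word ids (one per ORB descriptor,
    assigned by the offline dictionary) into a normalized histogram."""
    h = np.bincount(word_ids, minlength=vocab_size).astype(float)
    n = np.linalg.norm(h)
    return h / n if n else h

def best_match(query_words, keyframe_words, vocab_size=8):
    """Index and cosine score of the stored keyframe most similar
    to the query frame."""
    q = bow_histogram(query_words, vocab_size)
    scores = [float(q @ bow_histogram(k, vocab_size)) for k in keyframe_words]
    return int(np.argmax(scores)), max(scores)

# Toy data: keyframe 1 shares the most visual words with the query,
# so relocalization snaps to it.
keyframes = [np.array([0, 0, 1, 2]),
             np.array([3, 4, 4, 5]),
             np.array([6, 7, 7, 0])]
query = np.array([3, 4, 5, 5])
idx, score = best_match(query, keyframes)
```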
Further, the camera pose change is estimated by: 1) combining the depth image to obtain depth information for the valid feature points; 2) matching the feature points by their ORB descriptors and depth values, and rejecting erroneous point pairs with the RANSAC algorithm; 3) solving for the rotation matrix R and translation vector T between adjacent images to estimate the camera pose transformation.
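Substeps 2) and 3) — rejecting bad point pairs with RANSAC and then solving for R and T — can be sketched for 3D-3D correspondences (each ORB feature back-projected with its depth value). This is an illustrative reconstruction, not the patent's implementation: the SVD-based Kabsch solver, the inlier threshold, and the synthetic data are all assumptions.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ≈ P @ R.T + t (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp

def ransac_pose(P, Q, iters=200, thresh=0.05, rng=np.random.default_rng(0)):
    """Estimate (R, t) from matched 3D points, discarding outlier pairs."""
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)   # minimal sample
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(P[best], Q[best])         # refit on inliers

# Synthetic demo: a known rotation about z plus a translation,
# with 10 of 60 correspondences corrupted to mimic bad matches.
rng = np.random.default_rng(1)
P = rng.random((60, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.5, -0.2, 0.1])
Q = P @ R_true.T + t_true
Q[:10] += rng.random((10, 3)) + 0.5    # outlier pairs
R_est, t_est = ransac_pose(P, Q)
```

The refit on the surviving inliers is what makes the recovered pose insensitive to the corrupted matches.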
Advantageous effects:
the invention adds a visual sensor to carry out closed-loop detection on the robot in a known environment, so as to eliminate the accumulated error of the particle filter-based physical odometer in the whole world, change the global error of the odometer into staged accumulation, and construct a closed map on the basis. Compared with the traditional SLAM method, the method disclosed by the invention has the advantages that the problem of error accumulation of the physical odometer is effectively solved after the visual odometer is fused, the robot can carry out self-positioning and accurate relocation in the known environment, the increased calculation amount is small, the efficiency and the real-time performance can be ensured, the indoor navigation requirement can be met in precision, and the method is an effective method for solving the problem of inaccurate robot positioning in the large environment at the present stage.
Drawings
FIG. 1 is a flow chart of a fusion positioning method of the present invention;
FIG. 2 is a schematic diagram of the positioning process of the fusion visual odometer and physical odometer of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
As shown in fig. 1 and 2, the present invention provides an indoor robot positioning method fusing a visual odometer and a physical odometer, implemented by the following steps:
step 1, acquiring color and depth images with an ASUS Xtion depth camera;
step 2, extracting ORB features from two consecutive images, computing a descriptor for each ORB feature point, and estimating the camera pose change through feature matching between adjacent images: 1) combining the depth image to obtain depth information for the valid feature points; 2) matching the feature points by their ORB descriptors and depth values, and rejecting erroneous point pairs with the RANSAC algorithm; 3) solving for the rotation matrix R and translation vector T between adjacent images to estimate the camera pose transformation;
step 3, during the robot's motion, selecting among adjacent frames the image with the most shared feature points and the best matches as a key frame, while storing the robot trajectory and the laser data corresponding to each key frame;
step 4, when the robot moves into a known environment, first searching an offline-trained BoW dictionary for the feature points matching the current frame to relocalize the robot, then computing the robot's current pose through TF, and finally publishing the robot pose information for closed-loop relocalization via the ROS message mechanism;
step 5, using an extended Kalman filter that subscribes to the closed-loop-detection visual odometry information and the real-time AMCL particle-filter pose estimate, fusing them into an accurate real-time robot pose so as to eliminate the global error accumulated by the physical odometer; each local closed-loop detection cancels the odometer's accumulated error, so the global error only ever accumulates within a stage;
step 6, finally, when the robot returns to the initial position, global closed-loop detection optimizes the entire motion trajectory and the poses of all key frames, and a grid map is constructed from the stored laser data, completing the whole process of simultaneous localization and mapping.
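Step 5's fusion can be reduced to a single EKF measurement update with an identity measurement model: the drifted odometry/AMCL pose prediction is pulled toward the visual closed-loop fix in proportion to their covariances. The poses and covariances below are hypothetical, and a real system (e.g. a ROS robot_pose_ekf-style node, which the description's subscribe/publish vocabulary suggests) also runs a motion-model prediction step between updates.

```python
import numpy as np

def kalman_fuse(x_pred, P_pred, z, R):
    """One EKF measurement update with an identity measurement model:
    fuse a predicted pose with an observed pose, each with covariance."""
    K = P_pred @ np.linalg.inv(P_pred + R)          # Kalman gain
    x = x_pred + K @ (z - x_pred)                   # fused (x, y, theta)
    P = (np.eye(len(x_pred)) - K) @ P_pred          # reduced covariance
    return x, P

# Hypothetical numbers: the odometry pose has drifted and is uncertain;
# the visual closed-loop fix is precise, so the fused pose snaps to it.
x_odom = np.array([10.3, 5.1, 0.12])    # drifted odometry/AMCL pose
P_odom = np.diag([0.5, 0.5, 0.1])       # large accumulated uncertainty
z_vis  = np.array([10.0, 5.0, 0.10])    # visual relocalization pose
R_vis  = np.diag([0.01, 0.01, 0.005])   # small measurement uncertainty

x_fused, P_fused = kalman_fuse(x_odom, P_odom, z_vis, R_vis)
```

After each such update the covariance shrinks, which is exactly the "staged accumulation" behavior: drift grows between closed loops and is cut back at each one.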
The closed map constructed by the method has an actual width of 86.4m and a height of 38.4m.
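For scale — assuming a common occupancy-grid resolution of 0.05 m per cell, which the patent does not state — the reported 86.4 m by 38.4 m map corresponds to:

```python
# Hypothetical resolution; the patent gives only the metric extents.
width_m, height_m, res = 86.4, 38.4, 0.05
cells_w = round(width_m / res)     # 1728 cells wide
cells_h = round(height_m / res)    # 768 cells high
total_cells = cells_w * cells_h    # 1,327,104 cells
```

on the order of a million grid cells, a size that grid-based SLAM implementations routinely handle in real time.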
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (1)

1. An indoor robot positioning method integrating a visual odometer and a physical odometer is characterized by comprising the following implementation steps:
step 1, acquiring color and depth images by using a camera;
step 2, extracting ORB features from two consecutive images, computing a descriptor for each ORB feature point, and estimating the camera pose change through feature matching between adjacent images, wherein the camera pose change is estimated by: 1) combining the depth image to obtain depth information for the valid feature points; 2) matching the feature points by their ORB descriptors and depth values, and rejecting erroneous point pairs with the RANSAC algorithm; 3) solving for the rotation matrix R and translation vector T between adjacent images to estimate the camera pose transformation;
step 3, during the robot's motion, selecting among adjacent frames the image with the most shared feature points and the best matches as a key frame, while storing the robot trajectory and the laser data corresponding to each key frame;
step 4, when the robot moves into a known environment, first searching an offline-trained BoW dictionary for the feature points matching the current frame to relocalize the robot, then computing the robot's current pose through TF, and finally publishing the robot pose information for closed-loop relocalization via the ROS message mechanism;
step 5, using an extended Kalman filter that subscribes to the closed-loop-detection visual odometry information and the real-time AMCL particle-filter pose estimate, fusing them into an accurate real-time robot pose so as to eliminate the global error accumulated by the physical odometer; each local closed-loop detection cancels the odometer's accumulated error, so the global error only ever accumulates within a stage;
step 6, finally, when the robot returns to the initial position, global closed-loop detection optimizes the entire motion trajectory and the poses of all key frames, and a grid map is constructed from the stored laser data, completing the whole process of simultaneous localization and mapping.
CN201710408258.9A 2017-06-02 2017-06-02 Indoor robot positioning method integrating visual odometer and physical odometer Expired - Fee Related CN107356252B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710408258.9A CN107356252B (en) 2017-06-02 2017-06-02 Indoor robot positioning method integrating visual odometer and physical odometer


Publications (2)

Publication Number Publication Date
CN107356252A CN107356252A (en) 2017-11-17
CN107356252B true CN107356252B (en) 2020-06-16

Family

ID=60271649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710408258.9A Expired - Fee Related CN107356252B (en) 2017-06-02 2017-06-02 Indoor robot positioning method integrating visual odometer and physical odometer

Country Status (1)

Country Link
CN (1) CN107356252B (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107092264A (en) * 2017-06-21 2017-08-25 北京理工大学 Towards the service robot autonomous navigation and automatic recharging method of bank's hall environment
CN108958232A (en) * 2017-12-07 2018-12-07 炬大科技有限公司 A kind of mobile sweeping robot SLAM device and algorithm based on deep vision
CN108247647B (en) * 2018-01-24 2021-06-22 速感科技(北京)有限公司 Cleaning robot
CN110360999B (en) 2018-03-26 2021-08-27 京东方科技集团股份有限公司 Indoor positioning method, indoor positioning system, and computer readable medium
CN108931245B (en) * 2018-08-02 2021-09-07 上海思岚科技有限公司 Local self-positioning method and equipment for mobile robot
CN111060101B (en) * 2018-10-16 2022-06-28 深圳市优必选科技有限公司 Vision-assisted distance SLAM method and device and robot
CN111322993B (en) * 2018-12-13 2022-03-04 杭州海康机器人技术有限公司 Visual positioning method and device
CN109658445A (en) * 2018-12-14 2019-04-19 北京旷视科技有限公司 Network training method, increment build drawing method, localization method, device and equipment
CN109633664B (en) * 2018-12-29 2023-03-28 南京理工大学工程技术研究院有限公司 Combined positioning method based on RGB-D and laser odometer
CN109813334B (en) * 2019-03-14 2023-04-07 西安工业大学 Binocular vision-based real-time high-precision vehicle mileage calculation method
CN110221607A (en) * 2019-05-22 2019-09-10 北京德威佳业科技有限公司 A kind of control system and control method holding formula vehicle access AGV
CN110196044A (en) * 2019-05-28 2019-09-03 广东亿嘉和科技有限公司 It is a kind of based on GPS closed loop detection Intelligent Mobile Robot build drawing method
CN110274597B (en) * 2019-06-13 2022-09-16 大连理工大学 Method for solving problem of 'particle binding frame' when indoor robot is started at any point
CN110333513B (en) * 2019-07-10 2023-01-10 国网四川省电力公司电力科学研究院 Particle filter SLAM method fusing least square method
CN110472585B (en) * 2019-08-16 2020-08-04 中南大学 VI-S L AM closed-loop detection method based on inertial navigation attitude track information assistance
CN110648354B (en) * 2019-09-29 2022-02-01 电子科技大学 Slam method in dynamic environment
CN111076733B (en) * 2019-12-10 2022-06-14 亿嘉和科技股份有限公司 Robot indoor map building method and system based on vision and laser slam
CN111337943B (en) * 2020-02-26 2022-04-05 同济大学 Mobile robot positioning method based on visual guidance laser repositioning
CN111862163B (en) * 2020-08-03 2021-07-23 湖北亿咖通科技有限公司 Trajectory optimization method and device
CN112450820B (en) * 2020-11-23 2022-01-21 深圳市银星智能科技股份有限公司 Pose optimization method, mobile robot and storage medium
CN112596064B (en) * 2020-11-30 2024-03-08 中科院软件研究所南京软件技术研究院 Laser and vision integrated global positioning method for indoor robot
CN113052906A (en) * 2021-04-01 2021-06-29 福州大学 Indoor robot positioning method based on monocular camera and odometer
CN113203419B (en) * 2021-04-25 2023-11-10 重庆大学 Indoor inspection robot correction positioning method based on neural network
CN113238554A (en) * 2021-05-08 2021-08-10 武汉科技大学 Indoor navigation method and system based on SLAM technology integrating laser and vision
CN113777615B (en) * 2021-07-19 2024-03-29 派特纳(上海)机器人科技有限公司 Positioning method and system of indoor robot and cleaning robot
CN113808270B (en) * 2021-09-28 2023-07-21 中国科学技术大学先进技术研究院 Unmanned test environment map building method and system based on internet access
CN114440892B (en) * 2022-01-27 2023-11-03 中国人民解放军军事科学院国防科技创新研究院 Self-positioning method based on topological map and odometer
CN117452429B (en) * 2023-12-21 2024-03-01 江苏中科重德智能科技有限公司 Robot positioning method and system based on multi-line laser radar

Citations (4)

Publication number Priority date Publication date Assignee Title
CN105045263A (en) * 2015-07-06 2015-11-11 杭州南江机器人股份有限公司 Kinect-based robot self-positioning method
CN105953785A (en) * 2016-04-15 2016-09-21 青岛克路德机器人有限公司 Map representation method for robot indoor autonomous navigation
CN106052674A (en) * 2016-05-20 2016-10-26 青岛克路德机器人有限公司 Indoor robot SLAM method and system
CN106780699A (en) * 2017-01-09 2017-05-31 东南大学 A kind of vision SLAM methods aided in based on SINS/GPS and odometer

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20140139635A1 (en) * 2012-09-17 2014-05-22 Nec Laboratories America, Inc. Real-time monocular structure from motion


Non-Patent Citations (1)

Title
A HIGH EFFICIENT 3D SLAM ALGORITHM BASED ON PCA;施尚杰等;《The 6th Annual IEEE International Conference on Cyber Technology in Automation,Control, and Intelligent Systems(cyber)》;20160330;第109-114页 *

Also Published As

Publication number Publication date
CN107356252A (en) 2017-11-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200616