CN113126602A - Positioning method of mobile robot - Google Patents
- Publication number
- CN113126602A (application CN201911388707.3A)
- Authority
- CN
- China
- Prior art keywords
- positioning
- pose
- mobile robot
- slam
- inertial navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides a positioning method for a mobile robot that uses a combined inertial/SLAM navigation mode: inertial navigation positioning compensates for loss of the SLAM fix, reducing the probability that the mobile robot loses its position and enabling accurate positioning in scenes with heavy pedestrian traffic.
Description
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a positioning method of a mobile robot.
Background
In today's rapidly developing Internet-of-Things society, urban development increasingly emphasizes intelligence and informatization, and intelligent facilities are being built in many public places. Robot technology based on artificial intelligence keeps emerging on the market, mobile robots are applied ever more widely, and their substitution for human labor in public places such as shopping malls, airports and banks is gradually enabling minimally staffed or unstaffed operation.
Mobile robots in such scenes generally adopt SLAM navigation: first, an accurate position of the robot is obtained from laser point cloud data acquired by a laser sensor; the point cloud data are then added to a grid map to build the scene map; finally, paths are planned on the built map to navigate the robot. However, this navigation method is ill-suited to public places with heavy traffic: in particular, when crowds surround the mobile robot, the SLAM computation develops large errors, causing the positioning to be lost.
Disclosure of Invention
In view of the above disadvantages, the technical problem solved by the present invention is to provide a positioning method for a mobile robot that uses a combined inertial/SLAM navigation mode: inertial navigation positioning compensates for loss of the SLAM fix, reducing the probability that the robot loses its position and achieving accurate positioning in scenes with heavy pedestrian traffic.
The purpose of the invention is realized by the following technical scheme:
A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device group; the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels. The positioning method comprises the following steps:
(1) after the mobile robot is powered on, the control processing module waits for a sampling signal; if no sampling signal has arrived, it continues to wait; otherwise it proceeds to step (2);
(2) the control processing module obtains the pose estimate of the mobile robot and the covariance estimate of the system pose uncertainty.
k is the sampling instant at which the control processing module collects the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate and the initial pose uncertainty covariance estimate, the initial pose estimate being [0, 0, 0];
(3) the control processing module acquires the SLAM pose of the mobile robot and the SLAM pose uncertainty covariance sent by the SLAM navigation positioning module;
(4) the control processing module acquires the inertial navigation pose of the mobile robot, the chassis speed and the inertial navigation pose uncertainty covariance sent by the inertial navigation module;
(5) the control processing module runs a SLAM/inertial-navigation composite positioning algorithm: according to the system pose uncertainty covariance, the SLAM positioning uncertainty covariance and the inertial navigation positioning uncertainty covariance, it performs a weighted fusion of the SLAM pose and the inertial navigation pose and updates the optimal pose of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1) to (5) are repeated.
Preferably, the pose is expressed as [x, y, θ], where x and y are the coordinates of the mobile robot's current pose in the pre-built SLAM map and θ is the heading angle of the mobile robot; the heading angle of the theoretical initial pose is 0°, with the counter-clockwise direction positive.
Preferably, the SLAM inertial navigation composite positioning algorithm includes the following steps:
(a) calculate, according to formula (1), the Mahalanobis distance D_m between the SLAM pose and the pose estimate; if D_m is less than the preset first threshold, calculate the optimized pose of the mobile robot after fusing the SLAM fix according to formulas (2)-(4) and perform step (b); if D_m is greater than or equal to the first threshold, ignore the SLAM pose without further processing and perform step (c);
wherein the first threshold is preset; formulas (2)-(4) define the Kalman gain of the fused SLAM fix and the system pose uncertainty covariance after the SLAM fix is fused;
(c) calculate, according to formula (5), the Mahalanobis distance between the chassis speed of the inertial navigation pose and the instantaneous speed of the pose estimate; if this distance is less than the preset second threshold, calculate the optimal pose of the mobile robot after fusing the inertial navigation fix according to formulas (6)-(10) and perform step (d); if it is greater than or equal to the second threshold, ignore the inertial navigation pose, and the control processing module judges that the positioning is lost;
wherein the second threshold is preset; t is any sampling instant in the positioning process; formulas (6)-(10) define the Kalman gain of the fused inertial navigation fix, the optimal chassis speed after fusing the inertial navigation fix, and the system uncertainty covariance after fusing the inertial navigation fix;
(d) update the pose estimate and the system pose uncertainty covariance estimate for the next sampling instant from the fused values obtained above.
Preferably, the SLAM navigation positioning module obtains the SLAM pose data by running an adaptive Monte Carlo localization (AMCL) algorithm.
Preferably, the first threshold is a value, obtained by actual testing in a stable scene, that guarantees accurate SLAM positioning; the second threshold is a value, likewise obtained in a stable scene, that guarantees accurate inertial navigation positioning.
Preferably, while positioning is not lost, the control processing module records the current valid pose and stores it in a list of last known-good positioning data; after judging that positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot back to the last position where positioning was not lost, the mobile robot being positioned by the inertial navigation module during the move;
after the positioning recovery is completed, the mobile robot resumes positioning with the method fusing SLAM positioning and inertial navigation.
Compared with the prior art, the invention provides a positioning method of a mobile robot that adopts a combined inertial/SLAM navigation mode: inertial navigation positioning compensates for loss of the SLAM fix, reducing the probability that the mobile robot loses its position and enabling accurate positioning in scenes with heavy pedestrian traffic.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a positioning method of a mobile robot according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device set; the inertial navigation module comprises an encoder and a gyroscope, and the driving device set comprises a chassis and two driving wheels. As shown in Fig. 1, the positioning method comprises the following steps:
(1) after the mobile robot is powered on, the control processing module waits for a sampling signal; if no sampling signal has arrived, it continues to wait; otherwise it proceeds to step (2);
(2) the control processing module obtains the pose estimate of the mobile robot and the covariance estimate of the system pose uncertainty.
k is the sampling instant at which the control processing module collects the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate and the initial pose uncertainty covariance estimate, the initial pose estimate being [0, 0, 0];
(3) the control processing module acquires the SLAM pose of the mobile robot and the SLAM pose uncertainty covariance sent by the SLAM navigation positioning module;
(4) the control processing module acquires the inertial navigation pose of the mobile robot, the chassis speed and the inertial navigation pose uncertainty covariance sent by the inertial navigation module;
(5) the control processing module runs the SLAM/inertial-navigation composite positioning algorithm: according to the system pose uncertainty covariance, the SLAM positioning uncertainty covariance and the inertial navigation positioning uncertainty covariance, it performs a weighted fusion of the SLAM pose and the inertial navigation pose and updates the optimal pose of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1) to (5) are repeated.
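The six steps above can be sketched as a sampling loop. This is an illustrative skeleton only: the module interfaces (`wait_for_sample`, `read_slam`, and so on) are assumptions, since the patent describes hardware modules, not code.

```python
# Skeleton of the patent's six-step positioning loop; all callables are
# assumed interfaces to the modules named in the text.
from typing import Callable, Tuple

Pose = Tuple[float, float, float]  # (x, y, theta) in the SLAM map frame

def positioning_loop(wait_for_sample: Callable[[], bool],
                     read_slam: Callable[[], Pose],
                     read_inertial: Callable[[], Pose],
                     fuse: Callable[[Pose, Pose, Pose], Pose],
                     drive: Callable[[Pose], None],
                     max_cycles: int = 100) -> Pose:
    estimate: Pose = (0.0, 0.0, 0.0)        # step (2): initial estimate [0, 0, 0]
    for _ in range(max_cycles):
        if not wait_for_sample():           # step (1): keep waiting for a sample
            continue
        slam_pose = read_slam()             # step (3): SLAM pose + covariance
        inertial_pose = read_inertial()     # step (4): inertial pose + speed
        estimate = fuse(estimate, slam_pose, inertial_pose)  # step (5): fusion
        drive(estimate)                     # step (6): drive, then repeat
    return estimate
```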
The pose is expressed as [x, y, θ], where x and y are the coordinates of the mobile robot's current pose in the pre-built SLAM map and θ is the heading angle of the mobile robot; the heading angle of the theoretical initial pose is 0°, with the counter-clockwise direction positive.
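A minimal sketch of this pose convention: [x, y, θ] with θ counter-clockwise positive and 0 at the initial pose. The wrapping interval (-π, π] is an assumption; the patent only fixes the zero and the sign convention.

```python
# Pose convention sketch: [x, y, theta], theta in radians, CCW-positive.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # x coordinate in the pre-built SLAM map frame
    y: float      # y coordinate in the pre-built SLAM map frame
    theta: float  # heading angle, CCW-positive, 0 at the theoretical initial pose

def normalize_angle(theta: float) -> float:
    """Wrap an angle to the interval (-pi, pi] (an assumed convention)."""
    while theta <= -math.pi:
        theta += 2.0 * math.pi
    while theta > math.pi:
        theta -= 2.0 * math.pi
    return theta

initial = Pose(0.0, 0.0, 0.0)  # the patent's initial estimate [0, 0, 0]
turned = Pose(1.0, 2.0, normalize_angle(3.5 * math.pi))
```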
The SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) calculate, according to formula (1), the Mahalanobis distance D_m between the SLAM pose and the pose estimate; if D_m is less than the preset first threshold, calculate the optimized pose of the mobile robot after fusing the SLAM fix according to formulas (2)-(4) and perform step (b); if D_m is greater than or equal to the first threshold, ignore the SLAM pose without further processing and perform step (c);
wherein the first threshold is preset; formulas (2)-(4) define the Kalman gain of the fused SLAM fix and the system pose uncertainty covariance after the SLAM fix is fused;
(c) calculate, according to formula (5), the Mahalanobis distance between the chassis speed of the inertial navigation pose and the instantaneous speed of the pose estimate; if this distance is less than the preset second threshold, calculate the optimal pose of the mobile robot after fusing the inertial navigation fix according to formulas (6)-(10) and perform step (d); if it is greater than or equal to the second threshold, ignore the inertial navigation pose, and the control processing module judges that the positioning is lost;
wherein the second threshold is preset; t is any sampling instant in the positioning process; formulas (6)-(10) define the Kalman gain of the fused inertial navigation fix, the optimal chassis speed after fusing the inertial navigation fix, and the system uncertainty covariance after fusing the inertial navigation fix;
(d) update the pose estimate and the system pose uncertainty covariance estimate for the next sampling instant from the fused values obtained above.
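The gating and fusion in steps (a)-(d) can be sketched as follows. This is a hedged reconstruction, not the patent's formulas (1)-(10), which are not reproduced in this text: it assumes the standard Mahalanobis distance D_m = sqrt((z - x)^T P^-1 (z - x)) and an identity-observation Kalman update, which match the quantities named above (Kalman gain, fused covariance) but whose exact form here is an assumption.

```python
# Assumed forms of the Mahalanobis gate and covariance-weighted fusion;
# not the patent's own formulas.
import numpy as np

def mahalanobis(z: np.ndarray, x: np.ndarray, P: np.ndarray) -> float:
    """D_m = sqrt((z - x)^T P^{-1} (z - x)), the gate in steps (a) and (c)."""
    d = z - x
    return float(np.sqrt(d @ np.linalg.solve(P, d)))

def fuse(x: np.ndarray, P: np.ndarray,
         z: np.ndarray, R: np.ndarray):
    """Kalman fusion with identity observation: blends estimate (x, P)
    with measurement (z, R) in proportion to their uncertainties."""
    K = P @ np.linalg.inv(P + R)               # Kalman gain
    return x + K @ (z - x), (np.eye(len(x)) - K) @ P

def step_a(x, P, slam_z, slam_R, first_threshold):
    """Step (a): accept the SLAM pose only if it passes the Mahalanobis gate;
    otherwise ignore it (fall through to step (c) in the full algorithm)."""
    if mahalanobis(slam_z, x, P) < first_threshold:
        return fuse(x, P, slam_z, slam_R)
    return x, P
```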
The SLAM navigation positioning module obtains the SLAM pose data by running an adaptive Monte Carlo localization (AMCL) algorithm.
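For illustration only: the importance-resampling step at the core of the Monte Carlo localization family that AMCL belongs to. The adaptive (KLD-sampling) particle-count control of real AMCL is omitted, and the particle and weight values below are made up.

```python
# Low-variance resampling: draw a new particle set with probability
# proportional to the importance weights (one shared random offset).
import numpy as np

def low_variance_resample(particles: np.ndarray, weights: np.ndarray,
                          rng: np.random.Generator) -> np.ndarray:
    """Resample len(particles) particles proportionally to their weights."""
    n = len(particles)
    w = weights / weights.sum()
    positions = (rng.random() + np.arange(n)) / n  # evenly spaced pointers
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[idx]
```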
The first threshold is a value, obtained by actual testing in a stable scene, that guarantees accurate SLAM positioning; the second threshold is a value, likewise obtained in a stable scene, that guarantees accurate inertial navigation positioning.
While positioning is not lost, the control processing module records the current valid pose and stores it in a list of last known-good positioning data; after judging that positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot back to the last position where positioning was not lost, the mobile robot being positioned by the inertial navigation module during the move;
after the positioning recovery is completed, the mobile robot resumes positioning with the method fusing SLAM positioning and inertial navigation.
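The recovery flow above can be sketched as a small bookkeeping class. The names and the bounded-list size are assumptions; the patent only specifies recording known-good poses and driving back to the most recent one under inertial navigation.

```python
# Sketch of the "last known-good pose" list and the recovery target choice.
from collections import deque

class RecoveryTracker:
    def __init__(self, maxlen: int = 100):
        self.good_poses = deque(maxlen=maxlen)  # list of last known-good poses
        self.lost = False

    def record_good(self, pose):
        """Called each cycle while positioning is not lost."""
        self.good_poses.append(pose)
        self.lost = False

    def mark_lost(self):
        """Called when the composite algorithm judges the fix lost."""
        self.lost = True

    def recovery_target(self):
        """Pose to drive back to (under inertial navigation) when lost."""
        if self.lost and self.good_poses:
            return self.good_poses[-1]
        return None
```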
Compared with the prior art, the invention provides a positioning method of a mobile robot that adopts a combined inertial/SLAM navigation mode: inertial navigation positioning compensates for loss of the SLAM fix, reducing the probability that the mobile robot loses its position and enabling accurate positioning in scenes with heavy pedestrian traffic.
Claims (6)
1. A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a slam navigation positioning module, a control processing module and a driving device group, the inertial navigation module comprises an encoder and a gyroscope, the driving device group comprises a chassis and two driving wheels, and the positioning method comprises the following steps:
(1) after the mobile robot is powered on, the control processing module waits for a sampling signal; if no sampling signal has arrived, it continues to wait; otherwise it proceeds to step (2);
(2) the control processing module obtains the pose estimate of the mobile robot and the covariance estimate of the system pose uncertainty;
k is the sampling instant at which the control processing module collects the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate and the initial pose uncertainty covariance estimate, the initial pose estimate being [0, 0, 0];
(3) the control processing module acquires the SLAM pose of the mobile robot and the SLAM pose uncertainty covariance sent by the SLAM navigation positioning module;
(4) the control processing module acquires the inertial navigation pose of the mobile robot, the chassis speed and the inertial navigation pose uncertainty covariance sent by the inertial navigation module;
(5) the control processing module runs a SLAM/inertial-navigation composite positioning algorithm: according to the system pose uncertainty covariance, the SLAM positioning uncertainty covariance and the inertial navigation positioning uncertainty covariance, it performs a weighted fusion of the SLAM pose and the inertial navigation pose and updates the optimal pose of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1) to (5) are repeated.
2. The method according to claim 1, wherein the pose is expressed as [x, y, θ], x and y being the coordinates of the mobile robot's current pose in the pre-built SLAM map and θ being the heading angle of the mobile robot; the heading angle of the theoretical initial pose is 0°, with the counter-clockwise direction positive.
3. The method of claim 1, wherein the SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) calculate, according to formula (1), the Mahalanobis distance D_m between the SLAM pose and the pose estimate; if D_m is less than the preset first threshold, calculate the optimized pose of the mobile robot after fusing the SLAM fix according to formulas (2)-(4) and perform step (b); if D_m is greater than or equal to the first threshold, ignore the SLAM pose without further processing and perform step (c);
wherein the first threshold is preset; formulas (2)-(4) define the Kalman gain of the fused SLAM fix and the system pose uncertainty covariance after the SLAM fix is fused;
(c) calculate, according to formula (5), the Mahalanobis distance between the chassis speed of the inertial navigation pose and the instantaneous speed of the pose estimate; if this distance is less than the preset second threshold, calculate the optimal pose of the mobile robot after fusing the inertial navigation fix according to formulas (6)-(10) and perform step (d); if it is greater than or equal to the second threshold, ignore the inertial navigation pose, and the control processing module judges that the positioning is lost;
wherein the second threshold is preset; t is any sampling instant in the positioning process; formulas (6)-(10) define the Kalman gain of the fused inertial navigation fix, the optimal chassis speed after fusing the inertial navigation fix, and the system uncertainty covariance after fusing the inertial navigation fix;
4. The method of claim 1, wherein the SLAM navigation positioning module obtains the SLAM pose data by running an adaptive Monte Carlo localization (AMCL) algorithm.
5. The method according to claim 3, wherein the first threshold is a value, obtained by actual testing in a stable scene, that guarantees accurate SLAM positioning; and the second threshold is a value, likewise obtained in a stable scene, that guarantees accurate inertial navigation positioning.
6. The method according to claim 1, wherein, while positioning is not lost, the control processing module records the current valid pose and stores it in a list of last known-good positioning data; after judging that positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot back to the last position where positioning was not lost, the mobile robot being positioned by the inertial navigation module during the move;
after the positioning recovery is completed, the mobile robot resumes positioning with the method fusing SLAM positioning and inertial navigation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911388707.3A CN113126602B (en) | 2019-12-30 | 2019-12-30 | Positioning method of mobile robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911388707.3A CN113126602B (en) | 2019-12-30 | 2019-12-30 | Positioning method of mobile robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113126602A true CN113126602A (en) | 2021-07-16 |
CN113126602B CN113126602B (en) | 2023-07-14 |
Family
ID=76768725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911388707.3A Active CN113126602B (en) | 2019-12-30 | 2019-12-30 | Positioning method of mobile robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113126602B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113907645A (en) * | 2021-09-23 | 2022-01-11 | 追觅创新科技(苏州)有限公司 | Mobile robot positioning method and device, storage medium and electronic device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106679648A (en) * | 2016-12-08 | 2017-05-17 | 东南大学 | Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm |
CN106969784A (en) * | 2017-03-24 | 2017-07-21 | 中国石油大学(华东) | It is a kind of concurrently to build figure positioning and the combined error emerging system of inertial navigation |
CN109828588A (en) * | 2019-03-11 | 2019-05-31 | 浙江工业大学 | Paths planning method in a kind of robot chamber based on Multi-sensor Fusion |
CN110285811A (en) * | 2019-06-15 | 2019-09-27 | 南京巴乌克智能科技有限公司 | The fusion and positioning method and device of satellite positioning and inertial navigation |
Also Published As
Publication number | Publication date |
---|---|
CN113126602B (en) | 2023-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6644742B2 (en) | Algorithms and infrastructure for robust and efficient vehicle positioning | |
CN107144285B (en) | Pose information determination method and device and movable equipment | |
EP3371671B1 (en) | Method, device and assembly for map generation | |
US10437252B1 (en) | High-precision multi-layer visual and semantic map for autonomous driving | |
US10794710B1 (en) | High-precision multi-layer visual and semantic map by autonomous units | |
WO2021017212A1 (en) | Multi-scene high-precision vehicle positioning method and apparatus, and vehicle-mounted terminal | |
US9697730B2 (en) | Spatial clustering of vehicle probe data | |
US11726208B2 (en) | Autonomous vehicle localization using a Lidar intensity map | |
JP2021504796A (en) | Sensor data segmentation | |
CN102915039B (en) | A kind of multirobot joint objective method for searching of imitative animal spatial cognition | |
JP2019533810A (en) | Neural network system for autonomous vehicle control | |
RU2759975C1 (en) | Operational control of autonomous vehicle with visual salence perception control | |
US20200042656A1 (en) | Systems and methods for persistent simulation | |
CN112762957B (en) | Multi-sensor fusion-based environment modeling and path planning method | |
CN110488842B (en) | Vehicle track prediction method based on bidirectional kernel ridge regression | |
US20220185271A1 (en) | Method and apparatus for controlling vehicle driving | |
Perea-Strom et al. | GNSS integration in the localization system of an autonomous vehicle based on particle weighting | |
CN110487286B (en) | Robot pose judgment method based on point feature projection and laser point cloud fusion | |
US20230071794A1 (en) | Method and system for building lane-level map by using 3D point cloud map | |
CN111982114A (en) | Rescue robot for estimating three-dimensional pose by adopting IMU data fusion | |
CN113238554A (en) | Indoor navigation method and system based on SLAM technology integrating laser and vision | |
Gao et al. | Towards autonomous wheelchair systems in urban environments | |
CN115339453B (en) | Vehicle lane change decision information generation method, device, equipment and computer medium | |
CN113126602A (en) | Positioning method of mobile robot | |
EP3876165A2 (en) | Method, apparatus, and system for progressive training of evolving machine learning architectures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 2022-11-14
Applicant after: NANJING KINGYOUNG INTELLIGENT SCIENCE AND TECHNOLOGY Co.,Ltd., 7th Floor, Building 6, Artificial Intelligence Industrial Park, 266 Chuangyan Road, Qilin Science and Technology Innovation Park, Jiangning District, Nanjing, Jiangsu Province, 211135
Applicant before: NANJING JINGYI ROBOT ENGINEERING TECHNOLOGY Co.,Ltd., 1st floor, building 15, Fuli science and Technology City, 277 Dongqi Road, Jiangning District, Nanjing City, Jiangsu Province, 211100
GR01 | Patent grant | ||