CN113126602B - Positioning method of mobile robot - Google Patents
Positioning method of mobile robot
- Publication number
- CN113126602B (application CN201911388707.3A)
- Authority
- CN
- China
- Prior art keywords
- positioning
- pose
- mobile robot
- slam
- inertial navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0223—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0221—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
Abstract
The invention provides a positioning method for a mobile robot that uses a combined inertial and SLAM navigation mode. Inertial navigation positioning compensates for lost SLAM positioning, reducing the probability that the mobile robot loses its position and achieving accurate positioning in scenes with heavy foot traffic.
Description
Technical Field
The invention relates to the technical field of intelligent robots, and in particular to a positioning method for a mobile robot.
Background
In today's society of rapid Internet of Things development, cities increasingly focus on intelligent and information-based development, and intelligent facilities are being built in many public places. Robot technologies based on artificial intelligence are continually emerging on the market, mobile robots are ever more widely applied, and in public places such as shopping malls, airports, and banks, mobile robots are replacing people to achieve lightly staffed or unstaffed duty.
Mobile robots in such scenes generally adopt SLAM navigation: first, an accurate position of the mobile robot is obtained from laser point cloud data collected by a laser sensor; next, the laser point cloud data are added to a grid map to build the scene map; finally, path planning is performed on the constructed map to navigate the mobile robot. However, this navigation method is unsuitable for public places with heavy traffic: in particular, when a crowd surrounds the mobile robot, the SLAM navigation computation produces large errors and positioning is lost.
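The map-building step described above can be sketched as a point-to-cell binning; this is an illustrative simplification, and the grid size, resolution, and origin below are assumed rather than taken from the patent:

```python
import numpy as np

def add_scan_to_grid(grid, points, resolution=0.05, origin=(0.0, 0.0)):
    """Mark laser-scan points as occupied cells in an occupancy grid.

    Each (x, y) laser point, in meters, is binned into a grid cell
    and marked occupied; out-of-bounds points are ignored.
    """
    for x, y in points:
        i = int((x - origin[0]) / resolution)
        j = int((y - origin[1]) / resolution)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid
```

A real SLAM grid map would also trace free space along each laser ray and accumulate occupancy probabilities; only the occupied-cell marking is shown here.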
Disclosure of Invention
In view of these defects, the technical problem solved by the invention is to provide a positioning method for a mobile robot that uses a combined inertial and SLAM navigation mode, compensates for lost SLAM positioning through inertial navigation positioning, reduces the probability that the mobile robot loses its position, and achieves accurate positioning in scenes with heavy foot traffic.
The invention achieves this aim with the following technical scheme:
a positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device group; the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels; the positioning method comprises the following steps:
(1) After the mobile robot is started up and powered on, the control processing module waits for a sampling signal; if there is no sampling signal, it continues waiting; if there is one, it proceeds to step (2);
(2) The control processing module obtains the pose estimate X_k of the mobile robot and the system pose uncertainty covariance estimate P_k;
wherein k is the moment at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate X_0 of the mobile robot and the initial pose uncertainty covariance estimate P_0, with X_0 = [0, 0, 0];
(3) The control processing module obtains the SLAM pose X_s of the mobile robot and the SLAM pose uncertainty covariance P_s sent by the SLAM navigation positioning module;
(4) The control processing module obtains the inertial navigation pose X_e of the mobile robot, the chassis speed V_e, and the inertial navigation pose uncertainty covariance P_e sent by the inertial navigation module;
(5) The control processing module applies the SLAM inertial navigation composite positioning algorithm: according to the system pose uncertainty covariance P_k, the SLAM positioning uncertainty covariance P_s, and the inertial navigation positioning uncertainty covariance P_e, it weights and fuses the SLAM pose X_s and the inertial navigation pose X_e, updating the optimal pose X'_k of the mobile robot;
(6) The control processing module controls the driving device group to drive the mobile robot to move, and steps (1)-(5) are repeated.
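The covariance-weighted fusion in step (5) can be illustrated with an information-form (inverse-covariance) average of the two pose sources. This is a generic sketch of covariance-weighted fusion, not the patent's exact filter:

```python
import numpy as np

def weighted_fuse(x_s, P_s, x_e, P_e):
    """Covariance-weighted fusion of a SLAM pose and an inertial pose.

    Each source is weighted by its inverse uncertainty covariance, so
    the more certain source dominates the fused estimate.
    """
    W_s = np.linalg.inv(P_s)          # information (weight) of the SLAM pose
    W_e = np.linalg.inv(P_e)          # information (weight) of the inertial pose
    P = np.linalg.inv(W_s + W_e)      # fused covariance
    x = P @ (W_s @ x_s + W_e @ x_e)   # fused pose
    return x, P
```

With equal covariances the fused pose is the midpoint of the two inputs; as one source's covariance grows, the result slides toward the other source.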
Preferably, X_k = [x, y, θ], where x and y are the coordinates of the mobile robot's current pose in a pre-drawn SLAM map and θ is the orientation angle of the mobile robot; the orientation angle of the mobile robot in the theoretical initial pose is 0 degrees, with the counterclockwise direction positive.
Preferably, the SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) Calculating, according to formula (1), the Mahalanobis distance D_m between the SLAM pose X_s and the pose estimate X_k; if D_m is less than a preset first threshold, calculating according to formulas (2)-(4) the optimized pose X'_k of the mobile robot after fusing SLAM positioning, and performing step (b); if D_m is greater than or equal to the first threshold, ignoring the SLAM pose without processing it and performing step (c);
wherein K_s is the Kalman gain for fused SLAM positioning and P'_k is the system pose uncertainty covariance after fusing SLAM positioning;
(b) Updating the pose estimate: X_k = X'_k, P_k = P'_k; performing step (c);
(c) Calculating, according to formula (5), the Mahalanobis distance D_e between the chassis speed V_e of the inertial navigation pose and the instantaneous speed V_k of the pose; if D_e is less than a preset second threshold, calculating according to formulas (6)-(10) the optimal pose X''_k of the mobile robot after fusing inertial navigation positioning, and performing step (d); if D_e is greater than or equal to the second threshold, ignoring the inertial navigation pose, with the control processing module judging that positioning is lost;
wherein t is any sampling time in the positioning process, K_e is the Kalman gain for fused inertial navigation positioning, V'_k is the optimal chassis speed after fusing inertial navigation positioning, and P''_k is the system uncertainty covariance after fusing inertial navigation positioning;
(d) Updating the pose estimate X_{k+1} and the system pose uncertainty covariance estimate P_{k+1} for the next sampling moment: X_{k+1} = X''_k, P_{k+1} = P''_k.
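Step (a)'s gate-then-fuse update can be sketched as follows. The gain and covariance update follow formulas (2) and (4) as stated in the claims; the Mahalanobis distance and the state update are written in their standard Kalman forms, since the patent text only names them:

```python
import numpy as np

def fuse_slam(x_k, P_k, x_s, P_s, threshold):
    """Gate the SLAM pose by Mahalanobis distance, then fuse it.

    K_s = P_k (P_k + P_s)^-1 and P'_k = P_k - K_s P_k follow
    formulas (2) and (4); the rest uses standard Kalman forms.
    """
    innov = x_s - x_k
    S = P_k + P_s
    d_m = float(np.sqrt(innov @ np.linalg.solve(S, innov)))  # Mahalanobis distance
    if d_m >= threshold:
        return x_k, P_k, False        # ignore the SLAM pose; step (c) runs next
    K = P_k @ np.linalg.inv(S)        # gain, formula (2)
    x_new = x_k + K @ innov           # standard Kalman state update
    P_new = P_k - K @ P_k             # covariance update, formula (4)
    return x_new, P_new, True
```

The same gate-then-fuse pattern applies in step (c), with chassis speeds in place of poses and P_e in place of P_s.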
Preferably, the SLAM navigation positioning module calculates the SLAM pose data through an adaptive Monte Carlo localization algorithm.
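Adaptive Monte Carlo localization maintains a set of weighted pose particles; its core resampling step can be sketched as below. This is a minimal illustration of the particle-filter family the algorithm belongs to, not the module's actual implementation:

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Resample pose particles in proportion to their weights.

    Particles that explain the laser scan well (high weight) get
    duplicated; poorly matching ones tend to disappear.
    """
    rng = rng or np.random.default_rng(0)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize to a probability distribution
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

The "adaptive" part of AMCL additionally varies the particle count with the estimate's uncertainty (KLD sampling), which is omitted here.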
Preferably, the first threshold is a value obtained by actual testing in a stable scene that ensures accurate SLAM positioning, and the second threshold is a value obtained by actual testing in a stable scene that ensures accurate inertial navigation positioning.
Preferably, whenever positioning is not lost, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after the control processing module judges that positioning is lost, it enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot to the position where positioning was last not lost, using the inertial navigation module to position the mobile robot during the move;
and after positioning recovery is complete, the control processing module positions the mobile robot using the positioning method that fuses SLAM positioning and inertial navigation.
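The recovery procedure can be sketched as follows; `drive_to` and `inertial_localize` are hypothetical stand-ins for the driving device group and the inertial navigation module:

```python
def recover_position(last_good_poses, drive_to, inertial_localize):
    """Drive back to the last pose where positioning was not lost.

    During the move only the inertial localizer is trusted; the fused
    SLAM + inertial positioning resumes once the target is reached.
    """
    if not last_good_poses:
        raise RuntimeError("no stored non-lost pose to recover to")
    target = last_good_poses[-1]             # most recent non-lost pose
    drive_to(target, localizer=inertial_localize)
    return target
```

Keeping a short history of non-lost poses (rather than only the latest) would allow falling back further if the most recent one is also unreachable.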
Compared with the prior art, the positioning method of the invention uses a combined inertial and SLAM navigation mode, compensates for lost SLAM positioning through inertial navigation positioning, reduces the probability that the mobile robot loses its position, and achieves accurate positioning in scenes with heavy foot traffic.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a positioning method of a mobile robot according to the present invention.
Detailed Description
In order that the above objects, features, and advantages of the present application may become more readily apparent, particular embodiments of the invention are described below with reference to the appended drawings.
A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device group; the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels. As shown in fig. 1, the positioning method comprises the following steps:
(1) After the mobile robot is started up and powered on, the control processing module waits for a sampling signal; if there is no sampling signal, it continues waiting; if there is one, it proceeds to step (2);
(2) The control processing module obtains the pose estimate X_k of the mobile robot and the system pose uncertainty covariance estimate P_k;
wherein k is the moment at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate X_0 and the initial pose uncertainty covariance estimate P_0, with X_0 = [0, 0, 0];
(3) The control processing module obtains the SLAM pose X_s of the mobile robot and the SLAM pose uncertainty covariance P_s sent by the SLAM navigation positioning module;
(4) The control processing module obtains the inertial navigation pose X_e of the mobile robot, the chassis speed V_e, and the inertial navigation pose uncertainty covariance P_e sent by the inertial navigation module;
(5) The control processing module applies the SLAM inertial navigation composite positioning algorithm: according to the system pose uncertainty covariance P_k, the SLAM positioning uncertainty covariance P_s, and the inertial navigation positioning uncertainty covariance P_e, it weights and fuses the SLAM pose X_s and the inertial navigation pose X_e, updating the optimal pose X'_k of the mobile robot;
(6) The control processing module controls the driving device group to drive the mobile robot to move, and steps (1)-(5) are repeated.
Here X_k = [x, y, θ], where x and y are the coordinates of the mobile robot's current pose in a pre-drawn SLAM map and θ is the orientation angle of the mobile robot; the orientation angle in the theoretical initial pose is 0 degrees, with the counterclockwise direction positive.
The SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) Calculating, according to formula (1), the Mahalanobis distance D_m between the SLAM pose X_s and the pose estimate X_k; if D_m is less than a preset first threshold, calculating according to formulas (2)-(4) the optimized pose X'_k of the mobile robot after fusing SLAM positioning, and performing step (b); if D_m is greater than or equal to the first threshold, ignoring the SLAM pose without processing it and performing step (c);
wherein K_s is the Kalman gain for fused SLAM positioning and P'_k is the system pose uncertainty covariance after fusing SLAM positioning;
(b) Updating the pose estimate: X_k = X'_k, P_k = P'_k; performing step (c);
(c) Calculating, according to formula (5), the Mahalanobis distance D_e between the chassis speed V_e of the inertial navigation pose and the instantaneous speed V_k of the pose; if D_e is less than a preset second threshold, calculating according to formulas (6)-(10) the optimal pose X''_k of the mobile robot after fusing inertial navigation positioning, and performing step (d); if D_e is greater than or equal to the second threshold, ignoring the inertial navigation pose, with the control processing module judging that positioning is lost;
wherein t is any sampling time in the positioning process, K_e is the Kalman gain for fused inertial navigation positioning, V'_k is the optimal chassis speed after fusing inertial navigation positioning, and P''_k is the system uncertainty covariance after fusing inertial navigation positioning;
(d) Updating the pose estimate X_{k+1} and the system pose uncertainty covariance estimate P_{k+1} for the next sampling moment: X_{k+1} = X''_k, P_{k+1} = P''_k.
The SLAM navigation positioning module calculates the SLAM pose data through an adaptive Monte Carlo localization algorithm.
The first threshold is a value obtained by actual testing in a stable scene that ensures accurate SLAM positioning; the second threshold is a value obtained by actual testing in a stable scene that ensures accurate inertial navigation positioning.
Whenever positioning is not lost, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after the control processing module judges that positioning is lost, it enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot to the position where positioning was last not lost, using the inertial navigation module to position the mobile robot during the move;
and after positioning recovery is complete, the control processing module positions the mobile robot using the positioning method that fuses SLAM positioning and inertial navigation.
Compared with the prior art, the positioning method of the invention uses a combined inertial and SLAM navigation mode, compensates for lost SLAM positioning through inertial navigation positioning, reduces the probability that the mobile robot loses its position, and achieves accurate positioning in scenes with heavy foot traffic.
Claims (5)
1. A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device group, the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels, the positioning method is characterized by comprising the following steps:
(1) After the mobile robot is started up and powered on, the control processing module waits for a sampling signal; if there is no sampling signal, it continues waiting; if there is one, it proceeds to step (2);
(2) The control processing module obtains a pose estimate X_k of the mobile robot and a system pose uncertainty covariance estimate P_k;
wherein k is the moment at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate X_0 of the mobile robot and the initial pose uncertainty covariance estimate P_0, wherein X_0 = [0, 0, 0];
(3) The control processing module obtains the SLAM pose X_s of the mobile robot and the SLAM pose uncertainty covariance P_s sent by the SLAM navigation positioning module;
(4) The control processing module obtains the inertial navigation pose X_e of the mobile robot, the chassis speed V_e, and the inertial navigation pose uncertainty covariance P_e sent by the inertial navigation module;
(5) The control processing module applies a SLAM inertial navigation composite positioning algorithm: according to the system pose uncertainty covariance P_k, the SLAM positioning uncertainty covariance P_s, and the inertial navigation positioning uncertainty covariance P_e, it weights and fuses the SLAM pose X_s and the inertial navigation pose X_e, updating the optimal pose X'_k of the mobile robot;
(6) The control processing module controls the driving device group to drive the mobile robot to move, and steps (1)-(5) are repeated;
the SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) Calculating, according to formula (1), the Mahalanobis distance D_m between the SLAM pose X_s and the pose estimate X_k; if D_m is less than the preset first threshold, calculating according to formulas (2)-(4) the optimized pose X'_k of the mobile robot after fusing SLAM positioning, and performing step (b); if D_m is greater than or equal to the first threshold, ignoring the SLAM pose without processing it and performing step (c);
K_s = P_k (P_k + P_s)^(-1) (2)
X'_k = X_k + K_s (X_s - X_k) (3)
P'_k = P_k - K_s P_k (4)
wherein K_s is the Kalman gain for fused SLAM positioning and P'_k is the system pose uncertainty covariance after fusing SLAM positioning;
(b) Updating the pose estimate: X_k = X'_k, P_k = P'_k; performing step (c);
(c) Calculating, according to formula (5), the Mahalanobis distance D_e between the chassis speed V_e of the inertial navigation pose and the instantaneous speed V_k of the pose; if D_e is less than the preset second threshold, calculating according to formulas (6)-(10) the optimal pose X''_k of the mobile robot after fusing inertial navigation positioning, and performing step (d); if D_e is greater than or equal to the second threshold, ignoring the inertial navigation pose, with the control processing module judging that positioning is lost;
K_e = P_k (P_k + P_e)^(-1) (7)
V'_k = V_k + K_e (V_e - V_k) (8)
P''_k = P_k - K_e P_k (10)
wherein t is any sampling time in the positioning process, K_e is the Kalman gain for fused inertial navigation positioning, V'_k is the optimal chassis speed after fusing inertial navigation positioning, and P''_k is the system uncertainty covariance after fusing inertial navigation positioning;
(d) Updating the pose estimate X_{k+1} and the system pose uncertainty covariance estimate P_{k+1} for the next sampling moment: X_{k+1} = X''_k, P_{k+1} = P''_k.
2. The positioning method of a mobile robot according to claim 1, wherein X_k = [x, y, θ], x and y are the coordinates of the current pose of the mobile robot in a pre-drawn SLAM map, and θ is the orientation angle of the mobile robot; the orientation angle of the mobile robot in the theoretical initial pose is 0 degrees, with the counterclockwise direction positive.
3. The positioning method of a mobile robot according to claim 1, wherein the SLAM navigation positioning module calculates the SLAM pose data through an adaptive Monte Carlo localization algorithm.
4. The positioning method of a mobile robot according to claim 1, wherein the first threshold is a value obtained by actual testing in a stable scene that ensures accurate SLAM positioning, and the second threshold is a value obtained by actual testing in a stable scene that ensures accurate inertial navigation positioning.
5. The positioning method of a mobile robot according to claim 1, wherein, when positioning is not lost, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after the control processing module judges that positioning is lost, it enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot to the position where positioning was last not lost, using the inertial navigation module to position the mobile robot during the move;
and after positioning recovery is complete, the control processing module positions the mobile robot using the positioning method that fuses SLAM positioning and inertial navigation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911388707.3A CN113126602B (en) | 2019-12-30 | 2019-12-30 | Positioning method of mobile robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911388707.3A CN113126602B (en) | 2019-12-30 | 2019-12-30 | Positioning method of mobile robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113126602A CN113126602A (en) | 2021-07-16 |
CN113126602B true CN113126602B (en) | 2023-07-14 |
Family
ID=76768725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911388707.3A Active CN113126602B (en) | 2019-12-30 | 2019-12-30 | Positioning method of mobile robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113126602B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113907645A (en) * | 2021-09-23 | 2022-01-11 | 追觅创新科技(苏州)有限公司 | Mobile robot positioning method and device, storage medium and electronic device |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106679648B (en) * | 2016-12-08 | 2019-12-10 | 东南大学 | Visual inertia combination SLAM method based on genetic algorithm |
CN106969784B (en) * | 2017-03-24 | 2019-08-13 | 山东大学 | A kind of combined error emerging system for concurrently building figure positioning and inertial navigation |
CN109828588A (en) * | 2019-03-11 | 2019-05-31 | 浙江工业大学 | Paths planning method in a kind of robot chamber based on Multi-sensor Fusion |
CN110285811A (en) * | 2019-06-15 | 2019-09-27 | 南京巴乌克智能科技有限公司 | The fusion and positioning method and device of satellite positioning and inertial navigation |
- 2019-12-30: CN application CN201911388707.3A filed; patent CN113126602B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN113126602A (en) | 2021-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7170619B2 (en) | Method and apparatus for generating driving route | |
WO2017088720A1 (en) | Method and device for planning optimal following path and computer storage medium | |
CN107246876B (en) | Method and system for autonomous positioning and map construction of unmanned automobile | |
RU2759975C1 (en) | Operational control of autonomous vehicle with visual salence perception control | |
JP6963158B2 (en) | Centralized shared autonomous vehicle operation management | |
JP2021523057A (en) | Direction adjustment action for autonomous vehicle motion management | |
US11378957B1 (en) | Method and apparatus for controlling vehicle driving | |
Peng et al. | Path planning and obstacle avoidance for vision guided quadrotor UAV navigation | |
JP2019533810A (en) | Neural network system for autonomous vehicle control | |
US10782384B2 (en) | Localization methods and systems for autonomous systems | |
CA3002308A1 (en) | Device and method for autonomous localisation | |
US20200042656A1 (en) | Systems and methods for persistent simulation | |
CN110488842B (en) | Vehicle track prediction method based on bidirectional kernel ridge regression | |
JP2021504825A (en) | Autonomous vehicle operation management plan | |
US20190318050A1 (en) | Environmental modification in autonomous simulation | |
JP2020513623A (en) | Generation of solution data for autonomous vehicles to deal with problem situations | |
Wu et al. | Robust LiDAR-based localization scheme for unmanned ground vehicle via multisensor fusion | |
CN113126602B (en) | Positioning method of mobile robot | |
CN117152249A (en) | Multi-unmanned aerial vehicle collaborative mapping and perception method and system based on semantic consistency | |
CN116540706A (en) | System and method for providing local path planning for ground unmanned aerial vehicle by unmanned aerial vehicle | |
Zhou et al. | Architecture design and implementation of image based autonomous car: THUNDER-1 | |
Shangguan et al. | Interactive perception-based multiple object tracking via CVIS and AV | |
Wen et al. | Vision based sidewalk navigation for last-mile delivery robot | |
Paton et al. | Eyes in the back of your head: Robust visual teach & repeat using multiple stereo cameras | |
Zheng et al. | The Navigation Based on Hybrid A Star and TEB Algorithm Implemented in Obstacles Avoidance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20221114 Address after: 211135 7th Floor, Building 6, Artificial Intelligence Industrial Park, 266 Chuangyan Road, Qilin Science and Technology Innovation Park, Jiangning District, Nanjing, Jiangsu Province Applicant after: NANJING KINGYOUNG INTELLIGENT SCIENCE AND TECHNOLOGY Co.,Ltd. Address before: 211100 1st floor, building 15, Fuli science and Technology City, 277 Dongqi Road, Jiangning District, Nanjing City, Jiangsu Province Applicant before: NANJING JINGYI ROBOT ENGINEERING TECHNOLOGY Co.,Ltd. |
GR01 | Patent grant |