CN113126602B - Positioning method of mobile robot - Google Patents

Positioning method of mobile robot

Info

Publication number
CN113126602B
CN113126602B (application CN201911388707.3A)
Authority
CN
China
Prior art keywords
positioning
pose
mobile robot
slam
inertial navigation
Prior art date
Legal status
Active
Application number
CN201911388707.3A
Other languages
Chinese (zh)
Other versions
CN113126602A (en)
Inventor
石飞
赵荣
Current Assignee
Nanjing Kingyoung Intelligent Science And Technology Co ltd
Original Assignee
Nanjing Kingyoung Intelligent Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Kingyoung Intelligent Science And Technology Co ltd filed Critical Nanjing Kingyoung Intelligent Science And Technology Co ltd
Priority to CN201911388707.3A priority Critical patent/CN113126602B/en
Publication of CN113126602A publication Critical patent/CN113126602A/en
Application granted granted Critical
Publication of CN113126602B publication Critical patent/CN113126602B/en


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a positioning method of a mobile robot that uses a combined inertial and SLAM navigation mode: inertial navigation compensates for lost SLAM positioning, which reduces the probability that the mobile robot loses its position and achieves accurate positioning in scenes with heavy foot traffic.

Description

Positioning method of mobile robot
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a positioning method of a mobile robot.
Background
In today's society of rapidly developing Internet of Things, urban development increasingly focuses on intelligence and informatization, and intelligent facilities are being built in many public places. Robot technologies based on artificial intelligence keep emerging on the market, mobile robots are applied ever more widely, and in public places such as shopping malls, airports and banks, mobile robots are replacing manual labor to achieve lightly staffed or unmanned duty.
Mobile robots in such scenes generally adopt SLAM navigation: first, an accurate position of the mobile robot is obtained from laser point cloud data collected by a laser sensor; then, the laser point cloud data are added to a grid map to complete the construction of a scene map; finally, path planning is carried out on the constructed map to navigate the mobile robot. However, this navigation method is not suitable for public places with heavy foot traffic: in particular, when a crowd surrounds the mobile robot, the SLAM navigation computation produces large errors, and the positioning is lost.
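As a concrete illustration of the mapping step above, the following minimal sketch adds the endpoints of one laser scan to an occupancy grid. The grid layout, scan format and hit-count update rule are assumptions chosen for illustration, not details taken from the patent.

```python
import numpy as np

def scan_to_grid(grid, pose, ranges, angles, resolution=0.05):
    """Mark the endpoints of one laser scan as occupied cells.

    grid       -- 2-D array of occupancy hit counts (assumed layout)
    pose       -- robot pose (x, y, theta) in map coordinates
    ranges     -- measured distance of each beam
    angles     -- beam angles relative to the robot heading
    resolution -- metres per cell (assumed value)
    """
    x, y, theta = pose
    # Project each beam endpoint into the map frame.
    ex = x + ranges * np.cos(theta + angles)
    ey = y + ranges * np.sin(theta + angles)
    ix = (ex / resolution).astype(int)
    iy = (ey / resolution).astype(int)
    # Count only endpoints that fall inside the grid as hits.
    ok = (ix >= 0) & (ix < grid.shape[1]) & (iy >= 0) & (iy < grid.shape[0])
    grid[iy[ok], ix[ok]] += 1
    return grid
```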
Disclosure of Invention
In view of these defects, the technical problem solved by the invention is to provide a positioning method for a mobile robot that uses a combined inertial and SLAM navigation mode, compensates for lost SLAM positioning through inertial navigation, reduces the probability that the mobile robot loses its position, and achieves accurate positioning in scenes with heavy foot traffic.
The aim of the invention is realized by the following technical scheme:
a positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a slam navigation positioning module, a control processing module and a driving device group, the inertial navigation module comprises an encoder and a gyroscope, the driving device group comprises a chassis and two driving wheels, and the positioning method comprises the following steps:
(1) After the mobile robot is started and powered on, the control processing module waits for a sampling signal; if there is no sampling signal, it continues waiting; if there is one, it proceeds to step (2);
(2) The control processing module obtains the pose estimate X_k of the mobile robot and the system pose uncertainty covariance estimate P_k, where k is the moment at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimate X_0 of the mobile robot and the initial pose uncertainty covariance estimate P_0, where X_0 = [0, 0, 0];
(3) The control processing module obtains the SLAM pose X_s of the mobile robot sent by the SLAM navigation positioning module and the SLAM pose uncertainty covariance P_s;
(4) The control processing module obtains the inertial navigation pose X_e of the mobile robot sent by the inertial navigation module, the chassis speed V_e, and the inertial navigation pose uncertainty covariance P_e;
(5) The control processing module adopts a SLAM inertial navigation composite positioning algorithm: according to the system pose uncertainty covariance P_k, the SLAM positioning uncertainty covariance P_s and the inertial navigation positioning uncertainty covariance P_e, it weights and fuses the SLAM pose X_s and the inertial navigation pose X_e, and updates the optimal pose X″_k of the mobile robot;
(6) The control processing module controls the driving device group to drive the mobile robot to move, and steps (1)-(5) are repeated.
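To make the control flow of steps (1)-(6) concrete, here is a minimal Python skeleton of the sampling loop. The module interfaces (slam.pose, inertial.pose, drive.step, wait_for_sample) are hypothetical names standing in for the patent's modules; together with the composite_update function sketched after the composite positioning algorithm below, it forms one runnable whole.

```python
import numpy as np

def positioning_loop(slam, inertial, drive, wait_for_sample):
    """Steps (1)-(6): wait for a sample, fuse, drive, repeat."""
    X = np.zeros(3)          # step (2): X_0 = [0, 0, 0]
    P = 1e-3 * np.eye(3)     # P_0: assumed small initial uncertainty
    X_prev = None            # previous pose, needed for the instantaneous speed
    while True:
        wait_for_sample()                 # step (1): block until a sampling signal
        Xs, Ps = slam.pose()              # step (3): SLAM pose and covariance
        Xe, Ve, Pe = inertial.pose()      # step (4): inertial pose, chassis speed, covariance
        X_new, P, lost = composite_update(X, P, Xs, Ps, Xe, Ve, Pe, X_prev)  # step (5)
        X_prev, X = X, X_new
        if lost:
            break                         # hand over to the positioning recovery process
        drive.step(X)                     # step (6): drive, then repeat (1)-(5)
```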
Preferably, X_k = [x, y, θ], where x and y are the coordinates of the current pose of the mobile robot in a pre-drawn SLAM map and θ is the orientation angle of the mobile robot; the orientation angle of the mobile robot in the theoretical initial pose is 0°, with the counterclockwise direction positive.
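A small helper pair for this pose convention (the wrap-to-[−π, π) choice is an assumption; the patent only fixes the zero heading and the positive, counterclockwise sense):

```python
import numpy as np

def wrap_angle(theta):
    """Wrap a heading to [-pi, pi) so orientation differences stay comparable."""
    return (theta + np.pi) % (2 * np.pi) - np.pi

def make_pose(x, y, theta):
    """Pose [x, y, theta]: map coordinates plus CCW-positive heading, 0 at start."""
    return np.array([x, y, wrap_angle(theta)])
```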
Preferably, the SLAM inertial navigation composite positioning algorithm comprises the following steps (a Python sketch of the complete update follows step (d) below):
(a) Calculate, according to formula (1), the Mahalanobis distance D_m between the SLAM pose X_s and the pose estimate X_k. If D_m ≤ ε_1, calculate the optimized pose X′_k of the mobile robot after fusing the SLAM positioning according to formulas (2)-(4), and perform step (b); if D_m > ε_1, ignore the SLAM pose without processing and perform step (c);
D_m = √((X_s − X_k)(P_k + P_s)^{-1}(X_s − X_k)^T) (1)
K_s = P_k(P_k + P_s)^{-1} (2)
X′_k = X_k + K_s(X_s − X_k) (3)
P′_k = P_k − K_s P_k (4)
wherein ε_1 is a preset first threshold, K_s is the Kalman gain for fusing the SLAM positioning, and P′_k is the system pose uncertainty covariance after fusing the SLAM positioning;
(b) Update the pose estimate: X_k = X′_k, P_k = P′_k; perform step (c);
(c) Calculate, according to formula (5), the Mahalanobis distance D_e between the chassis speed V_e of the inertial navigation pose and the instantaneous speed V_k of the pose estimate. If D_e ≤ ε_2, calculate the optimal pose X″_k of the mobile robot after fusing the inertial navigation positioning according to formulas (6)-(10), and perform step (d); if D_e > ε_2, ignore the inertial navigation pose, and the control processing module judges that the positioning is lost;
D_e = √((V_e − V_k)(P_k + P_e)^{-1}(V_e − V_k)^T) (5)
V_k = (X_k − X_{k−1}) / t (6)
K_e = P_k(P_k + P_e)^{-1} (7)
X″_k = X_k + K_e(X_e − X_k) (8)
V″_k = V_k + K_e(V_e − V_k) (9)
P″_k = P_k − K_e P_k (10)
wherein ε_2 is a preset second threshold, t is any sampling time in the positioning process, K_e is the Kalman gain for fusing the inertial navigation positioning, V″_k is the optimal chassis speed after fusing the inertial navigation positioning, and P″_k is the system uncertainty covariance after fusing the inertial navigation positioning;
(d) Update the pose estimate X_{k+1} and the system pose uncertainty covariance estimate P_{k+1} for the next sampling moment: X_{k+1} = X″_k, P_{k+1} = P″_k.
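The following Python sketch implements steps (a)-(d) and plugs into the loop skeleton above. Formulas (2), (4), (7) and (10) are taken verbatim from the claims; the forms used here for (1), (3), (5), (6), (8) and (9), the ≤/> direction of the two gates, and the treatment of the chassis speed as a 3-component residual gated by the pose covariances are a reading of the garbled originals, so treat this as an interpretation rather than a definitive implementation.

```python
import numpy as np

def mahalanobis(d, S):
    """Mahalanobis distance of residual d under covariance S, as in (1) and (5)."""
    return float(np.sqrt(d @ np.linalg.inv(S) @ d))

def composite_update(X, P, Xs, Ps, Xe, Ve, Pe, X_prev=None, t=0.1,
                     eps1=1.0, eps2=1.0):
    """Steps (a)-(d) of the SLAM inertial composite positioning algorithm.

    eps1 and eps2 are placeholders for the field-tested gates and t is the
    sampling period. Returns the updated pose, covariance and a lost flag.
    """
    # (a) gate the SLAM pose by the Mahalanobis distance D_m, formula (1)
    if mahalanobis(Xs - X, P + Ps) <= eps1:
        Ks = P @ np.linalg.inv(P + Ps)        # (2) Kalman gain
        X = X + Ks @ (Xs - X)                 # (3) fuse the SLAM pose
        P = P - Ks @ P                        # (4) shrink the covariance
    # (b) the fused values simply replace the running estimate
    # (c) gate the inertial chassis speed by D_e, formulas (5)-(6)
    Vk = (X - X_prev) / t if X_prev is not None else np.zeros_like(X)  # (6)
    if mahalanobis(Ve - Vk, P + Pe) > eps2:   # (5)
        return X, P, True                     # gate failed: positioning is lost
    Ke = P @ np.linalg.inv(P + Pe)            # (7) Kalman gain
    X = X + Ke @ (Xe - X)                     # (8) fuse the inertial pose
    Vk = Vk + Ke @ (Ve - Vk)                  # (9) optimal chassis speed
    P = P - Ke @ P                            # (10) shrink the covariance
    return X, P, False                        # (d) X, P carry over as X_{k+1}, P_{k+1}
```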
Preferably, the SLAM navigation positioning module calculates SLAM pose data through an adaptive Monte Carlo localization (AMCL) algorithm.
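Adaptive Monte Carlo localization is available off the shelf; for example, the ROS amcl node publishes exactly the pose-plus-covariance pair that step (3) consumes. The subscriber below assumes a ROS 1 setup, which the patent does not specify:

```python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def on_amcl_pose(msg):
    # The planar (x, y, yaw) block of this pose and its 6x6 covariance
    # correspond to X_s and P_s in step (3).
    p = msg.pose.pose.position
    rospy.loginfo("AMCL pose: x=%.2f y=%.2f", p.x, p.y)

rospy.init_node("slam_pose_listener")
rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, on_amcl_pose)
rospy.spin()
```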
Preferably, ε_1 is a threshold, obtained by actual testing in a stable scene, that ensures accurate SLAM positioning, and ε_2 is a threshold, obtained by actual testing in a stable scene, that ensures accurate inertial navigation positioning.
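The patent fixes both gates purely by field testing. If a statistically motivated starting value were wanted before testing (an assumption, not the patent's procedure), the squared Mahalanobis distance of a consistent residual is chi-square distributed, so a quantile of that distribution gives a comparable gate:

```python
from scipy.stats import chi2

# 99% gate for a 3-dimensional residual [x, y, theta].
eps1_start = chi2.ppf(0.99, df=3) ** 0.5
print(f"suggested starting gate: {eps1_start:.2f}")
```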
Preferably, while no positioning loss occurs, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after judging that the positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot to the position where the positioning was last not lost, the inertial navigation module being used to position the mobile robot during the movement;
after the control processing module completes the positioning recovery, the mobile robot is positioned by the positioning method fusing SLAM positioning and inertial navigation.
Compared with the prior art, the invention provides a positioning method for a mobile robot that uses a combined inertial and SLAM navigation mode: inertial navigation compensates for lost SLAM positioning, reducing the probability that the mobile robot loses its position and achieving accurate positioning in scenes with heavy foot traffic.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a positioning method of a mobile robot according to the present invention.
Detailed Description
In order that the above objects, features and advantages of the present application may be more readily understood, the invention is described in further detail below with reference to the specific embodiments illustrated in the accompanying drawings.
In the embodiment, the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device group; the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels. As shown in fig. 1, the embodiment carries out steps (1)-(6) and the SLAM inertial navigation composite positioning algorithm exactly as set out in the disclosure above, including the pose convention X_k = [x, y, θ], the AMCL-based SLAM positioning, the empirically calibrated thresholds ε_1 and ε_2, and the positioning recovery process.

Claims (5)

1. A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a SLAM navigation positioning module, a control processing module and a driving device group, the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels, the positioning method being characterized by comprising the following steps:
(1) after the mobile robot is started and powered on, the control processing module waits for a sampling signal; if there is no sampling signal, it continues waiting; if there is one, it proceeds to step (2);
(2) the control processing module obtains a pose estimate X_k of the mobile robot and a system pose uncertainty covariance estimate P_k, where k is the moment at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes an initial pose estimate X_0 of the mobile robot and an initial pose uncertainty covariance estimate P_0, where X_0 = [0, 0, 0];
(3) the control processing module obtains the SLAM pose X_s of the mobile robot sent by the SLAM navigation positioning module and the SLAM pose uncertainty covariance P_s;
(4) the control processing module obtains the inertial navigation pose X_e of the mobile robot sent by the inertial navigation module, the chassis speed V_e, and the inertial navigation pose uncertainty covariance P_e;
(5) the control processing module adopts a SLAM inertial navigation composite positioning algorithm: according to the system pose uncertainty covariance P_k, the SLAM positioning uncertainty covariance P_s and the inertial navigation positioning uncertainty covariance P_e, it weights and fuses the SLAM pose X_s and the inertial navigation pose X_e, and updates the optimal pose X″_k of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1)-(5) are repeated;
the SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) Calculating the SLAM pose according to formula (1)
Figure FDA0004239189900000014
And the pose estimated value X k Distance D between March m The method comprises the steps of carrying out a first treatment on the surface of the If it is
Figure FDA0004239189900000015
Calculating according to formulas (2) - (4) to obtain the optimized pose X 'of the mobile robot after the mobile robot is fused with SLAM positioning' k Performing step (b); if->
Figure FDA0004239189900000016
Ignoring the SLAM pose, not processing, and performing step (c);
Figure FDA0004239189900000017
K s =P k (P k +P s ) -1 (2)
Figure FDA0004239189900000018
P′ k =P k -K s P k (4)
wherein,,
Figure FDA0004239189900000019
is a preset first threshold value; k (K) s Kalman gain for fused SLAM positioning; p'. k Covariance of uncertainty of system pose after SLAM positioning is fused;
(b) updating the pose estimate: X_k = X′_k, P_k = P′_k; performing step (c);
(c) calculating, according to formula (5), the Mahalanobis distance D_e between the chassis speed V_e of the inertial navigation pose and the instantaneous speed V_k of the pose estimate; if D_e ≤ ε_2, calculating the optimal pose of the mobile robot after fusing the inertial navigation positioning according to formulas (6)-(10), and performing step (d); if D_e > ε_2, ignoring the inertial navigation pose, and the control processing module judging that the positioning is lost;
D_e = √((V_e − V_k)(P_k + P_e)^{-1}(V_e − V_k)^T) (5)
V_k = (X_k − X_{k−1}) / t (6)
K_e = P_k(P_k + P_e)^{-1} (7)
X″_k = X_k + K_e(X_e − X_k) (8)
V″_k = V_k + K_e(V_e − V_k) (9)
P″_k = P_k − K_e P_k (10)
wherein ε_2 is a preset second threshold, t is any sampling time in the positioning process, K_e is the Kalman gain for fusing the inertial navigation positioning, V″_k is the optimal chassis speed after fusing the inertial navigation positioning, and P″_k is the system uncertainty covariance after fusing the inertial navigation positioning;
(d) updating the pose estimate X_{k+1} and the system pose uncertainty covariance estimate P_{k+1} for the next sampling moment: X_{k+1} = X″_k, P_{k+1} = P″_k.
2. The positioning method of a mobile robot according to claim 1, wherein X_k = [x, y, θ], x and y are the coordinates of the current pose of the mobile robot in a pre-drawn SLAM map, and θ is the orientation angle of the mobile robot; the orientation angle of the mobile robot in the theoretical initial pose is 0°, with the counterclockwise direction positive.
3. The positioning method of a mobile robot according to claim 1, wherein the SLAM navigation positioning module calculates SLAM pose data through an adaptive Monte Carlo localization algorithm.
4. The positioning method of a mobile robot according to claim 1, wherein ε_1 is a threshold, obtained by actual testing in a stable scene, that ensures accurate SLAM positioning, and ε_2 is a threshold, obtained by actual testing in a stable scene, that ensures accurate inertial navigation positioning.
5. The positioning method of a mobile robot according to claim 1, wherein, while no positioning loss occurs, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after judging that the positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot to the position where the positioning was last not lost, the inertial navigation module being used to position the mobile robot during the movement;
after the control processing module completes the positioning recovery, the mobile robot is positioned by the positioning method fusing SLAM positioning and inertial navigation.
CN201911388707.3A 2019-12-30 2019-12-30 Positioning method of mobile robot Active CN113126602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911388707.3A CN113126602B (en) 2019-12-30 2019-12-30 Positioning method of mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911388707.3A CN113126602B (en) 2019-12-30 2019-12-30 Positioning method of mobile robot

Publications (2)

Publication Number Publication Date
CN113126602A CN113126602A (en) 2021-07-16
CN113126602B true CN113126602B (en) 2023-07-14

Family

ID=76768725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911388707.3A Active CN113126602B (en) 2019-12-30 2019-12-30 Positioning method of mobile robot

Country Status (1)

Country Link
CN (1) CN113126602B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113907645A (en) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Mobile robot positioning method and device, storage medium and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106679648B (en) * 2016-12-08 2019-12-10 东南大学 Visual inertia combination SLAM method based on genetic algorithm
CN106969784B (en) * 2017-03-24 2019-08-13 山东大学 A kind of combined error emerging system for concurrently building figure positioning and inertial navigation
CN109828588A (en) * 2019-03-11 2019-05-31 浙江工业大学 Paths planning method in a kind of robot chamber based on Multi-sensor Fusion
CN110285811A (en) * 2019-06-15 2019-09-27 南京巴乌克智能科技有限公司 The fusion and positioning method and device of satellite positioning and inertial navigation

Also Published As

Publication number Publication date
CN113126602A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
JP7170619B2 (en) Method and apparatus for generating driving route
WO2017088720A1 (en) Method and device for planning optimal following path and computer storage medium
CN107246876B (en) Method and system for autonomous positioning and map construction of unmanned automobile
RU2759975C1 (en) Operational control of autonomous vehicle with visual salence perception control
JP6963158B2 (en) Centralized shared autonomous vehicle operation management
JP2021523057A (en) Direction adjustment action for autonomous vehicle motion management
US11378957B1 (en) Method and apparatus for controlling vehicle driving
Peng et al. Path planning and obstacle avoidance for vision guided quadrotor UAV navigation
JP2019533810A (en) Neural network system for autonomous vehicle control
US10782384B2 (en) Localization methods and systems for autonomous systems
CA3002308A1 (en) Device and method for autonomous localisation
US20200042656A1 (en) Systems and methods for persistent simulation
CN110488842B (en) Vehicle track prediction method based on bidirectional kernel ridge regression
JP2021504825A (en) Autonomous vehicle operation management plan
US20190318050A1 (en) Environmental modification in autonomous simulation
JP2020513623A (en) Generation of solution data for autonomous vehicles to deal with problem situations
Wu et al. Robust LiDAR-based localization scheme for unmanned ground vehicle via multisensor fusion
CN113126602B (en) Positioning method of mobile robot
CN117152249A (en) Multi-unmanned aerial vehicle collaborative mapping and perception method and system based on semantic consistency
CN116540706A (en) System and method for providing local path planning for ground unmanned aerial vehicle by unmanned aerial vehicle
Zhou et al. Architecture design and implementation of image based autonomous car: THUNDER-1
Shangguan et al. Interactive perception-based multiple object tracking via CVIS and AV
Wen et al. Vision based sidewalk navigation for last-mile delivery robot
Paton et al. Eyes in the back of your head: Robust visual teach & repeat using multiple stereo cameras
Zheng et al. The Navigation Based on Hybrid A Star and TEB Algorithm Implemented in Obstacles Avoidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221114

Address after: 211135 7th Floor, Building 6, Artificial Intelligence Industrial Park, 266 Chuangyan Road, Qilin Science and Technology Innovation Park, Jiangning District, Nanjing, Jiangsu Province

Applicant after: NANJING KINGYOUNG INTELLIGENT SCIENCE AND TECHNOLOGY Co.,Ltd.

Address before: 211100 1st floor, building 15, Fuli science and Technology City, 277 Dongqi Road, Jiangning District, Nanjing City, Jiangsu Province

Applicant before: NANJING JINGYI ROBOT ENGINEERING TECHNOLOGY Co.,Ltd.

GR01 Patent grant