CN113126602A - Positioning method of mobile robot - Google Patents

Positioning method of mobile robot

Info

Publication number
CN113126602A
CN113126602A (application CN201911388707.3A; granted as CN113126602B)
Authority
CN
China
Prior art keywords
positioning
pose
mobile robot
slam
inertial navigation
Prior art date
Legal status
Granted
Application number
CN201911388707.3A
Other languages
Chinese (zh)
Other versions
CN113126602B (en)
Inventor
石飞
赵荣
Current Assignee
Nanjing Kingyoung Intelligent Science And Technology Co ltd
Original Assignee
Nanjing Jingyi Robot Engineering Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Jingyi Robot Engineering Technology Co ltd filed Critical Nanjing Jingyi Robot Engineering Technology Co ltd
Priority to CN201911388707.3A
Publication of CN113126602A
Application granted
Publication of CN113126602B
Legal status: Active


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D 1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D 1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a positioning method for a mobile robot that uses a combined inertial and SLAM navigation mode: inertial navigation positioning compensates for SLAM positioning loss, which reduces the probability that the mobile robot loses its position and realizes accurate positioning of the mobile robot in scenes with heavy pedestrian traffic.

Description

Positioning method of mobile robot
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a positioning method of a mobile robot.
Background
In today's rapidly developing Internet-of-Things society, urban development increasingly emphasizes intelligence and informatization, and intelligent facilities are being built in many public places. Robot technologies based on artificial intelligence keep appearing on the market, mobile robots are being applied ever more widely, and in public places such as shopping malls, airports and banks, mobile robots are gradually replacing human labor, making staffed-by-few or unstaffed operation a reality.
A mobile robot in such a scene generally adopts SLAM navigation: first, an accurate fix of the mobile robot is obtained from laser point cloud data acquired by a laser sensor; then the laser point cloud data are added to a grid map to complete construction of the scene map; finally, a path is planned on the constructed map to realize navigation of the mobile robot. However, this navigation method is not suitable for public places with heavy pedestrian traffic: in particular, when pedestrians crowd around the mobile robot, the SLAM calculation result develops large errors, leading to positioning loss.
Disclosure of Invention
In view of the above disadvantages, the technical problem to be solved by the present invention is to provide a positioning method for a mobile robot that uses a combined inertial and SLAM navigation mode, compensates for SLAM positioning loss through inertial navigation positioning, reduces the probability that the mobile robot loses its position, and realizes accurate positioning of the mobile robot in scenes with heavy pedestrian traffic.
The purpose of the invention is realized by the following technical scheme:
a positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a slam navigation positioning module, a control processing module and a driving device group, the inertial navigation module comprises an encoder and a gyroscope, the driving device group comprises a chassis and two driving wheels, the positioning method comprises the following steps:
(1) after the mobile robot is started up and powered on, the control processing module waits for a sampling signal; if no sampling signal is received, it continues to wait; if one is received, it proceeds to step (2);
(2) the control processing module obtains a pose estimation value X̂(k) of the mobile robot and a system pose uncertainty covariance estimate P̂(k), where k denotes the time at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimation value X̂(0) and the initial pose uncertainty covariance estimate P̂(0), wherein X̂(0) = [0, 0, 0];
(3) the control processing module acquires the slam pose X_slam(k) of the mobile robot and the slam pose uncertainty covariance P_slam sent by the slam navigation positioning module;
(4) the control processing module acquires the inertial navigation pose X_ins(k), the chassis speed V(k) and the inertial navigation pose uncertainty covariance P_ins of the mobile robot sent by the inertial navigation module;
(5) the control processing module adopts a SLAM/inertial-navigation composite positioning algorithm: according to the system pose uncertainty covariance P̂(k), the slam positioning uncertainty covariance P_slam and the inertial navigation positioning uncertainty covariance P_ins, it fuses the slam pose X_slam(k) and the inertial navigation pose X_ins(k) by weighting and updates the optimal pose X*(k) of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1) to (5) are repeated.
Preferably, X̂(k) = [x, y, θ], where x and y are the coordinates of the mobile robot's current pose in a pre-built slam map and θ is its orientation angle; the orientation angle in the theoretical initial pose is 0°, with counterclockwise positive.
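For concreteness, a minimal sketch of this pose convention in Python (the class and the angle-wrapping helper are illustrative additions, not part of the patent):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # x coordinate in the pre-built slam map
    y: float      # y coordinate in the pre-built slam map
    theta: float  # orientation angle (rad): 0 in the initial pose, counterclockwise positive

def wrap_angle(a: float) -> float:
    """Normalize an angle to (-pi, pi] so pose residuals such as
    X_slam(k) - X_hat(k) stay small across the +/-180 degree seam."""
    return math.atan2(math.sin(a), math.cos(a))
```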
Preferably, the SLAM inertial navigation composite positioning algorithm includes the following steps:
(a) calculate, according to formula (1), the Mahalanobis distance D_m between the slam pose X_slam(k) and the pose estimate X̂(k); if D_m < ε1, calculate the optimized pose X'(k) of the mobile robot after fusing the slam positioning according to formulas (2)-(4), and perform step (b); if D_m ≥ ε1, ignore the slam pose without processing, and perform step (c);
D_m = sqrt( (X_slam(k) − X̂(k))ᵀ (P̂(k) + P_slam)⁻¹ (X_slam(k) − X̂(k)) )    (1)
K_slam = P̂(k) (P̂(k) + P_slam)⁻¹    (2)
X'(k) = X̂(k) + K_slam (X_slam(k) − X̂(k))    (3)
P'(k) = (I − K_slam) P̂(k)    (4)
wherein ε1 is a preset first threshold; K_slam is the Kalman gain for fusing the slam positioning; and P'(k) is the system pose uncertainty covariance after fusing the slam positioning;
(b) update the pose estimate: X̂(k) = X'(k) and P̂(k) = P'(k); perform step (c);
(c) calculate, according to formula (5), the Mahalanobis distance D_v between the chassis speed V(k) of the inertial navigation pose and the instantaneous speed v(k) of the pose; if D_v < ε2, calculate the optimal pose X*(k) of the mobile robot after fusing the inertial navigation positioning according to formulas (6)-(10), and perform step (d); if D_v ≥ ε2, ignore the inertial navigation pose, and the control processing module judges that positioning is lost;
D_v = sqrt( (V(k) − v(k))ᵀ Σ_v⁻¹ (V(k) − v(k)) ), with v(k) = (X'(k) − X*(k−1)) / T    (5)
K_ins = P'(k) (P'(k) + P_ins)⁻¹    (6)
X*(k) = X'(k) + K_ins (X_ins(k) − X'(k))    (7)
V*(k) = v(k) + K_ins (V(k) − v(k))    (8)
P*(k) = (I − K_ins) P'(k)    (9)
X̂(k+1) = X*(k), P̂(k+1) = P*(k)    (10)
wherein ε2 is a preset second threshold; T is the sampling time (period) of the positioning process; Σ_v is the covariance of the chassis speed; K_ins is the Kalman gain for fusing the inertial navigation positioning; V*(k) is the optimal chassis speed after fusing the inertial navigation positioning; and P*(k) is the system uncertainty covariance after fusing the inertial navigation positioning;
(d) update the pose estimation value X̂(k+1) and the system pose uncertainty covariance estimate P̂(k+1) for the next sampling time: X̂(k+1) = X*(k), P̂(k+1) = P*(k).
Preferably, the SLAM navigation positioning module computes the SLAM pose data with an adaptive Monte Carlo localization (AMCL) algorithm.
Preferably, ε1 is a threshold, obtained by actual testing in a stable scene, that guarantees accurate slam positioning, and ε2 is a threshold, likewise obtained in a stable scene, that guarantees accurate inertial navigation positioning.
Preferably, while positioning is not lost, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after judging that positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot back to the position where positioning was last not lost, using the inertial navigation module to position the robot during the movement;
after positioning recovery is completed, the control processing module resumes positioning the mobile robot with the method fusing slam positioning and inertial navigation.
Compared with the prior art, the invention provides a positioning method for a mobile robot that adopts a combined inertial and SLAM navigation mode, compensates for SLAM positioning loss through inertial navigation positioning, reduces the probability of positioning loss, and realizes accurate positioning of the mobile robot in scenes with heavy pedestrian traffic.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart of a positioning method of a mobile robot according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a slam navigation positioning module, a control processing module and a driving device group; the inertial navigation module comprises an encoder and a gyroscope, and the driving device group comprises a chassis and two driving wheels. As shown in Fig. 1, the positioning method comprises the following steps:
(1) after the mobile robot is started up and powered on, the control processing module waits for a sampling signal; if no sampling signal is received, it continues to wait; if one is received, it proceeds to step (2);
(2) the control processing module obtains a pose estimation value X̂(k) of the mobile robot and a system pose uncertainty covariance estimate P̂(k), where k denotes the time at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimation value X̂(0) and the initial pose uncertainty covariance estimate P̂(0), wherein X̂(0) = [0, 0, 0];
(3) the control processing module acquires the slam pose X_slam(k) of the mobile robot and the slam pose uncertainty covariance P_slam sent by the slam navigation positioning module;
(4) the control processing module acquires the inertial navigation pose X_ins(k), the chassis speed V(k) and the inertial navigation pose uncertainty covariance P_ins of the mobile robot sent by the inertial navigation module;
(5) the control processing module adopts a SLAM/inertial-navigation composite positioning algorithm: according to the system pose uncertainty covariance P̂(k), the slam positioning uncertainty covariance P_slam and the inertial navigation positioning uncertainty covariance P_ins, it fuses the slam pose X_slam(k) and the inertial navigation pose X_ins(k) by weighting and updates the optimal pose X*(k) of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1) to (5) are repeated; a sketch of this sampling loop follows the pose convention below.
Here X̂(k) = [x, y, θ]: x and y are the coordinates of the mobile robot's current pose in a pre-built slam map, and θ is its orientation angle; the orientation angle in the theoretical initial pose is 0°, with counterclockwise positive.
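A minimal Python sketch of the sampling loop of steps (1)-(6); the robot interface, the initial covariance magnitude and the function names are illustrative assumptions, and fuse_slam_inertial is sketched after step (d) below:

```python
import numpy as np

def positioning_loop(robot, eps1, eps2, T, sigma_v):
    """Steps (1)-(6); by formula (10) the estimate entering each cycle,
    X_hat(k), equals the previous optimal pose X*(k-1)."""
    x_est = np.zeros(3)        # step (2): initial pose estimate [0, 0, 0]
    P_est = 1e-3 * np.eye(3)   # step (2): initial covariance (assumed magnitude)
    while True:
        if not robot.wait_for_sampling_signal():       # step (1)
            continue
        z_slam, P_slam = robot.read_slam_pose()        # step (3)
        z_ins, V, P_ins = robot.read_inertial_pose()   # step (4)
        out = fuse_slam_inertial(x_est, P_est, z_slam, P_slam,
                                 z_ins, V, P_ins, T, sigma_v, eps1, eps2)
        if out is None:                                # step (5): positioning lost
            robot.enter_positioning_recovery()
            continue
        x_est, P_est = out                             # X_hat(k+1), P_hat(k+1)
        robot.drive(x_est)                             # step (6), then repeat
```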
The SLAM inertial navigation composite positioning algorithm comprises the following steps (a Python sketch of the whole fusion cycle follows step (d) below):
(a) calculate, according to formula (1), the Mahalanobis distance D_m between the slam pose X_slam(k) and the pose estimate X̂(k); if D_m < ε1, calculate the optimized pose X'(k) of the mobile robot after fusing the slam positioning according to formulas (2)-(4), and perform step (b); if D_m ≥ ε1, ignore the slam pose without processing, and perform step (c);
D_m = sqrt( (X_slam(k) − X̂(k))ᵀ (P̂(k) + P_slam)⁻¹ (X_slam(k) − X̂(k)) )    (1)
K_slam = P̂(k) (P̂(k) + P_slam)⁻¹    (2)
X'(k) = X̂(k) + K_slam (X_slam(k) − X̂(k))    (3)
P'(k) = (I − K_slam) P̂(k)    (4)
wherein ε1 is a preset first threshold; K_slam is the Kalman gain for fusing the slam positioning; and P'(k) is the system pose uncertainty covariance after fusing the slam positioning;
(b) update the pose estimate: X̂(k) = X'(k) and P̂(k) = P'(k); perform step (c);
(c) calculate, according to formula (5), the Mahalanobis distance D_v between the chassis speed V(k) of the inertial navigation pose and the instantaneous speed v(k) of the pose; if D_v < ε2, calculate the optimal pose X*(k) of the mobile robot after fusing the inertial navigation positioning according to formulas (6)-(10), and perform step (d); if D_v ≥ ε2, ignore the inertial navigation pose, and the control processing module judges that positioning is lost;
D_v = sqrt( (V(k) − v(k))ᵀ Σ_v⁻¹ (V(k) − v(k)) ), with v(k) = (X'(k) − X*(k−1)) / T    (5)
K_ins = P'(k) (P'(k) + P_ins)⁻¹    (6)
X*(k) = X'(k) + K_ins (X_ins(k) − X'(k))    (7)
V*(k) = v(k) + K_ins (V(k) − v(k))    (8)
P*(k) = (I − K_ins) P'(k)    (9)
X̂(k+1) = X*(k), P̂(k+1) = P*(k)    (10)
wherein ε2 is a preset second threshold; T is the sampling time (period) of the positioning process; Σ_v is the covariance of the chassis speed; K_ins is the Kalman gain for fusing the inertial navigation positioning; V*(k) is the optimal chassis speed after fusing the inertial navigation positioning; and P*(k) is the system uncertainty covariance after fusing the inertial navigation positioning;
(d) update the pose estimation value X̂(k+1) and the system pose uncertainty covariance estimate P̂(k+1) for the next sampling time: X̂(k+1) = X*(k), P̂(k+1) = P*(k).
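A compact Python sketch of steps (a)-(d), under the stated assumption that formulas (1)-(10), which appear only as images in the original publication, are the standard Mahalanobis-gated Kalman updates implied by the surrounding definitions:

```python
import numpy as np

def mahalanobis(d, S):
    """sqrt(d^T S^-1 d), the distance used by the gates of formulas (1) and (5)."""
    return float(np.sqrt(d @ np.linalg.solve(S, d)))

def fuse_slam_inertial(x_est, P_est, z_slam, P_slam, z_ins, V, P_ins,
                       T, sigma_v, eps1, eps2):
    """One cycle of the composite positioning algorithm. Returns the
    next-cycle (X_hat(k+1), P_hat(k+1)), or None when positioning is
    judged lost. On entry x_est is X_hat(k) = X*(k-1)."""
    x_prev = x_est.copy()                           # previous optimal pose X*(k-1)
    I = np.eye(3)

    # step (a): gate the slam pose with the Mahalanobis distance D_m, formula (1)
    r = z_slam - x_est
    if mahalanobis(r, P_est + P_slam) < eps1:
        K = P_est @ np.linalg.inv(P_est + P_slam)   # (2) Kalman gain K_slam
        x_est = x_est + K @ r                       # (3) optimized pose X'(k)
        P_est = (I - K) @ P_est                     # (4) covariance P'(k)
        # step (b): estimate updated in place, fall through to step (c)
    # else: D_m >= eps1, the slam pose is ignored and we go straight to step (c)

    # step (c): gate the inertial data with the speed Mahalanobis distance, formula (5)
    v_inst = (x_est - x_prev) / T                   # instantaneous speed of the pose
    if mahalanobis(V - v_inst, sigma_v) >= eps2:
        return None                                 # positioning judged lost

    K = P_est @ np.linalg.inv(P_est + P_ins)        # (6) Kalman gain K_ins
    x_best = x_est + K @ (z_ins - x_est)            # (7) optimal pose X*(k)
    P_best = (I - K) @ P_est                        # (9) covariance P*(k)
    # step (d), formula (10): X_hat(k+1) = X*(k), P_hat(k+1) = P*(k)
    return x_best, P_best
```

In practice the θ components of the residuals should be normalized (for example with the wrap_angle helper sketched earlier) before gating and fusion.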
The SLAM navigation positioning module computes the SLAM pose data with an adaptive Monte Carlo localization (AMCL) algorithm.
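The patent does not name a middleware. Assuming a ROS 1 stack, where the standard amcl node implements adaptive Monte Carlo localization and publishes the robot pose with covariance on the amcl_pose topic, step (3) could obtain X_slam(k) and P_slam like this:

```python
import numpy as np
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import euler_from_quaternion

def on_amcl_pose(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    z_slam = np.array([p.x, p.y, yaw])             # slam pose [x, y, theta]
    cov = np.array(msg.pose.covariance).reshape(6, 6)
    P_slam = cov[np.ix_([0, 1, 5], [0, 1, 5])]     # (x, y, yaw) block of the 6x6 covariance
    # hand (z_slam, P_slam) to step (3) of the positioning loop ...

rospy.init_node('slam_pose_listener')
rospy.Subscriber('amcl_pose', PoseWithCovarianceStamped, on_amcl_pose)
rospy.spin()
```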
The threshold ε1 is obtained by actual testing in a stable scene as a value that guarantees accurate slam positioning; the threshold ε2 is likewise obtained in a stable scene as a value that guarantees accurate inertial navigation positioning.
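One plausible calibration recipe, sketched under the assumption that the thresholds gate the two Mahalanobis distances: log D_m and D_v while the robot runs in a stable, uncrowded scene, then set each gate just above what was observed there (the percentile and margin are illustrative choices, not from the patent):

```python
import numpy as np

def calibrate_gate(logged_distances, percentile=99.0, margin=1.2):
    """Gate threshold from Mahalanobis distances logged in a stable scene."""
    return margin * float(np.percentile(logged_distances, percentile))

# eps1 = calibrate_gate(dm_log)   # from logged D_m values
# eps2 = calibrate_gate(dv_log)   # from logged D_v values
```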
While positioning is not lost, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after judging that positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot back to the position where positioning was last not lost, using the inertial navigation module to position the robot during the movement;
after positioning recovery is completed, the control processing module resumes positioning the mobile robot with the method fusing slam positioning and inertial navigation.
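A short sketch of this recovery flow; the robot interface methods are illustrative placeholders:

```python
def recover_positioning(robot, good_pose_log):
    """Drive back to the last non-lost pose under inertial-only positioning,
    then resume the fused slam + inertial method."""
    target = good_pose_log[-1]               # most recent non-lost pose
    robot.navigate_inertial_only(target)     # inertial navigation during the move
    robot.resume_fused_positioning()         # back to the composite algorithm
```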
Compared with the prior art, the invention provides a positioning method for a mobile robot that adopts a combined inertial and SLAM navigation mode, compensates for SLAM positioning loss through inertial navigation positioning, reduces the probability of positioning loss, and realizes accurate positioning of the mobile robot in scenes with heavy pedestrian traffic.

Claims (6)

1. A positioning method of a mobile robot, wherein the mobile robot comprises a machine vision module, an inertial navigation module, a slam navigation positioning module, a control processing module and a driving device group, the inertial navigation module comprises an encoder and a gyroscope, the driving device group comprises a chassis and two driving wheels, and the positioning method comprises the following steps:
(1) after the mobile robot is started up and powered on, the control processing module waits for a sampling signal; if no sampling signal is received, it continues to wait; if one is received, it proceeds to step (2);
(2) the control processing module obtains a pose estimation value X̂(k) of the mobile robot and a system pose uncertainty covariance estimate P̂(k), where k denotes the time at which the control processing module samples the robot pose; after the mobile robot is powered on, the control processing module initializes the initial pose estimation value X̂(0) and the initial pose uncertainty covariance estimate P̂(0), wherein X̂(0) = [0, 0, 0];
(3) the control processing module acquires the slam pose X_slam(k) of the mobile robot and the slam pose uncertainty covariance P_slam sent by the slam navigation positioning module;
(4) the control processing module acquires the inertial navigation pose X_ins(k), the chassis speed V(k) and the inertial navigation pose uncertainty covariance P_ins of the mobile robot sent by the inertial navigation module;
(5) the control processing module adopts a SLAM/inertial-navigation composite positioning algorithm: according to the system pose uncertainty covariance P̂(k), the slam positioning uncertainty covariance P_slam and the inertial navigation positioning uncertainty covariance P_ins, it fuses the slam pose X_slam(k) and the inertial navigation pose X_ins(k) by weighting and updates the optimal pose X*(k) of the mobile robot;
(6) the control processing module controls the driving device group to drive the mobile robot to move, and steps (1) to (5) are repeated.
2. The method according to claim 1, wherein X̂(k) = [x, y, θ], x and y are the coordinates of the mobile robot's current pose in a pre-built slam map, and θ is the orientation angle of the mobile robot; the orientation angle in the theoretical initial pose is 0°, with counterclockwise positive.
3. The method of claim 1, wherein the SLAM inertial navigation composite positioning algorithm comprises the following steps:
(a) calculate, according to formula (1), the Mahalanobis distance D_m between the slam pose X_slam(k) and the pose estimate X̂(k); if D_m < ε1, calculate the optimized pose X'(k) of the mobile robot after fusing the slam positioning according to formulas (2)-(4), and perform step (b); if D_m ≥ ε1, ignore the slam pose without processing, and perform step (c);
D_m = sqrt( (X_slam(k) − X̂(k))ᵀ (P̂(k) + P_slam)⁻¹ (X_slam(k) − X̂(k)) )    (1)
K_slam = P̂(k) (P̂(k) + P_slam)⁻¹    (2)
X'(k) = X̂(k) + K_slam (X_slam(k) − X̂(k))    (3)
P'(k) = (I − K_slam) P̂(k)    (4)
wherein ε1 is a preset first threshold; K_slam is the Kalman gain for fusing the slam positioning; and P'(k) is the system pose uncertainty covariance after fusing the slam positioning;
(b) update the pose estimate: X̂(k) = X'(k) and P̂(k) = P'(k); perform step (c);
(c) calculate, according to formula (5), the Mahalanobis distance D_v between the chassis speed V(k) of the inertial navigation pose and the instantaneous speed v(k) of the pose; if D_v < ε2, calculate the optimal pose X*(k) of the mobile robot after fusing the inertial navigation positioning according to formulas (6)-(10), and perform step (d); if D_v ≥ ε2, ignore the inertial navigation pose, and the control processing module judges that positioning is lost;
D_v = sqrt( (V(k) − v(k))ᵀ Σ_v⁻¹ (V(k) − v(k)) ), with v(k) = (X'(k) − X*(k−1)) / T    (5)
K_ins = P'(k) (P'(k) + P_ins)⁻¹    (6)
X*(k) = X'(k) + K_ins (X_ins(k) − X'(k))    (7)
V*(k) = v(k) + K_ins (V(k) − v(k))    (8)
P*(k) = (I − K_ins) P'(k)    (9)
X̂(k+1) = X*(k), P̂(k+1) = P*(k)    (10)
wherein ε2 is a preset second threshold; T is the sampling time (period) of the positioning process; Σ_v is the covariance of the chassis speed; K_ins is the Kalman gain for fusing the inertial navigation positioning; V*(k) is the optimal chassis speed after fusing the inertial navigation positioning; and P*(k) is the system uncertainty covariance after fusing the inertial navigation positioning;
(d) update the pose estimation value X̂(k+1) and the system pose uncertainty covariance estimate P̂(k+1) for the next sampling time: X̂(k+1) = X*(k), P̂(k+1) = P*(k).
4. The method of claim 1, wherein the SLAM navigation positioning module computes the SLAM pose data with an adaptive Monte Carlo localization (AMCL) algorithm.
5. The method according to claim 3, wherein ε1 is a threshold, obtained by actual testing in a stable scene, that guarantees accurate slam positioning, and ε2 is a threshold, likewise obtained in a stable scene, that guarantees accurate inertial navigation positioning.
6. The method according to claim 1, wherein, while positioning is not lost, the control processing module records the current non-lost pose and stores it in a list of the most recent non-lost positioning data; after judging that positioning is lost, the control processing module enters a positioning recovery process comprising the following steps:
the control processing module controls the driving device group to drive the mobile robot back to the position where positioning was last not lost, using the inertial navigation module to position the robot during the movement;
after positioning recovery is completed, the control processing module resumes positioning the mobile robot with the method fusing slam positioning and inertial navigation.
CN201911388707.3A, priority date 2019-12-30, filed 2019-12-30: Positioning method of mobile robot (Active; granted as CN113126602B)

Priority Applications (1)

Application Number: CN201911388707.3A (granted as CN113126602B); Priority Date: 2019-12-30; Filing Date: 2019-12-30; Title: Positioning method of mobile robot


Publications (2)

Publication Number / Publication Date:
CN113126602A: 2021-07-16
CN113126602B: 2023-07-14

Family

ID=76768725

Family Applications (1)

Application Number: CN201911388707.3A; Title: Positioning method of mobile robot; Priority Date: 2019-12-30; Filing Date: 2019-12-30; Status: Active; Granted as: CN113126602B

Country Status (1)

Country: CN; publication: CN113126602B


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106679648A (en) * 2016-12-08 2017-05-17 东南大学 Vision-inertia integrated SLAM (Simultaneous Localization and Mapping) method based on genetic algorithm
CN106969784A (en) * 2017-03-24 2017-07-21 中国石油大学(华东) It is a kind of concurrently to build figure positioning and the combined error emerging system of inertial navigation
CN109828588A (en) * 2019-03-11 2019-05-31 浙江工业大学 Paths planning method in a kind of robot chamber based on Multi-sensor Fusion
CN110285811A (en) * 2019-06-15 2019-09-27 南京巴乌克智能科技有限公司 The fusion and positioning method and device of satellite positioning and inertial navigation

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113907645A (en) * 2021-09-23 2022-01-11 追觅创新科技(苏州)有限公司 Mobile robot positioning method and device, storage medium and electronic device

Also Published As

Publication number Publication date
CN113126602B (en) 2023-07-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20221114

Address after: 211135 7th Floor, Building 6, Artificial Intelligence Industrial Park, 266 Chuangyan Road, Qilin Science and Technology Innovation Park, Jiangning District, Nanjing, Jiangsu Province

Applicant after: NANJING KINGYOUNG INTELLIGENT SCIENCE AND TECHNOLOGY Co.,Ltd.

Address before: 211100 1st floor, building 15, Fuli science and Technology City, 277 Dongqi Road, Jiangning District, Nanjing City, Jiangsu Province

Applicant before: NANJING JINGYI ROBOT ENGINEERING TECHNOLOGY Co.,Ltd.

GR01 Patent grant