CN110160527B - Mobile robot navigation method and device - Google Patents

Mobile robot navigation method and device Download PDF

Info

Publication number
CN110160527B
CN110160527B (application CN201910370245.6A)
Authority
CN
China
Prior art keywords
time
robot
value
navigation
wheel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910370245.6A
Other languages
Chinese (zh)
Other versions
CN110160527A (en)
Inventor
宇晓龙
黄逸飞
解广州
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Hongyun Network Technology Co ltd
Original Assignee
Anhui Red Bat Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Red Bat Intelligent Technology Co ltd filed Critical Anhui Red Bat Intelligent Technology Co ltd
Priority to CN201910370245.6A priority Critical patent/CN110160527B/en
Publication of CN110160527A publication Critical patent/CN110160527A/en
Application granted granted Critical
Publication of CN110160527B publication Critical patent/CN110160527B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C17/00 Compasses; Devices for ascertaining true or magnetic north for navigation or surveying purposes
    • G01C17/02 Magnetic compasses
    • G01C17/28 Electromagnetic compasses
    • G01C17/32 Electron compasses
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a mobile robot navigation method comprising the following steps: initialization, time increment, reading of sensor data, joint estimation of the pose and slip coefficients, and calculation of the expected rotation speeds of the left and right wheels. The invention also discloses a mobile robot navigation device comprising a positioning system, an electronic compass, an odometer and a computer, where the positioning system, the electronic compass and the odometer are communicatively connected to the computer and the computer executes the disclosed navigation method. Compared with the prior art, the method and device take the slip characteristics of different ground types into account: the pose and the slip coefficients of the robot are estimated simultaneously, the slip effect is incorporated into the navigation algorithm, and both travel time and energy consumption are considered when optimizing the path, which extends the running time of a battery-powered robot.

Description

Mobile robot navigation method and device
Technical Field
The invention relates to the technical field of robots, in particular to a mobile robot navigation method and device.
Background
In recent years, autonomous robots have played an important role in fields such as space exploration, military missions and agriculture. In the future these robots are expected to perform a wide variety of tasks in unstructured, dynamic outdoor environments with increasing autonomy. However, the energy a robot can carry in batteries and/or fuel is limited, which limits its operating time. Saving energy is therefore essential if the robot is to perform a wider range of tasks without recharging or refueling, and a good navigation mechanism can reduce energy consumption to the greatest possible extent.
The paper "Service robot navigation based on detection of interaction intention between pedestrians" (Journal of a university of science and technology, Natural Science Edition, 2017, 45(10): 80-84) proposes a service-robot navigation method, based on detecting the interaction intention between pedestrians, for navigation in environments where people and robots coexist. Patent CN201610203026.5 provides a robot navigation method and a navigation robot: when the robot is detected to be moving, images are acquired in real time from a camera at a preset position and processed to extract a path map; the extracted path map is compared with a preset path map to obtain the instantaneous position of the robot; a moving path to the destination is generated from the instantaneous position, the destination and the path map; and the robot is controlled to move along the generated path. By controlling the robot along a preset moving path through image processing, navigation becomes more accurate. Patent CN201510891364.8 provides a robot navigation method and system in which a camera mounted on the robot acquires identification points around the robot according to a navigation instruction; a preset map containing the navigation tracks and the identification points on each track is queried to determine the robot's current position; the map is then queried with the current and target positions to obtain a matching navigation track; and the travelling direction and route are determined from the current position and the track so that the robot can move to the target position. This avoids deploying costly access-point equipment and modifying the site, which saves cost and lowers the construction requirements of the places where the robot is used.
Conventional navigation methods rarely consider the influence of different ground types on the robot's energy consumption; in pursuing the shortest path they sometimes plan a route over high-resistance ground, which increases the robot's energy consumption.
Disclosure of Invention
To solve the above problems, the invention discloses a mobile robot navigation method comprising the following steps:
S101: Initialization. Set the time t = 1, the sampling interval T and the robot width B, and determine the optimal state estimate at time t, X̂_t = [Ê_t, N̂_t, θ̂_t, î_L,t, î_R,t, ξ̂_t]', whose components respectively represent the optimal estimates of the robot's east coordinate, north coordinate, heading, left slip ratio, right slip ratio and sideslip factor at time t. Set the variances Q and R of the process noise and the observation noise, and set the state optimal-estimation error covariance at time t, P_t, a 6-dimensional square matrix. Let the best navigation point at time t, O*_t, be the initial coordinates of the robot;
S102: Increase t by 1;
S103: Read the robot position data at time t from the positioning system and the robot heading data at time t from the electronic compass to obtain the observation vector at time t, y_t = [E_t^o, N_t^o, θ_t^o]', where E_t^o denotes the detected east coordinate of the robot at time t, N_t^o the detected north coordinate and θ_t^o the detected heading; read the rotation speed data of the robot's left and right wheels at time t from the odometer, w_t = [w_L,t, w_R,t]', where w_L,t denotes the detected rotation speed of the left wheel and w_R,t the detected rotation speed of the right wheel;
S104: Using y_t and w_t, estimate the pose [Ê_t, N̂_t, θ̂_t] and the slip coefficients [î_L,t, î_R,t, ξ̂_t] of the robot at time t as follows:
S1041: State prediction. Obtain the state prediction at time t, X̂_t|t-1 = [Ê_t|t-1, N̂_t|t-1, θ̂_t|t-1, î_L,t|t-1, î_R,t|t-1, ξ̂_t|t-1]', whose components represent the predicted east coordinate, north coordinate, heading, left slip ratio, right slip ratio and sideslip factor of the robot at time t, from the state transition equation X̂_t|t-1 = f(X̂_t-1, w_t); the component equations of f are given only as formula images in the original text. The state prediction error covariance is computed as P_t|t-1 = F·P_t-1·F' + Q, where F is the Jacobian matrix of f with respect to the state, P_t-1 is the state optimal-estimation error covariance of the previous time step, a 6-dimensional square matrix, Q represents the variance of the process noise and F' represents the transpose of F;
S1042: Compute the observation innovation e_t = y_t - H·X̂_t|t-1 and the innovation covariance S_t = H·P_t|t-1·H' + R, where H is the observation matrix. Compute the windowed innovation covariance estimate Ŝ_t as the average of e_j·e_j' over the most recent N_w time steps, where N_w is the width of the sliding window used for the estimate. Then compute the attenuation factor γ from these quantities (its formula is given only as an image in the original text), where α is a real number greater than 1, R represents the variance of the observation noise and H' represents the transpose of H;
S1043: Adjustment. Scale the state prediction error covariance by the attenuation factor, letting P_t|t-1 ← γ·P_t|t-1, and recompute the innovation covariance S_t;
S1044: performing optimal state estimation to obtain optimal state estimation value at t moment
Figure BDA00020496974800000316
The following were used:
Figure BDA00020496974800000317
Figure BDA00020496974800000318
wherein the content of the first and second substances,
Figure BDA00020496974800000319
and calculating a state optimal estimation error covariance
Figure BDA00020496974800000320
Wherein I6Representing a 6-dimensional unit matrix;
S105: From the pose and slip coefficients of the robot at time t obtained in step S104, compute the expected rotation speeds of the left and right wheels at time t+1, w*_L,t+1 and w*_R,t+1, as follows:
First, randomly generate a set of possible rotation speed pairs of the left and right wheels at time t+1, {(w_L,t+1,i, w_R,t+1,i), i = 1, ..., L}, where each set has L elements, w_L,t+1,i represents a randomly generated possible rotation speed of the left wheel at time t+1 and w_R,t+1,i a randomly generated possible rotation speed of the right wheel at time t+1. Then substitute each speed pair, together with X̂_t, into the state transition equation f to obtain the corresponding position prediction, and record the coordinate point O_t+1,i. For each O_t+1,i compute the corresponding objective function J_i = J_i,1 + J_i,2, where k_1 and k_2 are respectively the rotation-resistance and forward-resistance energy consumption coefficients appearing in J_i,1, O_T denotes the coordinates of the end point, and J_i,2 is formed from the Euclidean distance between O_t+1,i and the navigation point at time t together with the Euclidean distance between O_t+1,i and O_T (the explicit expressions of J_i,1 and J_i,2 are given only as formula images in the original text). Find the O_t+1,i for which J_i is minimal; it is the best navigation point at time t+1, O*_t+1, and the corresponding speed pair is taken as w*_L,t+1 and w*_R,t+1.
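The explicit expressions of J_i,1 and J_i,2 survive only as formula images; the display below records one plausible reading of their structure, consistent with the coefficients k_1, k_2 and the two Euclidean distances named above. Here Δθ_t+1,i and Δs_t+1,i denote the predicted heading change and distance travelled for candidate i, and d(·,·) the Euclidean distance; this is a sketch of the intended trade-off between energy and progress toward the goal, not the patent's exact formula.

```latex
J_i = J_{i,1} + J_{i,2}, \qquad
J_{i,1} \approx k_1\,\lvert \Delta\theta_{t+1,i} \rvert + k_2\,\Delta s_{t+1,i}, \qquad
J_{i,2} \approx d\!\left(O^{*}_{t},\, O_{t+1,i}\right) + d\!\left(O_{t+1,i},\, O_{T}\right)
```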
The method of randomly generating the set of possible left and right wheel rotation speeds at time t+1 used in step S105 is as follows:
S1051: For time steps t = 1 to t = N_u, set the sampling distributions of the left and right wheel speeds to the uniform distribution U(0, v_m), where v_m denotes the upper limit of the wheel rotation speed and N_u is a positive integer greater than 1. For t > N_u, compute μ_L and σ²_L, the mean and variance of the sequence of expected left-wheel rotation speeds from time t-N_u+1 to time t, and μ_R and σ²_R, the mean and variance of the sequence of expected right-wheel rotation speeds from time t-N_u+1 to time t, and set the sampling distributions to the Gaussian distributions N(μ_L, σ²_L) and N(μ_R, σ²_R);
S1052: According to the distributions set in step S1051, randomly generate the set of possible rotation speed pairs of the left and right wheels at time t+1, {(w_L,t+1,i, w_R,t+1,i)}, where each set has L elements.
The invention also discloses a mobile robot navigation device comprising a positioning system, an electronic compass, an odometer and a computer, the positioning system, the electronic compass and the odometer being communicatively connected to the computer;
the positioning system is used to read the robot position data at time t;
the electronic compass is used to read the robot heading data at time t;
the odometer is used to read the rotation speed data of the left and right wheels of the robot at time t;
and the computer is used to process the robot position data, the robot heading data and the left and right wheel rotation speed data using the method of claim 1 or 2 to realize navigation of the mobile robot.
The invention also discloses a computer-readable storage medium on which a plurality of navigation programs are stored, the navigation programs being intended to be called by a processor to execute the steps of the mobile robot navigation method of claim 1 or 2.
The invention also discloses a differential-steering wheeled mobile robot comprising a navigation device, wherein the navigation device is the navigation device of claim 3.
The invention also discloses a tracked mobile robot comprising a navigation device, wherein the navigation device is the navigation device of claim 3.
Compared with the prior art, the method and device take the slip characteristics of different ground types into account: the pose and slip coefficients of the robot are estimated simultaneously, the slip effect is incorporated into the navigation algorithm, and both travel time and energy consumption are considered when optimizing the path, which extends the running time of a battery-powered robot.
Drawings
The following detailed description of embodiments of the invention refers to the accompanying drawings in which:
fig. 1 is a block diagram of a mobile robot navigation device.
Detailed Description
To further illustrate the features of the present invention, refer to the following detailed description of the invention and the accompanying drawings. The drawings are for reference and illustration purposes only and are not intended to limit the scope of the present disclosure.
The invention discloses a mobile robot navigation method comprising the following steps:
S101: Initialization. Set the time t = 1, the sampling interval T and the robot width B, and determine the optimal state estimate at time t, X̂_t = [Ê_t, N̂_t, θ̂_t, î_L,t, î_R,t, ξ̂_t]', whose components respectively represent the optimal estimates of the robot's east coordinate, north coordinate, heading, left slip ratio, right slip ratio and sideslip factor at time t. Set the variances Q and R of the process noise and the observation noise, and set the state optimal-estimation error covariance at time t, P_t, a 6-dimensional square matrix. Let the best navigation point at time t, O*_t, be the initial coordinates of the robot;
To determine X̂_1, the following method may be used: the east coordinate, north coordinate and heading of the robot are measured manually and assigned to Ê_1, N̂_1 and θ̂_1 respectively, while î_L,1, î_R,1 and ξ̂_1 may be set to 0. Q and R may be determined from the sensor specifications or from statistics of the sensor output noise, and the state optimal-estimation error covariance P_1 may be set to a diagonal matrix whose diagonal elements are 0.01;
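For illustration, the initialization of S101 can be written down in a few lines. The sketch below is only a plausible setup: the sampling interval, robot width, initial pose and the magnitudes of Q and R are assumed placeholder values, and only the 0.01 diagonal of the error covariance comes from the text above.

```python
import numpy as np

# Illustrative initialization of the 6-state filter of step S101.
# State order: [east E, north N, heading theta, left slip i_L, right slip i_R, sideslip xi].
T = 0.1                                  # sampling interval in seconds (assumed value)
B = 0.5                                  # robot width in meters (assumed value)

E1, N1, theta1 = 12.3, 45.6, 0.0         # manually measured initial pose (assumed numbers)
x_hat = np.array([E1, N1, theta1, 0.0, 0.0, 0.0])   # slip ratios and sideslip factor start at 0

P = np.eye(6) * 0.01                     # error covariance: diagonal matrix with 0.01 entries
Q = np.eye(6) * 1e-4                     # process-noise variance (assumed magnitude)
R = np.diag([0.05, 0.05, 0.01])          # observation-noise variance for [E, N, theta] (assumed)

O_best = x_hat[:2].copy()                # best navigation point O*_1 = initial coordinates
t = 1
```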
S102: Increase t by 1;
S103: Read the robot position data at time t from the positioning system and the robot heading data at time t from the electronic compass to obtain the observation vector at time t, y_t = [E_t^o, N_t^o, θ_t^o]', where E_t^o denotes the detected east coordinate of the robot at time t, N_t^o the detected north coordinate and θ_t^o the detected heading; read the rotation speed data of the robot's left and right wheels at time t from the odometer, w_t = [w_L,t, w_R,t]', where w_L,t denotes the detected rotation speed of the left wheel and w_R,t the detected rotation speed of the right wheel;
S104: Using y_t and w_t, estimate the pose [Ê_t, N̂_t, θ̂_t] and the slip coefficients [î_L,t, î_R,t, ξ̂_t] of the robot at time t as follows:
S1041: State prediction. Obtain the state prediction at time t, X̂_t|t-1 = [Ê_t|t-1, N̂_t|t-1, θ̂_t|t-1, î_L,t|t-1, î_R,t|t-1, ξ̂_t|t-1]', whose components represent the predicted east coordinate, north coordinate, heading, left slip ratio, right slip ratio and sideslip factor of the robot at time t, from the state transition equation X̂_t|t-1 = f(X̂_t-1, w_t); the component equations of f are given only as formula images in the original text. The state prediction error covariance is computed as P_t|t-1 = F·P_t-1·F' + Q, where F is the Jacobian matrix of f with respect to the state, P_t-1 is the state optimal-estimation error covariance of the previous time step, a 6-dimensional square matrix, Q represents the variance of the process noise and F' represents the transpose of F;
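Since the component equations of f survive only as formula images, the sketch below uses a conventional differential-drive model with longitudinal slip ratios and a sideslip offset that is merely consistent with the state variables named above. Treating the wheel rotation speeds as linear speeds, the heading as measured from north, B as the lateral wheel separation, and the slip states as a random walk are all assumptions of this illustration, not the patent's exact equations. The finite-difference Jacobian shows one way to obtain the matrix F used in the covariance prediction.

```python
import numpy as np

def f(x, w_L, w_R, T, B):
    """Conventional skid/slip prediction step; an illustrative stand-in for the
    patent's state transition equation f, whose component equations are not
    reproduced in the text."""
    E, N, theta, i_L, i_R, xi = x
    v_L = w_L * (1.0 - i_L)              # effective left-side speed after longitudinal slip
    v_R = w_R * (1.0 - i_R)              # effective right-side speed after longitudinal slip
    v = 0.5 * (v_L + v_R)                # body forward speed
    omega = (v_R - v_L) / B              # yaw rate from the speed differential across width B
    return np.array([
        E + T * v * np.sin(theta + xi),  # east coordinate (heading measured from north, assumed)
        N + T * v * np.cos(theta + xi),  # north coordinate
        theta + T * omega,               # heading
        i_L, i_R, xi,                    # slip states modelled as slowly varying (random walk)
    ])

def jacobian_F(x, w_L, w_R, T, B, eps=1e-6):
    """Finite-difference Jacobian of f with respect to the state, used as F in
    the covariance prediction P_pred = F P F' + Q."""
    fx = f(x, w_L, w_R, T, B)
    F = np.zeros((6, 6))
    for j in range(6):
        dx = np.zeros(6)
        dx[j] = eps
        F[:, j] = (f(x + dx, w_L, w_R, T, B) - fx) / eps
    return F
```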
S1042: Compute the observation innovation e_t = y_t - H·X̂_t|t-1 and the innovation covariance S_t = H·P_t|t-1·H' + R, where H is the observation matrix. Compute the windowed innovation covariance estimate Ŝ_t as the average of e_j·e_j' over the most recent N_w time steps, where N_w is the width of the sliding window used for the estimate. Then compute the attenuation factor γ from these quantities (its formula is given only as an image in the original text), where α is a real number greater than 1, R represents the variance of the observation noise and H' represents the transpose of H;
S1043: Adjustment. Scale the state prediction error covariance by the attenuation factor, letting P_t|t-1 ← γ·P_t|t-1, and recompute the innovation covariance S_t;
S1044: performing optimal state estimation to obtain optimal state estimation at time tEvaluating value
Figure BDA0002049697480000076
The following were used:
Figure BDA0002049697480000077
Figure BDA0002049697480000078
wherein the content of the first and second substances,
Figure BDA0002049697480000079
and calculating a state optimal estimation error covariance
Figure BDA00020496974800000710
Wherein I6Representing a 6-dimensional unit matrix;
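Steps S1042 to S1044 describe an adaptive (fading-memory) Kalman update: an innovation, a windowed innovation covariance over the last N_w samples, an attenuation factor γ with α > 1, and a standard correction using I_6. The Python sketch below strings these together under stated assumptions: the observation matrix H simply selects [E, N, θ], and γ is taken as a trace ratio of the windowed innovation covariance to its model value, since the patent's own γ formula is only an image.

```python
import numpy as np
from collections import deque

H = np.hstack([np.eye(3), np.zeros((3, 3))])   # observation selects [E, N, theta]; an assumption

def adaptive_update(x_pred, P_pred, y, R, innovations, N_w, alpha=1.2):
    """Illustrative measurement update with an attenuation (fading) factor.
    gamma below is a common strong-tracking choice, not the patent's exact formula."""
    e = y - H @ x_pred                          # observation innovation (S1042)
    innovations.append(e)
    while len(innovations) > N_w:               # keep only the last N_w innovations
        innovations.popleft()
    S_hat = np.mean([np.outer(ei, ei) for ei in innovations], axis=0)   # windowed estimate
    gamma = max(1.0, np.trace(S_hat - alpha * R) / np.trace(H @ P_pred @ H.T))
    P_pred = gamma * P_pred                     # covariance inflation (S1043)
    S = H @ P_pred @ H.T + R                    # recomputed innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # gain
    x_new = x_pred + K @ e                      # optimal state estimate (S1044)
    P_new = (np.eye(6) - K @ H) @ P_pred        # updated error covariance with I_6
    return x_new, P_new

# Usage sketch: keep `innovations = deque()` across time steps and call
# x_hat, P = adaptive_update(x_pred, P_pred, y_t, R, innovations, N_w=10)
```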
S105: From the pose and slip coefficients of the robot at time t obtained in step S104, compute the expected rotation speeds of the left and right wheels at time t+1, w*_L,t+1 and w*_R,t+1, as follows:
First, randomly generate a set of possible rotation speed pairs of the left and right wheels at time t+1, {(w_L,t+1,i, w_R,t+1,i), i = 1, ..., L}, where each set has L elements, w_L,t+1,i represents a randomly generated possible rotation speed of the left wheel at time t+1 and w_R,t+1,i a randomly generated possible rotation speed of the right wheel at time t+1. Then substitute each speed pair, in place of w_t, into the state transition equation f together with X̂_t to obtain the corresponding position prediction, and record the coordinate point O_t+1,i. For each O_t+1,i compute the corresponding objective function J_i = J_i,1 + J_i,2, where k_1 and k_2 are respectively the rotation-resistance and forward-resistance energy consumption coefficients appearing in J_i,1, O_T denotes the coordinates of the end point, and J_i,2 is formed from the Euclidean distance between O_t+1,i and the navigation point at time t together with the Euclidean distance between O_t+1,i and O_T (the explicit expressions of J_i,1 and J_i,2 are given only as formula images in the original text). Find the O_t+1,i for which J_i is minimal; it is the best navigation point at time t+1, O*_t+1, and the corresponding speed pair is taken as w*_L,t+1 and w*_R,t+1.
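Step S105 evaluates each randomly generated speed pair by propagating it through f and scoring the predicted point with J_i = J_i,1 + J_i,2. The sketch below, which reuses the f sketched earlier, shows only one plausible scoring: the energy term combining k_1 and k_2 with the heading change and step length, and the distance term built from the two Euclidean distances, are assumed forms standing in for the formula images.

```python
import numpy as np

def choose_wheel_speeds(x_hat, candidates, O_prev, O_T, T, B, k1=1.0, k2=1.0):
    """Pick the candidate speed pair whose predicted point minimizes J_i = J_i1 + J_i2.
    J_i1 and J_i2 below are illustrative stand-ins for the patent's formula images."""
    best = None
    for w_L, w_R in candidates:                        # the L randomly generated speed pairs
        x_pred = f(x_hat, w_L, w_R, T, B)              # substitute the pair into f with x_hat
        O_i = x_pred[:2]                               # candidate navigation point O_{t+1,i}
        d_theta = abs(x_pred[2] - x_hat[2])            # heading change over one step
        d_step = np.linalg.norm(O_i - x_hat[:2])       # distance travelled over one step
        J1 = k1 * d_theta + k2 * d_step                # rotation- and forward-resistance energy (assumed form)
        J2 = np.linalg.norm(O_i - O_prev) + np.linalg.norm(O_i - O_T)   # the two Euclidean distances
        J = J1 + J2
        if best is None or J < best[0]:
            best = (J, w_L, w_R, O_i)
    _, wL_star, wR_star, O_star = best
    return wL_star, wR_star, O_star                    # expected wheel speeds and best navigation point
```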
Preferably, the method of randomly generating the set of possible left and right wheel rotation speeds at time t+1 used in step S105 is as follows:
S1051: For time steps t = 1 to t = N_u, set the sampling distributions of the left and right wheel speeds to the uniform distribution U(0, v_m), where v_m denotes the upper limit of the wheel rotation speed and N_u is a positive integer greater than 1. For t > N_u, compute μ_L and σ²_L, the mean and variance of the sequence of expected left-wheel rotation speeds from time t-N_u+1 to time t, and μ_R and σ²_R, the mean and variance of the sequence of expected right-wheel rotation speeds from time t-N_u+1 to time t, and set the sampling distributions to the Gaussian distributions N(μ_L, σ²_L) and N(μ_R, σ²_R);
S1052: According to the distributions set in step S1051, randomly generate the set of possible rotation speed pairs of the left and right wheels at time t+1, {(w_L,t+1,i, w_R,t+1,i)}, where each set has L elements.
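The sampling rule of S1051/S1052 switches from a uniform distribution over [0, v_m] during the first N_u steps to per-wheel Gaussians fitted to the last N_u expected speeds. Below is a minimal sketch; the helper name sample_candidates and the final clipping to [0, v_m] are additions of this illustration, not part of the patent text.

```python
import numpy as np

def sample_candidates(t, L, v_m, N_u, left_history, right_history, rng=None):
    """Generate L candidate (w_L, w_R) pairs for time t+1 following S1051/S1052.
    left_history and right_history hold the expected wheel speeds of past time steps."""
    rng = rng or np.random.default_rng()
    if t <= N_u:
        # Early phase: both wheels sampled uniformly on [0, v_m].
        w_L = rng.uniform(0.0, v_m, size=L)
        w_R = rng.uniform(0.0, v_m, size=L)
    else:
        # Later phase: Gaussians fitted to the last N_u expected speeds of each wheel.
        # rng.normal takes a standard deviation, i.e. the square root of the variance in the text.
        recent_L = np.asarray(left_history[-N_u:])
        recent_R = np.asarray(right_history[-N_u:])
        w_L = rng.normal(recent_L.mean(), recent_L.std() + 1e-9, size=L)
        w_R = rng.normal(recent_R.mean(), recent_R.std() + 1e-9, size=L)
    # Clipping keeps the samples in the physically meaningful range (added safeguard).
    return list(zip(np.clip(w_L, 0.0, v_m), np.clip(w_R, 0.0, v_m)))
```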
The invention also discloses a mobile robot navigation device comprising a positioning system, an electronic compass, an odometer and a computer, the positioning system, the electronic compass and the odometer being communicatively connected to the computer;
the positioning system is used to read the robot position data at time t;
the electronic compass is used to read the robot heading data at time t;
the odometer is used to read the rotation speed data of the left and right wheels of the robot at time t;
and the computer is used to process the robot position data, the robot heading data and the left and right wheel rotation speed data using the method of claim 1 or 2 to realize navigation of the mobile robot.
The invention also discloses a computer-readable storage medium on which a plurality of navigation programs are stored, the navigation programs being intended to be called by a processor to execute the steps of the mobile robot navigation method of claim 1 or 2.
The invention also discloses a differential-steering wheeled mobile robot comprising a navigation device, wherein the navigation device is the navigation device of claim 3.
The invention also discloses a tracked mobile robot comprising a navigation device, wherein the navigation device is the navigation device of claim 3.
Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments may be implemented by hardware driven by program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments. The aforementioned storage medium includes any medium that can store program code, such as ROM, RAM, magnetic disks or optical disks.
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; any modifications, equivalent substitutions and improvements made within the spirit and principle of the present invention are intended to fall within its scope.

Claims (6)

1. A mobile robot navigation method, characterized by comprising the following steps:
S101: initialization: setting the time t = 1, the sampling interval T and the robot width B, and determining the optimal state estimate at time t, X̂_t = [Ê_t, N̂_t, θ̂_t, î_L,t, î_R,t, ξ̂_t]', whose components respectively represent the optimal estimates of the east coordinate, north coordinate, heading, left slip ratio, right slip ratio and sideslip factor of the robot at time t; setting the variances Q and R of the process noise and the observation noise; setting the state optimal-estimation error covariance at time t, P_t, which is a 6-dimensional square matrix; and letting the best navigation point at time t, O*_t, be the initial coordinates of the robot;
S102: increasing t by 1;
S103: reading the robot position data at time t from a positioning system and the robot heading data at time t from an electronic compass to obtain the observation vector at time t, y_t = [E_t^o, N_t^o, θ_t^o]', where E_t^o denotes the detected east coordinate of the robot at time t, N_t^o the detected north coordinate and θ_t^o the detected heading; and reading the rotation speed data of the left and right wheels of the robot at time t from an odometer, w_t = [w_L,t, w_R,t]', where w_L,t denotes the detected rotation speed of the left wheel and w_R,t the detected rotation speed of the right wheel;
S104: using y_t and w_t, estimating the pose [Ê_t, N̂_t, θ̂_t] and the slip coefficients [î_L,t, î_R,t, ξ̂_t] of the robot at time t as follows:
S1041: state prediction: obtaining the state prediction at time t, X̂_t|t-1 = [Ê_t|t-1, N̂_t|t-1, θ̂_t|t-1, î_L,t|t-1, î_R,t|t-1, ξ̂_t|t-1]', whose components represent the predicted east coordinate, north coordinate, heading, left slip ratio, right slip ratio and sideslip factor of the robot at time t, from the state transition equation X̂_t|t-1 = f(X̂_t-1, w_t), the component equations of f being given only as formula images in the original text; and computing the state prediction error covariance P_t|t-1 = F·P_t-1·F' + Q, where F is the Jacobian matrix of f with respect to the state, P_t-1 is the state optimal-estimation error covariance of the previous time step, a 6-dimensional square matrix, Q represents the variance of the process noise and F' represents the transpose of F;
S1042: computing the observation innovation e_t = y_t - H·X̂_t|t-1 and the innovation covariance S_t = H·P_t|t-1·H' + R, where H is the observation matrix; computing the windowed innovation covariance estimate Ŝ_t as the average of e_j·e_j' over the most recent N_w time steps, where N_w is the width of the sliding window used for the estimate; and computing the attenuation factor γ from these quantities (its formula is given only as an image in the original text), where α is a real number greater than 1, R represents the variance of the observation noise and H' represents the transpose of H;
S1043: adjustment: letting P_t|t-1 ← γ·P_t|t-1 and recomputing the innovation covariance S_t;
S1044: optimal state estimation: obtaining the optimal state estimate at time t as X̂_t = X̂_t|t-1 + K_t·(y_t - H·X̂_t|t-1), with gain K_t = P_t|t-1·H'·S_t^(-1), and computing the state optimal-estimation error covariance P_t = (I_6 - K_t·H)·P_t|t-1, where I_6 represents the 6-dimensional identity matrix;
S105: computing, from the pose and slip coefficients of the robot at time t obtained in step S104, the expected rotation speeds of the left and right wheels at time t+1, w*_L,t+1 and w*_R,t+1, as follows: first randomly generating a set of possible rotation speed pairs of the left and right wheels at time t+1, {(w_L,t+1,i, w_R,t+1,i), i = 1, ..., L}, each set having L elements, where w_L,t+1,i represents a randomly generated possible rotation speed of the left wheel at time t+1 and w_R,t+1,i a randomly generated possible rotation speed of the right wheel at time t+1; then substituting each speed pair, together with X̂_t, into the state transition equation f to obtain the corresponding position prediction and recording the coordinate point O_t+1,i; calculating for each O_t+1,i the corresponding objective function J_i = J_i,1 + J_i,2, where k_1 and k_2 are respectively the rotation-resistance and forward-resistance energy consumption coefficients appearing in J_i,1, O_T denotes the coordinates of the end point, and J_i,2 is formed from the Euclidean distance between O_t+1,i and the navigation point at time t together with the Euclidean distance between O_t+1,i and O_T (the explicit expressions of J_i,1 and J_i,2 are given only as formula images in the original text); and finding the O_t+1,i for which J_i is minimal, which is the best navigation point at time t+1, O*_t+1, the corresponding speed pair being taken as w*_L,t+1 and w*_R,t+1.
2. The mobile robot navigation method of claim 1, characterized in that the set of possible rotation speeds of the left and right wheels at time t+1 in step S105 is randomly generated as follows:
S1051: for time steps t = 1 to t = N_u, setting the sampling distributions of the left and right wheel speeds to the uniform distribution U(0, v_m), where v_m denotes the upper limit of the wheel rotation speed and N_u is a positive integer greater than 1; for t > N_u, computing μ_L and σ²_L, the mean and variance of the sequence of expected left-wheel rotation speeds from time t-N_u+1 to time t, and μ_R and σ²_R, the mean and variance of the sequence of expected right-wheel rotation speeds from time t-N_u+1 to time t, and setting the sampling distributions to the Gaussian distributions N(μ_L, σ²_L) and N(μ_R, σ²_R);
S1052: according to the distributions set in step S1051, randomly generating the set of possible rotation speed pairs of the left and right wheels at time t+1, {(w_L,t+1,i, w_R,t+1,i)}, where each set has L elements.
3. A mobile robot navigation device, characterized by comprising a positioning system, an electronic compass, an odometer and a computer, the positioning system, the electronic compass and the odometer being communicatively connected to the computer;
the positioning system is used to read the robot position data at time t;
the electronic compass is used to read the robot heading data at time t;
the odometer is used to read the rotation speed data of the left and right wheels of the robot at time t;
and the computer is used to process the robot position data, the robot heading data and the left and right wheel rotation speed data using the method of claim 1 or 2 to realize navigation of the mobile robot.
4. A computer-readable storage medium, characterized in that a plurality of navigation programs are stored on the computer-readable storage medium, the navigation programs being intended to be called by a processor to execute the steps of the mobile robot navigation method of claim 1 or 2.
5. A differential-steering wheeled mobile robot comprising a navigation device, characterized in that the navigation device is the navigation device of claim 3.
6. A tracked mobile robot comprising a navigation device, characterized in that the navigation device is the navigation device of claim 3.
CN201910370245.6A 2019-05-06 2019-05-06 Mobile robot navigation method and device Expired - Fee Related CN110160527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910370245.6A CN110160527B (en) 2019-05-06 2019-05-06 Mobile robot navigation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910370245.6A CN110160527B (en) 2019-05-06 2019-05-06 Mobile robot navigation method and device

Publications (2)

Publication Number Publication Date
CN110160527A CN110160527A (en) 2019-08-23
CN110160527B true CN110160527B (en) 2020-08-28

Family

ID=67633719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910370245.6A Expired - Fee Related CN110160527B (en) 2019-05-06 2019-05-06 Mobile robot navigation method and device

Country Status (1)

Country Link
CN (1) CN110160527B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112219087A (en) * 2019-08-30 2021-01-12 深圳市大疆创新科技有限公司 Pose prediction method, map construction method, movable platform and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298113A (en) * 2014-10-22 2015-01-21 五邑大学 Self-adaptive fuzzy balance controller for two-wheeled robot
CN107901917A (en) * 2017-11-16 2018-04-13 中国科学院合肥物质科学研究院 A kind of automatic driving vehicle path tracking control method based on sliding coupling estimation of trackslipping
CN107991110A (en) * 2017-11-29 2018-05-04 安徽省通信息科技有限公司 A kind of caterpillar type robot slides parameter detection method
CN108020855A (en) * 2017-11-29 2018-05-11 安徽省通信息科技有限公司 The pose and instantaneous center of rotation combined estimation method of a kind of glide steering robot
CN108036789A (en) * 2017-11-29 2018-05-15 安徽省通信息科技有限公司 A kind of field robot reckoning method
CN108051004A (en) * 2017-11-29 2018-05-18 安徽省通信息科技有限公司 Instantaneous center of rotation estimation method for four-wheel robot
CN108098770A (en) * 2017-12-14 2018-06-01 张辉 A kind of Trajectory Tracking Control method of mobile robot

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728620B2 (en) * 2002-02-08 2004-04-27 Visteon Global Technologies, Inc. Predictive control algorithm for an anti-lock braking system for an automotive vehicle
US9975547B2 (en) * 2016-08-03 2018-05-22 Ford Global Technologies, Llc Methods and systems for automatically detecting and responding to dangerous road conditions

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104298113A (en) * 2014-10-22 2015-01-21 五邑大学 Self-adaptive fuzzy balance controller for two-wheeled robot
CN107901917A (en) * 2017-11-16 2018-04-13 中国科学院合肥物质科学研究院 A kind of automatic driving vehicle path tracking control method based on sliding coupling estimation of trackslipping
CN107991110A (en) * 2017-11-29 2018-05-04 安徽省通信息科技有限公司 A kind of caterpillar type robot slides parameter detection method
CN108020855A (en) * 2017-11-29 2018-05-11 安徽省通信息科技有限公司 The pose and instantaneous center of rotation combined estimation method of a kind of glide steering robot
CN108036789A (en) * 2017-11-29 2018-05-15 安徽省通信息科技有限公司 A kind of field robot reckoning method
CN108051004A (en) * 2017-11-29 2018-05-18 安徽省通信息科技有限公司 Instantaneous center of rotation estimation method for four-wheel robot
CN108098770A (en) * 2017-12-14 2018-06-01 张辉 A kind of Trajectory Tracking Control method of mobile robot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Mandow, A., et al., "Approximating Kinematics for Tracked Mobile Robots," International Journal of Robotics Research, Oct. 2005, pp. 867-878. *
Le, A. T., Rye, D. C., Durrant-Whyte, H. F., "Estimation of track-soil for Autonomous Tracked Vehicles," IEEE International, 1997, pp. 1388-1393. *
Ozoemena Anthony Ani, et al., "Modeling and multiobjective optimization of traction performance for autonomous wheeled mobile robot in rough terrain," Journal of Zhejiang University - SCIENCE C, 2013, pp. 11-29. *
Kazuya Yoshida, et al., "Slip, Traction Control, and Navigation of a Lunar Rover," Proceedings of the 7th International Symposium on Artificial Intelligence, Robotics and Automation in Space, May 23, 2003, pp. 1-8. *

Also Published As

Publication number Publication date
CN110160527A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
Wang et al. A simple and parallel algorithm for real-time robot localization by fusing monocular vision and odometry/AHRS sensors
Anderson et al. Towards relative continuous-time SLAM
Veronese et al. A light-weight yet accurate localization system for autonomous cars in large-scale and complex environments
CN109416539A (en) The method and system of the course changing control of the autonomous vehicle of use ratio, integral and differential (PID) controller
CN108387236B (en) Polarized light SLAM method based on extended Kalman filtering
CN112965063B (en) Robot mapping and positioning method
US11506511B2 (en) Method for determining the position of a vehicle
CN110345936B (en) Track data processing method and processing system of motion device
CN106153037B (en) A kind of indoor orientation method of robot, apparatus and system
CN106556395A (en) A kind of air navigation aid of the single camera vision system based on quaternary number
CN110160527B (en) Mobile robot navigation method and device
Lin et al. Fast, robust and accurate posture detection algorithm based on Kalman filter and SSD for AGV
CN114879660A (en) Robot environment sensing method based on target driving
CN111123953A (en) Particle-based mobile robot group under artificial intelligence big data and control method thereof
Heide et al. Performance optimization of autonomous platforms in unstructured outdoor environments using a novel constrained planning approach
CN113155126A (en) Multi-machine cooperative target high-precision positioning system and method based on visual navigation
Pentzer et al. On‐line estimation of power model parameters for skid‐steer robots with applications in mission energy use prediction
CN113554705B (en) Laser radar robust positioning method under changing scene
CN113379915B (en) Driving scene construction method based on point cloud fusion
Reina Methods for wheel slip and sinkage estimation in mobile robots
KR101907611B1 (en) Localization method for autonomous vehicle
Szaj et al. Vehicle localization using laser scanner
CN114721377A (en) Improved Cartogrier based SLAM indoor blind guiding robot control method
Housein et al. Extended Kalman filter sensor fusion in practice for mobile robot localization
Lopes et al. Autonomous exploration of unknown terrain for groups of mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
    Effective date of registration: 20210825
    Address after: 230001 room 1601, Jinhui Sunshine Building, No. 643, Changjiang West Road, Shushan District, Hefei City, Anhui Province
    Patentee after: Anhui Hongyun Network Technology Co.,Ltd.
    Address before: 230000 room 611-180, R & D building, China (Hefei) international intelligent voice Industrial Park, 3333 Xiyou Road, high tech Zone, Hefei, Anhui Province
    Patentee before: Anhui Red Bat Intelligent Technology Co.,Ltd.
CF01 Termination of patent right due to non-payment of annual fee
    Granted publication date: 20200828