CN113671523A - Robot positioning method, device, storage medium and robot - Google Patents

Robot positioning method, device, storage medium and robot

Info

Publication number
CN113671523A
Authority
CN
China
Prior art keywords
timestamp
current timestamp
point cloud
coordinate system
pose information
Prior art date: 2021-08-18
Legal status
Pending
Application number
CN202110951747.5A
Other languages
Chinese (zh)
Inventor
宋亚龙 (Song Yalong)
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date: 2021-08-18
Filing date: 2021-08-18
Publication date: 2021-11-19
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110951747.5A priority Critical patent/CN113671523A/en
Publication of CN113671523A publication Critical patent/CN113671523A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 22/00: Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a robot positioning method, a device, a storage medium and a robot, where the robot includes a foot-type odometer and a single-line laser radar. The method includes the following steps: acquiring pose information collected by the foot-type odometer at the last timestamp and point cloud data collected by the single-line laser radar at the current timestamp; calculating an expected position of the current timestamp in a world coordinate system based on the pose information collected at the last timestamp; mapping the point cloud data collected at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix; and calculating an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position. With this embodiment, the accuracy of real-time positioning of the robot can be improved.

Description

Robot positioning method, device, storage medium and robot
Technical Field
The present disclosure relates to the field of computer technologies, and in particular to a robot positioning method and apparatus, a storage medium, and a robot.
Background
With the development of artificial intelligence technology, research and development of mobile robots has advanced rapidly. When a robot performs tasks, it needs to be positioned in real time in order to keep track of its position and prevent it from getting lost.
When a foot-type robot performs real-time positioning, the usual approach is to rely on the foot-type odometer. In practice, however, the robot is affected by factors such as air resistance and ground friction, so the resulting real-time positioning has large errors and the positioning accuracy is insufficient.
Disclosure of Invention
The embodiments of the application provide a robot positioning method and device, a storage medium, and a robot, in which the expected position of the robot is corrected in real time through multi-sensor fusion, improving the accuracy of real-time robot positioning.
The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a robot positioning method, which is applied to a robot including a foot-type odometer and a single-line laser radar, and the method includes:
acquiring pose information acquired by the foot-type odometer at the last timestamp, and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
calculating an expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp;
mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix;
calculating an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
In a second aspect, an embodiment of the present application provides a robot positioning device applied to a robot, where the robot includes a foot-type odometer and a single-line laser radar, and the device includes:
the data acquisition module is used for acquiring pose information acquired by the foot type odometer at the last timestamp and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
an expected position calculation module, configured to calculate an expected position of the current timestamp in a world coordinate system based on the pose information acquired at the last timestamp;
the transformation matrix calculation module is used for mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix;
an actual position calculation module, configured to calculate an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, embodiments of the present application provide a robot, which may include: a processor and a memory;
wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The beneficial effects brought by the technical scheme provided by some embodiments of the application at least comprise:
In the embodiment of the application, a foot-type odometer and a single-line laser radar are installed on the robot. The pose information collected by the foot-type odometer at the last timestamp and the point cloud data collected by the single-line laser radar at the current timestamp are acquired; the expected position of the current timestamp in the world coordinate system is calculated based on the pose information collected at the last timestamp; the point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system to obtain a transformation matrix; and the actual position of the current timestamp in the world coordinate system is calculated based on the transformation matrix and the expected position. In this way the expected position of the current timestamp in the world coordinate system is corrected; the corrected expected position is the actual position of the current timestamp, so the robot is positioned in real time and the accuracy of positioning the robot is improved.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a robot positioning method according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a robot positioning device according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a robot positioning device according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of a robot positioning method according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a robot positioning device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a desired position calculation module according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a matrix transformation calculation module according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the application, as detailed in the appended claims.
In the description of the present application, it is to be understood that the terms "first", "second" and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art on a case-by-case basis. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
The present application will be described in detail with reference to specific examples.
In a wheel-type robot, the wheel-type odometer mounted on the robot serves as its positioning device, and it can only acquire attitude information such as the X-axis direction, the Y-axis direction and the yaw angle (Yaw) in the coordinate system based on the wheel-type odometer. Compared with a wheel-type odometer, a foot-type odometer can acquire attitude information such as the X-axis direction, Y-axis direction, Z-axis direction, yaw angle (Yaw), roll angle (Roll) and pitch angle (Pitch) in the coordinate system based on the foot-type odometer, so the acquired information is more complete. However, when the foot-type odometer is used, the robot is influenced by factors such as air resistance and ground friction, so the resulting real-time positioning has large errors and the positioning accuracy is insufficient.
Based on this, a robot positioning method in the embodiment of the present application is applied to a robot, which includes a foot-type odometer and a single line laser radar.
The foot-type odometer is positioning equipment of the foot-type robot and can acquire pose information of the foot-type robot.
The single-line laser radar is a laser radar whose laser source emits a single-line beam. It transmits a detection signal (a laser beam) toward objects around the robot, compares the reflected signal (echo) with the transmitted detection signal, and, after appropriate processing, obtains information such as azimuth, height, speed, attitude and even shape.
A robot positioning method provided in an embodiment of the present application will be described in detail below with reference to fig. 1 to 8. The method may be implemented by a computer program and may run on a robot positioning device based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-type application.
Referring to fig. 1, a flow chart of a robot positioning method according to an embodiment of the present disclosure is schematically shown. As shown in fig. 1, the method of the embodiment of the present application may include the following steps:
s101, acquiring pose information acquired by the foot-type odometer at the last timestamp, and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
the time stamp is data generated by using a digital signature technology, and a signed object comprises original file information, signature parameters, signature time and other information. Thus, different timestamps represent different times of day.
The position and pose information of different timestamps can be collected through the foot type odometer, and the point cloud data of different timestamps can be collected through the single-line laser radar. Note that the timestamps of the two correspond to each other.
The pose refers to a position and a posture, the pose information is a specific numerical value of a parameter describing the pose, and as shown in fig. 2, the pose information includes numerical values of an X-axis direction, a Y-axis direction, a Z-axis direction, a Pitch angle (Pitch), a Yaw angle (Yaw), and a Roll angle (Roll) of the robot in a coordinate system based on a foot-type odometer.
Wherein the X-axis represents the abscissa in three-dimensional space, the Y-axis represents the ordinate, and the Z-axis represents the vertical coordinate.
The pitch angle is the direction of rotation around the X axis in three-dimensional space; its value can represent the angle by which the robot, or the robot's head, deflects in the up-down direction, with the robot as the center. The yaw angle is the direction of rotation around the Y axis; its value can represent the angle by which the robot or its head deflects in the left-right direction, with the robot as the center. The roll angle is the direction of rotation around the Z axis; it can represent the angle by which the robot or its head tilts in the left-right direction, with the robot as the center.
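As a concrete illustration of the six quantities just described, a pose record might be represented as follows; the field names and layout are illustrative assumptions, since the patent prescribes the quantities but not a data structure:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Six-degree-of-freedom pose in the foot-type odometer's coordinate
    system (layout assumed for illustration, not taken from the patent)."""
    x: float      # abscissa in three-dimensional space
    y: float      # ordinate in three-dimensional space
    z: float      # vertical coordinate in three-dimensional space
    pitch: float  # rotation about the X axis (up-down deflection)
    yaw: float    # rotation about the Y axis (left-right deflection)
    roll: float   # rotation about the Z axis (left-right tilt)
```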
The point cloud data is echo data generated from the reflected laser when a beam of the single-line laser radar strikes the surface of an object; the echo data includes information such as azimuth, distance, speed, attitude and even shape.
The pose information acquired by the foot-type odometer and the point cloud data acquired by the single-line laser radar both contain positioning information of the foot-type robot.
While the foot-type robot walks, the foot-type odometer collects pose information at every timestamp, and the single-line laser radar collects point cloud data at every timestamp. Taking the current timestamp of the walking robot as the reference, the pose information collected at the timestamp immediately preceding the current one is obtained from the foot-type odometer, and the point cloud data collected at the current timestamp is obtained from the single-line laser radar. Because the two sensors are time-aligned, the position expressed by the pose information collected by the foot-type odometer at a given timestamp is the same as, or close to, the position expressed by the point cloud data collected by the single-line laser radar at that timestamp.
It should be noted that the time interval between the last timestamp and the current timestamp is preset, for example an interval of 1 s.
S102, calculating an expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp;
the expected position of the current timestamp is an ideal position of the robot in a world coordinate system, namely an estimated position, which is calculated based on the previous timestamp under the current timestamp.
Specifically, the expected position of the current timestamp is obtained by adding a position offset to the value corresponding to the position in the pose information collected at the last timestamp. As shown in fig. 3, position A represents the position in the pose information of the last timestamp, and position B represents the expected position of the current timestamp, which can be calculated from position A and the position offset.
The position offset is a fixed displacement of the robot between two timestamps in the ideal state. It can be obtained from the pose information of any timestamp selected during the robot's walking and of the timestamp immediately before it; for example, in the embodiment of the present application it may be calculated from the pose information collected by the foot-type odometer at the last timestamp and at the current timestamp.
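A minimal sketch of this prediction step is shown below. The filing gives its offset formula only as an image, so the plain difference of two positions is an assumption, as are the function names and the (x, y, z) vector layout:

```python
import numpy as np

def position_offset(pose_last: np.ndarray, pose_curr: np.ndarray) -> np.ndarray:
    """Fixed displacement between two adjacent timestamps, assumed here to
    be the plain difference of the two odometer positions."""
    return pose_curr - pose_last

def expected_position(pose_last: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Expected position at the current timestamp: the position from the
    pose collected at the last timestamp plus the position offset."""
    return pose_last + offset
```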
S103, mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix;
An Iterative Closest Point (ICP) algorithm is used to map the point cloud data collected at the current timestamp and the expected position of the current timestamp to the world coordinate system.
The ICP algorithm is a data-registration method that aligns one point set with another by iteratively searching for the closest points.
The point cloud data acquired at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system, so that the point cloud data acquired at the current timestamp and the expected position of the current timestamp are in one-to-one correspondence in the world coordinate system, and the purpose of information fusion is achieved.
After the point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system, the nearest point of each point cloud in the point cloud data is searched in the world coordinate system.
The nearest point is the coordinate point, among those to which the expected position of the current timestamp is mapped in the world coordinate system, that is closest to the coordinate point corresponding to the position in the point cloud data.
After the nearest point of each point cloud in the point cloud data has been found, a transformation matrix between each point cloud in the point cloud data and its nearest point is calculated.
And S104, calculating the actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
The expected position of the current timestamp is an estimate of the robot's position at the current timestamp, not its actual position. The pose information collected by the foot-type odometer at any timestamp carries a larger error than the point cloud data collected by the single-line laser radar at the same timestamp; in other words, the position accuracy of the point cloud data collected by the single-line laser radar is higher than that of the pose information collected by the foot-type odometer and is close to the robot's actual position, while the expected position of the current timestamp, obtained by adding a fixed position offset to the pose information of the last timestamp, deviates more from the actual position. The expected position of the current timestamp is therefore corrected based on the point cloud data collected by the single-line laser radar: the transformation matrix expresses the positional deviation between the positions in the point cloud data and the expected position of the current timestamp, so the expected position of the current timestamp is multiplied by the transformation matrix and the calculated result is used to correct it, yielding the actual position of the current timestamp in the world coordinate system.
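To make the correction step concrete, a minimal sketch is shown below; the 4x4 homogeneous layout of the transformation matrix is our assumption about how "multiplying the expected position by the transformation matrix" is realized:

```python
import numpy as np

def correct_position(transform: np.ndarray, expected: np.ndarray) -> np.ndarray:
    """Apply the registration transform (assumed 4x4 homogeneous: rotation
    plus translation) to an (x, y, z) expected position, yielding the
    actual position in the world coordinate system."""
    homogeneous = np.append(expected, 1.0)   # (x, y, z, 1)
    return (transform @ homogeneous)[:3]     # back to (x, y, z)
```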
In the embodiment of the application, the pose information collected by the foot-type odometer at the last timestamp is acquired, along with the point cloud data collected by the single-line laser radar at the current timestamp; the expected position of the current timestamp in the world coordinate system is calculated based on the pose information collected at the last timestamp; the point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system to obtain a transformation matrix; and the actual position of the current timestamp in the world coordinate system is calculated based on the transformation matrix and the expected position. In this way the expected position of the current timestamp in the world coordinate system is corrected; the corrected expected position is the actual position of the current timestamp, so the robot is positioned in real time and the accuracy of positioning the robot is improved.
Please refer to fig. 4, which is a flowchart illustrating a robot positioning method according to an embodiment of the present disclosure. As shown in fig. 4, the robot positioning method may include the steps of:
s201, acquiring pose information acquired by the foot-type odometer at the last timestamp, and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
The process of acquiring the pose information collected by the foot-type odometer at the last timestamp and the point cloud data collected by the single-line laser radar at the current timestamp is the same as in S101.
Because the foot-type odometer and the single-line laser radar are both devices capable of positioning the robot, at the same timestamp the position in the pose information collected by the foot-type odometer and the position in the point cloud data collected by the single-line laser radar express similar locations, and either can be used as data for judging the robot's current position.
S202, acquiring pose information collected by the foot-type odometer at the current timestamp;
A position offset of the foot-type odometer between the last timestamp and the current timestamp is then obtained based on the pose information collected at the last timestamp and the pose information collected at the current timestamp.
the current timestamp can be the first timestamp, and can also be other timestamps selected at will by the robot in the walking process, and the current timestamp of the foot type odometer in the pose information collected by the current timestamp and the current timestamp of the point cloud data of the current timestamp collected by the single-line laser radar are obtained to be the same timestamp.
The position offset may be obtained based on pose information of two adjacent time stamps. For example, when the pose information acquired at the last timestamp is T1 and the pose information acquired at the current timestamp is T2, the calculation formula of the position offset between the last timestamp and the current timestamp is as follows:
[The formula appears in the original filing only as an equation image (BDA0003217221490000071); from the surrounding definitions of T1, T2 and Tx it is presumably Tx = T2 - T1.]
S203, calculating an expected position of the current timestamp in a world coordinate system based on the pose information acquired at the last timestamp and the position offset;
specifically, the expected position of the current timestamp in the world coordinate system is obtained by adding the position offset between the previous timestamp and the current timestamp to the pose information acquired by the previous timestamp.
The expected position of the current timestamp is an estimate of the robot's position at the current timestamp in the ideal state; it is neither the position in the pose information collected by the foot-type odometer at the current timestamp nor the robot's actual position at the current timestamp.
S204, mapping the point cloud data acquired at the current time stamp and the expected position of the current time stamp to the world coordinate system; searching the world coordinate system for the nearest adjacent point of each point cloud in the point cloud data;
The point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system to generate a grid map. The coordinate system adopted by the grid map is the world coordinate system.
The grid map, also called a raster image, is an image discretized in both space and brightness. The grid map obtained by mapping the point cloud data collected at the current timestamp and the expected position of the current timestamp to the world coordinate system contains the specific coordinates, in the world coordinate system, of the positions in the point cloud data and of the expected position of the current timestamp.
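As a small illustration of how a world-coordinate position might be discretized into a grid-map cell (the patent does not specify the grid layout; the origin and resolution below are assumed parameters):

```python
import numpy as np

def world_to_cell(point_xy: np.ndarray, origin_xy: np.ndarray,
                  resolution: float) -> tuple:
    """Map a world-coordinate (x, y) position onto a grid-map cell index.
    `resolution` is the assumed side length of one cell in meters."""
    col, row = np.floor((point_xy - origin_xy) / resolution).astype(int)
    return int(row), int(col)
```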
The specific mapping process of the grid map first performs loop detection on the point cloud data collected at the current timestamp and the expected position of the current timestamp. Loop detection can be performed in various ways, such as Simultaneous Localization and Mapping (SLAM) algorithms based on graph optimization, bag-of-words models, similarity calculation, deep learning, and the like. When a loop is detected, the motion trajectory generated by the point cloud data and the pose information in the world coordinate system may be optimized using a Singular Value Decomposition (SVD) algorithm or a nonlinear optimization method to obtain an optimized motion trajectory; the optimization process using the SVD algorithm is described below as an example.
For two point sets (for example, the point cloud data collected by the single-line laser radar and the expected position of the current timestamp):

X = {x_1, x_2, x_3, ..., x_n}
P = {p_1, p_2, p_3, ..., p_n}

find a Euclidean transformation R, t such that x_i = R p_i + t for every i; that is, solve for the R and t that minimize the error function:

J = (1/2) Σ_{i=1}^{n} ||x_i - (R p_i + t)||^2

First, the centroid positions of the two point sets are calculated:

u_x = (1/n) Σ_{i=1}^{n} x_i,    u_p = (1/n) Σ_{i=1}^{n} p_i

Then the de-centroided coordinates of each point are calculated:

X' = {x_i - u_x} = {x_i'},    P' = {p_i - u_p} = {p_i'}

Expanding the objective function:

J = (1/2) Σ_{i=1}^{n} ( ||x_i - u_x - R(p_i - u_p)||^2 + ||u_x - R u_p - t||^2 + 2 (x_i - u_x - R(p_i - u_p))^T (u_x - R u_p - t) )

where the cross term 2 (x_i - u_x - R(p_i - u_p))^T (u_x - R u_p - t) sums to 0 over i, so the above equation can be simplified to:

J = (1/2) Σ_{i=1}^{n} ( ||x_i - u_x - R(p_i - u_p)||^2 + ||u_x - R u_p - t||^2 )

Observing this objective function, the first term ||x_i - u_x - R(p_i - u_p)||^2 is related only to the rotation matrix R, while the second term ||u_x - R u_p - t||^2 is related to both R and t; once the required R is known, t can be chosen so that the second term is 0, which yields the translation matrix t. The actual optimization objective is therefore:

R* = argmin_R (1/2) Σ_{i=1}^{n} ||x_i' - R p_i'||^2

which can be expanded as:

(1/2) Σ_{i=1}^{n} ( x_i'^T x_i' + p_i'^T R^T R p_i' - 2 x_i'^T R p_i' )

The first two terms do not depend on R, so minimizing J is equivalent to maximizing Σ_{i=1}^{n} x_i'^T R p_i'. Let

W = Σ_{i=1}^{n} x_i' p_i'^T

After SVD decomposition,

W = U S V^T

where S is a diagonal matrix formed by the singular values, with the diagonal elements arranged from large to small, and U and V are orthogonal matrices. When W has full rank, the rotation matrix R is:

R = U V^T

and the translation matrix t is then:

t = u_x - R u_p
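To make this construction concrete, the following numpy sketch implements the alignment exactly as derived above; the function name, the (n, 3) array layout and the final reflection guard are our additions, not the filing's:

```python
import numpy as np

def svd_align(X: np.ndarray, P: np.ndarray):
    """Least-squares rigid alignment of two matched (n, 3) point sets:
    W = sum_i x_i' p_i'^T = U S V^T, R = U V^T, t = u_x - R u_p."""
    u_x = X.mean(axis=0)            # centroid of X
    u_p = P.mean(axis=0)            # centroid of P
    Xc = X - u_x                    # de-centroided coordinates x_i'
    Pc = P - u_p                    # de-centroided coordinates p_i'
    W = Xc.T @ Pc                   # W = sum_i x_i' p_i'^T
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt                      # rotation, valid when W has full rank
    if np.linalg.det(R) < 0:        # practical guard against a reflection
        U[:, -1] = -U[:, -1]
        R = U @ Vt
    t = u_x - R @ u_p               # translation
    return R, t
```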
A grid map is generated based on the optimized motion trajectory.
The nearest point of each point cloud in the point cloud data is then searched for based on the generated grid map.
Compared with the pose information collected by the foot-type odometer, the point cloud data collected by the single-line laser radar is more accurate. When the point cloud data collected at the current timestamp is mapped to the world coordinate system, there is therefore a certain positional deviation between it and the expected position of the current timestamp, and the two do not fall on the same coordinate points of the world coordinate system. The nearest point of each point cloud in the point cloud data collected by the single-line laser radar is therefore searched for, so that the expected position of the current timestamp in the world coordinate system can subsequently be corrected based on those point clouds.
Specifically, generating the grid map maps the point cloud data collected at the current timestamp to the world coordinate system, obtaining, from the position of each point cloud in the point cloud data, the coordinate point corresponding to that point cloud in the world coordinate system; it likewise maps the expected position of the current timestamp to the world coordinate system, obtaining the coordinate point corresponding to the expected position. For each point cloud, the coordinate point of the expected position of the current timestamp that is closest to that point cloud's coordinate point is searched for in the world coordinate system and determined to be the nearest point. A transformation matrix is then calculated between the coordinate point corresponding to each point cloud in the point cloud data and that point cloud's nearest point.
S205, calculating a transformation matrix between each point cloud in the point cloud data and the nearest adjacent point;
The transformation matrix between each point cloud in the point cloud data and its nearest point may be calculated using a Nearest K Search algorithm. Nearest K Search is an algorithm in the open-source Point Cloud Library (PCL).
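The patent names PCL's search but gives no code; as a stand-in, the K = 1 nearest-neighbor query can be sketched with a SciPy k-d tree (the library substitution and function name are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_points(cloud: np.ndarray, references: np.ndarray) -> np.ndarray:
    """For each point in `cloud`, return its nearest point among
    `references` (both are (n, 3) arrays in world coordinates)."""
    tree = cKDTree(references)
    _, indices = tree.query(cloud, k=1)
    return references[indices]
```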
The transformation matrix represents the positional deviation between each point cloud in the point cloud data and its nearest point. Based on the transformation matrix, the degree of coordinate offset between each point cloud and its nearest point can be known, which gives the error of the pose information collected by the foot-type odometer.
And S206, multiplying the expected position by the transformation matrix to obtain the actual position of the current timestamp in the world coordinate system.
The expected position of the current timestamp is multiplied by the transformation matrix, and the calculated result is used to correct the expected position, yielding the actual position of the current timestamp in the world coordinate system and completing the real-time positioning of the robot.
Specifically, at the next timestamp, the pose information collected by the foot-type odometer at the current timestamp is obtained together with the point cloud data collected by the single-line laser radar at the next timestamp; the expected position of the next timestamp in the world coordinate system is calculated based on the pose information collected at the current timestamp and the position offset, and that expected position is then corrected to obtain the actual position of the next timestamp in the world coordinate system. By correcting, at every timestamp, the expected position of the foot-type robot in the world coordinate system in this way, real-time positioning of the robot during walking is completed.
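Chaining the sketches above, one per-timestamp correction step could read as follows; the pipeline is a schematic reading of the method (it reuses position_offset, expected_position, nearest_points and svd_align from the earlier sketches), not the filing's implementation:

```python
import numpy as np

def positioning_step(pose_last: np.ndarray, pose_curr: np.ndarray,
                     scan_world: np.ndarray, map_points: np.ndarray) -> np.ndarray:
    """One correction per timestamp: predict, register, correct.
    Poses are (x, y, z) 3-vectors; scan_world is the lidar cloud already
    mapped to world coordinates; map_points are grid-map reference points."""
    offset = position_offset(pose_last, pose_curr)
    expected = expected_position(pose_last, offset)
    matched = nearest_points(scan_world, map_points)
    R, t = svd_align(matched, scan_world)   # scan -> map transform
    return R @ expected + t                 # corrected actual position
```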
In the embodiment of the application, the pose information collected by the foot-type odometer at the last timestamp is acquired, along with the point cloud data collected by the single-line laser radar at the current timestamp and the pose information collected by the foot-type odometer at the current timestamp; the expected position of the current timestamp in the world coordinate system is calculated based on the pose information collected at the last timestamp and the position offset; the point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system; the nearest point of each point cloud in the point cloud data is searched for in the world coordinate system; a transformation matrix between each point cloud in the point cloud data and its nearest point is calculated; and the expected position is multiplied by the transformation matrix to obtain the actual position of the current timestamp in the world coordinate system. In this way the expected position of the current timestamp in the world coordinate system is corrected; the corrected expected position is the actual position of the current timestamp, so the robot is positioned in real time and the accuracy of positioning the robot is improved.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Please refer to fig. 5, which shows a schematic structural diagram of a robot positioning device according to an exemplary embodiment of the present application. The robot positioning device may be implemented as all or part of a terminal, in software, hardware, or a combination of both. The device 1 comprises a data acquisition module 11, a desired position calculation module 12, a transformation matrix calculation module 13 and an actual position calculation module 14.
The data acquisition module 11 is configured to acquire pose information acquired by the foot-type odometer at a last timestamp and acquire point cloud data acquired by the single-line laser radar at a current timestamp;
an expected position calculation module 12, configured to calculate an expected position of the current timestamp in the world coordinate system based on the pose information acquired at the last timestamp;
a transformation matrix calculation module 13, configured to map the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system, so as to obtain a transformation matrix;
an actual position calculation module 14, configured to calculate an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
Optionally, as shown in fig. 6, the expected position calculating module 12 includes:
a current pose acquisition unit 121, configured to acquire pose information acquired by the foot-type odometer at the current timestamp;
an offset calculating unit 122, configured to obtain a position offset of the foot odometer between the last timestamp and the current timestamp based on the pose information acquired at the last timestamp and the pose information acquired at the current timestamp;
and an expected position calculating unit 123, configured to calculate an expected position of the current timestamp in the world coordinate system based on the pose information acquired at the last timestamp and the position offset.
Optionally, the offset calculating unit is specifically configured to compute the position offset as:
[Shown in the original filing only as an equation image (BDA0003217221490000121); presumably Tx = T2 - T1.]
wherein the T1 is the pose information acquired at the last timestamp;
the T2 is the pose information acquired at the current timestamp;
the Tx is a position offset between the last timestamp and the current timestamp.
Optionally, as shown in fig. 7, the transformation matrix calculating module 13 includes:
a data mapping unit 131, configured to map the point cloud data acquired at the current timestamp and the expected location of the current timestamp to the world coordinate system;
a nearest-neighbor searching unit 132 configured to search nearest neighbors of each point cloud in the point cloud data on the world coordinate system;
a transformation matrix calculation unit 133, configured to calculate a transformation matrix between each point cloud in the point cloud data and the nearest neighboring point.
Optionally, the transformation matrix calculating unit is specifically configured to calculate a transformation matrix between each point cloud in the point cloud data and the nearest neighboring point by using a Nearest K Search algorithm.
Optionally, the actual position calculating module is specifically configured to multiply the expected position by the transformation matrix to obtain an actual position of the current timestamp in the world coordinate system.
In the embodiment of the application, the pose information collected by the foot-type odometer at the last timestamp is acquired, along with the point cloud data collected by the single-line laser radar at the current timestamp and the pose information collected by the foot-type odometer at the current timestamp; the expected position of the current timestamp in the world coordinate system is calculated based on the pose information collected at the last timestamp and the position offset; the point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system; the nearest point of each point cloud in the point cloud data is searched for in the world coordinate system; a transformation matrix between each point cloud in the point cloud data and its nearest point is calculated; and the expected position is multiplied by the transformation matrix to obtain the actual position of the current timestamp in the world coordinate system. In this way the expected position of the current timestamp in the world coordinate system is corrected; the corrected expected position is the actual position of the current timestamp, so the robot is positioned in real time and the accuracy of positioning the robot is improved.
It should be noted that when the robot positioning apparatus provided in the foregoing embodiment executes the robot positioning method, the division into the above functional modules is only an example; in practical applications, the above functions may be distributed to different functional modules as needed, i.e., the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the robot positioning device provided by the above embodiment and the robot positioning method embodiment belong to the same concept; details of the implementation process are given in the method embodiments and are not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, where the instructions are suitable for being loaded by a processor and executing the method steps in the embodiments shown in fig. 1 to 4, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 1 to 4, which are not described herein again.
The present application further provides a robot, where at least one instruction is stored in the robot, and the at least one instruction is loaded by the processor and executes the method steps in the embodiments shown in fig. 1 to 4, where a specific execution process may refer to specific descriptions of the embodiments shown in fig. 1 to 4, and is not described herein again.
Please refer to fig. 8, which provides a schematic structural diagram of a robot according to an embodiment of the present application. As shown in fig. 8, the robot 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, memory 1005, at least one communication bus 1002.
Wherein a communication bus 1002 is used to enable connective communication between these components.
The user interface 1003 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 1001 may include one or more processing cores. Using various interfaces and lines to connect the components of the entire robot 1000, the processor 1001 performs the various functions of the robot 1000 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 1005 and by invoking data stored in the memory 1005. Optionally, the processor 1001 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 1001 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; and the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 1001 and may instead be implemented by a single chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 may be used to store instructions, programs, code, code sets or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like; the data storage area may store the data involved in the above method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 8, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a robot positioning application program.
In the robot 1000 shown in fig. 8, the user interface 1003 is mainly used as an interface for providing input for a user and acquiring the data input by the user, while the processor 1001 may be configured to invoke the robot positioning application stored in the memory 1005 and specifically perform the following operations:
acquiring pose information acquired by the foot-type odometer at the last timestamp, and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
calculating an expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp;
mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix;
calculating an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
In one embodiment, when the processor 1001 calculates the expected position of the current timestamp in the world coordinate system based on the pose information acquired at the last timestamp, specifically:
acquiring pose information acquired by the foot-type odometer at the current timestamp;
obtaining a position offset of the foot type odometer between the last timestamp and the current timestamp based on the pose information acquired at the last timestamp and the pose information acquired at the current timestamp;
and calculating the expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp and the position offset.
In one embodiment, when the processor 1001 obtains the position offset of the foot odometer between the last timestamp and the current timestamp based on the pose information acquired at the last timestamp and the pose information acquired at the current timestamp, specifically:
amount of positional deviation
Figure BDA0003217221490000161
Wherein the T1 is the pose information acquired at the last timestamp;
the T2 is the pose information acquired at the current timestamp;
the Tx is a position offset between the last timestamp and the current timestamp.
In one embodiment, the processor 1001 specifically performs the following operations when mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix:
mapping the point cloud data acquired at the current timestamp and the expected location of the current timestamp to the world coordinate system;
searching the world coordinate system for the nearest adjacent point of each point cloud in the point cloud data;
and calculating a transformation matrix between each point cloud in the point cloud data and the nearest adjacent point.
In one embodiment, the processor 1001 performs the following operations when performing the calculation of the transformation matrix between each point cloud in the point cloud data and the nearest neighboring point:
and calculating a transformation matrix between each point cloud and the nearest adjacent point in the point cloud data by using a Nearest K Search algorithm.
In one embodiment, the processor 1001, when calculating the actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position, specifically performs the following operations:
and multiplying the expected position by the transformation matrix to obtain the actual position of the current timestamp in the world coordinate system.
In the embodiment of the application, the pose information collected by the foot-type odometer at the last timestamp is acquired, along with the point cloud data collected by the single-line laser radar at the current timestamp and the pose information collected by the foot-type odometer at the current timestamp; the expected position of the current timestamp in the world coordinate system is calculated based on the pose information collected at the last timestamp and the position offset; the point cloud data collected at the current timestamp and the expected position of the current timestamp are mapped to the world coordinate system; the nearest point of each point cloud in the point cloud data is searched for in the world coordinate system; a transformation matrix between each point cloud in the point cloud data and its nearest point is calculated; and the expected position is multiplied by the transformation matrix to obtain the actual position of the current timestamp in the world coordinate system. In this way the expected position of the current timestamp in the world coordinate system is corrected; the corrected expected position is the actual position of the current timestamp, so the robot is positioned in real time and the accuracy of positioning the robot is improved.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a read-only memory or a random access memory.
The above disclosure describes only preferred embodiments of the present application and is not to be construed as limiting its scope; the present application is not limited thereto, and all equivalent variations and modifications fall within the scope of the present application.

Claims (10)

1. A robot positioning method, applied to a robot comprising a foot-type odometer and a single-line laser radar, the method comprising the following steps:
acquiring pose information acquired by the foot-type odometer at the last timestamp, and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
calculating an expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp;
mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix;
calculating an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
2. The method of claim 1, wherein the calculating the expected position of the current timestamp in a world coordinate system based on the pose information acquired at the last timestamp comprises:
acquiring pose information acquired by the foot-type odometer at the current timestamp;
obtaining a position offset of the foot type odometer between the last timestamp and the current timestamp based on the pose information acquired at the last timestamp and the pose information acquired at the current timestamp;
and calculating the expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp and the position offset.
3. The method of claim 2, wherein the deriving a position offset of the foot odometer between the last timestamp and the current timestamp based on the pose information acquired at the last timestamp and the pose information acquired at the current timestamp comprises:
[Shown in the original filing only as an equation image (FDA0003217221480000011); presumably Tx = T2 - T1.]
wherein the T1 is the pose information acquired at the last timestamp, the T2 is the pose information acquired at the current timestamp, and the Tx is the position offset between the last timestamp and the current timestamp.
4. The method of claim 1, wherein the mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix comprises:
mapping the point cloud data acquired at the current timestamp and the expected location of the current timestamp to the world coordinate system;
searching the world coordinate system for the nearest adjacent point of each point cloud in the point cloud data;
and calculating a transformation matrix between each point cloud in the point cloud data and the nearest adjacent point.
5. The method of claim 4, wherein said computing a transformation matrix between each point cloud in the point cloud data and the nearest neighbor comprises:
and calculating a transformation matrix between each point cloud and the nearest adjacent point in the point cloud data by using a Nearest K Search algorithm.
6. The method of claim 1, wherein the calculating an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position comprises:
and multiplying the expected position by the transformation matrix to obtain the actual position of the current timestamp in the world coordinate system.
7. A robot positioning device applied to a robot including a foot odometer and a single line lidar, the device comprising:
the data acquisition module is used for acquiring pose information acquired by the foot type odometer at the last timestamp and acquiring point cloud data acquired by the single-line laser radar at the current timestamp;
an expected position calculation module, configured to calculate an expected position of the current timestamp in a world coordinate system based on the pose information acquired at the last timestamp;
the transformation matrix calculation module is used for mapping the point cloud data acquired at the current timestamp and the expected position of the current timestamp to the world coordinate system to obtain a transformation matrix;
an actual position calculation module, configured to calculate an actual position of the current timestamp in the world coordinate system based on the transformation matrix and the expected position.
8. The apparatus of claim 7, wherein the expected position calculation module comprises:
the current pose acquisition unit is used for acquiring pose information acquired by the foot type odometer at the current timestamp;
an offset calculating unit, configured to obtain a position offset of the foot-type odometer between the last timestamp and the current timestamp based on the pose information acquired at the last timestamp and the pose information acquired at the current timestamp;
and the expected position calculation unit is used for calculating the expected position of the current timestamp in a world coordinate system based on the pose information acquired by the last timestamp and the position offset.
9. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor to perform the method steps according to any one of claims 1 to 6.
10. A robot, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor to perform the method steps of any one of claims 1 to 6.
CN202110951747.5A 2021-08-18 2021-08-18 Robot positioning method, device, storage medium and robot Pending CN113671523A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110951747.5A CN113671523A (en) 2021-08-18 2021-08-18 Robot positioning method, device, storage medium and robot

Publications (1)

Publication Number Publication Date
CN113671523A (en) 2021-11-19

Family

ID=78543818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110951747.5A Pending CN113671523A (en) 2021-08-18 2021-08-18 Robot positioning method, device, storage medium and robot

Country Status (1)

Country Link
CN (1) CN113671523A (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107340522A (en) * 2017-07-10 2017-11-10 浙江国自机器人技术有限公司 Laser radar positioning method, apparatus and system
CN108332758A (en) * 2018-01-26 2018-07-27 上海思岚科技有限公司 Corridor recognition method and device for a mobile robot
WO2020168742A1 (en) * 2019-02-20 2020-08-27 苏州风图智能科技有限公司 Method and device for vehicle body positioning
WO2021097983A1 (en) * 2019-11-21 2021-05-27 广州文远知行科技有限公司 Positioning method, apparatus, and device, and storage medium
CN110927740A (en) * 2019-12-06 2020-03-27 合肥科大智能机器人技术有限公司 Mobile robot positioning method
CN111238496A (en) * 2020-01-14 2020-06-05 深圳市锐曼智能装备有限公司 Robot posture confirming method, device, computer equipment and storage medium
CN111398984A (en) * 2020-03-22 2020-07-10 华南理工大学 Self-adaptive laser radar point cloud correction and positioning method based on sweeping robot
CN111766603A (en) * 2020-06-27 2020-10-13 长沙理工大学 Mobile robot laser SLAM method, system, medium and equipment based on AprilTag code vision auxiliary positioning
CN111812613A (en) * 2020-08-06 2020-10-23 常州市贝叶斯智能科技有限公司 Mobile robot positioning monitoring method, device, equipment and medium
CN112484738A (en) * 2020-11-24 2021-03-12 深圳市优必选科技股份有限公司 Robot mapping method and device, computer readable storage medium and robot
CN112462769A (en) * 2020-11-25 2021-03-09 深圳市优必选科技股份有限公司 Robot positioning method and device, computer readable storage medium and robot
CN113219440A (en) * 2021-04-22 2021-08-06 电子科技大学 Laser radar point cloud data correction method based on wheel type odometer

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PANG FAN: "Research on LiDAR-Inertial Coupled Odometry and Mapping Methods", China Master's Theses Full-text Database, Information Science and Technology, no. 08, 15 August 2020 (2020-08-15) *
WANG XIAORONG; BAI GUOZHEN; LANG JUN: "Real-time 3D Scene Reconstruction Based on Monocular SLAM", Agricultural Equipment & Vehicle Engineering, no. 10, 10 October 2018 (2018-10-10) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116359938A (en) * 2023-05-31 2023-06-30 未来机器人(深圳)有限公司 Object detection method, device and carrying device
CN116359938B (en) * 2023-05-31 2023-08-25 未来机器人(深圳)有限公司 Object detection method, device and carrying device
CN116688658A (en) * 2023-08-01 2023-09-05 苏州协昌环保科技股份有限公司 Bag-leakage positioning method, device and medium for bag type dust collector
CN116688658B (en) * 2023-08-01 2023-10-13 苏州协昌环保科技股份有限公司 Bag-leakage positioning method, device and medium for bag type dust collector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination