CN115201794A - Robot positioning error evaluation method and device based on two-dimensional laser

Info

Publication number: CN115201794A
Application number: CN202210757232.6A
Authority: CN (China)
Prior art keywords: point cloud, point, laser, positioning, robot
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 庄文秘, 周航, 任强
Current assignee: Suzhou Agv Robot Co ltd
Original assignee: Suzhou Agv Robot Co ltd
Priority date: 2022-06-29
Filing date: 2022-06-29
Publication date: 2022-10-18
Application filed by Suzhou Agv Robot Co ltd


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00: Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48: Details of systems according to group G01S 17/00
    • G01S 7/497: Means for monitoring or calibrating
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds


Abstract

The invention discloses a robot positioning error evaluation method and device based on two-dimensional laser. The method comprises: establishing a data storage container, and calculating an inter-frame transformation matrix for converting the current-frame point cloud into the previous-frame point cloud according to the global pose of the robot at the current frame time and at the adjacent previous frame time; converting the current-frame point cloud according to the inter-frame transformation matrix; and calculating the degree of coincidence between the converted point cloud and the previous-frame point cloud, and evaluating the positioning error of the robot according to that degree of coincidence. The invention can evaluate the positioning capability of the robot in real time during operation using only the point cloud data of adjacent frames, thereby verifying the accuracy of the robot positioning result. Meanwhile, the invention can output multi-level alarm information in time in problem scenarios such as inaccurate positioning, lost positioning, and pose drift caused by interference from various factors.

Description

Robot positioning error evaluation method and device based on two-dimensional laser
Technical Field
The embodiment of the invention relates to a robot positioning technology, in particular to a robot positioning error evaluation method and device based on two-dimensional laser.
Background
With the development of automation technology, robots have become increasingly common in daily life and work and undertake a variety of production and living tasks. While a task is being executed, the positioning accuracy of the robot determines, to a certain extent, the quality with which the task is performed; to guarantee that quality, the positioning error of the robot needs to be evaluated.
In the prior art, an additional positioning identification device is usually mounted on the robot to evaluate its positioning accuracy. This approach is costly, the evaluation result is strongly affected by the external environment, and its accuracy is low.
Disclosure of Invention
The invention provides a robot positioning error evaluation method and device based on two-dimensional laser, which evaluate the positioning capability of a robot in real time during operation using only the point cloud data of adjacent frames, thereby verifying the accuracy of the robot positioning result.
In a first aspect, an embodiment of the present invention provides a method for evaluating a positioning error of a robot based on a two-dimensional laser, including:
s1, establishing a data storage container, wherein two continuous frames of point cloud data and the global poses of the robot at the two frame times are stored in the data storage container;
s2, calculating an inter-frame conversion matrix for converting the current frame point cloud into the previous frame point cloud according to the global pose of the robot at the current frame moment and the global pose of the robot at the previous frame moment adjacent to the current frame;
s3, converting the current frame point cloud according to the interframe conversion matrix to obtain a converted point cloud;
and S4, calculating the coincidence degree between the converted point cloud and the point cloud of the previous frame, and evaluating the positioning error of the robot according to the coincidence degree.
Optionally, the S4 includes:
calculating the positioning translation error and the positioning rotation error of each laser point in the converted point cloud according to the converted point cloud and the previous frame of point cloud;
and dividing the scanning data into multiple regions, selecting a fixed number of target laser points in each region according to a set threshold, wherein each target laser point has the same weight, and evaluating the positioning error of the robot by calculating the variance of the positioning translation errors and positioning rotation errors of the target laser points.
Optionally, calculating a positioning translation error of each laser point in the converted point cloud includes:
taking each laser point in the converted point cloud as a target, searching for the two closest points satisfying a threshold condition in the previous-frame point cloud, fitting them to obtain a closest straight line, and calculating a target distance from the laser point to the corresponding closest straight line;
and calculating the normal vector information of each laser point in the converted point cloud, projecting and decomposing the target distance along the direction information of each effective laser point in the converted point cloud (a point for which a fitted-line association with the previous-frame point cloud can be established) to obtain the components of each laser point in the X and Y directions, and taking the components of each laser point in the X and Y directions as the positioning translation error of each laser point in the converted point cloud.
Optionally, calculating normal vector information of each laser point in the converted point cloud includes:
searching at least two nearest neighbor points corresponding to each laser point in the converted point cloud, and fitting a circle where each laser point is located according to a mode that a unique circle is determined by non-collinear three points in a plane;
and determining the normal vector information of each laser point according to the normal vector definition of any point on the circle in the space geometry and the coordinates of the three points related to the fitting circle.
Optionally, calculating a positioning rotation error of each laser point in the converted point cloud, including:
and calculating the positioning rotation error of each laser point in the converted point cloud from the distance between each laser point and the corresponding closest straight line, based on triangle geometry and the small-quantity approximation principle.
In a second aspect, an embodiment of the present invention further provides a two-dimensional laser-based robot positioning error evaluation apparatus, including:
the storage container establishing module is used for establishing a data storage container, wherein the data storage container stores two continuous frames of point cloud data and the global poses of the robot at the two frame times;
the inter-frame conversion matrix calculation module is used for calculating an inter-frame conversion matrix for converting the current frame point cloud into the previous frame point cloud according to the global pose of the robot at the current frame time and the global pose of the robot at the previous frame time adjacent to the current frame;
the point cloud conversion module is used for converting the current frame point cloud according to the inter-frame conversion matrix to obtain a converted point cloud;
and the positioning error evaluation module is used for calculating the coincidence degree between the converted point cloud and the point cloud of the previous frame and evaluating the positioning error of the robot according to the coincidence degree.
Optionally, the positioning error evaluation module is specifically configured to:
calculating the positioning translation error and the positioning rotation error of each laser point in the converted point cloud according to the converted point cloud and the previous frame of point cloud;
and dividing the scanning data into multiple regions, selecting a fixed number of target laser points in each region according to a set threshold, wherein each target laser point has the same weight, and evaluating the positioning error of the robot by calculating the variance of the positioning translation errors and positioning rotation errors of the target laser points.
According to the invention, the inter-frame transformation matrix between two continuous frames of point clouds is calculated, the current-frame point cloud is converted according to this matrix, and the positioning error of the robot is evaluated in real time by calculating the degree of coincidence between the converted point cloud and the previous-frame point cloud, thereby realizing real-time evaluation of the positioning capability of the robot during operation. Meanwhile, the invention can output multi-level alarm information in time in problem scenarios such as inaccurate positioning, lost positioning, and pose drift caused by interference from various factors.
Drawings
Fig. 1 is a flowchart of robot positioning error evaluation based on two-dimensional laser according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data container and processing flow in an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the calculation of a point cloud normal vector according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating visualization of normal vectors in a frame of point cloud according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the coordinate system transformation of the robot at different moments in the embodiment of the present invention;
FIG. 6 is a schematic diagram of a pre-and post-frame point cloud association in an embodiment of the invention;
FIG. 7 is a schematic diagram illustrating error calculation in the translation dimension of a point cloud according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating error calculation in a rotation dimension of a point cloud according to an embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating the partitioning of the point cloud area according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Examples
Fig. 1 is a flowchart of robot positioning error evaluation based on two-dimensional laser according to an embodiment of the present invention, which specifically includes the following steps:
s1, establishing a data storage container, wherein two continuous frames of point cloud data and the global pose of the robot at two continuous frames of time are stored in the data storage container.
S11, data container and processing flow
In this embodiment, a data storage container of size 2 is first established, as shown in fig. 2, and the two elements in the container are designated front and back. The laser point cloud topic is subscribed to; the first frame of point cloud data is stored in the back position in the container, recorded as SCAN_back, together with the robot pose at the first frame time, Pose_back.
When the second frame of point cloud data arrives, the first frame of point cloud data is shifted to the front position in the container and recorded as SCAN_front, and the first-frame pose becomes Pose_front. The second frame of data is then saved as SCAN_back, together with the robot pose at the second frame time, Pose_back.
At this point the data storage container holds two continuous frames of point cloud data and the global poses of the robot at the two frame times. For laser data obtained by continuous subscription, the current frame is always stored in the back position of the container, the previous frame is transferred to the front position, and the data in the container are cyclically updated.
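For illustration, this container logic can be sketched in a few lines of Python (the class and member names are illustrative assumptions, not from the patent); a deque of maximum length 2 reproduces the front/back shifting behavior described above:

    from collections import deque

    class FrameBuffer:
        """Size-2 container: slot 0 is 'front' (previous frame), slot 1 is 'back' (current frame)."""

        def __init__(self):
            self._buf = deque(maxlen=2)

        def push(self, scan, pose):
            # Appending the newest (scan, pose) pair automatically shifts the
            # previous 'back' element into the 'front' position.
            self._buf.append((scan, pose))

        @property
        def ready(self):
            return len(self._buf) == 2

        @property
        def front(self):  # SCAN_front, Pose_front
            return self._buf[0]

        @property
        def back(self):   # SCAN_back, Pose_back
            return self._buf[1]

Each newly subscribed laser frame is simply pushed, and once two frames are present the error evaluation of the following steps can run.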
S12, calculating point cloud direction information
In this embodiment, calculating the normal vector information of each laser point in the point cloud includes: searching for at least two nearest neighbor points of each laser point, and fitting the circle on which the laser point lies, using the fact that three non-collinear points in a plane determine a unique circle; and determining the normal vector information of each laser point from the definition of the normal vector at any point on a circle in space geometry and the coordinates of the three points involved in the fitted circle.
Specifically, existing mapping and positioning algorithms mainly use the obstacle distance and intensity information of each point in the raw single-line lidar data; schemes that exploit the structure of the point cloud are rare, so richer information in the raw laser data is left unmined. This algorithm therefore extracts a point cloud normal vector from each frame of single-line lidar scan data, obtaining the normal vector of each point in the point cloud, i.e. the direction information of that point.
To make full use of the structural information of the point cloud, at least two nearest neighbor points are searched for each laser point, and the circle on which each laser point lies is fitted, using the fact that three non-collinear points in a plane determine a unique circle. The arc of the circle fitted through the three points is an accurate expression of the local point cloud structure around each laser point, so the normal vector of a laser point can be solved from the definition of the normal vector at any point on a circle in space geometry and the coordinates of the three points involved in the fitted circle. As shown in fig. 3, the normal vector of a laser point is solved as follows:
(1) Connect the point B to be solved with the two nearest neighbor points C and D obtained by the search to form the vectors BC and BD, and calculate their module lengths |BC| and |BD|;
(2) Through the point B, construct vectors perpendicular to BC and to BD respectively;
(3) Take the out-of-circle direction as the direction of each perpendicular vector and the corresponding chord's module length as its module length, obtaining the perpendicular vectors n1 (with |n1| = |BC|) and n2 (with |n2| = |BD|);
(4) According to the vector addition principle, add the two perpendicular vectors to obtain the normal vector at point B, nB = n1 + n2. Because the algorithm is only concerned with the direction of the normal vector, nB can be unitized directly, which completes the unification of the module lengths of the normal vectors.
The above is the process of fitting a circle from each laser point and its two nearest neighbors and solving the normal vector of that point. To ensure accuracy and to eliminate possible interference from outliers, the algorithm adds a redundant calculation mechanism: if the number of neighboring points satisfying the distance threshold condition in the nearest neighbor search exceeds the minimum requirement of two, up to five nearest neighbors are kept in a nearest-neighbor array for that laser point and all of them participate in the normal vector calculation, improving the accuracy of the normal vector expression of each point. Conversely, some stray points are so sparse in the point cloud that the minimum number of nearest neighbors cannot be found; no normal vector is computed for them, which realizes a preliminary filtering of the point cloud.
Finally, the algorithm obtains the normal vector of each laser point in every frame of point cloud, as shown in fig. 4, i.e. the direction information of each point, which is used in subsequent processing. Compared with the original point cloud, which only carries the distance and intensity of each point, the effective information of the data is further enriched.
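As a concrete illustration of steps (1) to (4), the following Python sketch computes the normal vector of one laser point from its two nearest neighbors (a hedged example, not the patent's code; the function name is assumed, and the redundant up-to-five-neighbor mechanism is omitted for brevity):

    import numpy as np

    def point_normal(b, c, d):
        """Normal at laser point b given its two nearest neighbors c and d (all 2D)."""
        b, c, d = map(np.asarray, (b, c, d))
        bc, bd = c - b, d - b
        # Circumcenter of the circle through b, c, d: solve 2*[bc; bd] y = [|bc|^2; |bd|^2].
        a_mat = 2.0 * np.array([bc, bd])
        if abs(np.linalg.det(a_mat)) < 1e-9:
            return None  # (nearly) collinear points: no circle, point is filtered out
        center = b + np.linalg.solve(a_mat, np.array([bc @ bc, bd @ bd]))
        out = b - center  # out-of-circle direction at b

        def perp_out(v):
            p = np.array([-v[1], v[0]])      # rotate v by 90 degrees, keeping its module length
            return p if p @ out > 0 else -p  # orient away from the circle center

        n = perp_out(bc) + perp_out(bd)      # step (4): sum of the two perpendicular vectors
        return n / np.linalg.norm(n)         # unitize, since only the direction matters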
And S2, calculating an inter-frame conversion matrix for converting the current frame point cloud into the previous frame point cloud according to the global pose of the robot at the current frame time and the global pose of the robot at the previous frame time adjacent to the current frame.
Through the cyclic update of the data in S1, the point cloud and global pose of the robot for the current frame and for the previous frame are available at every moment, and the normal vector information of the points inside each frame of point cloud can be obtained whenever the algorithm needs it.
Further, as shown in fig. 5, the inter-frame transformation is solved from the global poses of the robot at the two frame times, written here as (x_front, y_front, theta_front) and (x_back, y_back, theta_back):

T_back_to_front = [ R_back_to_front | t_back_to_front ]        (1)

R_back_to_front = R(theta_front)^(-1) * R(theta_back)
t_back_to_front = R(theta_front)^(-1) * [ x_back - x_front ; y_back - y_front ]

wherein R(theta) is the two-dimensional rotation matrix, R_back_to_front is the rotation parameter of the transformation from the current frame to the previous frame time, and t_back_to_front is the corresponding translation parameter. The transformation matrix is computed from the output of the robot positioning program, and associating the point cloud data with the corresponding inter-frame transformation is the basic premise of the quantitative error analysis.
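Under the usual SE(2) conventions, formula (1) can be computed from the two localization outputs as follows (a Python sketch under the stated assumptions; pose tuples are (x, y, theta), and the function names are illustrative):

    import numpy as np

    def rot2d(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    def interframe_transform(pose_front, pose_back):
        """3x3 homogeneous T_back_to_front mapping current-frame points into the previous robot frame."""
        xf, yf, thf = pose_front
        xb, yb, thb = pose_back
        R = rot2d(thf).T @ rot2d(thb)                    # R_back_to_front
        t = rot2d(thf).T @ np.array([xb - xf, yb - yf])  # t_back_to_front
        T = np.eye(3)
        T[:2, :2], T[:2, 2] = R, t
        return T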
And S3, converting the current frame point cloud according to the interframe conversion matrix to obtain a converted point cloud.
The current-frame point cloud SCAN_back is processed with the obtained inter-frame transformation matrix: through the rotation and translation transformation it is transferred into the robot coordinate system of the previous frame time, and the coordinate-transformed point cloud is recorded as SCAN_transed:
SCAN_transed = T_back_to_front * SCAN_back        (2)
For the newly generated point cloud SCAN_transed, the number, intensity and relative positions of the laser points are consistent with the current-frame point cloud SCAN_back, but the coordinates of its laser points have completed the global coordinate conversion from the current-frame robot coordinate system to the previous-frame robot coordinate system.
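Continuing the sketch above, formula (2) amounts to applying the homogeneous transform to every laser point of the current frame (scan_back is assumed to be an (N, 2) array of point coordinates):

    import numpy as np

    def transform_cloud(T_back_to_front, scan_back):
        """SCAN_transed = T_back_to_front * SCAN_back, applied row-wise."""
        pts = np.asarray(scan_back, dtype=float)
        return pts @ T_back_to_front[:2, :2].T + T_back_to_front[:2, 2]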
And S4, calculating the coincidence degree between the converted point cloud and the previous frame of point cloud, and evaluating the robot positioning error according to the coincidence degree.
In actual robot operation the scanning frequency of the lidar is high, and for safety reasons a mobile robot usually runs at low speed, so under normal conditions the rate of change between two continuous frames of point cloud received by the robot is low. The difference between the two frames consists only of environmental feature points newly detected in the current frame and feature points that have left the detection range due to the robot's movement; in common application scenarios this differing part accounts for an extremely small share of a complete frame of point cloud.
Based on this objective premise, when the robot positioning function is normal and the positioning result is output correctly, the degree of coincidence between the converted point cloud (the current frame transferred into the robot coordinate system of the previous frame time) and the previous-frame point cloud is high. Since this inference holds only when the robot is positioned correctly, a larger positioning error or lost pose information shows up directly as a lower degree of coincidence between the converted point cloud and the previous frame; a positioning error evaluation model can therefore be established from the association characteristics between the point clouds to analyze in depth how well the converted point cloud coincides with the previous-frame point cloud. The positioning result of the robot at each moment is analyzed in the three dimensions (x, y, theta) of the robot pose used in the 2D laser SLAM algorithm.
S41, establishing an evaluation model
In this embodiment, first, a positioning translation error and a positioning rotation error of each laser point in the converted point cloud are calculated according to the converted point cloud and the previous frame point cloud.
Wherein calculating the positioning translation error of each laser point in the converted point cloud includes:
taking each laser point in the converted point cloud as a target, searching for the two closest points satisfying a threshold condition in the previous-frame point cloud, fitting them to obtain a closest straight line, and calculating a target distance from the laser point to the corresponding closest straight line;
and calculating the normal vector information of each laser point in the converted point cloud, projecting and decomposing the target distance along the direction information of each effective laser point in the converted point cloud (a point for which a fitted-line association with the previous-frame point cloud can be established) to obtain the components of each laser point in the X and Y directions, and taking the components of each laser point in the X and Y directions as the positioning translation error of each laser point in the converted point cloud.
Specifically, for the same obstacle observed in both frames, after the current-frame point cloud is converted into the previous-frame robot coordinate system the point clouds of the corresponding regions should, ideally, coincide completely. However, the transformation matrix comes from the pose change between the positioning results at the two times, and since the robot positioning program cannot guarantee zero error, a certain distance remains between the converted point cloud and the point set of the same obstacle feature in the previous-frame point cloud; this distance is a concrete basis for measuring the positioning error. Fig. 6 illustrates this (the figure is for visualization only, not real data). The content difference between the two frames is small, and under ideal conditions the converted current point cloud coincides highly with the previous-frame point cloud, as in the lower-left overlay in fig. 6; when there is an error, a certain deviation always remains between the point clouds, as in the lower-right overlay. Quantitative evaluation of this deviation is therefore an effective route to positioning error analysis.
First, in the X and Y dimensions related to translation, the relative distance between the converted point cloud and the previous-frame point cloud is analyzed quantitatively. If a point-to-point correspondence could be established directly between the converted point cloud and the laser points of the jointly observed environmental features in the previous frame, the Euclidean distance between corresponding points could serve as the data source for error evaluation. However, real lidar point clouds are dense and individual laser points carry no semantic information, so such a correspondence cannot be established directly. The algorithm therefore associates each laser point of the converted point cloud with the previous-frame point cloud by closest-line fitting: taking each laser point in the converted point cloud as a target, it searches for the two closest points satisfying a threshold condition in the previous-frame point cloud, fits the closest straight line through them, and calculates the distance d_transed_to_front from the point to this fitted line, as shown in fig. 7.
From the normal vector solution above, the normal vector directions of the laser points within one frame differ because of the various unstructured obstacles in the environment, so the d_transed_to_front values solved for all qualifying points of a frame cannot be evaluated quantitatively in a single common dimension. The algorithm therefore further uses the normal vector information of the points in the converted point cloud: for each effective laser point (a point in the converted point cloud for which a fitted-line association with the previous-frame point cloud has been established), d_transed_to_front is projected and decomposed along the point's direction information to obtain its components in the X and Y directions, from which the translation error of the robot is determined:

e_x = d_transed_to_front * cos(phi),  e_y = d_transed_to_front * sin(phi)        (3)

wherein phi is the direction angle of the point's unit normal vector. This completes the conversion from independent analysis of each point to data alignment and uniform processing of all qualifying points of a frame between the converted point cloud and the previous-frame point cloud.
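The per-point translation error can then be sketched as follows (illustrative Python under the assumptions above: q1 and q2 are the two closest points found in SCAN_front, and normal is the point's unit normal vector from section S12):

    import numpy as np

    def translation_error(p, q1, q2, normal):
        """d_transed_to_front and its projection onto X/Y per formula (3)."""
        p, q1, q2 = map(np.asarray, (p, q1, q2))
        line_dir = (q2 - q1) / np.linalg.norm(q2 - q1)
        line_normal = np.array([-line_dir[1], line_dir[0]])
        d = abs((p - q1) @ line_normal)          # distance to the fitted closest line
        phi = np.arctan2(normal[1], normal[0])   # direction angle of the point's normal
        return d * np.cos(phi), d * np.sin(phi)  # (e_x, e_y)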
Further, calculating the positioning rotation error of each laser point in the converted point cloud includes: calculating it from the distance between each laser point and the corresponding closest straight line, based on triangle geometry and the small-quantity approximation principle.
In this embodiment, for a single-line lidar mounted on a mobile robot, the rotation angle of the point cloud corresponds only to the change of the heading angle. As with the deviation in the translation dimension, there is an angular difference between the converted point cloud SCAN_transed and SCAN_front, and this difference can likewise be used to evaluate the positioning capability of the robot accurately.
As shown in fig. 8, the angle error solution is elaborated using a single laser point of a frame as an example. Select a point P_transed in SCAN_transed and the straight line fitted through its two closest points in SCAN_front; the distance d_transed_to_front from P_transed to the fitted line is already known from the translation error solution. According to triangle geometry and the small-quantity approximation principle, a perpendicular is drawn through P_transed, perpendicular to OP_transed in the vehicle body coordinate system, intersecting the fitted line at the point P_transed'. The included angle theta between OP_transed and OP_transed' is then the concrete expression of the angle error: the point cloud in SCAN_transed can be made to coincide with SCAN_front in direction after compensating by this angle, so the compensated angle can serve as the measure of the positioning rotation error. The angle error is calculated as follows:
1) From the coordinate transformation relation, obtain the coordinates of P_transed in the map coordinate system, and calculate the length L of OP_transed and its included angle alpha with the X axis;
2) From the coordinates of P_transed and the fitted closest line, obtain the equation of the perpendicular through P_transed, and thus the included angle beta between the perpendicular in the figure and the X axis;
3) By the plane geometry principle, gamma = 180° - beta - (90° - alpha) = 90° - beta + alpha;
4) From the angle gamma and the distance d_transed_to_front from P_transed to the fitted line, the distance between P_transed and P_transed' follows from the trigonometric relation

|P_transed P_transed'| = d_transed_to_front / sin(gamma)

5) With both legs of the right triangle O P_transed P_transed' now known, the angle to be solved follows from the trigonometric relation

theta = arctan( |P_transed P_transed'| / L )
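Steps 1) to 5) translate almost literally into code. The sketch below follows the angle conventions of fig. 8, which is not reproduced here, so the abs() guard and all names are assumptions and sign adjustments may be needed for other configurations:

    import numpy as np

    def rotation_error(p_transed, q1, q2, d):
        """Per-point rotation error theta from the distance d to the fitted line."""
        p = np.asarray(p_transed, dtype=float)
        L = np.linalg.norm(p)                       # step 1): |O P_transed|
        alpha = np.arctan2(p[1], p[0])              # step 1): angle with the X axis
        line_dir = np.asarray(q2) - np.asarray(q1)
        beta = np.arctan2(line_dir[1], line_dir[0]) + np.pi / 2  # step 2): perpendicular vs X axis
        gamma = np.pi / 2 - beta + alpha            # step 3)
        dist_pp = d / abs(np.sin(gamma))            # step 4): |P_transed P_transed'|
        return np.arctan(dist_pp / L)               # step 5)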
Furthermore, after the positioning translation error and positioning rotation error of each laser point in the converted point cloud have been calculated, the laser points are counted by partition: a specific multi-region division is determined, a fixed number of target laser points is selected in each region according to a set threshold, every target laser point carries the same weight, and the positioning error of the robot is evaluated by calculating the variance of the positioning translation errors and positioning rotation errors of the target laser points.
S42, partition statistics and weight equalization
In this embodiment, through the above process the algorithm obtains, for each frame of point cloud, error statistics in the X, Y and theta directions for a large number of laser points of that frame. Errors in the translation and rotation dimensions are available for every qualifying laser point in SCAN_transed, but the positioning error at the frame time cannot be estimated as a whole from per-point error data alone. If correctness of positioning were judged only by an error threshold, individual laser points with large errors could coexist with a small overall error, leading to misjudgment. Meanwhile, by statistical principle, the variance of data and the multi-dimensional covariance measure the dispersion of a random variable or a set of data, and are commonly used to evaluate the deviation between a random variable and its mathematical expectation (i.e. its mean).
Unlike the method in mathematical statistics of measuring one fixed variable many times and computing the variance over all measurements, this algorithm uses the multi-point data of a single measurement to compute the variance of the error in several dimensions and to form a covariance matrix. The scheme of projecting errors along the point cloud normal vectors in the previous steps guarantees data consistency, so the variances in the three dimensions X, Y and Theta can be computed from a single frame of data using formula (4):

Var(e_k) = (1/n) * sum_{i=1..n} (e_k,i - mean(e_k))^2,  k in {x, y, theta}        (4)
By the principle of mathematical statistics, in the process of computing first the mean and then the variance for each dimension, every data element carries the same weight. In practical robot applications, however, the special environmental structure of some key scenarios makes the error in some dimension insufficiently visible. In a long corridor, for example, a large share of the point cloud falls on the walls on both sides, and only a small number of points fall on obstacles with distinctive, highly distinguishable features. Even if the wall point clouds differ between the two frames, their highly similar point cloud structure interferes with the search for the closest line from SCAN_transed to SCAN_front, so the true association cannot be established and the error distance is computed incorrectly. The few highly distinguishable obstacle points in the scene still yield accurate inter-frame distances, but their share of the total point count is low, so when the variance is computed uniformly over each frame the real error data are diluted by a large amount of disturbed, invalid data, and the error evaluation of that frame no longer reflects reality.
To solve this problem, the algorithm improves its adaptability to different scenarios by partition counting and weight balancing. Taking 360° single-line lidar data as an example, the scan data is divided into multiple regions, as shown in fig. 9: the raw data fall into several regions of equal area (the red broken line in the figure is the point cloud data), and the division of the regions can be configured flexibly according to the actual data volume. The algorithm selects a fixed number of laser points in each region according to a set threshold; the selected points become the representative points of each region, and all representative points share the same weight in the subsequent statistical analysis. For example, suppose one scan in a corridor environment yields 1000 laser points in total, of which 900 fall on the walls on both sides and 100 on small, highly distinguishable environmental features. Under the original scheme the wall features account for 90%, and even a large error value on the distinguishable features barely shows in the final result. After partition statistics, however, the number of wall-feature points is clearly reduced, while the highly distinguishable features, being concentrated, do not fluctuate much in number; their weight in the error covariance computed from the sampled point cloud therefore rises, and the result matches the real situation better. On this basis, weight balance of the point cloud in the final overall error calculation is achieved: even in a highly structured scene such as a long corridor, region sampling normalizes the weight of the originally large number of repetitive features, so the distinguishing features of each frame of point cloud are captured more accurately and the positioning error is evaluated more objectively. A sketch of this partition scheme is given below.
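A compact Python sketch of the partition statistics (the sector count, the per-sector cap and all names are configurable assumptions, not values from the patent): scan angles are binned into equal sectors, at most a fixed number of representative points per sector enter the statistics, and formula (4) is evaluated on the sampled errors.

    import numpy as np

    def partitioned_variance(angles, errors, n_sectors=8, per_sector=20):
        """errors: (N, 3) array of per-point (e_x, e_y, e_theta); returns per-dimension variances."""
        angles = np.asarray(angles)
        errors = np.asarray(errors)
        edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
        sampled = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            idx = np.where((angles >= lo) & (angles < hi))[0]
            sampled.append(errors[idx[:per_sector]])  # equal weight per region
        sampled = np.concatenate(sampled)
        return sampled.var(axis=0)                    # formula (4) in X, Y, Theta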
The embodiment of the invention also provides a robot positioning error evaluation device based on two-dimensional laser, which comprises:
the storage container establishing module is used for establishing a data storage container, wherein the data storage container stores two continuous frames of point cloud data and the global poses of the robot at the two frame times;
the inter-frame conversion matrix calculation module is used for calculating an inter-frame conversion matrix for converting the current frame point cloud into the previous frame point cloud according to the global pose of the robot at the current frame moment and the global pose of the robot at the previous frame moment adjacent to the current frame;
the point cloud conversion module is used for converting the current frame point cloud according to the inter-frame conversion matrix to obtain a converted point cloud;
and the positioning error evaluation module is used for calculating the coincidence degree between the converted point cloud and the previous frame of point cloud and evaluating the positioning error of the robot according to the coincidence degree.
Optionally, the positioning error evaluation module is specifically configured to:
calculating the positioning translation error and the positioning rotation error of each laser point in the converted point cloud according to the converted point cloud and the previous frame of point cloud;
and dividing the scanning data into multiple regions, selecting a fixed number of target laser points in each region according to a set threshold, wherein each target laser point has the same weight, and evaluating the positioning error of the robot by calculating the variance of the positioning translation errors and positioning rotation errors of the target laser points.
Wherein calculating the positioning translation error of each laser point in the converted point cloud includes:
taking each laser point in the converted point cloud as a target, searching for the two closest points satisfying a threshold condition in the previous-frame point cloud, fitting them to obtain a closest straight line, and calculating a target distance from the laser point to the corresponding closest straight line;
and calculating the normal vector information of each laser point in the converted point cloud, projecting and decomposing the target distance along the direction information of each effective laser point in the converted point cloud (a point for which a fitted-line association with the previous-frame point cloud can be established) to obtain the components of each laser point in the X and Y directions, and taking the components of each laser point in the X and Y directions as the positioning translation error of each laser point in the converted point cloud.
Calculating the normal vector information of each laser point in the converted point cloud, wherein the calculation comprises the following steps:
searching at least two nearest neighbor points corresponding to each laser point in the converted point cloud, and fitting a circle where each laser point is located according to a mode that a unique circle is determined by non-collinear three points in a plane;
and determining the normal vector information of each laser point according to the normal vector definition of any point on the circle in the space geometry and the coordinates of the three points related to the fitting circle.
Calculating the positioning rotation error of each laser point in the converted point cloud, comprising the following steps:
and calculating the positioning rotation error of each laser point in the converted point cloud from the distance between each laser point and the corresponding closest straight line, based on triangle geometry and the small-quantity approximation principle.
The robot positioning error evaluation device based on the two-dimensional laser can execute the robot positioning error evaluation method based on the two-dimensional laser provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions will now be apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in some detail by the above embodiments, the invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the invention, and the scope of the invention is determined by the scope of the appended claims.

Claims (7)

1. A robot positioning error evaluation method based on two-dimensional laser is characterized by comprising the following steps:
s1, establishing a data storage container, wherein two continuous frames of point cloud data and the global poses of the robot at the two frame times are stored in the data storage container;
s2, calculating an inter-frame conversion matrix for converting the current frame point cloud into the previous frame point cloud according to the global pose of the robot at the current frame time and the global pose of the robot at the previous frame time adjacent to the current frame;
s3, converting the current frame point cloud according to the inter-frame conversion matrix to obtain a converted point cloud;
and S4, calculating the coincidence degree between the converted point cloud and the previous frame of point cloud, and evaluating the robot positioning error according to the coincidence degree.
2. The method according to claim 1, wherein the S4 comprises:
calculating the positioning translation error and the positioning rotation error of each laser point in the converted point cloud according to the converted point cloud and the previous frame of point cloud;
and dividing the scanning data into multiple regions, selecting a fixed number of target laser points in each region according to a set threshold, wherein each target laser point has the same weight, and evaluating the positioning error of the robot by calculating the variance of the positioning translation errors and positioning rotation errors of the target laser points.
3. The method of claim 2, wherein calculating the translational error of the positioning of each laser point in the transformed point cloud comprises:
taking each laser point in the converted point cloud as a target, searching for the two closest points satisfying a threshold condition in the previous-frame point cloud, fitting them to obtain a closest straight line, and calculating a target distance from the laser point to the corresponding closest straight line;
and calculating the normal vector information of each laser point in the converted point cloud, projecting and decomposing the target distance along the direction information of each effective laser point in the converted point cloud (a point for which a fitted-line association with the previous-frame point cloud can be established) to obtain the components of each laser point in the X and Y directions, and taking the components of each laser point in the X and Y directions as the positioning translation error of each laser point in the converted point cloud.
4. The method of claim 3, wherein calculating the normal vector information of each laser point in the transformed point cloud comprises:
searching at least two nearest neighbor points corresponding to each laser point in the converted point cloud, and fitting a circle where each laser point is located according to a mode that a unique circle is determined by non-collinear three points in a plane;
and determining the normal vector information of each laser point according to the normal vector definition of any point on the circle in the space geometry and the coordinates of the three points related to the fitting circle.
5. The method of claim 3, wherein calculating the positioning rotation error of each laser point in the transformed point cloud comprises:
and calculating the positioning rotation error of each laser point in the converted point cloud from the distance between each laser point and the corresponding closest straight line, based on triangle geometry and the small-quantity approximation principle.
6. A robot positioning error evaluation device based on two-dimensional laser is characterized by comprising:
the storage container establishing module is used for establishing a data storage container, wherein the data storage container stores two continuous frames of point cloud data and the global poses of the robot at the two frame times;
the inter-frame conversion matrix calculation module is used for calculating an inter-frame conversion matrix for converting the current frame point cloud into the previous frame point cloud according to the global pose of the robot at the current frame moment and the global pose of the robot at the previous frame moment adjacent to the current frame;
the point cloud conversion module is used for converting the current frame point cloud according to the inter-frame conversion matrix to obtain a converted point cloud;
and the positioning error evaluation module is used for calculating the coincidence degree between the converted point cloud and the previous frame of point cloud and evaluating the positioning error of the robot according to the coincidence degree.
7. The apparatus of claim 6, wherein the positioning error evaluation module is specifically configured to:
calculating the positioning translation error and the positioning rotation error of each laser point in the converted point cloud according to the converted point cloud and the previous frame of point cloud;
and dividing the scanning data into multiple regions, selecting a fixed number of target laser points in each region according to a set threshold, wherein each target laser point has the same weight, and evaluating the positioning error of the robot by calculating the variance of the positioning translation errors and positioning rotation errors of the target laser points.
Application CN202210757232.6A, filed 2022-06-29 (priority date 2022-06-29): Robot positioning error evaluation method and device based on two-dimensional laser. Publication CN115201794A, status Pending.

Priority Applications (1)

CN202210757232.6A, priority date 2022-06-29, filing date 2022-06-29: Robot positioning error evaluation method and device based on two-dimensional laser


Publications (1)

CN115201794A, published 2022-10-18


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116699577A (en) * 2023-08-02 2023-09-05 上海仙工智能科技有限公司 2D laser positioning quality evaluation method and system and storage medium
CN116699577B (en) * 2023-08-02 2023-10-31 上海仙工智能科技有限公司 2D laser positioning quality evaluation method and system and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination