CN112053390B - Positioning method based on point cloud transformation matching - Google Patents

Positioning method based on point cloud transformation matching

Info

Publication number
CN112053390B
Authority
CN
China
Prior art keywords
point cloud
cloud frame
frame
map
current
Prior art date
Legal status
Active
Application number
CN202010918896.7A
Other languages
Chinese (zh)
Other versions
CN112053390A (en)
Inventor
刘诗聪
Current Assignee
Shanghai Lanshu Intelligent Technology Co., Ltd.
Original Assignee
Shanghai Lanshu Intelligent Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Lanshu Intelligent Technology Co., Ltd.
Priority to CN202010918896.7A
Publication of CN112053390A
Application granted
Publication of CN112053390B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a positioning method based on point cloud transformation matching, comprising the following steps: S1, receiving a current point cloud frame acquired by a lidar on a mobile device, correcting the current point cloud frame, and generating a corrected point cloud frame; S2, registering the corrected point cloud frame with the previous reference point cloud frame; S3, correcting the current reference point cloud frame; and S4, calculating and outputting the position information of the mobile device on the map according to the coordinates and rotation angle of the current reference point cloud frame. With this method the mobile device can be positioned using lidar data alone, with good stability, strong adaptability, and high precision.

Description

Positioning method based on point cloud transformation matching
Technical Field
The invention relates to the technical field of image recognition, in particular to a positioning method based on point cloud transformation matching.
Background
SLAM (Simultaneous Localization and Mapping) is, as the name implies, a technology by which a robot or other device, uncertain of its own position, builds a map in a completely unknown environment while simultaneously using that map for autonomous localization and navigation, thereby enabling path planning. SLAM technology is mainly used to calculate the real-time pose of a mobile robot: it acquires information about the surrounding environment, builds a map from that information, and at the same time obtains the coordinates and heading angle of the current acquisition point on the map. The SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, localizes itself during motion from position estimates and the map, and builds an incremental map on that basis, achieving autonomous localization and navigation. The core steps of SLAM are generally described as perception, positioning, and mapping.
Perception: the robot obtains information about its surroundings through sensors.
Positioning: from the current and historical information acquired by the sensors, the robot deduces its own position and attitude.
Mapping: the robot draws the appearance of the environment from its own pose and the sensor information.
Robots and other devices such as AGV (Automated Guided Vehicle) carts are equipped with automatic guidance means, electromagnetic or optical, that allow them to travel along a prescribed guidance path. The travel route and behavior can be controlled by a computer, or the route can be defined by an electromagnetic track stuck to the floor, the vehicle moving and acting on the information the track carries.
The robot or other device scans its surroundings with an onboard lidar to obtain point cloud data. A laser SLAM navigation system processes the point cloud depth data to generate a map, and during operation derives coordinates in real time by comparing the current point cloud with the map. The core of laser SLAM navigation is acquiring positioning data in real time, i.e. the coordinates and heading angle of the robot or device on the map. The key step of laser positioning is comparing the point cloud fed back by the lidar with a pre-acquired map in real time to obtain the pose of the mobile device. This step involves several techniques, including laser point cloud preprocessing and matching, and is the core technique of laser positioning and navigation.
Existing laser SLAM navigation methods differ in their choice of positioning method and map representation. Positioning methods include point-cloud-based matching, Gaussian-distribution-based combinatorial optimization, brute-force matching, and others. Grid maps are the dominant map representation, though some algorithms use point cloud maps. The technical problems are as follows:
(1) One class of algorithms is based on brute-force matching search, such as the adaptive Monte Carlo localization method. It traverses all possible positions on the current map, computes a confidence from the matching degree at each, and takes the position with the highest confidence. The method is simple to implement, but because of the brute-force matching the computation is heavy and initial positioning on a large map is slow. Because there are far too many possible positions, the method must randomly sample a sparse scattering of candidate points, which keeps the positioning accuracy low. It can effectively only use a grid map; with a point cloud map the computational load is far beyond acceptable.
(2) Laser navigation algorithms with low positioning accuracy, such as the Monte Carlo localization method, cannot achieve accurate continuous positioning; when the cart runs fast, auxiliary equipment such as wheel encoders becomes necessary to correct point cloud distortion. The distortion arises because the lidar scans each angle continuously while rotating, so the point cloud acquired while the cart moves is somewhat distorted.
(3) Another class of algorithms is based on Gaussian-distribution combinatorial optimization or point cloud matching (the NDT or ICP algorithm). These methods make full use of the prior knowledge that the cart's pose changes continuously, and solve the continuous positioning problem, i.e. inferring the current position when the initial position is known, more intelligently. However, they struggle with initial positioning on a large map, and in practice a brute-force matching method is still needed for it. Among these algorithms, NDT-based methods require a grid map, while ICP-based methods can use a point cloud map.
(4) Current laser navigation methods have weak closed-loop detection capability, because the point cloud matching algorithm must be given a fairly accurate rough matching position. For example, if the current point cloud fails to match the map for a period of time because the map is not fully consistent with the actual scene, and a large positioning error accumulates, closed-loop detection cannot complete even after returning to an environment that matches the map, and manual repositioning is needed. Closed-loop detection means that after the cart drives into some area of the map it can automatically re-match against that area, eliminating the accumulated error of continuous positioning.
(5) Continuous matching algorithms cannot handle the long corridor effect well: when the environment scanned by the laser is highly self-similar and only a few pillars or similar features are available to fix the cart's position, these algorithms tend to conclude that the cart has not moved at all, causing a fault. Brute-force matching can alleviate the problem but greatly increases the computation. Such scenes are common, e.g. offices, hospitals, and factories, and extra equipment is often required to cope with them.
(6) Continuous matching algorithms also place high demands on point cloud quality. When the point cloud is too sparse, spans large distance differences, or is unevenly distributed, the initial values required by the ICP or NDT algorithm may not be accurate enough, causing mismatches or low-accuracy results; low-accuracy matches in turn yield even worse initial values, so laser navigation based on these methods loses stability and usability.
Disclosure of Invention
The invention aims to solve the problems in the prior art that positioning is unstable and repositioning is difficult when a robot moves through complex or special scenes. First, the point cloud is corrected using the pose change of each frame; then the point cloud is processed with a displacement relation invariance transformation and the displacement change is analyzed with a phase correlation method, improving continuous positioning accuracy and stability; finally, the point cloud is processed with a rotation relation invariance transformation and the pose relationship between the reference frame and the map is analyzed with the phase correlation method, solving the repositioning problem.
In order to achieve the above object, according to the present invention, there is provided a positioning method based on point cloud transformation matching, including:
S1, receiving a current point cloud frame acquired by a lidar on a mobile device, and correcting the current point cloud frame to generate a corrected point cloud frame;
S2, registering the corrected point cloud frame with the previous reference point cloud frame;
S3, correcting the current reference point cloud frame;
and S4, calculating and outputting the position information of the mobile device on the map according to the coordinates and the rotation angle of the current reference point cloud frame.
Further, in step S2, the method of registration includes the steps of:
S21, receiving the point cloud frame P1 and the point cloud frame P2 and the predicted pose change of the point cloud frame P1 relative to the point cloud frame P2, performing ICP registration on the point cloud frame P1 and the point cloud frame P2; if ICP registration succeeds, acquiring the preliminary pose change of the point cloud frame P1 relative to the point cloud frame P2 and then executing the second registration method; otherwise, outputting the predicted pose change;
the second registration method includes:
S22, receiving a point cloud frame Pc, a point cloud frame Pt, a pose change, a preset maximum detected displacement range w, a reliability threshold T, and a preset feature map size sz;
S23, taking a plurality of values from w to w*2, denoted w[1..M], M values in total; for each w[m], the following steps are performed:
a) Taking w[m] as the hash coefficient, performing the laser point cloud displacement relation invariance transformation on Pt to obtain a feature map Ia of size sz;
b) Performing the following pose transformation on each point of Pc to obtain a point cloud frame Pc':
Pc’[n].x=Pc[n].x*cos(dth)-Pc[n].y*sin(dth)+dx
Pc’[n].y=Pc[n].x*sin(dth)+Pc[n].y*cos(dth)+dy
c) Taking w[m] as the hash coefficient, performing the laser point cloud frame displacement invariance transformation on Pc' to obtain a feature map Ib of size sz;
d) Performing phase correlation analysis on the feature maps Ia and Ib, taking w[m] as the hash coefficient, to obtain the phase difference and the phase correlation reliability;
S24, if either of the two largest phase correlation reliability values is smaller than the reliability threshold T, registration fails and the preliminary pose change is output; if both values are greater than or equal to the reliability threshold T, the phase difference is converted into a displacement and the displacement is output.
Further, the displacement relation invariance transformation method of the laser point cloud frame comprises the following steps:
(1) Receiving a point cloud frame, a feature graph side length sz, a hash coefficient w and a feature adjustment coefficient;
(2) Generating a full black feature map with the side length sz;
(3) For each point in the point cloud frame, the following operations are performed:
a) Performing the translation relation invariant hash operation on the point's position with the hash coefficient w, obtaining a point (x', y') on the feature map;
b) Drawing a circle at position (x', y') of the feature map using bilinear interpolation; if the drawn circle exceeds the range of the feature map, the overflow content is wrapped back by sz pixels and still drawn on the feature map;
(4) Performing an exponentiation and histogram equalization on the pixel values of the feature map, and then outputting the result; preferably, the exponent is the feature adjustment coefficient.
Further, the step S1 includes:
S11, receiving a current point cloud frame acquired by a lidar on a mobile device, and registering the current point cloud frame with the previous point cloud frame through the first pose change of the previous point cloud frame, the registration result being the first pose change of the current point cloud frame;
and S12, correcting the current point cloud frame according to the first pose change of the current point cloud frame to generate a corrected point cloud frame.
Further, the method of correcting in step S12 includes:
S121, receiving a point cloud frame P and a pose change delta; wherein the point cloud frame P contains distances measured by the lidar at multiple angles, continuously from angle St to angle Ed;
S122, acquiring the corrected point cloud frame P' according to the following calculation formulas:
p=(Ed-P[n].th)/Pi;
deltaTh=p*delta.th
P’[n].x=P[n].x*cos(deltaTh)-P[n].y*sin(deltaTh)+p*delta.x;
P’[n].y=P[n].x*sin(deltaTh)+P[n].y*cos(deltaTh)+p*delta.y
where p is a proportional adjustment coefficient, deltaTh is the angle correction amount, Pi is the circumference ratio, P[n] is the nth point of the point cloud frame P, P[n].x, P[n].y, and P[n].th are the x coordinate, y coordinate, and angle of the point relative to the radar coordinate system, and delta.x, delta.y, and delta.th are the pose changes on the x and y axes of the map coordinate system and the angle change;
Further, in step S2, the result of the registration is the second pose change of the current point cloud frame, and the pose change D' used in the registration is calculated as D' = P_prev + V, where P_prev is the pose change of the previous point cloud frame and V is the first pose change of the current point cloud frame.
Further, in the step S3, if the registration in the step S2 is successful, the previous reference point cloud frame is modified to be the current reference point cloud frame; otherwise, the corrected point cloud frame is used as the current reference point cloud frame; the method for correcting the previous reference point cloud frame to be the current reference point cloud frame comprises the following steps:
S31, if a point exists in the corrected point cloud frame that does not exist in the previous reference point cloud frame, the point is added to the previous reference point cloud frame;
S32, if the corrected point cloud frame and the previous reference point cloud frame have corresponding points, correction is carried out; preferably, the average of the corresponding points' positions is computed, and the positions of the corresponding points in the previous reference point cloud frame are corrected to that average;
S33, if a point in the previous reference point cloud frame does not correspond to a point in any corrected point cloud frame within the set period, the point is deleted from the previous reference point cloud frame.
Further, the method further comprises the following steps:
S5, if the abscissa or the ordinate of the current reference point cloud frame is larger than the reference frame generation distance or the number of point clouds in the current reference point cloud frame is larger than a preset first threshold value, comparing the current reference point cloud frame with the map to generate a new map.
Further, the method for comparing includes:
S51, matching the current reference point cloud frame against the several map frames closest to it, in order of increasing distance;
S52, if the matching is successful, updating the current reference point cloud frame coordinates with the following formulas:
Rx=x2+cos(th2)*dx-sin(th2)*dy
Ry=y2+sin(th2)*dx+cos(th2)*dy
Rth=th2+dth
wherein Rx, Ry, and Rth are the coordinates of the current point cloud frame, x2, y2, and th2 are the coordinates of the matched map frame, and dx, dy, and dth are the pose change of the current reference point cloud frame relative to the map frame as output by the matching method;
and S53, copying the updated current reference point cloud frame to be a new map frame, and adding the new map frame into the map so as to generate a new map.
Further, in the step S51, the matching method includes:
(1) Receiving two point clouds P3[1..N] and P4[1..M], the maximum matching displacement w, and the angular resolution nth; performing the rotation relation invariance transformation on the point cloud P3, taking w as the hash coefficient and nth as the number of angles, to obtain a feature map Ic; performing the rotation relation invariance transformation on the point cloud P4, likewise with w as the hash coefficient and nth as the number of angles, to obtain a feature map Id;
(2) Performing phase correlation analysis on the feature maps Ic and Id in the vertical direction, taking w as the hash coefficient, to obtain the phase difference, the phase correlation reliability, and the rotation angle difference;
(3) If the phase correlation reliability is smaller than a preset second threshold, matching fails; otherwise, continuing with the following steps;
(4) Generating the pose change between the point clouds P3 and P4 through the second registration method, using the pose change (0, 0, rotation angle difference); if registration fails, generating the pose change between P3 and P4 through the second registration method using the pose change (0, 0, rotation angle difference + Pi), where Pi is the circumference ratio; if registration still fails, matching fails and the method ends;
(5) If the registration is successful, the point clouds P3 and P4 are subjected to ICP registration by using the pose change between the point clouds P3 and P4, and the pose change of the point cloud frame P3 relative to the point cloud frame P4 is output.
Further, the method for invariance transformation of the rotation relation of the laser point cloud comprises the following steps:
(1) Receiving a point cloud frame P, the number of angles nth, the feature length per angle sz, the hash coefficient w, and the feature adjustment coefficient;
(2) Creating an all-black grayscale feature map of height nth and width sz;
(3) Dividing 180 degrees into nth equal parts to obtain an angle set, and computing a graph feature vector for each angle in the set to serve as one row of the feature map;
(4) For each angle in the angle set and each point P[n] in the point cloud frame, operating as follows:
1) Taking th' as the current angle, rotating P[n] around the origin by th', and calculating the rotated abscissa:
X=cos(th’)*P[n].x-sin(th’)*P[n].y
2) Performing a hash operation on X with w as the coefficient, obtaining a result h;
3) Drawing a rectangle at position (h, th'/180*nth) of the feature map; if the rectangle exceeds the range of the feature map, the overflow content is wrapped back by sz pixels and still drawn on the feature map;
(5) Performing an exponentiation on the pixel values of the feature map and outputting the result, where the exponent is the feature adjustment coefficient.
Further, the method for calculating the position information of the mobile device on the map comprises the following steps:
x=Rx+cos(Rth)*P.x-sin(Rth)*P.y
y=Ry+sin(Rth)*P.x+cos(Rth)*P.y
th=Rth+P.th
wherein Rx and Ry are the coordinates of the current reference point cloud frame, Rth is its rotation angle, P is the second pose change of the current point cloud frame, x and y are the horizontal and vertical coordinates of the mobile device in the map coordinate system, and th is its angle.
The positioning method based on the point cloud transformation matching has the following beneficial effects:
(1) The continuous matching algorithm has the advantages of good stability, strong adaptability and high precision.
(2) The method is self-correcting, and positioning can be achieved with lidar data alone.
(3) The feature map obtained through the point cloud transformation has higher robustness and can adapt to poorer point cloud data, thereby improving the environmental adaptability of laser navigation.
(4) The problem of long corridor effect can be solved well.
(5) The method has better initial positioning capability with a smaller computation load (only the point cloud frame needs to be traversed, not all possible positions on the map).
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from these drawings without undue effort.
FIG. 1 is a flow chart of a positioning method based on point cloud transformation according to an embodiment of the invention;
fig. 2 is a flow chart of a second registration method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a mobile device according to an embodiment of the invention;
FIG. 4 is a flow chart of a positioning method based on point cloud transformation according to an embodiment of the present invention;
FIG. 5 is a flow chart of a positioning method based on point cloud transformation according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a reference point cloud frame variation according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention is further described below with reference to the drawings and the detailed description.
The invention provides a laser positioning method based on point cloud transformation matching. A lidar mounted on a mobile device acquires the current point cloud frame directly; the laser point cloud correction method then uses the previous point cloud frame and its first and second pose changes to produce a corrected point cloud frame and the first and second pose changes of the current frame; finally, the previous reference point cloud frame is corrected to form the current reference point cloud frame for use with the next point cloud frame. The current reference point cloud frame has coordinates on the map, so the pose of the mobile device can be calculated from its position. Repeating this cycle, the coordinate data of the mobile device is output in real time for practical uses such as comparison against a preset route.
In the present invention the term "frame" has its ordinary meaning in the art: a frame is composed of a point cloud and is called a point cloud frame. The coordinate system of a point cloud frame is the map coordinate system, comprising an abscissa, an ordinate, and an angle; the coordinate system of the points within a frame is the radar coordinate system, likewise comprising an abscissa, an ordinate, and an angle. A pose change comprises changes of the abscissa, the ordinate, and the rotation angle, or the velocities of the three; because it is a change, no coordinate system needs to be attached to it.
As shown in fig. 1, the laser positioning method includes the steps of:
S1, receiving a current point cloud frame acquired by a lidar on a mobile device, and correcting the current point cloud frame to generate a corrected point cloud frame;
S2, registering the corrected point cloud frame with the previous reference point cloud frame;
S3, correcting the current reference point cloud frame;
and S4, calculating and outputting the position information of the mobile device on the map according to the coordinates and the rotation angle of the current reference point cloud frame. Preferably, the method for calculating the position information of the mobile device on the map comprises the following steps:
x=Rx+cos(Rth)*P.x-sin(Rth)*P.y
y=Ry+sin(Rth)*P.x+cos(Rth)*P.y
th=Rth+P.th
wherein Rx and Ry are the coordinates of the current reference point cloud frame, Rth is its rotation angle, P is the second pose change of the current point cloud frame, x and y are the horizontal and vertical coordinates of the mobile device in the map coordinate system, and th is its angle.
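As a concrete rendering of these formulas, a minimal Python sketch follows; representing poses as plain (x, y, th) tuples is an assumption of the illustration, not part of the invention.

```python
import math

def device_pose_on_map(ref_pose, pose_change):
    """Step S4: compose the reference frame's map pose (Rx, Ry, Rth)
    with the device's second pose change P relative to that frame."""
    Rx, Ry, Rth = ref_pose
    Px, Py, Pth = pose_change        # P.x, P.y, P.th in the text
    x = Rx + math.cos(Rth) * Px - math.sin(Rth) * Py
    y = Ry + math.sin(Rth) * Px + math.cos(Rth) * Py
    th = Rth + Pth
    return x, y, th
```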
With this method, the point cloud frame acquired by the lidar can be corrected without connecting any other equipment, a robust position prediction function is available, and the method works both while the map is being built and after it has been built.
In step S2, the registration method is robust and is capable of resisting the long corridor effect, and specifically includes the following steps:
s21, receiving point cloud frames P1 and P2 and expected pose changes of the point cloud frames P1 relative to the point cloud frames P2, performing ICP registration on the point cloud frames P1 and P2, if ICP registration is successful, obtaining preliminary pose changes of the point cloud frames P1 relative to the point cloud frames P2, and then executing a second registration method, wherein parameters are the point cloud frames P1 and P2 and the preliminary pose changes; otherwise, outputting the predicted pose change;
the ICP method is an iterative closest point pair matching method and is used for calculating the pose changes of two point clouds, and the precision of the original pose changes can be improved. The ICP method is prior art and will not be described in detail herein. When ICP registration fails, the output pose changes to the received pose changes, namely the pose change precision is unchanged. When the registration in step S21 is successful, the result is that the pose of the mobile device changes approximately, which is not accurate enough, and the second registration method provided by the invention is needed to adjust the pose change accuracy again. For example, the range error of the initial value is 1m, and the angle error is 10 degrees; if ICP registration correction is successful, the angle error is 0.5 degree, the range error may be reduced to 10cm, or may not be reduced; and then through a second registration method, the range error can be reduced to 1cm. The adjusted pose change information is accurate, and the method can also be directly used for laser radar point cloud correction in the step S1 without other equipment.
The second registration method is a displacement detection method based on hash transformation and displacement matching of laser point cloud frames. Given the angle between two point clouds, it converts the point clouds of each laser point cloud frame into a feature map of a specified size through the displacement relation invariance transformation, and applies phase correlation analysis to the feature maps to judge the displacement, yielding a robust displacement result that resists the long corridor effect.
As shown in fig. 2, the specific steps of the second registration method are as follows:
s22 receives the predicted pose relationship dx, dy, dth between the two point clouds Pc 1. N and Pt 1. M, pc and Pt, and the preset maximum displacement range w and confidence threshold T for detection.
The value of w is set from the acceleration of the mobile device: power limits prevent it from accelerating arbitrarily fast, so there is a maximum displacement per frame, and w can be chosen according to the actual situation.
The larger the confidence threshold, the more easily registration fails but the less likely misregistration becomes; the smaller the threshold, the more easily registration succeeds but misregistration may occur. It can be set according to the actual situation.
S23, taking a plurality of values from w to w*2, denoted w[1..M], M values in total; for each w[m], performing the laser point cloud displacement invariance transformation with w[m] as parameter, recording the results as px[1..M], py[1..M], and conf[1..M], specifically:
a) Taking w[m] as the hash coefficient, performing the laser point cloud displacement relation invariance transformation on Pt to obtain a feature map Ia[i,j] of size sz.
b) Performing the following pose transformation on each point of Pc to obtain a point cloud Pc':
Pc’[n].x=Pc[n].x*cos(dth)-Pc[n].y*sin(dth)+dx
Pc’[n].y=Pc[n].x*sin(dth)+Pc[n].y*cos(dth)+dy
c) Performing the laser point cloud displacement invariance transformation on Pc' to obtain a feature map Ib[i,j] of size sz.
d) Performing phase correlation analysis on the feature maps Ia[i,j] and Ib[i,j], taking w[m] as the hash coefficient, to obtain the phase difference (px, py); phase correlation is prior art, and the steps are as follows:
i. performing a two-dimensional Fourier transform on Ia[i,j] to obtain Fa[u,v];
ii. performing a two-dimensional Fourier transform on Ib[i,j] to obtain Fb[u,v];
iii. performing phase correlation:
Ph[u,v]=Fa[u,v]*conj(Fb[u,v])/(|Fa[u,v]|*|Fb[u,v]|)
where conj(·) denotes the complex conjugate;
iv. performing a two-dimensional inverse Fourier transform on Ph[u,v] to obtain Mag[i,j];
v. taking the pair (i, j) at which Mag[i,j] is maximal and performing bilinear interpolation to improve the phase detection precision:
x=-Mag[i-1,j-1]-Mag[i-1,j]-Mag[i-1,j+1]+Mag[i+1,j-1]+Mag[i+1,j]+Mag[i+1,j+1]
y=-Mag[i-1,j-1]-Mag[i,j-1]-Mag[i+1,j-1]+Mag[i-1,j+1]+Mag[i,j+1]+Mag[i+1,j+1]
then computing the confidence of the current phase correlation, c=(Max(Mag)-Mean(Mag))/Std(Mag), where Max denotes the maximum, Mean the average, and Std the standard deviation.
e) Collecting the results: px[m]=x*w[m]/sz, py[m]=y*w[m]/sz, conf[m]=c.
S24, taking the indices of the two largest values in conf, denoted m1 and m2. If conf[m1] < T or conf[m2] < T, the displacement cannot be detected, registration fails, and the method ends; this indicates the two point clouds may have no common part, or there may be a problem with the point cloud angle. If conf[m1] >= T and conf[m2] >= T, displacement detection succeeds, the phase difference is converted into a displacement, and the result is output. Preferably, the conversion method is as follows:
a) The mode of the four values px[m1], px[m1]-w[m1], px[m2], px[m2]-w[m2] is taken as the x-direction displacement between Pt and Pc.
b) The mode of the four values py[m1], py[m1]-w[m1], py[m2], py[m2]-w[m2] is taken as the y-direction displacement between Pt and Pc.
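To make steps i-v concrete, the following Python sketch (using NumPy) implements the phase correlation and confidence computation described above. The small epsilon guards against division by zero and the wrap-around indexing in the 3x3 sums are assumptions of this sketch, since border handling is not specified.

```python
import numpy as np

def phase_correlate(Ia, Ib):
    """Steps i-iv: FFT both feature maps, normalize the cross-power
    spectrum, invert, and read off the peak and the confidence
    c = (max - mean) / std of the correlation surface."""
    Fa, Fb = np.fft.fft2(Ia), np.fft.fft2(Ib)
    Ph = Fa * np.conj(Fb) / (np.abs(Fa) * np.abs(Fb) + 1e-12)
    Mag = np.real(np.fft.ifft2(Ph))
    i, j = np.unravel_index(np.argmax(Mag), Mag.shape)
    conf = (Mag.max() - Mag.mean()) / (Mag.std() + 1e-12)
    return int(i), int(j), Mag, conf

def peak_sums(Mag, i, j):
    """Step v's 3x3 neighbourhood sums around the peak; wrap-around
    indexing at the borders is an assumption of this sketch."""
    n, m = Mag.shape
    g = lambda a, b: Mag[a % n, b % m]
    x = (-g(i-1, j-1) - g(i-1, j) - g(i-1, j+1)
         + g(i+1, j-1) + g(i+1, j) + g(i+1, j+1))
    y = (-g(i-1, j-1) - g(i, j-1) - g(i+1, j-1)
         + g(i-1, j+1) + g(i, j+1) + g(i+1, j+1))
    return x, y
```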
In step S23, the displacement relation invariance transformation of the laser point cloud processes the point cloud with a hash method and outputs a feature map of a specified size. The generated feature map supports displacement calculation by the phase correlation method; under this transformation, the displacement relation of the point cloud is consistent with the displacement relation of the feature map, so the method has good noise immunity and can overcome the long corridor effect. The specific processing is as follows:
(1) Receiving a point cloud frame P[1..N], the feature map side length sz, the hash coefficient w, and the feature adjustment coefficient s. An all-black square grayscale image Im[i,j] with side length sz is created. Setting sz, w, and s is prior art and can be done according to the actual situation; for example, sz can be set to 16 or 32 to speed up the Fourier transform.
(2) For each point P[n] in the point cloud P, the following operations are performed:
a) The position of P[n] is hashed with the translation relation invariant hash with coefficient w, for example a modulo operation: x'=mod(P[n].x, w), y'=mod(P[n].y, w), giving the point (x', y') on the feature map;
b) At position (x', y') on the feature map, a circle, for example of 1 pixel, is drawn using bilinear interpolation. If the drawn circle exceeds the range of the feature map, the overflow content is wrapped back by sz pixels and still drawn on the feature map (i.e., content that overflows on the left of the map is drawn on the right).
(3) The pixel values of the feature map are raised to a power; preferably the exponent is the feature adjustment coefficient s, i.e. Im[i,j]=Im[i,j]^s.
(4) Histogram equalization is applied to the feature map, and the final feature map Im is output.
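A compact Python rendering of this transformation follows. The stretching of the [0, w) hash cell over the sz pixels of the map is an assumption of the sketch (the text leaves the scaling implicit), as is the rank-based histogram equalization.

```python
import numpy as np

def displacement_invariant_map(points, sz, w, s):
    """Hash each point modulo w, splat it onto an sz x sz map with
    bilinear weights and wrap-around, then apply the power s and
    histogram equalization."""
    Im = np.zeros((sz, sz))
    scale = sz / w                    # assumed: stretch [0, w) over sz pixels
    for px, py in points:
        x, y = (px % w) * scale, (py % w) * scale
        j0, i0 = int(x), int(y)
        fx, fy = x - j0, y - i0
        for di, wy in ((0, 1.0 - fy), (1, fy)):      # ~1-pixel bilinear "circle",
            for dj, wx in ((0, 1.0 - fx), (1, fx)):  # overflow wrapped by sz
                Im[(i0 + di) % sz, (j0 + dj) % sz] += wy * wx
    Im = Im ** s                                     # feature adjustment power
    ranks = np.argsort(np.argsort(Im, axis=None))    # histogram equalization
    return ranks.reshape(sz, sz) / max(ranks.max(), 1)
```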
As shown in fig. 3, the mobile device may be a laser cart, which has a laser radar thereon, and is capable of scanning the surrounding environment to generate a point cloud frame (i.e., a point cloud comprising a plurality of points in the frame).
In one embodiment, as shown in fig. 4, step S1 includes:
s11, receiving a current point cloud frame acquired by a laser radar on a mobile device, and registering the current point cloud frame and the previous point cloud frame through first pose change of the previous point cloud frame, wherein the registration result is the first pose change of the current point cloud frame; the registration method used here is the same as the registration method in step S2 (i.e., registration by ICP registration and the second registration method);
And S12, correcting the current point cloud frame according to the first pose change of the current point cloud frame, and generating a corrected point cloud frame.
Because the radar obtains point cloud data by rotating while scanning the surrounding environment, the point cloud acquired while the radar moves is distorted (for example, a right-angled wall appears obtuse or acute), so the point cloud cannot be matched to the map. The current point cloud frame therefore needs correcting, which makes registration more accurate and improves positioning stability. The correction method is the laser point cloud correction method; it assumes that within one lidar scan revolution the displacement of the mobile device is approximately linear and its rotation approximately uniform, so the correction can be solved using only one frame's reasonably accurate pose change.
The specific correction method comprises the following steps:
S121, receiving the point cloud frame P and the pose change delta. The point cloud frame P contains the distances measured by the lidar over N angles, continuously from St degrees to Ed degrees (St and Ed are radar parameters giving the scan angle range). P[n] is the nth point of the point cloud; P[n].x, P[n].y, and P[n].th are the x coordinate, y coordinate, and angle of the point relative to the radar coordinate system. The pose change delta is the radar's pose change while the frame was captured; delta.x, delta.y, and delta.th are the changes of the x coordinate, the y coordinate, and the angle, respectively.
S122, for each point P[n], correcting it to the position P'[n] relative to the lidar at the moment the lidar scanned that point, and then outputting the point cloud P'[1..N]:
a) Calculating the proportional coefficient p=(Ed-P[n].th)/Pi, where Pi is the circumference ratio;
b) Calculating the angle correction amount: deltaTh=p*delta.th;
c) Calculating corrected point positions:
P’[n].x=P[n].x*cos(deltaTh)-P[n].y*sin(deltaTh)+p*delta.x;
P’[n].y=P[n].x*sin(deltaTh)+P[n].y*cos(deltaTh)+p*delta.y
p is a proportional adjustment coefficient, deltaTh is an angle correction amount, and Pi is a circumference ratio;
the point clouds P' [1 to N ] are corrected point clouds, and the point clouds form a correction frame.
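For illustration, the correction loop might look like the following Python sketch; representing points as (x, y, th) tuples and delta as (dx, dy, dth) is an assumption of the sketch (St is not needed by the formulas themselves).

```python
import math

def correct_scan(points, Ed, delta):
    """Undistort one scan revolution: each point measured at angle th
    receives the fraction p of the frame's pose change delta, per the
    formulas of S122 (rotated y uses x*sin, as corrected above)."""
    dx, dy, dth = delta
    corrected = []
    for x, y, th in points:                  # (P[n].x, P[n].y, P[n].th)
        p = (Ed - th) / math.pi              # proportional coefficient
        a = p * dth                          # angle correction deltaTh
        corrected.append((x * math.cos(a) - y * math.sin(a) + p * dx,
                          x * math.sin(a) + y * math.cos(a) + p * dy))
    return corrected
```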
In this embodiment, in step S2, the result of the registration is the second pose change of the current point cloud frame, and the registration's pose change parameter D' is calculated as D' = P_prev + V, where P_prev is the pose change of the previous point cloud frame and V is the first pose change of the current point cloud frame.
In this embodiment, when the mobile device starts moving, the data must be initialized. After initialization, the mobile device obtains its positioning information on the map by cyclically executing steps S1-S4. For the first execution of S1-S4, the initialization information is as follows:
1> The first pose change of frame 0 is (0, 0, 0); i.e., at the initial time there is no previous point cloud frame, so the pose change between the original point cloud and the previous frame's point cloud is 0.
2> The second pose change of frame 0 is (0, 0, 0), indicating that the cart's initial position is (0, 0).
3> coordinates Rx, ry, rth of the 0 th reference point cloud frame are specified values;
4> reference frame generation distance is a predetermined value, for example 5 meters.
A pose change comprises displacement and rotation; the three components of the triple (0, 0, 0) represent displacement or speed in the x direction, displacement or speed in the y direction, and rotation or rotation speed, respectively.
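For concreteness, the initial state might be held as follows; plain Python triples and zeroed reference coordinates are illustrative choices (the text only says the 0th reference frame's coordinates are specified values).

```python
# Initial state before the first S1-S4 cycle; the 5 m reference frame
# generation distance is the example value given in the text.
initial_state = {
    "first_pose_change":  (0.0, 0.0, 0.0),  # frame 0 has no previous frame
    "second_pose_change": (0.0, 0.0, 0.0),  # cart starts at (0, 0)
    "ref_pose":           (0.0, 0.0, 0.0),  # Rx, Ry, Rth; specified values
    "ref_gen_distance":   5.0,              # metres
}
```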
In step S3, because the trolley is moving, the previous reference point cloud frame does not necessarily contain every point of the environment scannable in the next scan cycle, so it must be corrected using the corrected point cloud frame. This makes registration more accurate and improves positioning stability. The method specifically comprises the following steps:
if the step S2 is successful in registration, correcting the previous reference point cloud frame to be the current reference point cloud frame; otherwise, the corrected point cloud frame is used as the current reference point cloud frame; the method for correcting the previous reference point cloud frame to be the current reference point cloud frame comprises the following steps:
S31, if a point exists in the corrected point cloud frame that does not exist in the previous reference point cloud frame, the point is added to the previous reference point cloud frame;
S32, if the corrected point cloud frame and the previous reference point cloud frame have corresponding points, correction is carried out; preferably, the average of the corresponding points' positions is computed, and the positions of the corresponding points in the previous reference point cloud frame are corrected to that average;
S33, if a point in the previous reference point cloud frame does not correspond to a point in any corrected point cloud frame within the set period, the point is deleted from the previous reference point cloud frame.
The previous reference point cloud frame corrected by S31-S33 is the current reference point cloud frame and is used as the previous reference point cloud frame in the next scanning cycle calculation.
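A brute-force sketch of the bookkeeping in S31-S33 follows; the correspondence radius, the expiry age, and the dict-of-ages representation are illustrative assumptions, as the text does not specify how corresponding points are found.

```python
import math

def update_reference_frame(ref_ages, corrected, radius=0.05, max_age=10):
    """ref_ages maps (x, y) -> cycles since last match. Adds new points
    (S31), averages matched pairs (S32), expires stale points (S33)."""
    new_ref, matched = {}, set()
    for c in corrected:
        best, best_d = None, radius
        for r in ref_ages:                       # brute-force nearest neighbour
            d = math.hypot(r[0] - c[0], r[1] - c[1])
            if d < best_d:
                best, best_d = r, d
        if best is None:
            new_ref[c] = 0                       # S31: add a new point
        else:
            avg = ((best[0] + c[0]) / 2, (best[1] + c[1]) / 2)
            new_ref[avg] = 0                     # S32: correct to the average
            matched.add(best)
    for r, age in ref_ages.items():              # S33: expire stale points
        if r not in matched and age + 1 < max_age:
            new_ref.setdefault(r, age + 1)
    return new_ref
```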
In the process of correcting the point cloud frame, the change of the reference point cloud frame is shown in fig. 6.
In one embodiment, as shown in fig. 5, the positioning method further includes: S5, if the abscissa or ordinate of the current reference point cloud frame is larger than the reference frame generation distance (a settable value, for example 5 meters), or if the number of points in the current reference point cloud frame is larger than a preset first threshold, comparing the current reference point cloud frame with the map to generate a new map.
If x or y exceeds the reference frame generation distance, the reference point cloud frame is probably far from the mobile device's current location; if the number of points in the current reference point cloud frame reaches a certain threshold, the map at that position very likely needs supplementing. In both cases the current reference point cloud frame is compared with the map to generate a new map. The map is made up of map frames.
The comparison with the map performs closed-loop detection of the current reference point cloud frame on the map, eliminating the positioning error of the mobile device that grows as it moves.
Closed-loop detection amounts to determining where on the map the current environment lies. Without comparison against the map, the current position is computed from the previous position, which is equivalent to inertial navigation, with an error proportional to elapsed time and distance traveled. With closed-loop detection this error can be eliminated. The specific method is as follows:
S51, matching the current reference point cloud frame against the several map frames closest to its position, in order of increasing distance. Matching checks the position of the current point cloud frame on the map; mapping and localization resemble a jigsaw puzzle, where the current reference point cloud frame is one piece and map matching determines where that piece belongs. Distance ordering is used because the piece's approximate location is known at the start, so that location is tried first.
S52, if the matching is unsuccessful, the position of the current reference point cloud frame is unchanged; if the matching is successful, the reference point cloud frame coordinates (Rx, Ry, Rth) are updated as follows:
Rx=x2+cos(th2)*dx-sin(th2)*dy
Ry=y2+sin(th2)*dx+cos(th2)*dy
Rth=th2+dth
wherein x2, y2, and th2 are the coordinates of the matched map frame, and dx, dy, and dth are the pose change of the current reference point cloud frame relative to the map frame as output by the matching method; with this data, it is as if one knows exactly how the puzzle piece fits.
At this time, the updated current reference point cloud frame is copied as a new map frame, and the new map frame is added to the map, thereby generating a new map.
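The S51-S53 flow can be sketched as follows; the MapFrame container and the match callback interface are assumptions made for illustration, standing in for the blind matcher described next.

```python
import math
from collections import namedtuple

MapFrame = namedtuple("MapFrame", "pose points")   # pose = (x, y, th)

def closed_loop_update(ref_points, ref_pose, map_frames, match):
    """Try the nearest map frames first (S51); on the first successful
    match, re-anchor the reference pose (S52) and append a copy of the
    frame to the map (S53). match(a, b) is assumed to return the pose
    change (dx, dy, dth) of a relative to b, or None on failure."""
    by_distance = sorted(map_frames, key=lambda mf: math.hypot(
        mf.pose[0] - ref_pose[0], mf.pose[1] - ref_pose[1]))
    for mf in by_distance:
        rel = match(ref_points, mf.points)
        if rel is None:
            continue                               # position unchanged so far
        (x2, y2, th2), (dx, dy, dth) = mf.pose, rel
        ref_pose = (x2 + math.cos(th2) * dx - math.sin(th2) * dy,
                    y2 + math.sin(th2) * dx + math.cos(th2) * dy,
                    th2 + dth)
        map_frames.append(MapFrame(ref_pose, list(ref_points)))
        break
    return ref_pose, map_frames
```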
In step S51, the matching method is a point cloud matching method based on the laser point cloud transformations, used for blind matching when the angle and displacement between two point clouds are unknown. It calculates the rotation and displacement of two point clouds at very low computational cost, with good accuracy and high stability, and solves the closed-loop and initial positioning problems of laser localization on a map. First, the rotation relation invariance transformation yields the included angle between the two point clouds; then the pose change calculation method above recovers the translation of the rotated point cloud; finally, the result is optimized with the ICP method to obtain the matching result. The specific steps are as follows:
(1) Receiving two point clouds P3[1..N] and P4[1..M], the maximum matching displacement w, and the angular resolution nth; performing the laser point cloud rotation relation invariance transformation on P3, with w as the hash coefficient and nth as the number of angles, to obtain a feature map Ic; performing the same transformation on P4 to obtain a feature map Id;
(2) Performing phase correlation analysis on the feature maps Ic[i,j] and Id[i,j] in the vertical direction, taking w as the hash coefficient, to obtain the phase difference (px, py), the phase correlation reliability, and the rotation angle difference dth; the steps are as follows:
1) Performing two-dimensional Fourier transform on the feature map Ic [ i, j ] to obtain Fc [ u, v ];
2) Performing two-dimensional Fourier transform on the feature map Id [ i, j ] to obtain Fd [ u, v ];
3) Performing phase correlation operation: ph [ u, v ] =fc [ u, v ]. Conj (Fd [ u, v ])/(i Fc [ u, v ] |fd [ u, v ] |);
4) Performing a two-dimensional inverse Fourier transform on Ph[u,v] to obtain Mag[i,j], and taking its first column to obtain M[1..nth].
5) Taking the index i at which M[i] is maximal and performing parabolic interpolation to improve the angle detection precision:
dth=(M[i+1]-M[i-1])/2/(M[i+1]+M[i-1]-2*M[i])
6) Computing the confidence of the current phase correlation, c=(Max(M)-Mean(M))/Std(M), where Max denotes the maximum, Mean the average, and Std the standard deviation.
(3) If the phase correlation reliability is smaller than the preset second threshold, matching fails; otherwise, continuing with the following steps;
(4) Generating the pose change between the point clouds P3 and P4 through the second registration method, using the pose change (0, 0, dth); if registration fails, generating the pose change between P3 and P4 through the second registration method using the pose change (0, 0, dth+Pi), where Pi is the circumference ratio; if registration still fails, matching fails and the method ends;
(5) If registration succeeds, performing ICP registration on the point clouds P3 and P4 using the obtained pose change, and outputting the pose change of point cloud frame P3 relative to point cloud frame P4.
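A Python sketch of the vertical phase correlation in step (2), including the parabolic refinement of step 5, follows. Converting the refined row index to degrees via 180/nth per row is an assumption consistent with the transform's construction; the epsilon guards are likewise assumptions.

```python
import numpy as np

def rotation_difference(Ic, Id):
    """Phase-correlate two rotation-invariant feature maps along the
    vertical (angle) axis and return (dth in degrees, confidence)."""
    Ph = np.fft.fft2(Ic) * np.conj(np.fft.fft2(Id))
    Ph /= np.abs(Ph) + 1e-12
    M = np.real(np.fft.ifft2(Ph))[:, 0]          # first column: angle axis
    nth = M.shape[0]
    i = int(np.argmax(M))
    up, down = M[(i + 1) % nth], M[(i - 1) % nth]
    denom = up + down - 2.0 * M[i]
    frac = (up - down) / (2.0 * denom) if denom != 0 else 0.0
    conf = (M.max() - M.mean()) / (M.std() + 1e-12)
    dth_deg = ((i + frac) % nth) * 180.0 / nth   # one row = 180/nth degrees
    return dth_deg, conf
```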
The laser point cloud rotation relation invariance transformation of step (1) processes the point cloud with a hash method. The result is a feature map of a specified size; under this transformation, a rotation of the original point cloud maps to a vertical translation of the feature map. The specific steps are as follows:
(1) Receiving a point cloud P[1..N], the number of angles nth, the feature length per angle sz, the hash coefficient w, and the feature adjustment coefficient s.
(2) Creating an all-black grayscale feature map Im[i,j] of height nth and width sz;
(3) Dividing 180 degrees into nth equal parts to obtain an angle set, namely 180/nth, 2*180/nth, and so on. For each angle in the angle set and each point P[n] in P, the following is done:
1) Denoting the current angle th', rotating P[n] around the origin by th', and calculating the rotated abscissa:
X=cos(th’)*P[n].x-sin(th’)*P[n].y
2) Performing a hash operation on X with w as the coefficient, e.g. the modulo operation h=mod(X, w);
3) Drawing a rectangle, e.g. 1 pixel high and 1 pixel wide, at position (h, th'/180*nth) of the feature map Im. If the drawn rectangle exceeds the range of the feature map, the overflow content is wrapped back by sz pixels and still drawn on the feature map (e.g., content that overflows on the left of the map is drawn on the right);
(4) Raising each pixel value to a power, i.e. Im[i,j]=Im[i,j]^s;
(5) The feature map Im is output.
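A sketch of this transformation in Python follows; the floor-and-scale hashing of the [0, w) cell onto sz pixels mirrors the scaling assumed for the displacement transform and is likewise an assumption of the sketch.

```python
import numpy as np

def rotation_invariant_map(points, nth, sz, w, s):
    """Row k holds the hashed x-projections of the cloud rotated by
    k*180/nth degrees, so a rotation of the cloud becomes a vertical
    shift of the feature map."""
    Im = np.zeros((nth, sz))
    pts = np.asarray(points, dtype=float)            # shape (N, 2): x, y
    for k in range(nth):
        th = np.deg2rad(k * 180.0 / nth)
        X = np.cos(th) * pts[:, 0] - np.sin(th) * pts[:, 1]
        h = np.floor((X % w) * sz / w).astype(int) % sz  # hash, scale to pixels
        np.add.at(Im[k], h, 1.0)                     # 1-pixel "rectangles";
        # overflow wrap-around is automatic thanks to the final % sz
    return Im ** s                                   # feature adjustment power
```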
In this method, the point cloud is converted into a feature map whose translation or rotation relation is invariant, and the pose transformation of the original point cloud is then solved in reverse from the translation or rotation on the feature map. This yields stable and accurate point cloud matching and effectively solves the long corridor problem, the loop detection problem, and the initial positioning problem. The laser positioning method provided by the invention has high robustness, good environmental adaptability, and good anti-interference capability.
Those of ordinary skill in the art will appreciate that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions, which are defined by the scope of the appended claims.

Claims (12)

1. A positioning method based on point cloud transformation matching, the method comprising:
S1, receiving a current point cloud frame acquired by a lidar on a mobile device, and correcting the current point cloud frame to generate a corrected point cloud frame;
S2, registering the corrected point cloud frame with the previous reference point cloud frame;
S3, correcting the current reference point cloud frame;
S4, calculating and outputting the position information of the mobile device on the map according to the coordinates and the rotation angle of the current reference point cloud frame;
in step S2, the method of registration comprises the steps of:
S21, receiving the point cloud frame P1 and the point cloud frame P2 and the predicted pose change of the point cloud frame P1 relative to the point cloud frame P2, performing ICP registration on the point cloud frame P1 and the point cloud frame P2; if ICP registration succeeds, acquiring the preliminary pose change of the point cloud frame P1 relative to the point cloud frame P2 and then executing the second registration method; otherwise, outputting the predicted pose change;
the second registration method includes:
S22, receiving a point cloud frame Pc, a point cloud frame Pt, a pose change, a preset maximum detected displacement range w, a reliability threshold T, and a preset feature map size sz;
S23, taking a plurality of values from w to w*2, denoted w[1..M], M values in total; for each w[m], the following steps are performed:
a) Taking w[m] as the hash coefficient, performing the laser point cloud displacement relation invariance transformation on Pt to obtain a feature map Ia of size sz;
b) Performing the following pose transformation on each point of Pc to obtain a point cloud frame Pc':
Pc’[n].x=Pc[n].x*cos(dth)-Pc[n].y*sin(dth)+dx
Pc’[n].y=Pc[n].x*sin(dth)+Pc[n].y*cos(dth)+dy
c) Taking w m as hash coefficient, carrying out laser point cloud frame displacement invariance transformation on Pc' to obtain a feature map Ib with sz size;
d) Performing phase correlation analysis on the feature images Ia and Ib, and obtaining phase difference and phase correlation credibility by taking w [ m ] as a hash coefficient;
s24, if one of the two maximum phase correlation reliability values is smaller than a reliability threshold T, the registration fails, and a primary pose change is output; if both values are greater than or equal to the confidence threshold T, the phase difference is converted into a displacement and the displacement is output.
2. The positioning method according to claim 1, wherein the displacement relation invariance transformation of the laser point cloud frame comprises the steps of:
(1) Receiving a point cloud frame, a feature map side length sz, a hash coefficient w and a feature adjustment coefficient;
(2) Generating an all-black feature map with side length sz;
(3) For each point in the point cloud frame, performing the following operations:
a) Performing a translation relation invariant hash operation with hash coefficient w on the position of the point, obtaining a point (x', y') on the feature map;
b) Drawing a circle at position (x', y') of the feature map using bilinear interpolation; if the drawn circle exceeds the range of the feature map, folding the overflowing content back by sz pixels and still drawing it on the feature map;
(4) Performing exponentiation and histogram equalization on the pixel values of the feature map and then outputting the feature map; preferably, the exponent is the feature adjustment coefficient.
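A minimal sketch of this transformation, assuming (since the claim leaves the hash operation open) that the translation relation invariant hash scales a metric coordinate by w and wraps it modulo sz, and approximating the drawn circle by a single bilinear splat:

    import numpy as np

    def displacement_invariant_map(P, sz, w, s):
        # P: (N, 2) point coordinates; sz: feature map side length;
        # w: hash coefficient; s: feature adjustment coefficient.
        Im = np.zeros((sz, sz))                    # step (2): all black
        u = (P[:, 0] * w) % sz                     # step a): hashed coords
        v = (P[:, 1] * w) % sz
        for ui, vi in zip(u, v):                   # step b): bilinear splat
            i0, j0 = int(ui), int(vi)
            fi, fj = ui - i0, vi - j0
            i1, j1 = (i0 + 1) % sz, (j0 + 1) % sz  # fold overflow back
            Im[i0, j0] += (1 - fi) * (1 - fj)
            Im[i1, j0] += fi * (1 - fj)
            Im[i0, j1] += (1 - fi) * fj
            Im[i1, j1] += fi * fj
        Im = Im ** s                               # step (4): exponentiation
        hist, edges = np.histogram(Im, bins=256)   # histogram equalization
        cdf = hist.cumsum() / hist.sum()
        return np.interp(Im.ravel(), edges[:-1], cdf).reshape(Im.shape)

A translation of the input point cloud then shifts the feature map cyclically without changing its content, which is what makes the subsequent phase correlation applicable.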
3. The positioning method according to claim 1, wherein the step S1 includes:
s11, receiving a current point cloud frame acquired by a laser radar on a mobile device, and registering the current point cloud frame and the previous point cloud frame through first pose change of the previous point cloud frame, wherein the registration result is the first pose change of the current point cloud frame;
and S12, correcting the current point cloud frame according to the first pose change of the current point cloud frame, and generating a corrected point cloud frame.
4. The positioning method according to claim 3, wherein the correction method in step S12 comprises:
S121, receiving a point cloud frame P and a pose change delta, wherein the point cloud frame P contains distances measured by the lidar continuously over multiple angles from angle St to angle Ed;
s122, acquiring a corrected point cloud frame P' according to the following calculation formula:
p = (Ed - P[n].th) / Pi
deltaTh = p * delta.th
P'[n].x = P[n].x*cos(deltaTh) - P[n].y*sin(deltaTh) + p*delta.x
P'[n].y = P[n].x*sin(deltaTh) + P[n].y*cos(deltaTh) + p*delta.y
wherein p is a proportional adjustment coefficient, deltaTh is the angle correction amount, Pi is the circumference ratio, P[n] is the nth point of the point cloud frame P, P[n].x, P[n].y and P[n].th respectively denote the x coordinate, the y coordinate and the angle of the point relative to the radar coordinate system, and delta.x, delta.y and delta.th are the pose changes on the x and y axes and the angle change in the map coordinate system.
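A direct Python/NumPy transcription of these formulas, representing P as an (N, 3) array of (x, y, th) per point and delta as a tuple (dx, dy, dth):

    import numpy as np

    def deskew(P, delta, Ed):
        # Each point measured at angle th receives the fraction
        # p = (Ed - th) / Pi of the frame's pose change delta.
        dx, dy, dth = delta
        p = (Ed - P[:, 2]) / np.pi     # proportional adjustment coefficient
        a = p * dth                    # per-point angle correction deltaTh
        x = P[:, 0] * np.cos(a) - P[:, 1] * np.sin(a) + p * dx
        y = P[:, 0] * np.sin(a) + P[:, 1] * np.cos(a) + p * dy
        return np.stack([x, y, P[:, 2]], axis=1)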
5. The positioning method according to claim 4, wherein in step S2 the result of the registration is the second pose change of the current point cloud frame, and the pose change D' used in the registration is calculated as D' = P_front + V, where P_front is the second pose change of the previous point cloud frame and V is the first pose change of the current point cloud frame.
6. The positioning method according to claim 1, wherein in step S3, if the registration in step S2 succeeds, the previous reference point cloud frame is corrected to become the current reference point cloud frame; otherwise, the corrected point cloud frame is used as the current reference point cloud frame; the method of correcting the previous reference point cloud frame into the current reference point cloud frame comprises:
S31, if the corrected point cloud frame contains a point that does not exist in the previous reference point cloud frame, adding the point to the previous reference point cloud frame;
S32, if the corrected point cloud frame and the previous reference point cloud frame contain corresponding points, performing correction;
S33, if a point in the previous reference point cloud frame has not corresponded to a point in any corrected point cloud frame within a set period, deleting the point from the previous reference point cloud frame.
7. The positioning method according to claim 6, wherein in step S32, an average value is calculated for the positions of the corresponding points, and the position of the corresponding point in the previous reference point cloud frame is corrected to the average value.
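For illustration, a schematic Python sketch of the update in claims 6 and 7; the nearest-neighbour correspondence test with threshold eps and the age counter max_age are assumptions, since the claims do not specify how corresponding points are found or how the set period is measured:

    import numpy as np

    def update_reference(ref, corrected, eps=0.05, max_age=10):
        # ref: list of dicts {'pt': np.array([x, y]), 'age': int};
        # corrected: (N, 2) array of points of the corrected frame.
        matched = np.zeros(len(corrected), dtype=bool)
        for r in ref:
            d = np.linalg.norm(corrected - r['pt'], axis=1)
            j = int(np.argmin(d)) if d.size else -1
            if j >= 0 and d[j] < eps:
                # S32 / claim 7: correct the position to the pair's mean
                r['pt'] = (r['pt'] + corrected[j]) / 2.0
                r['age'] = 0
                matched[j] = True
            else:
                r['age'] += 1                 # not seen in this frame
        # S31: points with no counterpart in ref are added
        ref += [{'pt': p.copy(), 'age': 0} for p in corrected[~matched]]
        # S33: drop points unmatched for longer than the set period
        return [r for r in ref if r['age'] <= max_age]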
8. The positioning method according to claim 1, characterized in that the positioning method further comprises:
s5, if the abscissa or the ordinate of the current reference point cloud frame is larger than the reference frame generation distance or the number of point clouds in the current reference point cloud frame is larger than a preset first threshold value, comparing the current reference point cloud frame with the map to generate a new map.
9. The positioning method according to claim 8, wherein the comparison method comprises:
S51, matching the current reference point cloud frame with a plurality of map frames closest to it, in order of increasing distance;
S52, if the matching succeeds, updating the coordinates of the current reference point cloud frame with the following formulas:
Rx=x2+cos(th2)*dx-sin(th2)*dy
Ry=y2+sin(th2)*dx+cos(th2)*dy
Rth=th2+dth
wherein Rx, Ry and Rth are the updated coordinates of the current reference point cloud frame, x2, y2 and th2 are the coordinates of the matched map frame, and dx, dy and dth are the pose change of the current reference point cloud frame relative to the map frame, as output by the matching method;
S53, copying the updated current reference point cloud frame as a new map frame and adding it to the map, thereby generating a new map.
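The update of S52 composes the matched map frame pose with the relative pose change in the plane; a direct transcription:

    import numpy as np

    def update_coords(x2, y2, th2, dx, dy, dth):
        # Rotate the relative displacement (dx, dy) into the orientation
        # th2 of the matched map frame and add the angular offset.
        Rx = x2 + np.cos(th2) * dx - np.sin(th2) * dy
        Ry = y2 + np.sin(th2) * dx + np.cos(th2) * dy
        return Rx, Ry, th2 + dth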
10. The positioning method according to claim 9, wherein in step S51, the matching method comprises:
(1) Receiving two point clouds P3 and P4, a matched maximum displacement w and an angle resolution nth; performing the rotation relation invariance transformation on the point cloud P3 with w as the hash coefficient and nth as the number of angles to obtain a feature map Ic; performing the rotation relation invariance transformation on the point cloud P4 with w as the hash coefficient and nth as the number of angles to obtain a feature map Id;
(2) Performing phase correlation analysis on the feature maps Ic and Id in the vertical direction with w as the hash coefficient, obtaining a phase difference, a phase correlation reliability and a rotation angle difference;
(3) If the phase correlation reliability is smaller than a preset second threshold, the matching fails; otherwise, continuing with the following steps;
(4) Generating the pose change between the point clouds P3 and P4 through the second registration method using the pose change (0, 0, rotation angle difference); if that registration fails, generating the pose change between the point clouds P3 and P4 through the second registration method using the pose change (0, 0, rotation angle difference + Pi), where Pi is the circumference ratio; if the registration still fails, the matching fails;
(5) If the registration succeeds, performing ICP registration on the point clouds P3 and P4 using the pose change between them, and outputting the pose change of the point cloud frame P3 relative to the point cloud frame P4.
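For illustration, a Python/NumPy sketch of the vertical phase correlation of step (2): a rotation of the input point cloud appears as a vertical (row) shift between the two rotation-invariant feature maps, so a 1-D phase correlation over the row axis recovers the rotation angle. Pooling the cross-power spectrum over columns and using the peak height as the reliability are assumptions, since the claim does not fix those details:

    import numpy as np

    def vertical_phase_correlation(Ic, Id):
        # Ic, Id: (nth, sz) rotation-invariant feature maps.
        nth = Ic.shape[0]
        a = np.fft.fft(Ic, axis=0)
        b = np.fft.fft(Id, axis=0)
        cross = (a * np.conj(b)).sum(axis=1)   # pool over columns
        cross /= np.abs(cross) + 1e-12
        corr = np.abs(np.fft.ifft(cross))
        k = int(np.argmax(corr))               # row shift in angle bins
        dth = k * np.pi / nth                  # nth bins span 180 degrees
        return dth, corr[k]

Steps (3) to (5) then gate on this reliability, try the second registration with the pose change (0, 0, dth) and, on failure, with (0, 0, dth + Pi), and finally refine the surviving estimate with ICP.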
11. The positioning method according to claim 10, wherein the rotation relation invariance transformation of the laser point cloud comprises:
(1) Receiving a point cloud frame P, a number of angles nth, a feature length sz per angle, a hash coefficient w and a feature adjustment coefficient;
(2) Creating an all-black feature map with height nth and width sz;
(3) Dividing 180 degrees into nth equal parts to obtain an angle set, and computing an image feature vector for each angle in the angle set to serve as one row of the feature map;
(4) For each angle in the angle set, performing the following operations on each point P[n] of the point cloud frame:
1) Denoting the current angle as th', rotating P[n] around the origin by th', and calculating the rotated abscissa: X = cos(th')*P[n].x - sin(th')*P[n].y
2) Performing a hash operation with coefficient w on X, obtaining a result h;
3) Drawing a rectangle at the (h, th'/180 × nth) position of the feature map; if the rectangle exceeds the range of the feature map, folding the overflowing content back by sz pixels and still drawing it on the feature map;
(5) Performing exponentiation on the pixel values of the feature map and outputting the result, wherein the exponent is the feature adjustment coefficient.
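A minimal Python/NumPy sketch of this transform, under the same assumed modulo-sz hash as in the sketch for claim 2; each row of the map corresponds to one angle bin, so rotating the input point cloud shifts the map vertically:

    import numpy as np

    def rotation_invariant_map(P, nth, sz, w, s):
        # P: (N, 2) points; nth: number of angle bins; sz: row length;
        # w: hash coefficient; s: feature adjustment coefficient.
        Im = np.zeros((nth, sz))                       # step (2)
        for k in range(nth):                           # step (3)
            th = k * np.pi / nth                       # nth bins over 180 deg
            # step 1): abscissa of each point after rotation by th
            X = np.cos(th) * P[:, 0] - np.sin(th) * P[:, 1]
            # steps 2)-3): hash with coefficient w, fold back modulo sz
            h = np.floor(X * w).astype(int) % sz
            np.add.at(Im[k], h, 1.0)                   # draw rectangles
        return Im ** s                                 # step (5)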
12. The positioning method according to claim 1, wherein the position information of the mobile device on the map is calculated as:
x=Rx+cos(Rth)*P.x-sin(Rth)*P.y
y=Ry+sin(Rth)*P.x+cos(Rth)*P.y
th=Rth+P.th
Wherein Rx, ry are coordinates of a current reference point cloud frame, rth is a rotation angle of the current reference point cloud frame, P is a second pose change of the current point cloud frame, x and y represent horizontal and vertical coordinates of the mobile device in a map coordinate system, and th is an angle.
CN202010918896.7A 2020-09-04 2020-09-04 Positioning method based on point cloud transformation matching Active CN112053390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010918896.7A CN112053390B (en) 2020-09-04 2020-09-04 Positioning method based on point cloud transformation matching

Publications (2)

Publication Number Publication Date
CN112053390A CN112053390A (en) 2020-12-08
CN112053390B (en) 2023-12-22

Family

ID=73607291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010918896.7A Active CN112053390B (en) 2020-09-04 2020-09-04 Positioning method based on point cloud transformation matching

Country Status (1)

Country Link
CN (1) CN112053390B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379841B (en) * 2021-06-21 2024-04-30 Shanghai Xiangong Intelligent Technology Co., Ltd. Laser SLAM method based on phase correlation method and factor graph and readable storage medium thereof
CN116359938B (en) * 2023-05-31 2023-08-25 Future Robotics (Shenzhen) Co., Ltd. Object detection method, device and carrying device
CN117031443B (en) * 2023-10-09 2024-01-19 Tianjin Yunsheng Intelligent Technology Co., Ltd. Point cloud data construction method and system and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107817503B * 2016-09-14 2018-12-21 Beijing Baidu Netcom Science and Technology Co., Ltd. Motion compensation process and device applied to laser point cloud data

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803267A (en) * 2017-01-10 2017-06-06 Xidian University Indoor scene three-dimensional reconstruction method based on Kinect
CN110211129A (en) * 2019-05-17 2019-09-06 Xi'an University of Finance and Economics Low-coverage point cloud registration algorithm based on region segmentation
CN111429344A (en) * 2020-02-19 2020-07-17 Shanghai Jiao Tong University Laser SLAM closed-loop detection method and system based on perceptual hashing
CN111415390A (en) * 2020-03-18 2020-07-14 Shanghai Lanshu Intelligent Technology Co., Ltd. Positioning navigation method and device based on ground texture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Huang Jin. Master's Electronic Journals. 2020, full text. *

Similar Documents

Publication Publication Date Title
CN112053390B (en) Positioning method based on point cloud transformation matching
Vizzo et al. KISS-ICP: In defense of point-to-point ICP - simple, accurate, and robust registration if done the right way
US9864927B2 (en) Method of detecting structural parts of a scene
Raposo et al. Fast and accurate calibration of a kinect sensor
CN110146099B (en) Synchronous positioning and map construction method based on deep learning
US9912764B2 (en) Method and apparatus for robust localization in outdoor environments
EP2144131A2 (en) Apparatus and method of building map for mobile robot
US20120306847A1 (en) Online environment mapping
JP2008250906A (en) Mobile robot, and self-location correction method and program
CN111522022B (en) Dynamic target detection method of robot based on laser radar
KR20200030738A (en) Moving robot and method for estimating location of moving robot
JP2008250905A (en) Mobile robot, self-location correction method and self-location correction program
US20180067199A1 (en) Method and System for Calibrating a Network of Multiple Horizontally Scanning Range Finders
CN117824667B (en) Fusion positioning method and medium based on two-dimensional code and laser
JP2010112836A (en) Self-position identification device and mobile robot provided with same
Sheng et al. Mobile robot localization and map building based on laser ranging and PTAM
Donoso-Aguirre et al. Mobile robot localization using the Hausdorff distance
KR101222009B1 (en) System and method for lens distortion compensation of camera based on projector and epipolar geometry
CN116907477A (en) High-precision laser SLAM method and device based on visual road sign assistance
EP1307705A1 (en) Height measurement apparatus
CN116577801A (en) Positioning and mapping method and system based on laser radar and IMU
KR101896183B1 (en) 3-d straight lines detection method for camera motion estimation
Christodoulou et al. Image-based method for the pairwise registration of Mobile Laser Scanning Point Clouds
Wang et al. Mobile robot pose estimation using laser scan matching based on fourier transform
Iida et al. High-accuracy Range Image Generation by Fusing Binocular and Motion Stereo Using Fisheye Stereo Camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant