CN114270406A - Parameter calibration method, device and equipment

Info

Publication number
CN114270406A
Authority
CN
China
Prior art keywords
point
feature points
resampled
points
point cloud
Prior art date
Legal status
Pending
Application number
CN201980033276.0A
Other languages
Chinese (zh)
Inventor
潘志琛
李延召
张富
Current Assignee
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/30: Polynomial surface description

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Algebra (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A parameter calibration method, apparatus, and device are provided. The method comprises the following steps: resampling the initial point clouds obtained by at least two distance measuring devices to obtain resampled point clouds (S102), and calculating calibration parameters between the at least two distance measuring devices based on the resampled point clouds (S104). The parameter calibration method is not limited to any specific form of point cloud and is highly general; it alleviates the difficulty of obtaining good point correspondences when matching the initial point clouds, and the calculated calibration parameters are more accurate.

Description

Parameter calibration method, device and equipment
Copyright declaration
The disclosure of this patent document contains material which is subject to copyright protection. The copyright is owned by the copyright owner. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the official records of the patent and trademark office.
Technical Field
The present application relates to the field of communications technologies, and in particular, to a parameter calibration method, apparatus, and device.
Background
Currently, many distance measuring devices use electromagnetic waves to measure the distance to a target object. Taking a laser radar as an example, the laser radar emits a laser beam; the beam is reflected when it encounters a target object, and the distance to the target can be calculated from the time between emitting the beam and receiving the reflected beam. Generally, the laser radar represents the objects in the scanned space in point cloud form. The point cloud data comprises information such as the coordinates, intensity, multiple echoes, and color of each point; three-dimensional reconstruction of the objects in the space can be achieved from the point cloud data, which has wide application in fields such as surveying and mapping, unmanned driving, and unmanned flight.
Generally, when point cloud data is used for three-dimensional reconstruction, a plurality of laser radars are used in combination, and the point cloud data obtained by scanning with the multiple laser radars is combined to reproduce the objects in space. Because the point cloud data obtained by different laser radars are based on their own coordinate systems, when combining point cloud data scanned by multiple laser radars to restore objects in three-dimensional space, the parameters between the laser radars need to be calibrated to obtain a transformation matrix of one laser radar's coordinate system relative to another's. The process of obtaining this transformation matrix is the parameter calibration process.
Since current parameter calibration methods are only suitable for laser radars with uniform scanning, not for laser radars with non-uniform scanning, it is necessary to design a parameter calibration method for laser radars with non-uniform scanning.
Disclosure of Invention
In view of this, the present application provides a parameter calibration method, apparatus and device, which can implement parameter calibration for a laser radar with non-uniform scanning.
According to a first aspect of an embodiment of the present invention, there is provided a parameter calibration method, including:
resampling the initial point cloud obtained by the at least two distance measuring devices to obtain a resampled point cloud;
and calculating to obtain calibration parameters between the at least two distance measuring devices based on the resampling point cloud.
According to a second aspect of the embodiments of the present invention, there is provided a parameter calibration apparatus that includes a processor and a memory, the memory being used to store a computer program, and the processor being used to read the computer program stored in the memory and execute the following steps:
resampling the initial point cloud obtained by the at least two distance measuring devices to obtain a resampled point cloud;
and calculating to obtain calibration parameters between the at least two distance measuring devices based on the resampling point cloud.
According to a third aspect of embodiments of the present invention, there is provided an apparatus comprising two or more ranging devices, a processor, a memory for storing a computer program, the ranging devices for detecting a target scene to generate an initial point cloud; the processor is used for reading a computer program stored by the memory to execute the method provided by the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium for storing program instructions which, when executed by a computer, cause the computer to perform the method provided by the first aspect.
According to a fifth aspect of embodiments of the present invention, there is provided a computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method provided by the first aspect.
By applying the scheme provided by the embodiments of the present application, when calculating the calibration parameters of at least two distance measuring devices whose scanned initial point clouds are non-uniform, the initial point clouds can first be resampled to obtain uniform resampled point clouds, and the calibration parameters between the distance measuring devices can then be calculated based on the resampled point clouds. The parameter calibration method provided by the embodiments of the present application is not limited to any specific form of point cloud and is highly general; it alleviates the difficulty of obtaining good point correspondences when matching non-uniform point clouds, and the calculated calibration parameters are more accurate, thus providing a good parameter calibration strategy for application fields that require laser radar, such as surveying and mapping and unmanned driving.
Drawings
The drawings that are required to be used in the description of the embodiments will be briefly described below.
Fig. 1 is a flowchart of a parameter calibration method according to an exemplary embodiment of the present application.
FIG. 2 is a schematic diagram of an initial point cloud according to an exemplary embodiment of the present application.
FIG. 3 is a schematic diagram of a resampled point cloud after resampling according to an exemplary embodiment of the present application.
FIG. 4 is a flowchart of a method for calculating calibration parameters based on a resampled point cloud according to an exemplary embodiment of the present application.
FIG. 5 is a schematic diagram of a two-dimensional lattice formed by projecting a resampled point cloud onto a two-dimensional plane according to an exemplary embodiment of the present application.
FIG. 6 is a schematic illustration of a reference plane of an exemplary embodiment of the present application.
Fig. 7 is a schematic diagram of feature point extraction from different directions according to an exemplary embodiment of the present application.
FIG. 8 is a schematic diagram of a method for calculating curvature of points in a point cloud according to an exemplary embodiment of the present application.
Fig. 9 is a schematic structural diagram of a parameter calibration apparatus according to an exemplary embodiment of the present application.
FIG. 10 is a schematic diagram of an apparatus according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In fields such as unmanned driving, surveying and mapping, and unmanned flight, a distance measuring device is required to measure the depth information of objects in three-dimensional space so as to restore the three-dimensional space. At present, many distance measuring devices detect a target object by using electromagnetic waves, such as laser radar and millimeter-wave radar. The data obtained by a distance measuring device scanning three-dimensional space can be point cloud data: a massive point set representing the spatial distribution of targets and their surface characteristics under the same spatial reference system, containing information such as the coordinates, intensity, multiple echoes, and color of each point. When reconstructing a three-dimensional space, a plurality of distance measuring devices are commonly used in combination to obtain point cloud data scanned from different angles. Because the point cloud data acquired by different distance measuring devices are based on different coordinate systems, parameter calibration needs to be performed between the distance measuring devices to obtain a transformation matrix between the coordinate systems of each pair of devices.
Calculating the calibration parameters between different distance measuring devices means calculating the following six variables: X, Y, Z, Roll, Pitch, Yaw. The first three values represent the distances translated along the x, y, and z directions, respectively; the last three values represent the angles rotated about the x, y, and z directions, respectively. That is, translating the coordinate system of one distance measuring device by certain distances along x, y, and z and then rotating it by certain angles about x, y, and z converts it into the coordinate system of the other distance measuring device.
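As an illustration (not part of the patent text), a minimal sketch of assembling these six variables into a homogeneous transformation matrix follows; the rotation order Rz·Ry·Rx (yaw, pitch, roll) is an assumption, since the patent does not fix a convention:

```python
import numpy as np

def calibration_to_matrix(x, y, z, roll, pitch, yaw):
    # Translations in the units of the point cloud, rotations in radians.
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx        # assumed rotation order
    T[:3, 3] = [x, y, z]
    return T

# A homogeneous point p_A = [x, y, z, 1] in one device's frame maps into the
# other device's frame as p_B = T @ p_A.
```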
Because most distance measuring devices currently in use have a uniform scanning density across the scanning field of view, the point cloud data obtained by scanning is relatively uniform, and calibration parameters can be calculated directly from the point cloud data using point cloud matching algorithms such as the Iterative Closest Point (ICP) algorithm and the Normal Distributions Transform (NDT) algorithm. However, some distance measuring devices scan non-uniformly, so the point cloud data obtained is non-uniform; if calibration parameters are calculated from such initial point cloud data using current point cloud matching algorithms, the error of the calculated parameters is very large.
In order to solve the above problem, an embodiment of the present application provides a parameter calibration method that can accurately calculate calibration parameters for distance measuring devices whose scanned point clouds are non-uniform. As shown in fig. 1, the parameter calibration method includes the following steps:
s102, resampling initial point clouds obtained by at least two distance measuring devices to obtain resampled point clouds, wherein the distance measuring devices have non-uniform scanning density in a scanning field, and the distribution uniformity of the resampled point clouds is higher than that of the initial point clouds;
and S104, determining calibration parameters between the at least two distance measuring devices based on the resampled point cloud.
The distance measuring device of the embodiments of the present application may be a laser radar, a millimeter-wave radar, or the like, and can scan external environment information such as the distance, azimuth, reflection intensity, and speed of target objects in the environment. In one implementation, the distance measuring device may detect the distance from a probe point to the device by measuring the Time of Flight (TOF), i.e., the time light takes to travel between the distance measuring device and the probe point. Alternatively, the distance may be detected by other techniques, such as ranging based on phase shift measurement or ranging based on frequency shift measurement, which is not limited herein.
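For reference, the standard TOF relation (a well-known formula, not specific to this patent) links the measured round-trip time Δt and the speed of light c to the distance d:

d = c · Δt / 2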
After scanning the external spatial environment, the distance measuring device of the embodiments of the present application can obtain point cloud data: a massive point set describing the spatial distribution and surface characteristics of targets, which may include the spatial three-dimensional coordinates of each point as well as other information such as intensity, multiple echoes, and color. The point cloud obtained by the distance measuring device is non-uniform in spatial distribution: the point cloud density is high in some spatial areas and low in others. Fig. 2 shows a non-uniformly distributed initial point cloud collected by the distance measuring device; it can be seen that the density of the point cloud differs between areas, some being dense and some sparse.
In some embodiments, the distance measuring device may scan non-uniformly, i.e., have a non-uniform scanning density over the scanning field of view. The point cloud obtained by such a device is non-uniformly distributed in space. For convenience of description, in the embodiments of the present application, the point cloud obtained by the distance measuring device in the non-uniform scanning manner is referred to as the initial point cloud. Taking a laser radar as an example: during scanning, the laser emission angles constantly change, but they are not necessarily uniformly distributed across the scanning field of view; the density can differ by several times to several hundred times or even more. The uneven scanning angles cause the point cloud density to differ between areas, so the scanning field has a non-uniform scanning density, i.e., an irregular sampling mode, and the uniformity of the acquired point cloud is poor. Generally, the scanning density of the central area is higher than that of other areas, and the point cloud density of the central area is correspondingly higher.
At present, point cloud matching algorithms such as the ICP algorithm and the NDT algorithm are generally used to calibrate parameters between distance measuring devices. However, these algorithms have high requirements on point cloud uniformity, while the initial point cloud obtained by a non-uniformly scanning distance measuring device is a non-uniform point cloud with poor uniformity. If the uniformity is too poor, the correspondence between points in the point clouds of different distance measuring devices cannot be accurately found during matching, and the calibration parameters cannot be accurately calculated. Therefore, in the embodiments of the present application, after the initial point clouds obtained by scanning with the at least two distance measuring devices are acquired, the initial point clouds may be resampled to obtain resampled point clouds, and the calibration parameters between the distance measuring devices are then calculated based on the resampled point clouds. The distribution uniformity of the resampled point clouds is higher than that of the initial point clouds.
The parameter calibration method of the embodiments of the present application can be used for parameter calibration between two or more distance measuring devices, where "more" may be 3, 4, 5, or any larger number, which is not limited in the embodiments of the present application. For multiple distance measuring devices, the parameter calibration method provided herein can be used to obtain the calibration parameters between any two of them, or between any several of them. Alternatively, after the calibration parameters between one device and each of two others are obtained, the calibration parameters between those two others follow directly. For example, for three distance measuring devices A, B, and C, once the calibration parameters between A and B and between A and C are obtained, the calibration parameters between B and C can be derived naturally. In this specification, each distance measuring device provides at least one group of initial point clouds, and may provide multiple groups, which is not limited herein. One or more groups of resampled point clouds can be obtained by resampling each group of initial point clouds.
The resampled point cloud obtained by resampling has better uniformity than the initial point cloud; fig. 3 shows the resampled point cloud obtained by resampling the initial point cloud of fig. 2. Compared with the initial point cloud, the resampled point cloud is more suitable for subsequent processing such as object identification and image fusion, has a better display effect, hides the specific hardware sampling mode, and is more suitable for calculating calibration parameters. Besides distribution uniformity, the parameters characterizing point cloud quality include density and noise. A point cloud with better distribution uniformity, higher density, and lower noise makes it easier to match points between point clouds when calculating calibration parameters, so more accurate calibration parameters can be obtained. Therefore, in some embodiments, in addition to better uniformity, the resampled point cloud may have a higher point cloud density, lower noise, or both, compared with the initial point cloud; specifically, the resampling mode may be selected according to actual requirements to obtain a resampled point cloud with the target characteristics.
In some embodiments, resampling the initial point cloud obtained by the ranging device may be performed according to the following steps:
1. Perspective-project the acquired non-uniform initial point cloud onto a plane perpendicular to the axis of the distance measuring device;
2. Mesh the two-dimensional plane obtained by projection to form an image, where each pixel value is characteristic information, such as depth or reflectivity, of the point cloud points falling into that pixel;
3. For each empty pixel (a pixel into which no point falls), determine whether it corresponds to sky. If it does, set the pixel value to 0; if it does not, fill it using a common interpolation method such as nearest-neighbor or linear interpolation. This yields a uniformly sampled image, and based on this image, point cloud resampling can be performed in any manner according to actual requirements to obtain resampled point clouds with various characteristics, such as better uniformity, higher or lower density, or lower noise. Several different resampling modes are listed below:
(1) Angle-uniform resampling: uniformly sample the scanning directions of the distance measuring device; for each sample, compute the intersection of the sampling ray with the image plane and obtain the value at the intersection by interpolation. This gives the depth value for that sampling direction and thus determines a resampled point. In particular, nearest-neighbor interpolation may be adopted, taking the value of the pixel containing the intersection as the depth value for the sampling direction.
(2) Planar uniform resampling: traverse each pixel in the image; if its depth value is not zero, generate a resampled point on the line connecting the origin and the center of that pixel, at the depth represented by the pixel value. This yields a planar-uniform resampled point cloud. More generally, planar-uniform points to be sampled can be generated in the manner of (1), and resampled points then generated by computing intersections and interpolating.
(3) Specific-pattern resampling: similar to (1); first generate a set of sampling directions according to the requirements of the target pattern, compute the intersection of each sampling direction with the image plane, and then compute the depth value at the intersection to obtain the point cloud point. Such patterns include, but are not limited to, uniform circular sampling, spiral sampling, and the like.
(4) Density-increasing resampling: add new samples to the original set of point cloud sampling directions to increase the sampling density. For each new sampling direction, the depth value can be computed in the manner described in (1). Typically, the sampling density of the preceding modes may be increased and the new points added to the original point cloud to form a new high-density point cloud.
(5) Denoising and resampling, comprising the following two key steps:
First, perform noise filtering on the obtained planar image; the filtering should be edge-preserving, for example using a bilateral filter.
Second, traverse each point cloud point, find its corresponding image pixel, and adjust the point's depth to the depth value of that pixel; or adjust it to an interpolation of the neighboring pixels at its projected position on the image plane; or use these new values to threshold the original value, e.g., if the original depth value is not within a neighborhood of the new value, discard the point.
These two steps filter out two types of noise caused by poor detection precision: multiple point cloud points that may exist in the same sampling direction in the original point cloud (of which the ones at larger depth can basically be considered noise), and points that roughen planar surfaces.
(6) Equalizing downsampling: to keep the sampling density of non-uniform sampling from being too high in a local area (such as the central area), some points in that area can be removed so that the distribution of point cloud sampling density becomes relatively balanced. To this end, the point cloud points corresponding to some pixels in an over-dense image area can be discarded randomly or uniformly, or the number of points falling into each pixel in the area can be limited, discarding points randomly or uniformly until the count per pixel is within a threshold.
The above lists only some of the possible point cloud resampling modes; in practice, the resampling mode can be selected according to the required characteristics of the resampled point cloud in the application scenario. A minimal sketch of the projection-and-resampling pipeline is given below.
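The following sketch (an illustration under stated assumptions, not the patent's implementation) combines steps 1 and 2 above with planar-uniform resampling (mode (2)): it assumes the sensor axis is +x, projects perspectively onto the plane x = 1, rasterizes depth into a grid, and emits one point per non-empty pixel; sky detection and interpolation of empty pixels are omitted:

```python
import numpy as np

def resample_planar_uniform(points, n_rows, n_cols):
    # points: (K, 3) array in the sensor frame; sensor axis assumed to be +x.
    pts = points[points[:, 0] > 0]                 # keep points in front of the sensor
    u = pts[:, 1] / pts[:, 0]                      # perspective projection onto plane x = 1
    v = pts[:, 2] / pts[:, 0]
    u_min, u_max = u.min(), u.max()
    v_min, v_max = v.min(), v.max()
    col = np.clip(((u - u_min) / (u_max - u_min) * (n_cols - 1)).astype(int), 0, n_cols - 1)
    row = np.clip(((v - v_min) / (v_max - v_min) * (n_rows - 1)).astype(int), 0, n_rows - 1)
    depth = np.full((n_rows, n_cols), np.inf)
    np.minimum.at(depth, (row, col), pts[:, 0])    # keep the nearest return per pixel
    out = []
    for r in range(n_rows):                        # one resampled point per non-empty pixel,
        for c in range(n_cols):                    # on the ray through the pixel centre
            d = depth[r, c]
            if np.isinf(d):
                continue                           # empty pixel: sky check / interpolation omitted
            uc = u_min + c / (n_cols - 1) * (u_max - u_min)
            vc = v_min + r / (n_rows - 1) * (v_max - v_min)
            out.append([d, d * uc, d * vc])
    return np.asarray(out)
```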
After the resampled point clouds are obtained, they can be used to calculate the calibration parameters between the distance measuring devices. In some implementations, the calibration parameters can be calculated directly from the resampled point clouds with an existing point cloud matching algorithm such as ICP or NDT. To further improve the accuracy of the calculated calibration parameters, in some embodiments the resampled point clouds include at least two groups, corresponding to the two distance measuring devices respectively, and the calibration parameters may be calculated using the steps shown in fig. 4:
s402, calculating to obtain initial calibration parameters between the at least two distance measuring devices based on the at least two groups of resampled point clouds and a preset point cloud matching algorithm;
s404, extracting feature points from the at least two groups of resampled point clouds respectively;
s406, calculating calibration parameters between the at least two distance measuring devices based on the characteristic points and the initial calibration parameters.
Based on at least two groups of resampled point clouds and a preset point cloud matching algorithm, the resampled point clouds can be coarsely matched to calculate initial calibration parameters between the two distance measuring devices. Since the accuracy of these initial calibration parameters is low, they need to be further optimized. In some embodiments, the point cloud matching algorithm includes the ICP algorithm, the NDT algorithm, and the like, but the present application is not limited to these; any algorithm that can obtain calibration parameters between devices from point clouds is applicable.
After the two groups of resampled point clouds are coarsely matched to obtain the initial calibration parameters, the resampled point clouds can be matched more precisely. Usually, point cloud matching can be performed with feature points, so feature points can be extracted from each of the at least two groups of resampled point clouds, and the resampled point clouds can then be precisely matched based on the extracted feature points and the calculated initial calibration parameters to obtain more accurate calibration parameters between the two distance measuring devices. Note that the order of steps S402 and S404 is not limited: either may be performed first, or both simultaneously. A sketch of the coarse-matching step follows.
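As a sketch of the coarse-matching step S402 (assuming the Open3D library; any ICP implementation would do, and the parameter values here are illustrative):

```python
import numpy as np
import open3d as o3d

def coarse_calibration(points_a, points_b, max_corr_dist=1.0):
    # Coarse-match two resampled point clouds with point-to-point ICP to
    # obtain the initial calibration parameters Ticp as a 4x4 matrix.
    pc_a = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_a))
    pc_b = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_b))
    result = o3d.pipelines.registration.registration_icp(
        pc_a, pc_b, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```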
In some embodiments, whether each point in the point cloud is a feature point may be determined from its curvature; for example, points whose curvature satisfies a specific requirement may be taken as feature points. Of course, curvature is only one criterion, and the present application does not exclude others. In some embodiments, the feature points may be divided into plane feature points and edge feature points. Plane feature points are points on a scanned object's surface and usually have small curvature values, approximately 0, so a first preset threshold may be set and points with curvature smaller than it taken as plane feature points. Edge feature points lie on the edges of scanned objects and usually have large curvature values, so a second preset threshold may be set and points with curvature larger than it taken as edge feature points.
Because a point cloud contains many points, feature point extraction from the resampled point cloud is preferably performed in an orderly manner, so that feature points are not missed and can be extracted as completely as possible. Therefore, in some embodiments, the points in the point cloud may be divided, for example into different lines according to their coordinate characteristics in three-dimensional space; the points on a given line may share a certain coordinate characteristic, e.g., points with the same X coordinate form one line, points with the same Y coordinate form one line, or points with the same Z coordinate form one line. The point cloud may also be projected onto a two-dimensional plane to form a two-dimensional lattice, and the points divided according to that lattice. After the points are divided into lines, feature points can be extracted from the different lines in turn. For example, in some embodiments, the at least two groups of resampled point clouds may be divided into N lines and M lines according to the coordinates of each point, where N and M are arbitrary integers; any one of the N lines is called an "nth line" and any one of the M lines an "mth line", and the basis for dividing the resampled point cloud into N lines and M lines according to the coordinates of each point can be determined according to actual requirements. In some embodiments, the points on the nth line and the mth line may be divided as follows. Suppose the resampled point cloud is projected onto a two-dimensional reference surface to form an N x M two-dimensional lattice, where N is the number of rows and M the number of columns; as shown in fig. 5, the left image is the two-dimensional lattice formed after the point cloud in the right image is projected onto a plane. When projecting onto the reference surface, a point projected into the nth row of the N x M lattice is assigned to the nth line, and a point projected into the mth column is assigned to the mth line. In some embodiments, taking the central axis of the light pulse sequence emitted by the distance measuring device as the X axis and the two mutually perpendicular directions perpendicular to it as the Y and Z axes, the point cloud can be projected onto the two-dimensional plane formed by the Y and Z axes to form the N x M lattice, with points projected into the nth row assigned to the nth line and points projected into the mth column assigned to the mth line. As shown in fig. 6, the reference surface is a plane perpendicular to the axis of the distance measuring device. A sketch of this line division is given below.
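A minimal sketch of this line division (an illustration; the grid bounds derived from the data and the axis convention are assumptions):

```python
import numpy as np

def divide_into_lines(points, n_rows, n_cols):
    # Project the resampled point cloud onto the Y-Z plane (X being the
    # sensor axis), rasterise into an N x M lattice, and group points by
    # row ("nth lines") and by column ("mth lines").
    y, z = points[:, 1], points[:, 2]
    row = np.clip(((z - z.min()) / (np.ptp(z) or 1.0) * (n_rows - 1)).astype(int),
                  0, n_rows - 1)
    col = np.clip(((y - y.min()) / (np.ptp(y) or 1.0) * (n_cols - 1)).astype(int),
                  0, n_cols - 1)
    n_lines = [points[row == r] for r in range(n_rows)]   # points on each nth line
    m_lines = [points[col == c] for c in range(n_cols)]   # points on each mth line
    return n_lines, m_lines
```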
Of course, in some embodiments the reference surface may also be a curved surface, such as a sphere or a cylinder. After the resampled point cloud is projected onto such a reference surface, a plurality of curves, concentric rings (such as circular, square, triangular, or polygonal rings), or spirals may be obtained, and feature points can then be extracted along the curves, concentric rings, or spirals obtained after projection.
During resampling, sampling may follow a specific sampling pattern, and the resulting point cloud may have certain characteristics, for example uniform distribution along a specific dimension. For instance, after the points are divided into nth lines and mth lines according to the N x M two-dimensional lattice, the points on each line may be uniformly distributed, so feature points can be extracted from the nth lines and the mth lines. In some embodiments, feature points may be extracted from the nth lines, from the mth lines, or from both. Which kind of line to extract from may be set according to the actual scene: for a scene containing longitudinal targets such as multiple poles, the feature points of the scanned point cloud lie mainly on the nth lines, so feature points may be extracted from the nth lines; for a scene containing transverse targets such as multiple steps, the feature points lie mainly on the mth lines, so feature points may be extracted from the mth lines. In this way, feature points can be extracted from one or several directions, as completely as possible, to obtain more feature points for precise matching.
In some embodiments, edge feature points may first be extracted from the nth lines; the mth line on which an edge feature lies is then determined, and further edge feature points are extracted from that mth line. For example, after the resampled point cloud is projected onto a two-dimensional plane, an N x M two-dimensional lattice is obtained, points projected into the nth row are assigned to the nth line, and points projected into the mth column are assigned to the mth line. Edge feature points on the N lines can then be extracted one by one along rows 1 to N of the lattice; the columns in which those feature points lie are determined, the corresponding M lines identified, and further edge feature points extracted along those M lines. For example, suppose edge feature point A projects onto row 3, column 4 of the lattice; after A is extracted as an edge feature point along the 3rd line (corresponding to row 3), it can be determined that A lies in column 4, so further edge feature points can be extracted from the points on the 4th line corresponding to column 4. As shown in fig. 7, after edge feature points are extracted from the N lines (left image), extraction can continue along the M lines on which those edge feature points lie (right image). In this way, feature points can be extracted from one or several directions, as completely as possible, to obtain more feature points for precise matching.
In some embodiments, when extracting the feature point from the nth line of the at least two groups of resampled point clouds, for any one of the at least two groups of resampled point clouds, the feature point may be sequentially extracted from each point on the nth line in the order from left to right or from right to left. In some embodiments, when feature points are extracted from the mth line of the at least two sets of resampled point clouds, the feature points may be sequentially extracted from each point on the mth line in an order from top to bottom or from bottom to top.
In some embodiments, the curvature of each point in the point cloud may be determined from the sum vector of neighboring points on the same line as that point. In some embodiments, the neighboring points may be one or more points on the same nth line to the left or right of the point, or one or more points on the same mth line above or below it. For example, to calculate the curvature of a point on an nth line, first compute the sum vector Vl over the i neighboring points to its left, then the sum vector Vr over the i neighboring points to its right, and take the modulus of the sum of Vl and Vr as the curvature. Fig. 8 is a schematic diagram of this curvature calculation: for the nth line of the point cloud, the curvature value ρ of each point is computed in order from left to right. As shown in fig. 8, for a point X, let the sum vector over the i points to its left be Vl and the sum vector over the i points to its right be Vr; the curvature value of X is defined as the modulus of the sum of Vl and Vr:
ρ = |Vc| = |Vl + Vr|
Set a lower curvature threshold Tflat and an upper curvature threshold Tsharp. If the curvature of point X satisfies ρ < Tflat, X is considered a plane feature point; at such a point the modulus of the sum of Vl and Vr is approximately 0, as shown by point X1 in fig. 8.
If the curvature of point X satisfies ρ > Tsharp, X is considered an edge feature point; at such a point the modulus of the sum of Vl and Vr is large, as shown by point X2 in fig. 8. A sketch of this classification follows.
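A minimal sketch of this curvature computation and classification on one line (the threshold values and neighborhood size i are illustrative; in practice they depend on point spacing):

```python
import numpy as np

def classify_line_points(line, i=5, t_flat=0.05, t_sharp=0.5):
    # line: (K, 3) array of points, ordered along the line.
    plane_pts, edge_pts = [], []
    for k in range(i, len(line) - i):
        x = line[k]
        vl = (line[k - i:k] - x).sum(axis=0)           # sum vector over i left neighbours
        vr = (line[k + 1:k + 1 + i] - x).sum(axis=0)   # sum vector over i right neighbours
        rho = np.linalg.norm(vl + vr)                  # curvature: rho = |Vl + Vr|
        if rho < t_flat:
            plane_pts.append(x)                        # plane feature point
        elif rho > t_sharp:
            edge_pts.append(x)                         # edge feature point
    return np.asarray(plane_pts), np.asarray(edge_pts)
```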
After feature points are extracted from the at least two groups of resampled point clouds and the initial calibration parameters between the two distance measuring devices are obtained by the preset point cloud matching algorithm, a residual can be determined from the initial calibration parameters and the feature points, where the residual represents the degree of matching between the feature points of the two resampled point clouds. The residual is then optimized by a preset optimization algorithm to calculate the calibration parameters: when the residual reaches its minimum, the corresponding parameter value is the calibration parameter sought.
The residual can be defined in various ways, as long as it represents the degree of matching between the feature points of the two groups of point clouds; this is not limited in the present application. For example, in some implementations, the two groups of resampled point clouds may be called the first resampled point cloud and the second resampled point cloud. After the feature points of the first resampled point cloud are determined, their matching feature points in the second resampled point cloud can be determined according to the initial calibration parameters, and the residual can then be defined by how well each matching feature point agrees with the corresponding feature points of the second resampled point cloud. For example, one or more target feature points closest to each matching feature point can be found in the second resampled point cloud, and the residual constructed from the matching feature points and the target feature points.
In some embodiments, the residual may be determined from the distances from the matching feature points to the straight lines formed by target feature points and the distances from the matching feature points to the planes formed by target feature points.
In some embodiments, if a feature point of the first resampled point cloud is a plane feature point, its matching feature point in the second resampled point cloud may be calculated from the initial calibration parameters, the three target plane feature points closest to the matching feature point found in the second resampled point cloud, and the distance from the matching feature point to the plane formed by the three target plane feature points calculated as one part of the residual. If a feature point of the first resampled point cloud is an edge feature point, its matching feature point may likewise be calculated from the initial calibration parameters, the two target edge feature points closest to it found in the second resampled point cloud, and the distance from the matching feature point to the straight line formed by the two target edge feature points calculated as the other part of the residual.
For example, denote a plane feature point in the first resampled point cloud as X1′. Denote the initial calibration parameters obtained by the ICP algorithm as T0; the matching feature point in the second resampled point cloud is then X1 = T0 · X1′. From all plane feature points of the second resampled point cloud, find the two plane feature points closest to X1 that lie on the same line, denoted Xj and Xk, and one plane feature point Xm closest to X1 that lies on a different line from Xj and Xk. The distance from X1 to the plane formed by Xj, Xk, and Xm can be defined as:
d1 = |(X1 − Xj) · ((Xj − Xk) × (Xj − Xm))| / |(Xj − Xk) × (Xj − Xm)|
For an edge feature point in the first resampled point cloud, denote it X2′. With the initial calibration parameters T0 obtained by the ICP algorithm, the matching feature point in the second resampled point cloud is X2 = T0 · X2′. From all edge feature points of the second resampled point cloud, find the two edge feature points closest to X2 that lie on different lines, denoted Xj and Xk. The distance from X2 to the straight line formed by Xj and Xk can be defined as:
d2 = |(X2 − Xj) × (X2 − Xk)| / |Xj − Xk|
Let f and s be the numbers of plane feature points and edge feature points in the first resampled point cloud, respectively, and define the residual r:
r = Σ(i = 1..f) d1,i + Σ(i = 1..s) d2,i

that is, the sum of the point-to-plane distances over the f plane feature points and the point-to-line distances over the s edge feature points. A sketch of these distance terms follows.
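A sketch of the residual terms (a direct transcription of the distance formulas above; the data layout of the match lists is an assumption):

```python
import numpy as np

def point_to_plane_dist(x, xj, xk, xm):
    # Distance from matched point x to the plane through xj, xk, xm.
    n = np.cross(xj - xk, xj - xm)
    return abs(np.dot(x - xj, n)) / np.linalg.norm(n)

def point_to_line_dist(x, xj, xk):
    # Distance from matched point x to the line through xj and xk.
    return np.linalg.norm(np.cross(x - xj, x - xk)) / np.linalg.norm(xj - xk)

def residual_terms(T, plane_matches, edge_matches):
    # plane_matches: list of (x1_prime, xj, xk, xm) tuples;
    # edge_matches:  list of (x2_prime, xj, xk) tuples.
    # T: candidate 4x4 calibration matrix mapping cloud 1 into cloud 2's frame.
    def apply(p):
        return (T @ np.append(p, 1.0))[:3]
    terms = [point_to_plane_dist(apply(x), xj, xk, xm)
             for x, xj, xk, xm in plane_matches]
    terms += [point_to_line_dist(apply(x), xj, xk)
              for x, xj, xk in edge_matches]
    return np.asarray(terms)          # the residual r is the sum of these terms
```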
Solving with a target optimization algorithm, the calibration parameters at which the residual r attains its minimum value are the accurate extrinsic parameters Tacc between the two distance measuring devices, i.e.:
Tacc = argmin(r)
In certain embodiments, the target optimization algorithm may be one or more of the Newton algorithm, the Gauss-Newton algorithm, and the Levenberg-Marquardt algorithm.
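As a sketch of this optimization step (assuming SciPy; the six-parameter encoding and the use of least_squares with the Levenberg-Marquardt method are illustrative choices, with residual_terms as sketched above):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def params_to_matrix(p):
    # p = [x, y, z, roll, pitch, yaw] -> 4x4 calibration matrix.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler('xyz', p[3:]).as_matrix()
    T[:3, 3] = p[:3]
    return T

def refine_calibration(p0, plane_matches, edge_matches):
    # p0: six parameters corresponding to the initial ICP estimate Ticp.
    def fun(p):
        return residual_terms(params_to_matrix(p), plane_matches, edge_matches)
    sol = least_squares(fun, p0, method='lm')   # Levenberg-Marquardt
    return params_to_matrix(sol.x)              # Tacc = argmin(r)
```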
To further explain the parameter calibration method of the present application, a specific embodiment is described below.
Two non-uniformly scanning laser radars have non-uniform scanning density in their scanning fields of view, so the initial point clouds they obtain are non-uniform point clouds. To accurately calculate the calibration parameters of the two laser radars, the following method can be adopted:
(1) Perform planar uniform resampling on the initial point cloud obtained by each laser radar to obtain two groups of resampled point clouds. The density of the resampled point clouds is higher than that of the initial point clouds, and their noise is lower.
(2) To divide the points of each resampled point cloud, project the resampled point cloud onto a plane perpendicular to the axis of the laser radar to obtain an N x M lattice; a point projected into the nth row is defined as a point on the nth line, and a point projected into the mth column as a point on the mth line.
(3) Directly perform preliminary matching on the resampled point clouds of the two laser radars with the ICP algorithm to obtain initial values of the calibration parameters between the two laser radars, denoted Ticp.
(4) Extract point cloud feature points:
(a) For the nth line of the point cloud, calculate the curvature value ρ of each point in order from left to right. As shown in fig. 8, for a point X, let the sum vector over the i points to its left be Vl and the sum vector over the i points to its right be Vr; the curvature value of X is defined as the modulus of the sum of Vl and Vr:
ρ = |Vc| = |Vl + Vr|
(b) Set a lower curvature threshold Tflat and an upper curvature threshold Tsharp. If the curvature of point X satisfies ρ < Tflat, X is considered a plane feature point; at such a point the modulus of the sum of Vl and Vr is approximately 0, as shown by point X1 in fig. 8.
(c) If the curvature of point X satisfies ρ > Tsharp, X is considered an edge feature point; at such a point the modulus of the sum of Vl and Vr is large, as shown by point X2 in fig. 8.
(d) For the edge feature points, continue extracting edge feature points along the corresponding mth lines in order from top to bottom. This makes the extraction of edge feature points more thorough, greatly improves the success rate of feature point matching, and adds more constraint terms to the point cloud parameter calibration, making the calibration result more accurate.
(5) Match the feature points:
(a) Matching plane feature points: denote a plane feature point in the first laser radar's point cloud as X1′. Transforming it into the second radar's coordinate system with the initial extrinsic parameters obtained by ICP gives X1, i.e., X1 = Ticp · X1′. From all plane feature points of the second laser radar's point cloud, find the two plane feature points closest to X1 that lie on the same line, denoted Xj and Xk, and one plane feature point Xm closest to X1 that lies on a different line from Xj and Xk. The distance from X1 to the plane formed by Xj, Xk, and Xm can be defined as:
d1 = |(X1 − Xj) · ((Xj − Xk) × (Xj − Xm))| / |(Xj − Xk) × (Xj − Xm)|
(b) Matching edge feature points: denote an edge feature point in the first laser radar's point cloud as X2′. Transforming it into the second radar's coordinate system with the initial extrinsic parameters obtained by ICP gives X2, i.e., X2 = Ticp · X2′. From all edge feature points of the second laser radar's point cloud, find the two edge feature points closest to X2 that lie on different lines, denoted Xj and Xk. The distance from X2 to the straight line formed by Xj and Xk can then be defined as:
d2 = |(X2 − Xj) × (X2 − Xk)| / |Xj − Xk|
(6) Accurate extrinsic calibration: let f and s be the numbers of plane feature points and edge feature points in the first laser radar's point cloud, respectively, and define the residual r:
r = Σ(i = 1..f) d1,i + Σ(i = 1..s) d2,i
Then solve with the L-M algorithm, initialized at Ticp; the extrinsic parameter matrix at which the residual r attains its minimum value is the accurate extrinsic parameter Tacc between the two laser radars, i.e.:
Tacc = argmin(r)
The parameter calibration method provided by the embodiments of the present application is not limited to any specific form of point cloud, is highly general, and alleviates the difficulty of obtaining good point correspondences when matching non-uniform initial point clouds. The approach of ICP coarse matching followed by precise feature point matching avoids the high demands that traditional point cloud matching algorithms (ICP, NDT, and the like) place on point cloud density and uniformity; the extrinsic calibration accuracy is high, providing a good extrinsic calibration strategy for application fields that require laser radar, such as surveying and mapping and unmanned driving.
The embodiment of the present application further provides a parameter calibration apparatus, as shown in fig. 9, where the parameter calibration apparatus includes a processor 902 and a memory 904, the memory 904 is used to store a computer program, and the processor 902 is used to read the computer program stored in the memory 904 and implement the following steps:
resampling initial point clouds obtained by at least two distance measuring devices to obtain resampled point clouds, wherein the distance measuring devices have non-uniform scanning density in a scanning field of view, and the distribution uniformity of the resampled point clouds is higher than that of the initial point clouds;
and calculating to obtain calibration parameters between the at least two distance measuring devices based on the resampling point cloud.
In one embodiment, the resampled point clouds include at least two groups corresponding to the at least two ranging devices, respectively; the processor is configured to calculate calibration parameters between the at least two distance measuring devices based on the resampled point cloud, and specifically includes:
calculating initial calibration parameters between the at least two distance measuring devices based on the at least two groups of resampled point clouds and a preset point cloud matching algorithm;
extracting feature points from the at least two groups of resampled point clouds respectively;
and calculating calibration parameters between the at least two distance measuring devices based on the characteristic points and the initial calibration parameters.
In an embodiment, when the processor is configured to extract feature points from the at least two groups of resampled point clouds, the method specifically includes:
extracting the feature points from the Nth lines in the at least two groups of the resampled point clouds respectively; and/or extracting the characteristic points from the Mth lines in the at least two groups of the resampled point clouds respectively.
In one embodiment, when the processor is configured to extract the feature points from the mth line in the at least two sets of resampled point clouds, the method specifically includes:
extracting edge feature points from the Nth line in the at least two groups of resampled point clouds;
determining an Mth line based on the edge feature point, wherein the Mth line is a line where the edge feature point is located;
and extracting other edge characteristic points from the Mth line.
In one embodiment, when the resampled point cloud is projected onto a reference surface to obtain an N × M two-dimensional lattice, each point on the nth line is projected onto an nth row in the N × M two-dimensional lattice, and each point on the mth line is projected onto an mth column in the N × M two-dimensional lattice.
In one embodiment, the reference plane is a plane perpendicular to the axis of the distance measuring device.
In an embodiment, when the processor is configured to extract the feature points from the nth lines in the at least two sets of resampled point clouds, the method specifically includes:
for any one of the at least two sets of resampled point clouds: and sequentially extracting the feature points from each point on the Nth line according to the sequence from left to right or from right to left.
In one embodiment, when the processor is configured to extract the feature points from the mth line in the at least two sets of resampled point clouds, the method specifically includes:
for any one of the at least two sets of resampled point clouds: and sequentially extracting the feature points from the points on the Mth line from top to bottom or from bottom to top.
In one embodiment, the feature points are determined based on the curvature of each point in the resampled point cloud.
In one embodiment, the feature points include a plane feature point and an edge feature point, the plane feature point is a point having a curvature smaller than a first preset threshold, and the edge feature point is a point having a curvature larger than a second preset threshold.
In one embodiment, the curvature of each point is determined based on a resultant vector of neighboring points that are collinear with the point.
In one embodiment, the neighboring points include:
one or more points on the same nth line as the points and located to the left or right of the points; or
One or more points on the same Mth line as the point and located above or below the point.
In one embodiment, the preset point cloud matching algorithm comprises one or more of an ICP algorithm or an NDT algorithm.
In an embodiment, when the processor is configured to calculate the calibration parameter based on the feature point and the initial calibration parameter, the method specifically includes:
determining a residual error based on the feature points and the initial calibration parameters, wherein the residual error is used for representing the matching degree of the feature points in the two re-sampling point clouds;
and calculating the calibration parameters based on a preset optimization algorithm and the residual error.
In an embodiment, when the processor is configured to determine a residual based on the feature point and the initial calibration parameter, the method specifically includes:
determining, based on the initial calibration parameters, matching feature points in a second resampled point cloud for feature points in a first resampled point cloud, wherein the first resampled point cloud and the second resampled point cloud are any two of the at least two groups of resampled point clouds;
determining one or more target feature points of the second resampled point cloud that are closest to the matching feature point;
determining the residual based on the matching feature points and the target feature points.
In one embodiment, the residual is determined based on a distance between the matching feature point and a plane formed by the target feature points and a distance between the matching feature point and a straight line formed by the target feature points.
In one embodiment, if the feature point is a plane feature point, the target feature points are the three plane feature points closest to the matching feature point among the feature points of the second resampled point cloud, and the plane is the plane formed by the three plane feature points.
In one embodiment, if the feature point is an edge feature point, the target feature points are the two edge feature points closest to the matching feature point among the feature points of the second resampled point cloud, and the straight line is the straight line formed by the two edge feature points.
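Concretely, these two residual terms are the standard point-to-plane and point-to-line distances; a minimal sketch (function names are illustrative):

```python
import numpy as np

def point_to_plane(p, q1, q2, q3):
    """Distance from matching feature point p to the plane through the
    three nearest plane feature points q1, q2, q3."""
    n = np.cross(q2 - q1, q3 - q1)               # plane normal
    return abs(np.dot(p - q1, n)) / (np.linalg.norm(n) + 1e-12)

def point_to_line(p, q1, q2):
    """Distance from matching feature point p to the line through the
    two nearest edge feature points q1, q2."""
    return (np.linalg.norm(np.cross(p - q1, p - q2))
            / (np.linalg.norm(q2 - q1) + 1e-12))
```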
In one embodiment, the optimization algorithm includes one or more of a Newton algorithm, a Gauss-Newton algorithm, or a Levenberg-Marquardt algorithm.
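A minimal sketch of this refinement step, assuming SciPy's Levenberg-Marquardt solver, a rotation-vector parameterization of the calibration, and the `point_to_plane` / `point_to_line` helpers from the sketch above; these are illustrative choices, not the patented implementation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_calibration(x0, plane_terms, edge_terms):
    """x = [rx, ry, rz, tx, ty, tz]: rotation vector plus translation.
    plane_terms: list of (p, q1, q2, q3); edge_terms: list of (p, q1, q2)."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        res = [point_to_plane(R @ p + t, q1, q2, q3)
               for p, q1, q2, q3 in plane_terms]
        res += [point_to_line(R @ p + t, q1, q2)
                for p, q1, q2 in edge_terms]
        return np.asarray(res)
    # method='lm' needs at least as many residual terms as parameters (6 here).
    return least_squares(residuals, x0, method="lm").x
```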
In one embodiment, the resampled point cloud has a higher sampling density than the initial point cloud; and/or the resampled point cloud is less noisy than the initial point cloud.
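For intuition only: one way a resampling step can raise density and uniformity while averaging out noise is to treat the measured range as a function of direction and interpolate it onto a finer, uniform angular grid. The actual resampling of this disclosure (for example, polynomial-surface based) may differ; the sketch below is a stand-in under that stated assumption:

```python
import numpy as np
from scipy.interpolate import griddata

def resample_uniform(points, n_rows, n_cols):
    """Interpolate range-vs-direction onto a uniform angular grid."""
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])                  # azimuth
    el = np.arcsin(np.clip(points[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    gaz, gel = np.meshgrid(np.linspace(az.min(), az.max(), n_cols),
                           np.linspace(el.min(), el.max(), n_rows))
    gr = griddata((az, el), r, (gaz, gel), method="linear")      # NaN outside hull
    m = ~np.isnan(gr)
    gaz, gel, gr = gaz[m], gel[m], gr[m]
    return np.stack([gr * np.cos(gel) * np.cos(gaz),             # back to Cartesian
                     gr * np.cos(gel) * np.sin(gaz),
                     gr * np.sin(gel)], axis=1)
```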
In one embodiment, the ranging device is a lidar.
For the specific processing details of the parameter calibration device during parameter calibration, reference may be made to the parameter calibration method, which is not described herein again.
An embodiment of the present application further provides an apparatus. As shown in fig. 10, the apparatus includes two or more distance measuring devices 1020, a processor 1040 and a memory 1060, where the memory 1060 is configured to store a computer program, the distance measuring devices 1020 are configured to detect a target scene to generate an initial point cloud, and the processor 1040 executes the computer program to implement the parameter calibration method according to any one of the embodiments of the present application.
In one embodiment, the distance measuring device 1020 is a lidar.
For specific processing details of the device during parameter calibration, reference may be made to the parameter calibration method, which is not described herein again.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the relevant parts of the description of the method embodiments. The device embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. A person of ordinary skill in the art can understand and implement the solution without inventive effort.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present invention are described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, and the description of the embodiments is intended only to help in understanding the method and its core idea. Meanwhile, for a person of ordinary skill in the art, there may be variations in the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (45)

  1. A parameter calibration method is characterized by comprising the following steps:
    resampling initial point clouds obtained by at least two distance measuring devices to obtain resampled point clouds, wherein the distance measuring devices have non-uniform scanning density in a scanning field of view, and the distribution uniformity of the resampled point clouds is higher than that of the initial point clouds;
    determining calibration parameters between the at least two distance measuring devices based on the resampled point cloud.
  2. The parameter calibration method according to claim 1, wherein the resampled point clouds include at least two groups, and the at least two groups of the resampled point clouds respectively correspond to the at least two distance measuring devices; the determining calibration parameters between the at least two ranging devices based on the resampled point cloud comprises:
    calculating initial calibration parameters between the at least two distance measuring devices based on the at least two groups of resampled point clouds and a preset point cloud matching algorithm;
    extracting feature points from the at least two groups of resampled point clouds respectively;
    and calculating calibration parameters between the at least two distance measuring devices based on the characteristic points and the initial calibration parameters.
  3. The parameter calibration method according to claim 2, wherein the extracting feature points from the at least two groups of resampled point clouds respectively comprises:
    extracting the feature points from the Nth lines in the at least two groups of the resampled point clouds respectively; and/or extracting the feature points from the Mth lines in the at least two groups of the resampled point clouds respectively.
  4. The parameter calibration method according to claim 3, wherein said extracting the feature points from the Mth lines in the at least two groups of resampled point clouds respectively comprises:
    extracting edge feature points from the Nth line in the at least two groups of resampled point clouds;
    determining an Mth line based on the edge feature point, wherein the Mth line is a line where the edge feature point is located;
    and extracting the other edge feature points from the Mth line.
  5. The parameter calibration method according to claim 3 or 4, wherein when the resampled point cloud is projected onto a reference surface to form an N×M two-dimensional lattice, each point on the Nth line is projected onto the Nth row of the N×M two-dimensional lattice and each point on the Mth line is projected onto the Mth column of the N×M two-dimensional lattice.
  6. The parameter calibration method according to claim 5, wherein the reference plane is a plane perpendicular to the axis of the distance measuring device.
  7. The parameter calibration method according to claim 3, wherein said extracting the feature points from the Nth lines of the at least two groups of resampled point clouds respectively comprises:
    for any one of the at least two sets of resampled point clouds: sequentially extracting the feature points from the points on the Nth line, in order from left to right or from right to left.
  8. The parameter calibration method according to claim 3, wherein said extracting the feature points from the Mth lines in the at least two groups of resampled point clouds respectively comprises:
    for any one of the at least two sets of resampled point clouds: sequentially extracting the feature points from the points on the Mth line, in order from top to bottom or from bottom to top.
  9. The parameter calibration method according to any one of claims 2 to 7, wherein the feature points are determined based on the curvature of each point in the resampled point cloud.
  10. The parameter calibration method according to any one of claims 2 to 8, wherein the feature points include plane feature points and edge feature points; a plane feature point is a point whose curvature is smaller than a first preset threshold, and an edge feature point is a point whose curvature is larger than a second preset threshold.
  11. The parameter calibration method according to any one of claims 9 to 10, wherein the curvature of each point is determined based on a resultant vector of the neighboring points on the same line as the point.
  12. The parameter calibration method according to claim 11, wherein the neighboring points comprise:
    one or more points on the same Nth line as the point and located to the left or right of the point; or
    one or more points on the same Mth line as the point and located above or below the point.
  13. The parameter calibration method according to claim 2, wherein the preset point cloud matching algorithm comprises one or more of an ICP algorithm or an NDT algorithm.
  14. The parameter calibration method according to claim 2, wherein the calculating the calibration parameters based on the feature points and the initial calibration parameters comprises:
    determining a residual based on the feature points and the initial calibration parameters, wherein the residual is used for representing the degree of matching of the feature points in two resampled point clouds;
    and calculating the calibration parameters based on a preset optimization algorithm and the residual error.
  15. The parameter calibration method according to claim 14, wherein said determining a residual based on said feature point and said initial calibration parameter comprises:
    determining, based on the initial calibration parameters, matching feature points in a second resampled point cloud for the feature points in a first resampled point cloud, wherein the first resampled point cloud and the second resampled point cloud are any two groups of the at least two groups of resampled point clouds;
    determining one or more target feature points of the second resampled point cloud that are closest to the matching feature point;
    determining the residual based on the matching feature points and the target feature points.
  16. The parameter calibration method according to claim 15, wherein the residual is determined based on a distance between the matching feature point and a plane formed by the target feature points and a distance between the matching feature point and a straight line formed by the target feature points.
  17. The parameter calibration method according to claim 16, wherein if the feature point is a plane feature point, the target feature points are the three plane feature points closest to the matching feature point among the feature points of the second resampled point cloud, and the plane is the plane formed by the three plane feature points.
  18. The parameter calibration method according to claim 16, wherein if the feature point is an edge feature point, the target feature points are the two edge feature points closest to the matching feature point among the feature points of the second resampled point cloud, and the straight line is the straight line formed by the two edge feature points.
  19. The parameter calibration method according to claim 14, wherein the optimization algorithm comprises one or more of a Newton algorithm, a Gauss-Newton algorithm, or a Levenberg-Marquardt algorithm.
  20. The parameter calibration method according to any one of claims 1 to 19, wherein the sampling density of the resampled point cloud is higher than that of the initial point cloud; and/or the resampled point cloud is less noisy than the initial point cloud.
  21. The parameter calibration method according to any one of claims 1 to 20, wherein the distance measuring device is a laser radar.
  22. A parameter calibration apparatus, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to read the computer program stored in the memory and execute the following processes:
    resampling initial point clouds obtained by at least two distance measuring devices to obtain resampled point clouds, wherein the distance measuring devices have non-uniform scanning density in a scanning field of view, and the distribution uniformity of the resampled point clouds is higher than that of the initial point clouds;
    and calculating calibration parameters between the at least two distance measuring devices based on the resampled point cloud.
  23. The parameter calibration device according to claim 22, wherein the resampled point clouds include at least two groups, and the at least two groups of the resampled point clouds respectively correspond to the at least two distance measuring devices; and when calculating the calibration parameters between the at least two distance measuring devices based on the resampled point cloud, the processor is specifically configured to:
    calculating initial calibration parameters between the at least two distance measuring devices based on the at least two groups of resampled point clouds and a preset point cloud matching algorithm;
    extracting feature points from the at least two groups of resampled point clouds respectively;
    and calculating calibration parameters between the at least two distance measuring devices based on the characteristic points and the initial calibration parameters.
  24. The parameter calibration device according to claim 23, wherein when the processor is configured to extract the feature points from the at least two groups of resampled point clouds respectively, the processor is specifically configured to:
    extracting the feature points from the Nth lines in the at least two groups of the resampled point clouds respectively; and/or extracting the feature points from the Mth lines in the at least two groups of the resampled point clouds respectively.
  25. The parameter calibration device according to claim 24, wherein when the processor is configured to extract the feature points from the Mth lines in the at least two sets of resampled point clouds respectively, the processor is specifically configured to:
    extracting edge feature points from the Nth line in the at least two groups of resampled point clouds;
    determining an Mth line based on the edge feature point, wherein the Mth line is a line where the edge feature point is located;
    and extracting the other edge feature points from the Mth line.
  26. The parameter calibration device according to claim 24 or 25, wherein when the resampled point cloud is projected onto a reference plane to obtain an N×M two-dimensional lattice, each point on the Nth line is projected onto the Nth row in the N×M two-dimensional lattice, and each point on the Mth line is projected onto the Mth column in the N×M two-dimensional lattice.
  27. The parameter calibration device according to claim 26, wherein the reference plane is a plane perpendicular to the axis of the distance measuring device.
  28. The parameter calibration device according to claim 24, wherein when the processor is configured to extract the feature points from the Nth lines in the at least two sets of resampled point clouds, the processor is specifically configured to:
    for any one of the at least two sets of resampled point clouds: sequentially extracting the feature points from the points on the Nth line, in order from left to right or from right to left.
  29. The parameter calibration device according to claim 25, wherein when the processor is configured to extract the feature points from the Mth lines in the at least two sets of resampled point clouds, the processor is specifically configured to:
    for any one of the at least two sets of resampled point clouds: sequentially extracting the feature points from the points on the Mth line, in order from top to bottom or from bottom to top.
  30. The parameter calibration device according to any one of claims 23 to 29, wherein the feature points are determined based on the curvature of each point in the resampled point cloud.
  31. The parameter calibration device according to any one of claims 23 to 30, wherein the feature points include plane feature points and edge feature points; a plane feature point is a point whose curvature is smaller than a first preset threshold, and an edge feature point is a point whose curvature is larger than a second preset threshold.
  32. The parameter calibration device according to any one of claims 30 to 31, wherein the curvature of each point is determined based on a resultant vector of the neighboring points on the same line as the point.
  33. The parameter calibration device according to claim 32, wherein the neighboring points include:
    one or more points on the same Nth line as the point and located to the left or right of the point; or
    one or more points on the same Mth line as the point and located above or below the point.
  34. The parameter calibration device according to claim 23, wherein the preset point cloud matching algorithm comprises one or more of an ICP algorithm or an NDT algorithm.
  35. The parameter calibration device according to claim 23, wherein when the processor is configured to calculate the calibration parameters based on the feature points and the initial calibration parameters, the processor is specifically configured to:
    determining a residual based on the feature points and the initial calibration parameters, wherein the residual is used for representing the degree of matching of the feature points in two resampled point clouds;
    and calculating the calibration parameters based on a preset optimization algorithm and the residual error.
  36. The parameter calibration device according to claim 35, wherein when the processor is configured to determine the residual based on the feature points and the initial calibration parameters, the processor is specifically configured to:
    determining, based on the initial calibration parameters, matching feature points in a second resampled point cloud for the feature points in a first resampled point cloud, wherein the first resampled point cloud and the second resampled point cloud are any two groups of the at least two groups of resampled point clouds;
    determining one or more target feature points of the second resampled point cloud that are closest to the matching feature point;
    determining the residual based on the matching feature points and the target feature points.
  37. The parameter calibration device according to claim 36, wherein the residual is determined based on a distance between the matching feature point and a plane formed by the target feature points and a distance between the matching feature point and a straight line formed by the target feature points.
  38. The parameter calibration device according to claim 37, wherein if the feature point is a plane feature point, the target feature points are the three plane feature points closest to the matching feature point among the feature points of the second resampled point cloud, and the plane is the plane formed by the three plane feature points.
  39. The parameter calibration device according to claim 37, wherein if the feature point is an edge feature point, the target feature points are the two edge feature points closest to the matching feature point among the feature points of the second resampled point cloud, and the straight line is the straight line formed by the two edge feature points.
  40. The parameter calibration device according to claim 35, wherein the optimization algorithm comprises one or more of a Newton algorithm, a Gauss-Newton algorithm, or a Levenberg-Marquardt algorithm.
  41. The parameter calibration device according to any one of claims 22 to 40, wherein the sampling density of the resampled point cloud is higher than that of the initial point cloud; and/or the resampled point cloud is less noisy than the initial point cloud.
  42. The parameter calibration device according to any one of claims 22 to 41, wherein the distance measuring device is a laser radar.
  43. An apparatus comprising two or more ranging devices for detecting a target scene to generate an initial point cloud, a processor and a memory, the memory storing a computer program, the processor being configured to read the computer program to perform the method of any one of claims 1 to 21.
  44. The apparatus of claim 43, wherein the ranging device is a lidar.
  45. A computer-readable storage medium for storing program instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1 to 21.
CN201980033276.0A 2019-09-30 2019-09-30 Parameter calibration method, device and equipment Pending CN114270406A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/109700 WO2021062776A1 (en) 2019-09-30 2019-09-30 Parameter calibration method and apparatus, and device

Publications (1)

Publication Number Publication Date
CN114270406A (en) 2022-04-01

Family

ID=75336782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980033276.0A Pending CN114270406A (en) 2019-09-30 2019-09-30 Parameter calibration method, device and equipment

Country Status (2)

Country Link
CN (1) CN114270406A (en)
WO (1) WO2021062776A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113513988B (en) * 2021-07-12 2023-03-31 广州小鹏自动驾驶科技有限公司 Laser radar target detection method and device, vehicle and storage medium
CN115980694A (en) * 2021-10-15 2023-04-18 华为技术有限公司 Data processing and transmitting method and device
CN115993089B (en) * 2022-11-10 2023-08-15 山东大学 PL-ICP-based online four-steering-wheel AGV internal and external parameter calibration method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140063189A1 (en) * 2012-08-28 2014-03-06 Digital Signal Corporation System and Method for Refining Coordinate-Based Three-Dimensional Images Obtained from a Three-Dimensional Measurement System
CN103940369A (en) * 2014-04-09 2014-07-23 大连理工大学 Quick morphology vision measuring method in multi-laser synergic scanning mode
CN104091321B (en) * 2014-04-14 2016-10-19 北京师范大学 It is applicable to the extracting method of the multi-level point set feature of ground laser radar point cloud classifications
CN106886980B (en) * 2015-12-11 2020-08-07 北京智行者科技有限公司 Point cloud density enhancement method based on three-dimensional laser radar target identification
CN107270810B (en) * 2017-04-28 2018-06-22 深圳大学 The projector calibrating method and device of multi-faceted projection

Also Published As

Publication number Publication date
WO2021062776A1 (en) 2021-04-08

Similar Documents

Publication Publication Date Title
CN110574071B (en) Apparatus, method and system for aligning 3D data sets
Guerneve et al. Three‐dimensional reconstruction of underwater objects using wide‐aperture imaging SONAR
US10438408B2 (en) Resolution adaptive mesh for performing 3-D metrology of an object
Preston Automated acoustic seabed classification of multibeam images of Stanton Banks
CN114270406A (en) Parameter calibration method, device and equipment
CN108875804B (en) Data processing method based on laser point cloud data and related device
CN108169751B (en) Three-dimensional rasterization method for weather radar base data, computer-readable storage medium and electronic device
CN111123242B (en) Combined calibration method based on laser radar and camera and computer readable storage medium
KR102186733B1 (en) 3D modeling method for undersea topography
Sac et al. 2D high-frequency forward-looking sonar simulator based on continuous surfaces approach.
CN113534077A (en) Radar radiation source power inversion method and device and electronic equipment
US20210256740A1 (en) Method for increasing point cloud sampling density, point cloud processing system, and readable storage medium
CN107966702B (en) construction method and device of environment map
CN117095038A (en) Point cloud filtering method and system for laser scanner
CN111948658A (en) Deep water area positioning method for identifying and matching underwater landform images
CN116740151A (en) InSAR point cloud registration method and terminal equipment
Zienkiewicz et al. Matrix strengthening the identification of observations with split functional models in the squared Msplit (q) estimation process
RU2683626C1 (en) Method of identification of supporting points on space images of terrain during transformation thereof
CN116559883A (en) Correction method of side-scan sonar image and side-scan sonar mosaic image
US20110280473A1 (en) Rotation estimation device, rotation estimation method, and record medium
WO2022165793A1 (en) Extrinsic parameter calibration method and apparatus and computer readable storage medium
CN115291179A (en) Strabismus SAR two-dimensional resolution analysis method, electronic device and storage medium
CN111736157B (en) PPI data-based prediction method and device for nowcasting
CN115205354A (en) Phased array laser radar imaging method based on RANSAC and ICP point cloud registration
CN114494020A (en) Data splicing method for cable channel point cloud data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination