CN113706710A - Virtual point multi-source point cloud fusion method and system based on FPFH (Fast Point Feature Histogram) feature difference - Google Patents

Virtual point multi-source point cloud fusion method and system based on FPFH (Fast Point Feature Histogram) feature difference

Info

Publication number
CN113706710A
CN113706710A (application CN202110919653.XA; granted as CN113706710B)
Authority
CN
China
Prior art keywords
point cloud
point
points
fpfh
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110919653.XA
Other languages
Chinese (zh)
Other versions
CN113706710B (en)
Inventor
郑莉
李烛焜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202110919653.XA priority Critical patent/CN113706710B/en
Publication of CN113706710A publication Critical patent/CN113706710A/en
Application granted granted Critical
Publication of CN113706710B publication Critical patent/CN113706710B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32: Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A virtual point multi-source point cloud fusion method and system based on FPFH (Fast Point Feature Histogram) feature difference comprises: traversing the low-precision point cloud and searching virtual homonymous points, taking the low-precision point cloud as the source point cloud and the relatively high-precision point cloud as the target point cloud, converting points in the source point cloud into the target point cloud with an initial conversion matrix, and retaining the conversion points whose Euclidean distance to the nearest neighbor point is less than or equal to a threshold; calculating the FPFH features of the retained conversion points in the source point cloud, searching neighborhoods in the target point cloud and dividing them into voxels to obtain voxel center points, calculating the FPFH features of the voxel center points, computing the F2 distance, learning the feature difference with a CNN, and fusing coordinates to obtain registration corresponding points; carrying out rigid point cloud registration between the source and target point clouds until the iteration ends to obtain the optimal rigid registration matrix, and improving the precision of the low-precision point cloud through point cloud updating. The method supports multi-source point clouds with noise, different sampling resolutions and local distortion, and improves the precision of point cloud registration and fusion.

Description

Virtual point multi-source point cloud fusion method and system based on FPFH (Fast Point Feature Histogram) feature difference
Technical Field
The invention belongs to the field of multi-source point cloud registration and fusion, and relates to a technique for obtaining registered virtual homonymous points by learning feature differences, for point cloud registration and fusion.
Background
Point cloud registration is a very important link in the point cloud fusion process: it rotates and translates point clouds given in two different coordinate systems so that the two point clouds lie in the same coordinate system. Point cloud data has been applied in more and more fields in recent years, such as robotics, autonomous driving, face recognition and gesture recognition. In autonomous driving systems at level L3 and above, a high-precision map is an indispensable component. Compared with a general navigation map, a high-precision map is a special map with centimeter-level precision and detailed lane information; it describes road information more richly, finely and comprehensively, and reflects the real condition of the road more accurately. There are roughly three methods for acquiring high-precision map point cloud data: mobile mapping vehicle acquisition, unmanned aerial vehicle aerial survey, and 1:500 topographic mapping. The sensors in the various acquisition schemes differ, and because of this sensor heterogeneity the obtained point cloud data differ greatly in precision, range and point density; specifically, point clouds obtained by some sensors have high precision but small coverage, while those obtained by other sensors have low precision but large coverage. How to fuse point cloud data obtained by different sensors, combining the advantages of each sensor, is therefore the key to generating a high-precision map. (References: Jingnan, L.; Hangbin, W.; Chi, G.; Hongmin, Z.; Wenwei, Z.; Cheng, Y. Progress and consistency of high precision map. Eng. Sci. 2018. Zongjuan, C.; Erxin, S.; Dandan, L.; Congcong, Z.; Xu, C. Analysis of status quo of high-precision maps and research on evaluation schemes. Computer Knowledge and Technology. 2018.)
Most existing research on point cloud fusion is realized through fine point cloud registration; the mature related methods are the Iterative Closest Point (ICP) algorithm proposed by Besl and McKay and the Normal Distributions Transform (NDT) algorithm proposed by Biber et al. Fundamentally, the iterative closest point algorithm searches for corresponding points between two existing point cloud data sets, but due to various factors such as the sensor and the scanning view angle, these cannot be corresponding points in the true sense, and the registration result has a certain error. Meanwhile, the iterative closest point algorithm can only apply a rigid transformation to the target point cloud as a whole and cannot correct the individual points in the target point cloud, which is another deficiency in multi-source point cloud registration. (References: Besl, P. J.; McKay, N. D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992. Biber, P.; Straßer, W. The normal distributions transform: A new approach to laser scan matching. Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003). IEEE, 2003.)
Technology is developing rapidly, and with continuing research in the field of artificial intelligence, deep learning has entered an unprecedented period of popularity. Deep learning lets a machine learn from human activity to achieve a simulation effect, thereby solving some complex problems, and some scholars have studied point cloud matching in combination with deep learning techniques. Elbaz, Avraham et al. aggregate local features through a convolutional neural network to complete point cloud matching, and in 2019 Baidu's autonomous-vehicle group proposed the first end-to-end high-precision point cloud registration network; however, that network mainly acts on point cloud data obtained by two identical sensors and focuses on eliminating key points selected on dynamic ground objects, so it is not suitable for fusing point cloud data obtained by different sensors with different precision, different resolution and noise. Therefore, the present research mainly addresses sensor-derived point cloud data with such differences, and aims to improve the precision of point cloud registration and fusion. (References: Chen, X. Research on algorithm and application of deep learning based on convolutional neural network. Zhejiang Gongshang University, 2013. Elbaz, G.; Avraham, T.; Fischer, A. 3D point cloud registration for localization using a deep neural network auto-encoder. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. Lu, W.; Wan, G.; Zhou, Y.; Fu, X.; Yuan, P.; Song, S. DeepVCP: An end-to-end deep neural network for point cloud registration. Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019.)
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a virtual homonymous point multi-source point cloud data fusion method based on FPFH (Fast Point Feature Histogram) feature difference, which, for multi-source point clouds with noise, different sampling resolutions and local distortion, introduces the fast point feature histogram (FPFH) feature, establishes point cloud spatial information through voxels, introduces a CNN to learn the feature differences, and improves the precision of the point cloud registration and fusion process.
The technical scheme adopted by the invention is a virtual point multi-source point cloud fusion method based on FPFH (Fast Point Feature Histogram) feature difference, comprising the following steps:
step 1, traversing low-precision point clouds, searching virtual homonymy points, and realizing the following method,
regarding two point cloud data sets with different precisions, taking a low-precision point cloud as a source point cloud and a relatively high-precision point cloud as a target point cloud, and converting points in the source point cloud into the target point cloud by using an initial conversion matrix;
calculating Euclidean distances between each conversion point and the nearest neighbor point in the target point cloud, comparing the calculated distances with a preset threshold value, deleting the conversion points if the calculated distances are larger than the threshold value, and reserving the conversion points smaller than or equal to the threshold value;
calculating the FPFH features F_s of the retained conversion points in the source point cloud; searching the neighborhood in the target point cloud and dividing it into voxels to obtain voxel center points, and calculating the FPFH features F_t of the voxel center points, wherein the FPFH feature denotes the fast point feature histogram feature;
calculating the F2 distance between the FPFH features F_s and F_t, inputting it into a CNN for feature-difference learning, outputting the probability of each registration corresponding point, and obtaining the registration corresponding points by coordinate fusion using the voxel center point coordinates;
step 2, carrying out rigid point cloud registration on the source point cloud and the target point cloud by using the registration corresponding points, returning to the step 1 to repeat until an iteration end condition is met, obtaining an optimal rigid registration matrix, and entering step 3;
and 3, further improving the precision of the low-precision point cloud through point cloud updating, including searching virtual points by adopting the implementation mode of the step 1 according to the optimal rigid registration matrix obtained in the step 2, and updating the points in the source point cloud by using the virtual points.
Furthermore, the FPFH feature is defined as follows:

FPFH(p) = SPFH(p) + (1/k) · Σ_{t=1}^{k} (1/w_t) · SPFH(p_t)

where SPFH(p) is the simplified PFH feature of the current point p, k is the number of neighborhood points, w_t is the weight of the t-th point in the neighborhood, and SPFH(p_t) is the simplified PFH feature of the t-th neighborhood point; the PFH feature is the point feature histogram.
Moreover, the neighborhood is searched in the target point cloud and divided into voxels to obtain the voxel center points, implemented as follows: by neighborhood search, the neighborhood of each conversion point in the target point cloud is divided into (r/s)^3 voxels, where r is the width of the neighborhood and s is the size of a voxel.
Also, the size s of the voxel is initially set to half the neighborhood width r, and when step 3 is performed, the size s of the voxel is updated to a smaller value.
Also, when step 3 is performed, the voxel size s is updated to a smaller value, for example 1/3 or 1/4 of the neighborhood width r.
Moreover, the registration corresponding points are obtained by coordinate fusion using the voxel center point coordinates, implemented as follows: using the probabilities w_j output by the CNN after learning the feature differences, the virtual homonymous point q'_i corresponding to the conversion point q_i is calculated, giving the registration corresponding points:

q'_i = Σ_{j=1}^{J} w_j · q_ij

where q_ij is the j-th voxel center point, j = 1, ..., J.
On the other hand, the invention also provides a virtual point multi-source point cloud fusion system based on the FPFH characteristic difference, which is used for realizing the virtual point multi-source point cloud fusion method based on the FPFH characteristic difference.
And, including the following modules,
the first module is used for traversing low-precision point clouds to search virtual homonymy points, and the realization method is as follows,
regarding two point cloud data sets with different precisions, taking a low-precision point cloud as a source point cloud and a relatively high-precision point cloud as a target point cloud, and converting points in the source point cloud into the target point cloud by using an initial conversion matrix;
calculating Euclidean distances between each conversion point and the nearest neighbor point in the target point cloud, comparing the calculated distances with a preset threshold value, deleting the conversion points if the calculated distances are larger than the threshold value, and reserving the conversion points smaller than or equal to the threshold value;
calculating the FPFH features F_s of the retained conversion points in the source point cloud; searching the neighborhood in the target point cloud and dividing it into voxels to obtain voxel center points, and calculating the FPFH features F_t of the voxel center points, wherein the FPFH feature denotes the fast point feature histogram feature;
calculating the F2 distance between the FPFH features F_s and F_t, inputting it into a CNN for feature-difference learning, outputting the probability of each registration corresponding point, and obtaining the registration corresponding points by coordinate fusion using the voxel center point coordinates;
the second module is used for carrying out point cloud rigid registration on the source point cloud and the target point cloud by using the registration corresponding points, commanding the first module to work until an iteration end condition is met, obtaining an optimal rigid registration matrix and commanding the third module to work;
and the third module is used for further improving the precision of the low-precision point cloud through point cloud updating, and comprises the steps of searching virtual points by adopting the working mode of the first module according to the optimal rigid registration matrix obtained by the second module, and updating the points in the source point cloud by utilizing the virtual points.
Or, the system comprises a processor and a memory, the memory being used to store program instructions and the processor being used to call the program instructions stored in the memory to execute the above virtual point multi-source point cloud fusion method based on FPFH feature difference.
Or, the system comprises a readable storage medium on which a computer program is stored; when the computer program is executed, it realizes the above virtual point multi-source point cloud fusion method based on FPFH feature difference.
The invention provides a technical scheme for fusing virtual homonymous point multi-source point cloud data based on FPFH (Fast Point Feature Histogram) feature difference. For multi-source point clouds with noise, different sampling resolutions and local distortion, it introduces the FPFH feature, establishes voxels to exploit point cloud spatial information, and introduces a CNN to learn feature differences, improving the precision of the point cloud registration and fusion process through virtual homonymous point searching and point cloud updating. In the virtual homonymous point search, the focus lies on finding corresponding points: by learning the FPFH feature difference, the process of synthesizing a virtual homonymous point from voxels and probabilities solves the low accuracy of searching for corresponding points among existing points in existing methods. Point cloud updating improves the precision of the low-precision point cloud, so that point clouds of different precision become consistent in precision after fusion and precision errors are eliminated; the correction direction and magnitude of each point follow the characteristics of that point, realizing fine small-range fitting, while the correction amounts between regions remain without large jumps overall, ensuring the continuity of the whole point cloud region.
The invention has the beneficial effects that: it avoids the traditional approach of directly searching for corresponding points in the existing point cloud, and instead synthesizes virtual homonymous points probabilistically according to the feature difference; and it avoids the problem of reduced accuracy after fusing a low-accuracy point cloud with a high-accuracy one.
Drawings
FIG. 1 is a flowchart of an overall method of an embodiment of the invention;
FIG. 2 is a schematic diagram of local virtual homonym search according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a CNN network according to an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Referring to fig. 1, the virtual homonymy point multi-source point cloud data fusion method based on FPFH feature difference provided by the embodiment of the present invention includes the following steps:
step 1, traversing low-precision point clouds and searching virtual homonymy points;
the step is an improvement of searching for corresponding points in a classical ICP algorithm. However, when directly searching for corresponding points in the existing point cloud, due to various factors such as sensors and scanning view angles, the corresponding points cannot be corresponding points in a true sense, and a certain error exists in a registration result. Different from the classical ICP algorithm in which corresponding points are directly searched in the existing point cloud, the virtual homonymy point searching module increases the robustness to existing noise by introducing the FPFH (fast Fourier transform) feature.
Using the FPFH feature alone is not enough to retain the neighborhood spatial information of the point cloud; therefore, on the basis of the FPFH feature, the virtual homonymous point search module introduces voxels to further retain and learn the neighborhood spatial information of the point cloud.
The invention also uses a CNN to learn the feature difference. Exploiting the strong expressive power of CNNs in similarity learning, deeper and more abstract feature information can be learned automatically and the probability of corresponding points can be learned better; on the basis of the point cloud spatial information exploited through voxel coordinate fusion, virtual homonymous points of higher precision are obtained through these probabilities, improving the precision of registration and fusion.
The step 1 of the invention comprises the following contents:
step 1), regarding two point cloud data sets with different precisions, taking a low-precision point cloud as a source point cloud and a relatively high-precision point cloud as a target point cloud, and converting points in the source point cloud into the target point cloud by using an initial conversion matrix;
step 2), setting a threshold, calculating the Euclidean distance between each conversion point and its nearest neighbor in the target point cloud, and comparing the calculated distance with the threshold; if the distance is larger than the threshold, the conversion point lies outside the overlap region and is deleted, and the conversion points with distance smaller than or equal to the threshold are retained;
step 3), calculating FPFH characteristics of the conversion points reserved in the step 2) in the source point cloud, searching neighborhoods in the target point cloud, dividing voxels to obtain voxel center points, and calculating the FPFH characteristics of the voxel center points;
In order to take the surrounding neighborhood spatial information of the point cloud into account, the invention introduces voxels, which preserves the spatial information of the point cloud and increases the stability and precision of the search for registration corresponding points.
Step 4), calculating the F2 distance between the FPFH characteristics obtained in the step 3), feeding the FPFH characteristics into a CNN network for characteristic difference learning, outputting the probability of registration corresponding points, and obtaining the registration corresponding points through coordinate fusion by using the voxel center point coordinates based on the step 3);
in consideration of the superiority of the CNN network learning feature difference, the F2 distance of the CNN network learning FPFH feature is introduced in the invention, so that the probability of the registration corresponding point can be more accurately obtained.
For ease of reference, the following provides a detailed implementation of the steps of the embodiments. Referring to fig. 2, the specific implementation of the virtual homonymy point search in the embodiment of the present invention includes the following sub-steps, which may be provided as a virtual homonymy point search module during the specific implementation:
step 1.1, firstly, selecting two point clouds P and Q with different precisions, taking the point cloud P with low precision as a source point cloud and the point cloud Q with relatively high precision as a target point cloud, and taking the point P in the source point cloud as a target point cloudi,pi∈R3,i=1,...,NpAs candidate points, R3Is a three-dimensional space, NpThe number of points of the point cloud P is shown, and i is a point identifier;
wherein R is3Representing the three-dimensional space in which the points are located.
Step 1.2: all points p_i in the source point cloud are converted with the initial conversion matrix (R, T) to obtain the conversion points p_i', p_i' ∈ R^3, i = 1, ..., N_p, where R is the rotation matrix and T is the translation matrix; the initial conversion matrix may simply be the identity matrix.
Step 1.3: all obtained conversion points p_i' are placed into the target point cloud Q and the neighborhood points of each conversion point are searched in Q. A threshold r is set, and the Euclidean distance d between each conversion point and its nearest neighbor in the target point cloud is calculated and compared with the threshold; if the distance is larger than the threshold, the conversion point lies outside the overlap region and is deleted, and the conversion points with distance smaller than or equal to the threshold are retained as q_i, q_i ∈ R^3, i = 1, ..., N_q.
Because the neighborhood point query is performed locally on the point cloud, the suggested threshold r is 10 times the target point cloud resolution. This step prepares the search for virtual homonymous points in the following steps, so the nearest-neighbor distance should be about two to three times the target point cloud resolution; if, however, the resolutions of the target and source point clouds differ, the value can reasonably be widened to 10 times, and the specific value can be set as required.
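As a concrete illustration of steps 1.2 and 1.3, the following sketch applies an initial (R, T) to the source points and keeps only the conversion points whose nearest-neighbor distance in the target cloud is within the threshold. The function name is illustrative, numpy is assumed available, and a brute-force search is used for clarity; a real implementation would use a KD-tree.

```python
import numpy as np

def filter_transformed_points(source, target, R, T, threshold):
    """Transform source points by (R, T) and keep only those whose
    Euclidean distance to the nearest target point is <= threshold."""
    transformed = source @ R.T + T                     # p_i' = R p_i + T
    # pairwise distance matrix of shape (N_p, N_q); brute force for clarity
    d = np.linalg.norm(transformed[:, None, :] - target[None, :, :], axis=2)
    nearest = d.min(axis=1)                            # distance to nearest neighbor
    return transformed[nearest <= threshold]
```

The threshold passed here corresponds to the r discussed above (about 10 times the target cloud resolution when resolutions differ).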
Step 1.4: for each obtained conversion point q_i, the FPFH feature value is calculated in the source point cloud to obtain F_s(q_i).
The invention improves the searching of corresponding points in the classical ICP algorithm. When corresponding points are directly searched in the existing point cloud, the corresponding points cannot be corresponding points in a true sense due to various factors such as a sensor and a scanning visual angle, and a registration result has certain errors. Unlike the classical ICP algorithm, which directly finds the corresponding points in the existing point cloud, the present invention increases the robustness to the presence of noise by introducing the FPFH feature.
FPFH(p) = SPFH(p) + (1/k) · Σ_{t=1}^{k} (1/w_t) · SPFH(p_t)

where SPFH(p) is the simplified PFH feature of the current point, k is the number of neighborhood points, w_t is the weight of the t-th point in the neighborhood, specifically the Euclidean distance between the point pair, and SPFH(p_t) is the simplified PFH feature of the t-th neighborhood point.

The PFH feature is the Point Feature Histogram; the invention introduces the Fast Point Feature Histogram (FPFH) on this basis.
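The weighting formula above can be transcribed directly; here the SPFH histograms and the distance weights w_t are assumed to be precomputed, and the histogram length is arbitrary (a minimal sketch, not the patent's implementation):

```python
import numpy as np

def fpfh_from_spfh(spfh_p, spfh_neighbors, weights):
    """FPFH(p) = SPFH(p) + (1/k) * sum_t (1/w_t) * SPFH(p_t).
    spfh_p: (B,) histogram of the query point.
    spfh_neighbors: (k, B) histograms of the k neighborhood points.
    weights: (k,) Euclidean distances w_t between p and its neighbors."""
    w = np.asarray(weights, dtype=float)
    neigh = np.asarray(spfh_neighbors, dtype=float)
    k = len(w)
    contrib = (neigh / w[:, None]).sum(axis=0)   # sum_t (1/w_t) * SPFH(p_t)
    return np.asarray(spfh_p, dtype=float) + contrib / k
```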
Step 1.5: for each conversion point q_i, the neighborhood in the target point cloud is searched and divided into voxels to obtain the voxel center points q_ij, j = 1, ..., J, where i is the index of the conversion point and j is the index of the voxel center point within the corresponding neighborhood.
Using the FPFH feature alone is not enough to retain the neighborhood spatial information of the point cloud, so voxels are introduced on this basis to further retain and learn it. When extracting FPFH features, multi-core multi-thread OpenMP parallel processing can be embedded to accelerate the fast feature histogram extraction of the key points.
After a point of the source point cloud has been converted into the target point cloud to obtain a conversion point, the space around the conversion point, taken as the origin, is segmented at a certain interval along the x, y and z coordinate axes. In the embodiment, the neighborhood of the conversion point in the target point cloud is divided by neighborhood search into (r/s)^3 voxels, where r is the width of the neighborhood, i.e. the threshold set in step 1.3, and s is the size of a voxel; each voxel contains some points of the target point cloud.
In the embodiment, when the voxel size s is half of the neighborhood width r, the number J of voxels is 8, which is the minimum value, and when the result accuracy needs to be further improved in the subsequent step 3, the number of voxels may be increased, the voxel size s may be correspondingly decreased, and the CNN parameter in the CNN network module may be correspondingly changed at the same time.
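A small sketch of this voxel division: splitting the cubic neighborhood of width r around a conversion point into (r/s)^3 voxels and returning their center points. With s = r/2 this gives the 8 voxels mentioned above; the function name and grid layout are illustrative assumptions.

```python
import numpy as np

def voxel_centers(center, r, s):
    """Split the cubic neighborhood of width r around `center` into
    (r/s)**3 voxels of edge length s and return their center points."""
    n = int(round(r / s))                        # voxels per axis
    offsets = (np.arange(n) + 0.5) * s - r / 2.0 # voxel-center offsets on one axis
    gx, gy, gz = np.meshgrid(offsets, offsets, offsets, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    return np.asarray(center, dtype=float) + grid
```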
Step 1.6: the FPFH features F_t(q_ij), j = 1, ..., J, of the voxel center points q_ij are calculated, and the F2 distance between each F_t(q_ij) and F_s(q_i) is computed.
the F2 distance represents a euclidean distance, which is a euclidean distance between two three-dimensional points in space in the present invention, and the calculation method is the prior art, which is not repeated herein.
Step 1.7: the F2 distances obtained in step 1.6 are fed into the CNN to obtain the probability w_j, j = 1, ..., J, of each voxel center point.
To obtain more accurate probabilities w_j, a CNN is used for feature-difference learning. The CNN can exploit its strong expressive power in similarity learning to automatically learn deeper and more abstract feature information and thus learn the probability of corresponding points better; on the basis of the point cloud spatial information exploited through voxel coordinate fusion, virtual homonymous points of higher precision are obtained through the probabilities, improving the precision of registration and fusion.
Referring to fig. 3, the CNN preferably used in the embodiment of the present invention is a three-layer convolutional neural network: the first layer is an 8 × 1 data input layer, the input data being the differences between the FPFH features of the voxel center points and of the point in the source point cloud; the second layer is a 4 × 1 convolutional layer; finally a 1 × 1 fully connected layer follows. After the fully connected layer, a SoftMax function outputs the probability corresponding to each voxel center point, and finally the voxel center point coordinates are weighted and combined with these probabilities to obtain the virtual homonymous point coordinates.
The specific implementation of the CNN network of the embodiment of the invention comprises the following substeps:
step 1.7.1, the obtained F2 distances are input into the convolutional neural network;
step 1.7.2, the network output is passed through SoftMax to obtain the probabilities w_j, j = 1, ..., J;
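The SoftMax step can be sketched as follows; the per-voxel scores would come from the network itself, so only the normalization is shown (subtracting the maximum score for numerical stability):

```python
import numpy as np

def softmax(scores):
    """SoftMax over the network's per-voxel scores, yielding the
    probabilities w_j used to weight the voxel center coordinates."""
    z = np.asarray(scores, dtype=float)
    e = np.exp(z - z.max())      # shift by max: same result, no overflow
    return e / e.sum()
```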
Step 1.8, obtaining the registration corresponding points through coordinate fusion using the voxel center point coordinates:
Using the probabilities w_j, j = 1, ..., J from step 1.7, the virtual homonymous point q'_i corresponding to each transition point q_i is calculated, giving the registration corresponding points:
q'_i = Σ_{j=1}^{J} w_j · q_ij
wherein q_ij is the j-th voxel center point;
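The weighted synthesis of step 1.8 is a one-line computation; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def fuse_virtual_point(voxel_centers, w):
    """q'_i = sum_j w_j * q_ij : probability-weighted sum of voxel-center coordinates."""
    voxel_centers = np.asarray(voxel_centers, float)
    w = np.asarray(w, float)
    return (w[:, None] * voxel_centers).sum(axis=0)

# two voxel centers with equal probability -> their midpoint
q = fuse_virtual_point([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]], [0.5, 0.5])
```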
Step 2, performing rigid point cloud registration between the source point cloud and the target point cloud using the registration corresponding points to obtain a new rotation matrix R' and translation matrix T'; step 1 is repeated iteratively based on the new R' and T' until an iteration end condition (such as a cycle count or error convergence) is reached, yielding the optimal rigid registration matrix, after which step 3 is entered;
The rigid point cloud registration is realized as follows: after the corresponding points are obtained, the registration parameters of the transformation matrix, i.e. the new rotation matrix R' and translation matrix T', are calculated by least squares, the error criterion being minimization of the objective function
min_{R', T'} Σ_i || R' · p_i + T' − q'_i ||²
wherein p_i is the i-th point in the source point cloud and q'_i its registration corresponding point. When the iteration ends, the currently calculated rotation matrix R' and translation matrix T' are the optimal rigid registration matrix.
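The least-squares minimization over corresponding points admits the closed-form SVD (Kabsch) solution commonly used in ICP-style registration; the sketch below assumes that solver, which the patent does not prescribe:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares R, T minimizing sum ||R p_i + T - q_i||^2 (Kabsch/SVD)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = cq - R @ cp
    return R, T

# recover a known transform: rotate 30 degrees about z, then translate
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(1).normal(size=(20, 3))
Q = P @ R_true.T + T_true
R_est, T_est = rigid_fit(P, Q)
```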
Step 3, further improving the precision of the low-precision point cloud through point cloud updating;
To further improve the precision of the source point cloud, after the optimal rigid registration matrix has been obtained in the previous step, the corresponding points of the source point cloud are obtained as in step 1, and the points in the source point cloud are updated with these corresponding points to give the final result.
Because the source point cloud has relatively low precision, the points obtained after registration still do not reach the precision of the high-precision point cloud; updating the points in the source point cloud through step 3 improves the precision of the point cloud.
In specific implementation, step 3 may be provided as a point cloud updating module. The point cloud updating module of the embodiment of the invention comprises the following substeps:
step 3.1, setting a voxel size s of the precision update;
In step 1, when the required accuracy is not high, the voxel size s defaults to half the neighborhood width, giving 8 voxels. When higher accuracy is required in this step, the voxel size s is updated to a smaller value, for example 1/3 or 1/4 of the neighborhood width r, and the corresponding voxel number J becomes 27 or 64;
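Assuming a cubic partition of the neighborhood (consistent with 8 voxels at s = r/2 and 64 at s = r/4), the voxel center points can be generated as follows; the function name is illustrative:

```python
import numpy as np

def voxel_centers(center, r, s):
    """Split the cubic neighborhood of width r around `center` into
    (r/s)^3 voxels of size s and return their center coordinates."""
    n = int(round(r / s))                       # voxels per axis
    offsets = (np.arange(n) + 0.5) * s - r / 2  # center offsets along one axis
    gx, gy, gz = np.meshgrid(offsets, offsets, offsets, indexing="ij")
    grid = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    return np.asarray(center, float) + grid

# s = r/2 -> 8 voxels; s = r/3 -> 27; s = r/4 -> 64
centers = voxel_centers([0.0, 0.0, 0.0], r=1.0, s=0.5)
```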
step 3.2, based on the updated voxel size s and the corresponding voxel number J, searching for a virtual point by using the virtual homonymy point searching process (module) in the step 1, and replacing the initial transformation matrix R and T with the optimal rigid registration matrix R 'and T' at the moment;
step 3.3, updating the points in the source point cloud with the virtual points.
In step 3.2, a virtual point is the virtual homonymous point corresponding to a transition point; during updating, the original point is replaced by its virtual point.
The point cloud updating module of the invention is intended to improve the precision of the low-precision point cloud, so that after two point clouds of different precision are fused their precision is consistent and certain precision errors are eliminated.
Finally, to quantify the effect, precision indices can be calculated. In the embodiment, the precision indices are the rotation error, the translation error and the CD distance.
In specific implementation, the automatic operation of the processes can be realized by adopting a computer software technology.
Experiments were carried out with the technical scheme of the embodiment of the invention, generating visualizations of the registered point clouds:
Data: four kinds of labeled point cloud data were used, namely Bunny, Sign Board, Sculpture and Chair. By applying the above process provided by the invention, the point cloud registration result figures of the method are finally obtained, and the effectiveness of the invention can be confirmed by comparing the rotation error, the translation error and the CD distance.
The rotation error, translation error and CD distance are defined as follows:
R_id = R_GT^(−1) · R_i
Error(R) = arccos((trace(R_id) − 1) / 2) · 180 / π
wherein R_GT is the ground-truth rotation matrix, R_i is the rotation matrix to be evaluated, trace(R_id) is the trace of the matrix R_id, and Error(R) is the rotation error expressed in degrees.
Error(T) = ||t_GT − t_i||_2
wherein t_GT is the ground-truth translation matrix, t_i is the translation matrix to be evaluated, and Error(T) is the translation error.
CD(S_1, S_2) = (1/|S_1|) · Σ_{x∈S_1} min_{y∈S_2} ||x − y||_2 + (1/|S_2|) · Σ_{y∈S_2} min_{x∈S_1} ||x − y||_2
wherein S_1 is the first point cloud, |S_1| its number of points and x any point in S_1; S_2 is the second point cloud, |S_2| its number of points and y any point in S_2; CD(S_1, S_2) is the CD distance.
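The three indices can be sketched directly from the definitions above (plain NumPy with a brute-force CD computation; helper names are illustrative, and the CD is taken in one common form, the sum of the two directed mean nearest-neighbour distances):

```python
import numpy as np

def rotation_error_deg(R_gt, R_est):
    """Angle, in degrees, of the residual rotation R_gt^(-1) R_est."""
    R_id = R_gt.T @ R_est                      # inverse of a rotation is its transpose
    c = (np.trace(R_id) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def translation_error(t_gt, t_est):
    """||t_GT - t_i||_2."""
    return float(np.linalg.norm(np.asarray(t_gt, float) - np.asarray(t_est, float)))

def chamfer_distance(S1, S2):
    """Sum of the mean nearest-neighbour distances in both directions."""
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    d = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# a 90-degree rotation about z evaluated against the identity
R90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
err_r = rotation_error_deg(np.eye(3), R90)
```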
It should be understood that parts of the specification not set forth in detail are well within the prior art.
In specific implementation, a person skilled in the art can realize the automatic operation of the above processes with computer software technology. System devices implementing the method, such as a computer-readable storage medium storing the corresponding computer program of the technical solution of the present invention, and computer equipment including such a computer program and able to run it, should also fall within the scope of the present invention.
In some possible embodiments, a virtual point multi-source point cloud fusion system based on FPFH feature difference is provided, which comprises the following modules,
the first module is used for traversing low-precision point clouds to search virtual homonymy points, and the realization method is as follows,
regarding two point cloud data sets with different precisions, taking a low-precision point cloud as a source point cloud and a relatively high-precision point cloud as a target point cloud, and converting points in the source point cloud into the target point cloud by using an initial conversion matrix;
calculating Euclidean distances between each conversion point and the nearest neighbor point in the target point cloud, comparing the calculated distances with a preset threshold value, deleting the conversion points if the calculated distances are larger than the threshold value, and reserving the conversion points smaller than or equal to the threshold value;
computing the FPFH features of the retained transition points in the source point cloud;
finding a neighborhood in the target point cloud and dividing it into voxels to obtain voxel center points, and computing the FPFH feature of each voxel center point, wherein the FPFH feature denotes the fast point feature histogram feature;
calculating the F2 distance between the FPFH features of the transition points and those of the voxel center points, inputting it into a CNN network for feature difference learning, outputting the probability of each registration corresponding point, and obtaining the registration corresponding points through coordinate fusion using the voxel center point coordinates;
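The distance-threshold filtering of the transition points in the first module can be sketched as follows (brute-force nearest-neighbour search for clarity; in practice a k-d tree would be used, and the function name is illustrative):

```python
import numpy as np

def filter_transition_points(source, target, R, T, threshold):
    """Transform the source points with (R, T) and keep only those whose
    nearest neighbour in the target cloud lies within `threshold`."""
    source, target = np.asarray(source, float), np.asarray(target, float)
    transformed = source @ R.T + T            # transition points in target frame
    d = np.linalg.norm(transformed[:, None, :] - target[None, :, :], axis=2)
    keep = d.min(axis=1) <= threshold         # distance to nearest target point
    return transformed[keep]

source = np.array([[0.0, 0.0, 0.0], [10.0, 10.0, 10.0]])  # second point is an outlier
target = np.array([[0.1, 0.0, 0.0], [1.0, 1.0, 1.0]])
kept = filter_transition_points(source, target, np.eye(3), np.zeros(3), threshold=0.5)
```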
the second module is used for performing rigid point cloud registration between the source point cloud and the target point cloud using the registration corresponding points, invoking the first module repeatedly until an iteration end condition is met, obtaining the optimal rigid registration matrix, and then invoking the third module;
and the third module is used for further improving the precision of the low-precision point cloud through point cloud updating: based on the optimal rigid registration matrix obtained by the second module, virtual points are searched in the working mode of the first module, and the points in the source point cloud are updated with these virtual points.
In some possible embodiments, a virtual point multi-source point cloud fusion system based on FPFH feature difference is provided, which includes a processor and a memory, the memory being used for storing program instructions and the processor being used for calling the instructions stored in the memory to execute the virtual point multi-source point cloud fusion method based on FPFH feature difference described above.
In some possible embodiments, a virtual point multi-source point cloud fusion system based on FPFH feature difference is provided, which includes a readable storage medium on which a computer program is stored; when the computer program is executed, it implements the virtual point multi-source point cloud fusion method based on FPFH feature difference described above.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A virtual point multi-source point cloud fusion method based on FPFH (fast point feature histogram) feature difference, characterized by comprising the following steps: step 1, traversing the low-precision point cloud to search for virtual homonymous points, realized as follows,
regarding two point cloud data sets with different precisions, taking a low-precision point cloud as a source point cloud and a relatively high-precision point cloud as a target point cloud, and converting points in the source point cloud into the target point cloud by using an initial conversion matrix;
calculating Euclidean distances between each conversion point and the nearest neighbor point in the target point cloud, comparing the calculated distances with a preset threshold value, deleting the conversion points if the calculated distances are larger than the threshold value, and reserving the conversion points smaller than or equal to the threshold value;
computing the FPFH features of the retained transition points in the source point cloud;
finding a neighborhood in the target point cloud and dividing it into voxels to obtain voxel center points, and computing the FPFH feature of each voxel center point, wherein the FPFH feature denotes the fast point feature histogram feature;
calculating the F2 distance between the FPFH features of the transition points and those of the voxel center points, inputting it into a CNN network for feature difference learning, outputting the probability of each registration corresponding point, and obtaining the registration corresponding points through coordinate fusion using the voxel center point coordinates;
step 2, carrying out rigid point cloud registration on the source point cloud and the target point cloud by using the registration corresponding points, returning to the step 1 to repeat until an iteration end condition is met, obtaining an optimal rigid registration matrix, and entering step 3;
and 3, further improving the precision of the low-precision point cloud through point cloud updating, including searching virtual points by adopting the implementation mode of the step 1 according to the optimal rigid registration matrix obtained in the step 2, and updating the points in the source point cloud by using the virtual points.
2. The FPFH feature difference-based virtual point multi-source point cloud fusion method of claim 1, wherein:
the FPFH feature is defined as follows,
FPFH(p) = SPFH(p) + (1/k) · Σ_{t=1}^{k} (1/w_t) · SPFH(p_t)
wherein SPFH(p) is the simplified PFH feature of the current point p, k is the number of neighborhood points, w_t is the weight of the t-th point in the neighborhood, and SPFH(p_t) is the simplified PFH feature of the t-th point in the neighborhood; the PFH feature is the point feature histogram.
3. The FPFH feature difference-based virtual point multi-source point cloud fusion method of claim 1, wherein: the neighborhood is searched in the target point cloud and divided into voxels to obtain the voxel center points, realized as follows,
through neighborhood searching, a neighborhood of each conversion point is found in the target point cloud and partitioned into J = (r/s)^3 voxels, wherein r is the width of the neighborhood and s is the size of a voxel.
4. The FPFH feature difference-based virtual point multi-source point cloud fusion method of claim 3, wherein: the size s of the voxel is initially set to half the neighborhood width r, and when step 3 is performed, the size s of the voxel is updated to a smaller value.
5. The FPFH feature difference-based virtual point multi-source point cloud fusion method of claim 4, wherein: when step 3 is performed, the voxel size s is updated to a smaller value, such as 1/3 or 1/4 of the neighborhood width r.
6. The FPFH feature difference-based virtual point multi-source point cloud fusion method of any one of claims 1 to 5, wherein: the registration corresponding points are obtained through coordinate fusion using the voxel center point coordinates, realized as follows,
using the probabilities w_j output by the CNN network after feature difference learning, the virtual homonymous point q'_i corresponding to the transition point q_i is calculated, giving the registration corresponding points:
q'_i = Σ_{j=1}^{J} w_j · q_ij
wherein q_ij is the j-th voxel center point, j = 1, ..., J.
7. A virtual point multi-source point cloud fusion system based on FPFH feature difference, characterized in that: it is used for implementing the virtual point multi-source point cloud fusion method based on FPFH feature difference of any one of claims 1 to 6.
8. The FPFH feature difference-based virtual point multi-source point cloud fusion system of claim 7, wherein: comprises the following modules which are used for realizing the functions of the system,
the first module is used for traversing low-precision point clouds to search virtual homonymy points, and the realization method is as follows,
regarding two point cloud data sets with different precisions, taking a low-precision point cloud as a source point cloud and a relatively high-precision point cloud as a target point cloud, and converting points in the source point cloud into the target point cloud by using an initial conversion matrix;
calculating Euclidean distances between each conversion point and the nearest neighbor point in the target point cloud, comparing the calculated distances with a preset threshold value, deleting the conversion points if the calculated distances are larger than the threshold value, and reserving the conversion points smaller than or equal to the threshold value;
computing the FPFH features of the retained transition points in the source point cloud;
finding a neighborhood in the target point cloud and dividing it into voxels to obtain voxel center points, and computing the FPFH feature of each voxel center point, wherein the FPFH feature denotes the fast point feature histogram feature;
calculating the F2 distance between the FPFH features of the transition points and those of the voxel center points, inputting it into a CNN network for feature difference learning, outputting the probability of each registration corresponding point, and obtaining the registration corresponding points through coordinate fusion using the voxel center point coordinates;
the second module is used for performing rigid point cloud registration between the source point cloud and the target point cloud using the registration corresponding points, invoking the first module repeatedly until an iteration end condition is met, obtaining the optimal rigid registration matrix, and then invoking the third module;
and the third module is used for further improving the precision of the low-precision point cloud through point cloud updating: based on the optimal rigid registration matrix obtained by the second module, virtual points are searched in the working mode of the first module, and the points in the source point cloud are updated with these virtual points.
9. The FPFH feature difference-based virtual point multi-source point cloud fusion system of claim 7, wherein: it comprises a processor and a memory, the memory being used for storing program instructions and the processor being used for calling the instructions stored in the memory to execute the virtual point multi-source point cloud fusion method based on FPFH feature difference.
10. The FPFH feature difference-based virtual point multi-source point cloud fusion system of claim 7, wherein: comprising a readable storage medium, on which a computer program is stored, which, when executed, implements a method for virtual point multi-source point cloud fusion based on FPFH feature differences as claimed in any one of claims 1 to 6.
CN202110919653.XA 2021-08-11 2021-08-11 Virtual point multi-source point cloud fusion method and system based on FPFH characteristic difference Active CN113706710B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110919653.XA CN113706710B (en) 2021-08-11 2021-08-11 Virtual point multi-source point cloud fusion method and system based on FPFH characteristic difference

Publications (2)

Publication Number Publication Date
CN113706710A true CN113706710A (en) 2021-11-26
CN113706710B CN113706710B (en) 2024-03-08

Family

ID=78652409

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110919653.XA Active CN113706710B (en) 2021-08-11 2021-08-11 Virtual point multi-source point cloud fusion method and system based on FPFH characteristic difference

Country Status (1)

Country Link
CN (1) CN113706710B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118564A (en) * 2018-08-01 2019-01-01 湖南拓视觉信息技术有限公司 A kind of three-dimensional point cloud labeling method and device based on fusion voxel
CN110490915A (en) * 2019-08-19 2019-11-22 重庆大学 A kind of point cloud registration method being limited Boltzmann machine based on convolution
US20200334897A1 (en) * 2019-04-18 2020-10-22 Zebra Medical Vision Ltd. Systems and methods for reconstruction of 3d anatomical images from 2d anatomical images
CN112700479A (en) * 2020-12-23 2021-04-23 北京超星未来科技有限公司 Registration method based on CNN point cloud target detection
US11037346B1 (en) * 2020-04-29 2021-06-15 Nanjing University Of Aeronautics And Astronautics Multi-station scanning global point cloud registration method based on graph optimization
CN113192112A (en) * 2021-04-29 2021-07-30 浙江大学计算机创新技术研究院 Partial corresponding point cloud registration method based on learning sampling

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAO GUO ET AL.: ""Correspondence estimation for non-rigid point clouds with automatic part discovery"", 《VISUAL COMPUTER》, vol. 32, pages 1511 - 1524, XP036086904, DOI: 10.1007/s00371-015-1136-5 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004871A (en) * 2022-01-04 2022-02-01 山东大学 Point cloud registration method and system based on point cloud completion
CN114972460A (en) * 2022-06-02 2022-08-30 福州大学 Point cloud registration method combined with image feature context matching
CN115272433A (en) * 2022-09-23 2022-11-01 武汉图科智能科技有限公司 Light-weight point cloud registration method and system for automatic obstacle avoidance of unmanned aerial vehicle
CN115272433B (en) * 2022-09-23 2022-12-09 武汉图科智能科技有限公司 Light-weight point cloud registration method and system for automatic obstacle avoidance of unmanned aerial vehicle
CN115797418A (en) * 2022-09-27 2023-03-14 湖南科技大学 Complex mechanical part measurement point cloud registration method and system based on improved ICP
CN116188543A (en) * 2022-12-27 2023-05-30 中国人民解放军61363部队 Point cloud registration method and system based on deep learning unsupervised
CN116188543B (en) * 2022-12-27 2024-03-12 中国人民解放军61363部队 Point cloud registration method and system based on deep learning unsupervised
CN117495932A (en) * 2023-12-25 2024-02-02 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system
CN117495932B (en) * 2023-12-25 2024-04-16 国网山东省电力公司滨州供电公司 Power equipment heterologous point cloud registration method and system

Also Published As

Publication number Publication date
CN113706710B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN113706710B (en) Virtual point multi-source point cloud fusion method and system based on FPFH characteristic difference
Xia et al. Geometric primitives in LiDAR point clouds: A review
CN113345018B (en) Laser monocular vision fusion positioning mapping method in dynamic scene
CN112767490B (en) Outdoor three-dimensional synchronous positioning and mapping method based on laser radar
CN110930495A (en) Multi-unmanned aerial vehicle cooperation-based ICP point cloud map fusion method, system, device and storage medium
Zhou et al. S4-SLAM: A real-time 3D LIDAR SLAM system for ground/watersurface multi-scene outdoor applications
CN114332348B (en) Track three-dimensional reconstruction method integrating laser radar and image data
CN111915517B (en) Global positioning method suitable for RGB-D camera under indoor illumination unfavorable environment
CN114088081B (en) Map construction method for accurate positioning based on multistage joint optimization
CN115032648B (en) Three-dimensional target identification and positioning method based on laser radar dense point cloud
CN113110455A (en) Multi-robot collaborative exploration method, device and system for unknown initial state
Yin et al. Pse-match: A viewpoint-free place recognition method with parallel semantic embedding
Pan et al. Pin-slam: Lidar slam using a point-based implicit neural representation for achieving global map consistency
Xu et al. A LiDAR SLAM System with Geometry Feature Group Based Stable Feature Selection and Three-Stage Loop Closure Optimization
CN113808152A (en) Unmanned aerial vehicle autonomous navigation method based on ORB _ SLAM2
CN112950786A (en) Vehicle three-dimensional reconstruction method based on neural network
Xu et al. Fast and accurate registration of large scene vehicle-borne laser point clouds based on road marking information
Zhang et al. Accurate real-time SLAM based on two-step registration and multimodal loop detection
Zhou et al. A lidar mapping system for robot navigation in dynamic environments
Guo et al. A feasible region detection method for vehicles in unstructured environments based on PSMNet and improved RANSAC
CN113256693A (en) Multi-view registration method based on K-means and normal distribution transformation
Zhang et al. Object depth measurement from monocular images based on feature segments
Youji et al. A SLAM method based on LOAM for ground vehicles in the flat ground
Wang et al. A novel real-time semantic-assisted LiDAR odometry and mapping system
Guo et al. 3D Lidar SLAM Based on Ground Segmentation and Scan Context Loop Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant