CN116188543B - Unsupervised point cloud registration method and system based on deep learning


Info

Publication number
CN116188543B
Authority
CN
China
Prior art keywords
point cloud
registration
source
network
target
Prior art date
Legal status
Active
Application number
CN202211685566.3A
Other languages
Chinese (zh)
Other versions
CN116188543A (en)
Inventor
牛泽璇
官恺
朱晓雷
金飞
杜延峰
晏非
赵自明
何江
刘雅祺
袁海军
陈思雨
杨帆
Current Assignee
61363 Troop Of Chinese Pla
Original Assignee
61363 Troop Of Chinese Pla
Priority date
Filing date
Publication date
Application filed by 61363 Troop Of Chinese Pla
Priority to CN202211685566.3A
Publication of CN116188543A
Application granted
Publication of CN116188543B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/7753 Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention relates to the technical field of computer-vision three-dimensional point cloud data processing, and in particular to an unsupervised point cloud registration method and system based on deep learning. A point cloud registration network is constructed and trained, in which a twin network extracts clean source point cloud features and clean target point cloud features; a similarity matrix convolution iteratively updates and corrects the clean source point cloud; a latent surface prediction network predicts the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and the spatial positions of points are obtained from the noise prediction results in a denoising manner, so that the source point cloud and the target point cloud can be placed in one-to-one correspondence according to those positions. For an object to be detected, its target point cloud and source point cloud are input into the trained point cloud registration network, which realizes their registration. The method can alleviate noise to a certain extent and improve point cloud registration accuracy.

Description

Unsupervised point cloud registration method and system based on deep learning
Technical Field
The invention relates to the technical field of computer-vision three-dimensional point cloud data processing, and in particular to an unsupervised point cloud registration method and system based on deep learning.
Background
Rigid point cloud registration is the problem of finding a rigid transformation that aligns two given point clouds whose point correspondences are unknown. It has applications in many areas of computer vision and robotics, such as robot and object pose estimation and point-cloud-based mapping. Rigid point cloud registration requires resolving both the unknown point correspondences and the rigid transformation that aligns the point clouds. At present, deep-learning denoising methods have also made good progress relevant to point cloud registration.
In recent years, with the rapid development of deep-learning point cloud registration, a variety of algorithms, including the RPMNet and DCP methods, have continuously emerged. Although supervised deep-learning methods have made great progress, they are limited by the labeling of sample data: the same method still requires manual labeling when applied to a new scene, and systematic or random errors in the labels affect the final registration accuracy. Although many style-transfer methods can now restyle labeled samples, these techniques only apply to the colors of two-dimensional images; no style-transfer technique for three-dimensional point cloud spatial coordinates has been reported. Unsupervised deep-learning registration therefore remains the main direction of future development. Unsupervised registration methods mainly study the unsupervised loss function on top of an existing deep-learning network structure. Although the IDAM network offers an approach worth referencing, its unsupervised loss function is affected by noise and similar factors: with point-pair constraints, registration cannot be achieved under noise or when the point clouds are not in one-to-one correspondence, even if the relative relationship between the source and target point clouds is computed correctly, and the loss function may then feed erroneous signals back to the network structure, degrading the point cloud registration result.
Disclosure of Invention
Therefore, the invention provides an unsupervised point cloud registration method and system based on deep learning, which predict the latent surface of a point cloud in a denoising manner to obtain the position of any point in the point cloud's space, thereby realizing one-to-one correspondence between the source point cloud and the target point cloud; moreover, by introducing a latent surface prediction network and a corresponding loss function, noise can be alleviated to a certain extent, and registration accuracy can be further improved.
According to the design scheme provided by the invention, an unsupervised point cloud registration method based on deep learning is provided, comprising the following steps:
constructing and training a point cloud registration network, wherein the point cloud registration network comprises: a twin network for extracting clean source point cloud features and clean target point cloud features; a similarity matrix convolution for iteratively updating and correcting the clean source point cloud using the tensor concatenation of the clean target point cloud features and the clean source point cloud features; a latent surface prediction network for predicting the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and a point cloud registration output for obtaining the spatial positions of points from the noise prediction results in a denoising manner and placing the source point cloud and target point cloud in one-to-one correspondence according to those positions;
and collecting the target point cloud and source point cloud of the object to be detected, inputting them into the trained point cloud registration network, and realizing registration of the target and source point clouds of the object through the network.
In the above unsupervised point cloud registration method, the twin network further adopts a GNN structure to extract features from the source point cloud and the target point cloud, obtains feature saliency scores with a multi-layer perceptron, ranks the saliency scores of all points in the source and target point clouds respectively, and selects the points whose saliency scores meet a preset threshold as the clean source point cloud and clean target point cloud, respectively.
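For illustration, a minimal PyTorch sketch of this clean-point selection step follows; the feature dimension, MLP widths and the number M of retained points are assumptions rather than values fixed by the invention, and the GNN feature extractor itself is not shown:

import torch
import torch.nn as nn

class SaliencySelector(nn.Module):
    # Hypothetical module: scores each point's K-dim feature with an MLP
    # and keeps the top-M most salient points as the clean point cloud.
    def __init__(self, feat_dim: int = 64, m_keep: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )
        self.m_keep = m_keep

    def forward(self, points, feats):
        # points: (N, 3); feats: (N, K) from a GNN feature extractor (assumed given)
        scores = self.mlp(feats).squeeze(-1)            # (N,) saliency scores
        idx = torch.topk(scores, k=self.m_keep).indices # rank and keep top M
        return points[idx], feats[idx], scores          # clean cloud (M, 3)

Applying the same module with shared weights to the source and target clouds gives the twin-network arrangement described above.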
In the similarity matrix convolution, the source point cloud features and target point cloud features are concatenated and jointly encoded along the tensor feature dimension to obtain the source and target distance-enhanced features; the similarity score of each point is extracted, the distance-enhanced features are fused, a softmax function converts the similarity matrix values into probability values, and an argmax function finds the optimal correspondence between the source point cloud and the target point cloud.
Further, in the joint encoding of the source and target point cloud features along the tensor feature dimension, the feature tensor entry for point p_i of the source point cloud and point q_j of the target point cloud at the n-th iteration is expressed as

T^(n)(i,j) = [f_s(i); f_t(j); ||p_i − q_j||; (p_i − q_j)/||p_i − q_j||]

where [·;·] denotes concatenation of the tensor feature dimensions and joint encoding of the features; f_s(i) denotes the feature of the source point cloud at point i; f_t(j) denotes the feature of the target point cloud at point j; ||p_i − q_j|| denotes the Euclidean distance between p_i and q_j; and (p_i − q_j)/||p_i − q_j|| is the unit vector pointing from q_j to p_i.
Further, in finding the optimal correspondence between the source and target point clouds with the argmax function, the search process is optimized with the singular value decomposition (SVD) method and converted into

(R^(n), t^(n)) = argmin over R, t of Σ_i ||R·p_i^(n) + t − q_{j*(i)}||²,  with j*(i) = argmax_j T^(n)(i,j),

and the decomposed transformation matrices R^(n) and t^(n) are used to update the source point cloud positions, where R and t are the rigid transformation matrices between the source point cloud and the target point cloud.
Further, for the source and target distance-enhanced features in the similarity matrix convolution, the method aggregates all possible correspondence information of a given point in the source point cloud and obtains the hybrid elimination weights from a validity score, the weight of the i-th point pair being

w_i = 1[v(i) ≥ M_k(v(k))] · v(i)

where 1[·] denotes the indicator function and v(i) the validity score,

v(i) = σ(F(⊕_j T(i,j)))

in which σ(·) denotes the sigmoid function; ⊕ denotes a merging operation invariant to element arrangement; F denotes a multi-layer perceptron; T(i,j) denotes the value in row i, column j of the fused feature tensor; and M_k(v(k)) denotes the mean of v(k) over the k dimension.
Further, in the point cloud registration network training, the network is trained and optimized on a sample data set with a total loss function expressed as L_total = L_PM + L_NE + L_HE + L_LSU, where L_PM is the point matching loss for supervising the similarity matrix convolution; L_NE is the negative entropy loss for supervising extraction of the clean point clouds; L_HE is the hybrid elimination loss, which feeds back correct labels and filters wrong labels by taking the probability that a point exists in the target point cloud as the supervisory signal; and L_LSU is the latent surface consistency loss constraining the inconsistent registration accuracy caused by noise and point-correspondence problems.
Further, the invention also provides an unsupervised point cloud registration system based on deep learning, comprising a model construction module and a point cloud registration module, wherein,
the model construction module is used for constructing and training a point cloud registration network, wherein the point cloud registration network comprises: a twin network for extracting clean source point cloud features and clean target point cloud features; a similarity matrix convolution for iteratively updating and correcting the clean source point cloud using the tensor concatenation of the clean target point cloud features and the clean source point cloud features; a latent surface prediction network for predicting the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and a point cloud registration output for obtaining the spatial positions of points from the noise prediction results in a denoising manner and placing the source and target point clouds in one-to-one correspondence according to those positions;
the point cloud registration module is used for collecting the target point cloud and source point cloud of the object to be detected, inputting them into the trained point cloud registration network, and realizing registration of the target and source point clouds through the network.
The invention has the beneficial effects that:
according to the method, noise can be removed in the registration process by utilizing the constraint of the potential surface prediction network, the generalization performance and the robustness performance of the point cloud registration network model are effectively improved, and the transformation relationship between the source point cloud and the target point cloud can be estimated better based on the optimized point cloud registration network model, so that the point cloud registration task can be completed more accurately. And further, experimental data show that the scheme can meet the requirements of large data volume and high-precision point cloud registration visual application in scenes, and has a good application prospect.
Description of the drawings:
fig. 1 is a schematic diagram of the unsupervised deep-learning point cloud registration flow in an embodiment;
FIG. 2 is a schematic diagram of point cloud coordinates failing to correspond in an embodiment;
FIG. 3 is a schematic illustration of a latent surface in an embodiment;
FIG. 4 is a schematic representation of the predicted latent surface and the real surface in an embodiment;
FIG. 5 is a schematic diagram of the point cloud registration network structure in an embodiment;
FIG. 6 is a graphical illustration of the model visualization results in an embodiment;
FIG. 7 is a diagram of the relationship between noise intensity and accuracy on the training set in an embodiment;
FIG. 8 is a graph of the relationship between noise intensity and generalization on the test set in an embodiment;
fig. 9 is a visual comparison in an embodiment.
The specific embodiment is as follows:
the present invention will be described in further detail with reference to the drawings and the technical scheme, in order to make the objects, technical schemes and advantages of the present invention more apparent.
At present, deep-learning unsupervised point cloud registration methods still adopt point correspondence as the constraint in the loss function; under noise, or when the point clouds are not in one-to-one correspondence, this constraint reduces registration accuracy. Referring to fig. 1, an embodiment of the present invention provides an unsupervised point cloud registration method based on deep learning, including:
s101, constructing a point cloud registration network and training, wherein the point cloud registration network comprises: the system comprises a twin network for extracting the characteristics of the cleaning source point cloud and the characteristics of the cleaning target point cloud, a similar matrix convolution for iteratively updating and correcting the cleaning source point cloud by utilizing tensor concatenation of the characteristics of the cleaning target point cloud and the characteristics of the cleaning source point cloud, a potential surface prediction network for predicting the cleaning target point cloud and the point cloud noise of the cleaning source point cloud after updating and correcting, and a point cloud registration output for carrying out one-to-one correspondence on the source point cloud and the target point cloud according to the point cloud space point position by acquiring the point cloud space point position according to the point cloud noise prediction result and utilizing a noise reduction mode;
s102, collecting a target point cloud and a source point cloud of an object to be detected, inputting the target point cloud and the source point cloud into a trained point cloud registration network, and realizing point cloud registration of the target point cloud and the source point cloud of the object to be detected through the point cloud registration network.
In the case of one-to-one point correspondence, a point p in the source point cloud P and a point q in the target point cloud Q are corresponding points, and after rigid transformation by the rotation matrix R and translation vector t, p becomes p_Rt; the calculation is shown in formula (1).

p_Rt = R·p + t  (1)
In theory, if the parameters R and t are computed accurately, the coordinates of the transformed source point coincide exactly with the target point q.
Affected by noise, the source point cloud P becomes a noisy source point cloud, denoted P′, whose transformed version is denoted P′_Rt. Similarly, the target point cloud Q affected by noise is denoted Q′. If the noise intensity at point p is n_p, the actual position of the transformed point p′ is computed as in formula (2).

p′_Rt = R·(p + n_p) + t  (2)
The actual position of point q is likewise affected by a similar noise n_q; the affected position is denoted q′, related to q by formula (3).

q′ = q + n_q  (3)
Previous unsupervised losses reduce the distance between p′ and q′ by constraining the overall distance between P′_Rt and Q′ to be minimal; since the influence of noise is not considered, registration accuracy decreases.
In a real scene, the source point cloud P and target point cloud Q are not in one-to-one correspondence, owing to the different observation positions and angles of the sensors, so the point clouds are usually matched one-to-one by nearest-neighbor search. The approximate corresponding point q is obtained by transforming the source point and taking the coordinates of its nearest neighbor, as in formulas (4) and (5).

p_Rt = R·p + t  (4)
q = KNN(p_Rt, Q)  (5)
However, this matching introduces an additional error d, computed as in formula (6).

d = q − (R·p + t)  (6)

Owing to the introduction of the error d, the registration loss computed by the unsupervised method no longer expresses the relationship between the corrected point cloud p_Rt and q well, so registration accuracy decreases.
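The following sketch illustrates formulas (4)–(6) directly — the nearest-neighbor pairing and the extra error d it introduces — assuming brute-force KNN over torch tensors (function name illustrative):

import torch

def nn_correspond(P, Q, R, t):
    # P: (N, 3) source cloud; Q: (M, 3) target cloud; R: (3, 3); t: (3,)
    P_rt = P @ R.T + t                 # formula (4): transform the source points
    dists = torch.cdist(P_rt, Q)       # (N, M) pairwise Euclidean distances
    j = dists.argmin(dim=1)            # formula (5): nearest neighbor index in Q
    q = Q[j]
    d = q - P_rt                       # formula (6): residual pairing error
    return q, d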
Both noise and imperfect point correspondence can be generalized as the problem of inconsistent descriptions of the two point clouds. Point cloud registration is usually constrained with point pairs; however, when the coordinates of the two point clouds are not consistent, forcing points into pairs reduces registration accuracy. In fig. 2, the coordinates of the source point cloud a and target point cloud a′ do not correspond exactly, and adopting point pairs as the constraint would reduce registration accuracy.
However, every point in a point cloud lies on some underlying surface. Although this surface is not represented in the data, it can be predicted computationally; it is called the latent surface, as shown in fig. 3. Fig. 4 shows the relationship between the predicted latent surface and the real surface, where black represents the real surface and grey the predicted latent surface.
Therefore, to improve the accuracy of unsupervised deep-learning point cloud registration, one should start from this description-consistency problem. Inspired by the latent-surface prediction used in point cloud denoising, in the embodiments of the present disclosure the latent surface of the point cloud is predicted in a denoising manner to describe the position of any point in the point cloud's space, realizing one-to-one correspondence between the source and target point clouds. Moreover, during prediction, introducing the latent surface prediction network and a corresponding loss function alleviates noise to a certain extent and further improves registration accuracy.
As a preferred embodiment, the twin network adopts a GNN structure to extract features from the source and target point clouds, obtains feature saliency scores with a multi-layer perceptron, ranks the saliency scores of all points in the source and target point clouds respectively, and selects the points whose saliency scores meet a preset threshold as the clean source point cloud and clean target point cloud. In the similarity matrix convolution, the source and target point cloud features can be concatenated and jointly encoded along the tensor feature dimension to obtain the source and target distance-enhanced features; the similarity score of each point is extracted, the distance-enhanced features are fused, a softmax function converts the similarity matrix values into probability values, and an argmax function finds the optimal correspondence between the source and target point clouds.
In the point cloud registration network shown in fig. 5, feature extraction is first performed on the point clouds (N×3) to form point cloud features (N×K). Second, hard points are removed on the basis of the extracted features, eliminating obviously erroneous points to form a clean point cloud (M×3). Third, the clean point cloud correlation features (M×M×4) are computed and concatenated along the feature dimension to form the concatenated feature tensor (M×M×(2K+4)). Then, one-dimensional convolution fuses information along the feature dimension to form the fused features (M×M×32); the similarity scores (M×M) are computed through one-dimensional convolution while a weight matrix is computed by hybrid elimination, and the transformation matrices R and t are finally obtained by SVD. The clean source point cloud is then corrected according to R and t and the process iterates until finished; finally, the latent surface prediction network predicts the source and target point clouds separately, the source and target point clouds are transformed with the generated R and t, and the surface information is used as the constraint.
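The data flow of fig. 5 can be summarized in a schematic iteration like the one below. Every name here (extractor, selector, sim_conv, surface_net, and the helpers build_distance_tensor and weighted_svd sketched after formulas (7) and (8) below) is an illustrative placeholder, not the patent's implementation:

def register_iteration(src, tgt, extractor, selector, sim_conv, surface_net, n_iters=3):
    # src, tgt: (N, 3) raw source / target clouds
    f_s, f_t = extractor(src), extractor(tgt)              # twin GNN features (N, K)
    src_c, f_s, _ = selector(src, f_s)                     # clean source cloud (M, 3)
    tgt_c, f_t, _ = selector(tgt, f_t)                     # clean target cloud (M, 3)
    for _ in range(n_iters):
        T = build_distance_tensor(src_c, tgt_c, f_s, f_t)  # (M, M, 2K+4) tensor
        probs, w = sim_conv(T)                             # match probs (M, M), weights (M,)
        j = probs.argmax(dim=1)                            # argmax correspondence
        R, t = weighted_svd(src_c, tgt_c[j], w)            # SVD solve for R, t
        src_c = src_c @ R.T + t                            # correct the clean source cloud
    noise_s, noise_t = surface_net(src_c), surface_net(tgt_c)  # latent-surface noise
    return R, t, noise_s, noise_t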
For feature extraction in the twin network, a GNN offers strong feature extraction capability, comparatively high accuracy and relatively fast computation, so a GNN is selected as the feature extraction network.
In the similarity matrix convolution, solving for the rigid transformation matrices R and t requires finding the correspondence between the source and target point clouds. The current mainstream approach uses the inner product of point features, or the L2 distance, as the similarity measure, seeking the optimal solution through repeated training feedback. This approach has two drawbacks. First, a source point may have multiple correspondences in the target point cloud, and owing to the randomness of the registration process mis-registration may occur, so the result of a single registration pass is not ideal; registration can instead proceed step by step in an iterative manner. Second, the ability of two points alone to express similarity is limited, so the point clouds can be evaluated through a group of combined features.
In view of these two problems, corresponding points are found with the distance-aware similarity matrix convolution. Suppose the geometric feature f_s extracted for point p_i in the source point cloud and the geometric feature f_t extracted for point q_j in the target point cloud are both K-dimensional. The feature tensor entry for source point p_i and target point q_j at the n-th iteration is described by formula (7).

T^(n)(i,j) = [f_s(i); f_t(j); ||p_i − q_j||; (p_i − q_j)/||p_i − q_j||]  (7)

where [·; ·; …] denotes concatenation of the tensor feature dimensions and joint encoding of the features; f_s(i) denotes the feature of the source point cloud at point i; f_t(j) denotes the feature of the target point cloud at point j; ||p_i − q_j|| denotes the Euclidean distance between p_i and q_j; and (p_i − q_j)/||p_i − q_j|| is the unit vector pointing from q_j to p_i.
T^(n) is named the distance-enhanced feature; the combined final dimension is 2K+4. To fuse the features further, a similarity score is extracted for each point: after the tensor dimensions are concatenated, the distance-enhanced features are fused along the feature dimension by one-dimensional convolution, and the similarity matrix values are then converted to probability values with a softmax function.
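A sketch of assembling the M×M×(2K+4) tensor of formula (7) in PyTorch; the function name and the small epsilon guard against division by zero are assumptions:

import torch

def build_distance_tensor(src, tgt, f_s, f_t):
    # src, tgt: (M, 3) clean clouds; f_s, f_t: (M, K) per-point features
    M, K = f_s.shape
    diff = src[:, None, :] - tgt[None, :, :]           # (M, M, 3): p_i - q_j
    dist = diff.norm(dim=-1, keepdim=True)             # (M, M, 1): ||p_i - q_j||
    unit = diff / dist.clamp_min(1e-9)                 # (M, M, 3): unit vector q_j -> p_i
    fs = f_s[:, None, :].expand(M, M, K)               # broadcast source features
    ft = f_t[None, :, :].expand(M, M, K)               # broadcast target features
    return torch.cat([fs, ft, dist, unit], dim=-1)     # (M, M, 2K+4), formula (7)

A pointwise (1×1) convolution over the last dimension would then fuse the features to (M, M, 32), with a softmax over j turning each row into matching probabilities.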
Row i, column j of the fused distance feature tensor T can be understood as the probability that source point p_i corresponds to target point q_j, so the optimal correspondence can be found with the argmax function. The optimization process finally transforms as shown in formula (8).

(R^(n), t^(n)) = argmin over R, t of Σ_i ||R·p_i^(n) + t − q_{j*(i)}||²,  with j*(i) = argmax_j T^(n)(i,j)  (8)

This is a classical absolute orientation problem that can be solved effectively with the singular value decomposition (SVD) method; the decomposed R^(n) and t^(n) are then used to update the source point cloud positions before entering the next iteration.
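The SVD step itself is the textbook weighted Kabsch solution of the absolute orientation problem; a sketch follows, with weights w as produced by hybrid elimination below (treating formula (8) as weighted is an assumption here):

import torch

def weighted_svd(src, tgt_matched, w):
    # src: (M, 3); tgt_matched: (M, 3) argmax-selected correspondences; w: (M,) weights
    w = w / w.sum().clamp_min(1e-9)
    mu_s = (w[:, None] * src).sum(dim=0)               # weighted centroids
    mu_t = (w[:, None] * tgt_matched).sum(dim=0)
    src_c = src - mu_s
    tgt_c = tgt_matched - mu_t
    H = src_c.T @ (w[:, None] * tgt_c)                 # 3x3 weighted cross-covariance
    U, S, Vt = torch.linalg.svd(H)
    d = torch.sign(torch.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = torch.eye(3, dtype=src.dtype, device=src.device)
    D[2, 2] = d
    R = Vt.T @ D @ U.T                                 # rotation minimizing formula (8)
    t = mu_t - R @ mu_s
    return R, t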
Although the similarity matrix has a strong registration capability, the distance-enhanced feature tensor is tied to the number of source points N_s, the number of target points N_t and the feature dimension K; as these increase, the tensor volume N_s × N_t × (2K+4) grows rapidly, incurring a huge computational overhead. Meanwhile, a small number of matched points suffices to complete point cloud registration, so in practice the point clouds are downsampled. However, downsampling may leave many points between the source and target point clouds without counterparts, which drastically reduces registration accuracy. To solve this problem, points are eliminated in two stages: hard (registration-difficulty) point elimination and hybrid elimination.
Hard point elimination effectively reduces the burden on the similarity matrix convolution. The process first extracts the local shape features of each point and then obtains a saliency score through a multi-layer perceptron; the higher the score, the more distinctive the point's features (corner points, for example). The whole process operates on single points, without considering point pairs. The saliency scores of all points are ranked, the first M points with higher feature saliency are selected as the clean point cloud, and the remaining points are treated as hard points and eliminated.
Although hard points are eliminated, the process may have negative effects on the model; for example, points that could have been correctly matched may be eliminated by mistake during registration, so that the similarity matrix convolution can never find the correct correspondence.
During registration, the network tries to find the relationship of maximum similarity score between the source and target point clouds; however, when a point has no corresponding point in the other cloud, formula (8) gives inaccurate results. This is particularly severe for partially overlapping point cloud scenes: points in the non-overlapping areas have no correspondence at all, even if hard point elimination is not applied to them.
To address this problem, in this embodiment the point pairs can be handled with a hybrid point elimination technique. Specifically, all possible correspondence information for a given point in the source point cloud is aggregated through a permutation-invariant pooling operation, and a validity score is output: the higher the score, the greater the likelihood of a correspondence.
The validity computation can be described by formula (9).

v(i) = σ(F(⊕_j T(i,j)))  (9)

where σ(·) denotes the sigmoid function; ⊕ denotes a merging method invariant to element arrangement, typically averaging or maximization — here the maximum may be used; and F denotes a multi-layer perceptron.
The hybrid elimination weights can be computed from the validity scores; the weight of the i-th point pair can be defined by formula (10).

w_i = 1[v(i) ≥ M_k(v(k))] · v(i)  (10)

where 1[·] denotes the indicator function, equal to 1 when the condition holds and 0 otherwise, and M_k(v(k)) denotes the mean of v(k) over the k dimension.
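Formulas (9) and (10) might be realized as follows; the max pooling follows the text, while the MLP widths and the interpretation of M_k(v(k)) as the mean validity score are assumptions:

import torch
import torch.nn as nn

class HybridElimination(nn.Module):
    # Scores each source point's chance of having a valid correspondence.
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(feat_dim, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, T_fused):
        # T_fused: (M, M, C) fused feature tensor from the similarity convolution
        pooled = T_fused.max(dim=1).values             # permutation-invariant pooling over j
        v = torch.sigmoid(self.F(pooled)).squeeze(-1)  # formula (9): validity scores (M,)
        w = (v >= v.mean()).float() * v                # formula (10): indicator x score
        return w, v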
For latent surface prediction, a structure similar to the refinement module of a denoising network can be adopted; considering the computational complexity of the denoising network, the network can use only 2 graph convolution layers, and the number of hidden-layer neurons can be set to 32.
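Since the text fixes only "2 graph convolution layers, 32 hidden neurons", the following sketch uses a simple dynamic kNN-graph edge convolution as one plausible realization; the graph-convolution type and the neighborhood size k are assumptions:

import torch
import torch.nn as nn

class LatentSurfaceNet(nn.Module):
    # Predicts a per-point noise (displacement) vector toward the latent surface.
    def __init__(self, k: int = 16, hidden: int = 32):
        super().__init__()
        self.k = k
        self.gc1 = nn.Linear(2 * 3, hidden)       # edge features [x_i, x_j - x_i]
        self.gc2 = nn.Linear(2 * hidden, 3)       # outputs a 3-D noise vector
        self.act = nn.ReLU()

    def graph_conv(self, x, layer):
        # x: (N, C); rebuild a kNN graph in the current feature space
        # (DGCNN-style dynamic graph) and aggregate edge features by max.
        idx = torch.cdist(x, x).topk(self.k + 1, largest=False).indices[:, 1:]
        nbr = x[idx]                                              # (N, k, C)
        edge = torch.cat([x[:, None].expand_as(nbr), nbr - x[:, None]], dim=-1)
        return layer(edge).max(dim=1).values                      # (N, out)

    def forward(self, pts):
        h = self.act(self.graph_conv(pts, self.gc1))  # graph conv layer 1 (32 units)
        return self.graph_conv(h, self.gc2)           # graph conv layer 2 -> noise V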
With unlabeled samples, the unsupervised loss function replaces sample labeling through mathematical constraints, so the unsupervised loss function is particularly critical. The total loss function can be set as the sum of four parts: the point matching loss (Point Matching Loss), negative entropy loss (Negative Entropy Loss), hybrid elimination loss (Hybrid Elimination Loss) and latent surface consistency loss (Latent Surface Uniform Loss), defined as shown in formula (11).

L_total = L_PM + L_NE + L_HE + L_LSU  (11)
The point matching loss is a standard cross-entropy loss used to supervise the similarity matrix convolution. It is computed during each iteration and can be defined as shown in formula (12), with its correspondence term given by formula (13). In formula (13), j is the index of the point in the target point cloud B_T closest to the i-th point of the ground-truth-transformed source point cloud B_S; r is a hyper-parameter, the minimum radius controlling when two points are sufficiently close. If the distance between p_i and q_j is greater than r, no correspondence exists between the two points, so no constraint is applied to such points — a common situation in partially overlapping point clouds.
The final total negative entropy loss is the average over all iterations, as defined in formula (14).
The negative entropy function is mainly used to eliminate hard points during registration. Since training is unsupervised, labeling information for the samples cannot be obtained, so the negative entropy loss is employed to eliminate these hard points. The idea is as follows: if a point p_i ∈ B_S is a salient point, i.e. has a high saliency score, it has higher confidence and a lower-entropy matching probability distribution. The negative entropy of the probability distribution can therefore serve as the supervisory signal for the saliency score, and the negative entropy loss of the n-th iteration is then defined as shown in formula (15), where s(i) is the saliency score of the i-th point in the source point cloud B_S.
In theory, this loss function could be applied at every iteration, with the loss values superimposed and averaged; however, that would interfere with training of the similarity matrix convolution. By definition, the point matching loss trains the network's Euclidean features, while the negative entropy loss trains its shape features. In the registration process, shape features matter more than Euclidean features early in training, so this loss is used only in the first iteration, and the negative entropy loss is cut off from the gradient flow of the similarity matrix to avoid additional interference.
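The text does not reproduce formula (15) here, so the sketch below encodes one plausible reading: the per-point negative entropy of the first-iteration matching distribution, detached from the similarity-matrix gradient flow as described above, supervises the saliency scores; the MSE pairing is an assumption:

import torch
import torch.nn.functional as F

def negative_entropy_loss(match_probs, saliency):
    # match_probs: (M, M) row-wise softmax matching distribution (iteration 1)
    # saliency: (M,) saliency scores s(i) of the clean source points
    neg_entropy = (match_probs * match_probs.clamp_min(1e-12).log()).sum(dim=1)
    # Stop gradients into the similarity matrix so this loss only trains saliency.
    target = neg_entropy.detach()
    return F.mse_loss(saliency, target)  # assumed pairing of s(i) with negative entropy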
The hybrid elimination loss is similar in concept to hard point elimination, except that hard point elimination considers only the information of a point itself, while hybrid elimination considers point-pair information, so its effect is more pronounced. Specifically, the probability that a point exists in the target point cloud B_T serves as the supervisory signal, and the hybrid elimination loss of the n-th iteration is defined as shown in formulas (16) and (17).
This process in effect feeds back correct labels and filters incorrect ones: over long-term training and multiple iterations, point pairs with a high probability of correct registration obtain higher validity scores.
The latent surface consistency loss is used to constrain the inconsistent registration accuracy caused by noise and imperfect point correspondence. It consists of a denoising loss (Denoise Loss) and a noise consistency loss (Noise Consistency Loss); the loss of the n-th iteration can be described by formula (18).

L_LSU^(n) = L_Denoise^(n) + L_NC^(n)  (18)

where L_Denoise^(n) denotes the denoising loss of the n-th iteration and L_NC^(n) denotes the noise consistency loss.
Because the latent surface of the point cloud is predicted with a denoising network, the denoising network itself must be trained, so the denoising loss of that network should be computed. Consider that the coordinates of corresponding points in the source point cloud B_S and target point cloud B_T are affected by noise and acquisition, so they are not in one-to-one spatial correspondence and are difficult to make converge to a single point even after denoising. Therefore, in designing the denoising loss function, the predicted R and t can be used as transformation parameters to compute, from B_S and B_T, the predicted target point cloud B'_T and the predicted source point cloud B'_S. The transformation can be described by formulas (19) and (20).

B'_T = R·B_S + t  (19)
B'_S = R⁻¹·(B_T − t)  (20)
Then B_S, B'_S, B_T and B'_T are each input to the denoising network, yielding the four noise vectors V_BS, V_B'S, V_BT and V_B'T. Meanwhile, considering that the denoising network also needs to be trained and that the whole network is an unsupervised training method, the denoising loss is trained in an unsupervised manner; the loss function can be described by formulas (21) and (22), where σ is a scaling factor; N is the number of points of the input point cloud, the input being one of B_S, B'_S, B_T and B'_T; V_input denotes the noise vector predicted by the network; and V̂_input denotes the noise vector computed from the neighborhood.
The neighborhood noise vector V̂_input is computed as shown in formula (23), where KNN(p_i, B_input) denotes finding the k nearest neighbors of point p_i in the point cloud B_input; the coordinate vectors to these nearest points together form the neighborhood noise vector of point p_i.
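A sketch of the neighborhood-based estimate of formula (23); averaging the offsets to the k nearest neighbors is an assumption, since the exact aggregation is not reproduced in this text:

import torch

def neighborhood_noise(pts, k: int = 8):
    # pts: (N, 3). For each p_i, the averaged offset to its k nearest neighbors
    # approximates the displacement from the latent surface (noise vector V̂).
    d = torch.cdist(pts, pts)
    idx = d.topk(k + 1, largest=False).indices[:, 1:]   # drop the self-neighbor
    nbr = pts[idx]                                      # (N, k, 3)
    return (nbr - pts[:, None, :]).mean(dim=1)          # (N, 3) noise vector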
The noise vector after denoising is obtained through the denoising network as shown in formulas (24) and (25).

V_input = f_denoise(input, θ)  (25)

In formulas (24) and (25), input is, as above, one of B_S, B'_S, B_T and B'_T, correspondingly generating the four noise vectors V_BS, V_B'S, V_BT and V_B'T.
Since B'_T and B'_S are obtained from B_S and B_T respectively by spatial transformation, and considering that B_S and B_T are not in one-to-one correspondence, V_BS and V_B'S — and likewise V_BT and V_B'T — are not exactly equal even when R and t are predicted accurately. Theoretically, however, when R and t are predicted accurately, the difference between V_BS and V_B'S and the difference between V_BT and V_B'T should be equal.
Thus the final noise consistency loss is described as the smooth L1 loss between the two sets of point cloud noise-vector differences: the difference between the noise vectors V_BS and V_B'S predicted for the source point cloud B_S and the predicted source point cloud B'_S, and the difference between the noise vectors V_BT and V_B'T predicted for the target point cloud B_T and the predicted target point cloud B'_T, as shown in formulas (26) and (27).
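Formula (26) is not reproduced in this text, so the sketch below encodes the reading given above — a smooth-L1 loss between the two sets of noise-vector differences; the pointwise pairing across clouds downsampled to the same size M is an assumption:

import torch
import torch.nn.functional as F

def noise_consistency_loss(v_bs, v_bs_pred, v_bt, v_bt_pred):
    # v_bs, v_bt: (M, 3) noise vectors predicted for B_S and B_T
    # v_bs_pred, v_bt_pred: (M, 3) noise vectors predicted for B'_S and B'_T
    diff_source = v_bs - v_bs_pred        # first set of vector differences
    diff_target = v_bt - v_bt_pred        # second set of vector differences
    return F.smooth_l1_loss(diff_source, diff_target)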
the total loss function is obtained through the above, and the point cloud registration network is trained and optimized based on the total loss function, so that generalization and robust performance of the point cloud registration network model are improved.
Further, based on the above method, an embodiment of the present invention also provides an unsupervised point cloud registration system based on deep learning, comprising a model construction module and a point cloud registration module, wherein,
the model construction module is used for constructing and training a point cloud registration network, wherein the point cloud registration network comprises: a twin network for extracting clean source point cloud features and clean target point cloud features; a similarity matrix convolution for iteratively updating and correcting the clean source point cloud using the tensor concatenation of the clean target point cloud features and the clean source point cloud features; a latent surface prediction network for predicting the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and a point cloud registration output for obtaining the spatial positions of points from the noise prediction results in a denoising manner and placing the source and target point clouds in one-to-one correspondence according to those positions;
the point cloud registration module is used for collecting the target point cloud and source point cloud of the object to be detected, inputting them into the trained point cloud registration network, and realizing registration of the target and source point clouds through the network.
To verify the validity of this scheme, further explanation is given below in connection with experimental data:
First, the validity of the present method is verified by comparison with a number of existing methods, such as FG, DCP and PRNet, and in particular with the IDAM method. Second, considering that the method can remove noise during registration and should therefore improve the generalization of the model, this is verified experimentally. Finally, comparison with a fireworks-based registration algorithm analyzes the differences between the traditional registration method and the deep-learning registration method in accuracy and performance, to find the application scenes suited to each.
The experiments use the modelnet40_ply_hdf5_2048 data set; the experimental parameter settings are shown in Table 1.
TABLE 1 Experimental parameters of the latent surface consistency network
1. Validity analysis
To verify the effectiveness of the method, it is compared experimentally with the method before improvement. Since the method is unsupervised, the training-set results show the final accuracy of the model, while the test-set results show the generalization of the model. The training set and test set are distinguished where specific results are listed; unless otherwise noted, results are reported on the test set. The specific experimental results are shown in Table 2.
Table 2 comparison of experimental results for various registration methods
From the experimental results, the registration accuracy of IDAM and of the present method is far higher than that of the other methods. On the training set, the mean absolute error of the rotation matrix R of the present method is slightly lower than that of the IDAM method before improvement, with a similar improvement in the translation vector t: the improved method trains to slightly higher accuracy. On the test set, the generalization accuracy of the model improves considerably: the root mean square error falls by nearly 25%, the mean absolute error of the rotation matrix R falls by about 20%, and although the mean absolute error of the translation vector t does not fall, its root mean square error does. The point clouds used in this scheme are clean point clouds containing no noise, so the influence of noise is excluded; the previous constraint registers in a point-based manner, whereas this scheme registers based on the latent surface, so the network can reduce the influence of noise during registration and the feature extraction network can learn the features of more of the point cloud rather than the features of the noise, which shows as better generalization on the test set.
To further verify the experimental results, DCP, PRNet, GNN+IDAM and GNN+the present method are visualized. The color at the upper right represents the source point cloud, the color at the lower left the target point cloud, and the color between the two the point cloud transformed by the predicted R and t; registration is better the more the latter coincides with the target. As can be seen from fig. 6, the accuracy figures and the visualizations corroborate each other, with GNN+the present method being the most accurate.
2. Noise immunity and generalization analysis
The latent surface loss is mainly used to resist non-corresponding surfaces and noise; in theory, adding the latent surface loss improves the noise resistance of the network. To further investigate this effect, both networks were run at noise intensities ranging from 0% to 2%.
From the training-set results in fig. 7, within 0.25% Gaussian noise the latent-surface network structure gives a certain improvement in the rotation matrix R and translation vector t, but some oscillation occurs in the middle of the range, presumably because noise strongly affects latent surface prediction and thus lowers registration accuracy.
From the test-set results in fig. 8, within 0.25% Gaussian noise the latent-surface network structure improves greatly on the rotation matrix R and translation vector t; as on the training set, some fluctuation occurs as noise increases, but the errors remain lower overall than those of the IDAM method, indicating that the method improves the generalization of the model. In addition, excessive noise may cause deviations in the predicted surface and thereby affect the final registration accuracy.
3. Algorithm contrast analysis
Table 3 registration feature comparison based on conventional and deep learning methods
Because the data sets originally adopted differ, to compare the merits of the two methods, the adaptive fireworks registration algorithm and the IDAM-based method of this scheme are compared on the same data set; the results are shown in Table 4 below.
Table 4 registration accuracy comparison based on conventional and deep learning methods
From the experimental results it is evident that both the GNN+IDAM method before improvement and the GNN+present method after improvement exceed the improved adaptive fireworks registration algorithm; moreover, the deep-learning approach has great potential for improvement and will lead the traditional point cloud registration methods further in the future. The experimental results are visualized and listed by iteration round, where the color at the upper right corner is the source point cloud position, the color at the lower left corner is the target point cloud position, and the color between the two is given by the two methods and the improved ICP algorithm. Comparing fig. 9 (a) and (b), (a) shows a slight deviation between the target point cloud and the predicted position, while (b) almost completely coincides, which also indicates that GNN+the present method has better registration accuracy than the adaptive fireworks algorithm + ICP.
In practical application, performance is also an important reference index. Considering that the earlier fireworks-based registration algorithm used the C++ language, unlike the Python language commonly used for deep learning, the procedure was re-implemented in Python for a fair comparison.
Table 5 Performance comparison
From the experimental results, the enhanced fireworks algorithm + Tree_ICP takes roughly twice as long as the basic fireworks algorithm + Tree_ICP, and the adaptive fireworks algorithm + Tree_ICP takes slightly longer than the basic fireworks registration algorithm. Unlike traditional optimization algorithms, deep-learning methods have a model training process that generally requires a long time, which makes the deep-learning method extremely cost-ineffective when processing a single point cloud.
The above accuracy and performance comparisons show that the deep-learning point cloud registration of this scheme is superior in accuracy and performance to registration based on the traditional method; however, this does not mean that deep learning can completely replace the traditional registration method, which still holds certain advantages in certain scenes. A specific comparative analysis of the two methods is shown in Table 6 below.
Table 6 Comparative analysis of point cloud registration by fireworks-series optimization methods and IDAM-series deep-learning methods
As can be seen from Table 6, deep-learning point cloud registration has great advantages in scenes with large data volumes and high accuracy requirements, while the traditional method retains a place in scenes with small data volumes and low accuracy requirements.
Based on the above experimental data, it is further demonstrated that, in the noise-free case, the latent surface consistency constraint loss of this scheme effectively improves the generalization and robustness of the model, with the root mean square error and mean absolute error of the rotation matrix R reduced by about 25% and 20% respectively; that the combination of denoising and surface consistency constraints effectively improves the noise resistance of the network model, the latent-surface network structure improving greatly on the rotation matrix R and translation vector t within 0.25% Gaussian noise; that excessive noise can deviate the predicted surface and affect the final registration result; and that the deep-learning series of matching methods currently surpasses the traditional fireworks-series registration methods in both accuracy and performance.
Unless specifically stated otherwise, the relative arrangement of the components and steps, the numerical expressions and the numerical values set forth in these embodiments do not limit the scope of the present invention.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The elements and method steps of the examples described in connection with the embodiments disclosed herein may be embodied in electronic hardware, computer software, or a combination thereof, and the elements and steps of the examples have been generally described in terms of functionality in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those of ordinary skill in the art may implement the described functionality using different methods for each particular application, but such implementation is not considered to be beyond the scope of the present invention.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in the above methods may be performed by a program that instructs associated hardware, and that the program may be stored on a computer readable storage medium, such as: read-only memory, magnetic or optical disk, etc. Alternatively, all or part of the steps of the above embodiments may be implemented using one or more integrated circuits, and accordingly, each module/unit in the above embodiments may be implemented in hardware or may be implemented in a software functional module. The present invention is not limited to any specific form of combination of hardware and software.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its scope. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features equivalently substituted; such modifications or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A point cloud registration method based on unsupervised deep learning, characterized by comprising the following steps:
constructing and training a point cloud registration network, wherein the point cloud registration network comprises: a twin network for extracting clean source point cloud features and clean target point cloud features; a similarity matrix convolution for iteratively updating and correcting the clean source point cloud by means of the tensor concatenation of the clean target point cloud features and the clean source point cloud features; a latent surface prediction network for predicting the point cloud noise of the clean target point cloud and of the updated and corrected clean source point cloud; and a point cloud registration output which, by means of noise reduction, obtains the spatial positions of the points from the point cloud noise prediction result and places the source point cloud and the target point cloud in one-to-one correspondence according to those positions; in the similarity matrix convolution, the distance-enhanced features of the source point cloud and the target point cloud are obtained by jointly encoding the source point cloud features and the target point cloud features along the tensor feature dimension, the joint encoding being expressed as

f^(n)(i, j) = [ f_s(i) ; f_t(j) ; ‖p_i^(n) − q_j‖ ; (p_i^(n) − q_j)/‖p_i^(n) − q_j‖ ],

where p_i^(n) denotes point p_i of the source point cloud at the n-th iteration and q_j denotes point q_j of the target point cloud; [· ; ·] denotes joint encoding by concatenation along the tensor feature dimension; f_s(i) denotes the feature of the source point cloud at point i; f_t(j) denotes the feature of the target point cloud at point j; ‖p_i − q_j‖ denotes the Euclidean distance between p_i and q_j; and (p_i − q_j)/‖p_i − q_j‖ denotes the unit vector pointing from q_j to p_i; the similarity scores of all points are extracted and fused with the distance-enhanced features, the similarity matrix values are converted into probability values with a softmax function, and the optimal correspondence between the source point cloud and the target point cloud is searched with an argmax function, the search being expressed as

c(i) = argmax_j softmax(F(i, :))_j ;

the search is optimized by the singular value decomposition (SVD) method, converting it into

(R^(n), t^(n)) = argmin_{R,t} Σ_i ‖ R·p_i^(n) + t − q_{c(i)} ‖² ,

and the decomposed transformation matrices R^(n) and t^(n) are used to update the positions of the source point cloud, where R and t are the rigid-body transformation matrices between the source point cloud and the target point cloud;
and acquiring the target point cloud and the source point cloud of the object to be detected, inputting them into the trained point cloud registration network, and performing, through the point cloud registration network, point cloud registration of the target point cloud and the source point cloud of the object to be detected.
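To make the correspondence search and SVD update of claim 1 concrete, here is a minimal NumPy sketch under stated assumptions: the similarity matrix `sim` is taken as already produced by the network, a plain row-wise softmax/argmax stands in for the fused similarity scores, and all names are illustrative rather than the patent's.

```python
import numpy as np

def svd_rigid_fit(src, tgt):
    """Closed-form least-squares rigid transform (R, t) aligning src to tgt."""
    c_src, c_tgt = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - c_src).T @ (tgt - c_tgt)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_tgt - R @ c_src
    return R, t

def registration_step(src, tgt, sim):
    """One similarity-matrix iteration: softmax -> argmax -> SVD -> update."""
    prob = np.exp(sim - sim.max(axis=1, keepdims=True))
    prob /= prob.sum(axis=1, keepdims=True)   # similarity values -> probabilities
    corr = prob.argmax(axis=1)                # optimal correspondence c(i)
    R, t = svd_rigid_fit(src, tgt[corr])      # search optimized via SVD
    return src @ R.T + t, R, t                # updated source point positions
```

The SVD step is the standard closed-form solution of the least-squares rigid-fit objective shown in the claim, which is why no iterative optimizer is needed inside a single iteration.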
2. The point cloud registration method based on unsupervised deep learning according to claim 1, wherein the twin network uses a GNN structure to extract features from the source point cloud and the target point cloud, obtains feature saliency scores with a multi-layer perceptron, ranks the saliency scores of all points in the source point cloud and in the target point cloud respectively, and selects the points whose saliency scores meet a preset threshold as the clean source point cloud and the clean target point cloud respectively.
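A minimal sketch of the selection step in claim 2, assuming per-point GNN features are already available; a keep-ratio stands in for the unspecified preset threshold, and the MLP shape and all names are illustrative.

```python
import torch

def select_clean_points(points, feats, mlp, keep_ratio=0.75):
    """points: (N, 3); feats: (N, C) per-point GNN features."""
    scores = mlp(feats).squeeze(-1)          # per-point feature saliency score
    k = int(keep_ratio * points.shape[0])    # stand-in for the preset threshold
    idx = torch.topk(scores, k).indices      # rank scores, keep the top points
    return points[idx], feats[idx]

# Illustrative saliency MLP for 64-dimensional GNN features.
saliency_mlp = torch.nn.Sequential(
    torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
```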
3. The point cloud registration method based on unsupervised deep learning according to claim 1, wherein, for the distance-enhanced features of the source point cloud and the target point cloud in the similarity matrix convolution, a hybrid elimination weight is obtained by aggregating all possible correspondence information of a given point in the source point cloud and applying a validity score, the weight of the i-th point pair in the hybrid elimination weight being expressed in terms of an indicator function 1[·], the validity score v(i), and a sigmoid function σ(·), where M denotes a merging operation invariant to the arrangement of its elements; f denotes the multi-layer perceptron; F(i, j) denotes the value in the i-th row and j-th column of the feature fusion matrix tensor; and M_k(v(k)) denotes averaging over the k-th dimension of v(k).
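The exact weight formula is not legible in the published text, so the sketch below assumes one plausible reading consistent with the symbols defined in claim 3: v(i) = f(M_j F(i, j)) and w(i) = 1[σ(v(i)) ≥ mean_k σ(v(k))] · σ(v(i)), i.e. point pairs whose validity falls below the mean are eliminated. Every name and the formula itself are hypothetical.

```python
import torch

def hybrid_elimination_weights(F_fuse, f):
    """F_fuse: (N, M) feature fusion matrix tensor; f: a small MLP.

    ASSUMED formula, not taken verbatim from the patent: aggregate each row
    permutation-invariantly, score it, and zero out below-average pairs.
    """
    agg = F_fuse.mean(dim=1, keepdim=True)   # M: permutation-invariant merge over j
    v = f(agg).squeeze(-1)                   # validity score v(i), shape (N,)
    s = torch.sigmoid(v)                     # sigma(v(i))
    keep = (s >= s.mean()).float()           # indicator function 1[...]
    return keep * s                          # weight w(i) of the i-th point pair

# Illustrative validity MLP.
validity_mlp = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
```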
4. The point cloud registration method based on unsupervised deep learning according to claim 1, wherein, in the training of the point cloud registration network, the network is optimized on a sample data set on the basis of a total loss function expressed as L_total = L_PM + L_NE + L_HE + L_LSU, where L_PM is the point matching loss supervising the similarity matrix convolution; L_NE is the negative entropy loss supervising the extraction of the clean point clouds; L_HE is the hybrid elimination loss, which takes the probability that a point exists in the target point cloud as the supervisory signal for correct-label feedback and wrong-label filtering; and L_LSU is the latent consistent surface loss, which constrains the deviations that noise and point correspondence problems may cause when registration accuracy is inconsistent.
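A minimal sketch of one optimization step using the total loss of claim 4, assuming the four component losses are returned by the network's respective heads; the `model` interface is invented for illustration.

```python
import torch

def train_step(model, optimizer, src, tgt):
    """One unsupervised update with L_total = L_PM + L_NE + L_HE + L_LSU."""
    optimizer.zero_grad()
    L_PM, L_NE, L_HE, L_LSU = model(src, tgt)   # assumed per-head losses
    L_total = L_PM + L_NE + L_HE + L_LSU        # unweighted sum, per the claim
    L_total.backward()
    optimizer.step()
    return L_total.item()
```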
5. A point cloud registration system based on unsupervised deep learning, characterized by comprising a model construction module and a point cloud registration module, wherein:
the model construction module is used for constructing a point cloud registration network and training, wherein the point cloud registration network comprises: the system comprises a twin network for extracting the characteristics of the cleaning source point cloud and the characteristics of the cleaning target point cloud, a similar matrix convolution for iteratively updating and correcting the cleaning source point cloud by utilizing tensor concatenation of the characteristics of the cleaning target point cloud and the characteristics of the cleaning source point cloud, a potential surface prediction network for predicting the cleaning target point cloud and the point cloud noise of the cleaning source point cloud after updating and correcting, and a point cloud registration output for carrying out one-to-one correspondence on the source point cloud and the target point cloud according to the point cloud space point position by acquiring the point cloud space point position according to the point cloud noise prediction result and utilizing a noise reduction mode; in similar matrix convolution, the source point cloud characteristics and the target point cloud characteristics are subjected to combined coding in tensor characteristic dimensions to obtain source point cloud and target point cloud distance enhancement characteristics, wherein the combined coding process is expressed as follows: nth iteration source point cloud point p i And a point q of the target point cloud j The eigenvalue of (c) is expressed as [·;·]Merged joint encoding representing tensor feature dimensions, f s (i) Representing the characteristics of a source point cloud at an i point; f (f) t (j) Representing the characteristic of the source point cloud at the point j; ||p i -q j I represents p i And q j A Euclidean distance between them; />Represents the ratio q j Pointing to p i Is a unit vector of (a); extracting similarity scores of all points, fusing distance enhancement features, converting a similarity matrix value into a probability value by using a softmax function, and searching an optimal corresponding relation between a source point cloud and a target point cloud by using an argmax function, wherein the searching process is expressed as follows: optimizing the searching process by utilizing a Singular Value Decomposition (SVD) method, and converting the optimized searching process into: />Transform matrix R using decomposition (n) And t (n) Updating the position of a source point cloud, wherein R and t are rigid body change matrixes of the source point cloud and a target point cloud;
the point cloud registration module is used for collecting the target point cloud and the source point cloud of the object to be detected, inputting the target point cloud and the source point cloud into the trained point cloud registration network, and realizing the point cloud registration of the target point cloud and the source point cloud of the object to be detected through the point cloud registration network.
6. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor for executing the program stored in the memory and, when executing the program, implementing the method steps of any one of claims 1 to 4.
7. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium which, when executed by a processor, implements the method steps of any one of claims 1 to 4.
Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information