CN113139991A - 3D point cloud registration method based on overlapping region mask prediction - Google Patents

3D point cloud registration method based on overlapping region mask prediction

Info

Publication number
CN113139991A
CN113139991A CN202110521230.2A
Authority
CN
China
Prior art keywords
point cloud
features
source
overlapping area
target point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110521230.2A
Other languages
Chinese (zh)
Inventor
刘帅成
徐浩
刘光辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202110521230.2A priority Critical patent/CN113139991A/en
Publication of CN113139991A publication Critical patent/CN113139991A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a 3D point cloud registration method based on overlapping region mask prediction, which uses an iterative convolutional neural network to learn the point cloud rigid body transformation in a coarse-to-fine manner. The 3D rigid body transformation is parameterized by a quaternion, which avoids gimbal lock, together with translation distances along the X, Y and Z axes. Furthermore, a mask is learned in the network with an attention mechanism. The mask not only highlights the overlapping parts of the source point cloud and the target point cloud, but also filters out the interference of non-overlapping regions and noise. Because the mask predicted in each iteration is used in the point cloud feature extraction of the next iteration, the prediction of the point cloud overlapping region and the regression of the 3D rigid body transformation promote each other.

Description

3D point cloud registration method based on overlapping region mask prediction
Technical Field
The invention relates to the field of computational graphics and computer vision, in particular to a 3D point cloud registration method based on overlapping region mask prediction.
Background
3D point cloud registration is the process of matching and superimposing two or more point clouds acquired at different times, from different viewing angles, or by different sensors. The technique is widely applied in fields such as 3D Scene Reconstruction, Simultaneous Localization and Mapping (SLAM), Augmented Reality, and autonomous driving.
Among existing point cloud registration methods, the Iterative Closest Point (ICP) algorithm is widely used because of its simplicity and efficiency. It determines matching point pairs by repeatedly computing nearest neighbors of coordinates in Euclidean space and solves for the rigid body transformation matrix by Singular Value Decomposition (SVD); its accuracy therefore depends heavily on the initial position difference, the noise level, and the degree of overlap between the source and target point clouds. When the initial position difference is large, the noise is strong, or the overlap is small, the matching point pairs determined in this way usually contain errors, so the clouds cannot be registered correctly. Researchers have since proposed methods based on deep neural networks (DNNs) that learn robust deep features and can successfully handle large initial position differences and strong noise. However, when the overlapping area of the source and target point clouds is small, corresponding points are easily mismatched. To strengthen the registration of partially overlapping point clouds, outlier-filtering methods have been proposed, but because they still need to match corresponding point pairs and can only use the information of a sparse subset of points in the clouds, their registration accuracy is difficult to improve.
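For context, the closed-form step that ICP repeats once matching point pairs have been fixed can be sketched in a few lines of Python. This is only an illustrative NumPy implementation of the SVD-based solve mentioned above; the function name and array layout are chosen for the example and are not taken from the invention.

```python
import numpy as np

def solve_rigid_transform(src, tgt):
    """Closed-form least-squares rigid transform (Kabsch/SVD) between already
    matched point sets src and tgt, each of shape (N, 3)."""
    src_centroid = src.mean(axis=0)
    tgt_centroid = tgt.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_centroid).T @ (tgt - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection in the SVD solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t                        # tgt ≈ src @ R.T + t
```

ICP alternates this solve with nearest-neighbor re-matching, which is why matching errors caused by poor initialization, strong noise, or low overlap translate directly into errors in R and t.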
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a 3D point cloud registration method based on overlapping region mask prediction.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
A 3D point cloud registration method based on overlapping region mask prediction comprises the following steps:
S1, extracting the features of the source point cloud and the target point cloud to obtain the features of each point;
S2, performing a multilayer convolution operation on the features of each point obtained in step S1 to obtain global features of the same dimension for the source point cloud and the target point cloud, and calculating the mutual overlapping area of the source point cloud and the target point cloud;
and S3, combining the features of each point obtained in step S1 and the global features obtained in step S2, calculating the 3D rigid body transformation parameters, and iterating.
The advantage of this method is that the information of all points in the point cloud can be utilized, while the classification result of the point cloud overlapping region prevents the features of non-overlapping points from contaminating the global features of the point cloud, which in turn makes the regression of the rigid body transformation parameters more accurate. A sketch of the overall iterative loop is given below.
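As a rough illustration of how steps S1-S3 interact across iterations, the following PyTorch-style sketch shows one possible top-level loop; the module interfaces (feature_extractor, mask_predictor, regressor), the number of iterations, and the (w, x, y, z) quaternion ordering are assumptions made for the example and are not fixed by the invention.

```python
import torch

def quat_to_rotmat(q):
    """Convert unit quaternions (B, 4), ordered (w, x, y, z), into rotation matrices (B, 3, 3)."""
    w, x, y, z = q.unbind(dim=1)
    return torch.stack([
        1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
        2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
        2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y),
    ], dim=1).reshape(-1, 3, 3)

def register(src, tgt, feature_extractor, mask_predictor, regressor, n_iters=4):
    """Coarse-to-fine iterative registration loop (sketch).
    src, tgt: (B, N, 3) source and target point clouds."""
    mask_src = torch.ones(src.shape[:2], device=src.device)  # initially assume every point overlaps
    mask_tgt = torch.ones(tgt.shape[:2], device=tgt.device)
    for _ in range(n_iters):
        # S1: per-point features, weighted by the masks predicted in the previous iteration
        feat_src, feat_tgt = feature_extractor(src, tgt, mask_src, mask_tgt)
        # S2: predict the overlap masks; keep the mask predictor's intermediate
        #     per-point features, which are reused by the regression step
        mask_src, _, mid_src = mask_predictor(feat_src)
        mask_tgt, _, mid_tgt = mask_predictor(feat_tgt)
        # S3: regress quaternion q (B, 4) and translation t (B, 3), then move the source cloud
        q, t = regressor(feat_src, feat_tgt, mid_src, mid_tgt, mask_src, mask_tgt)
        src = src @ quat_to_rotmat(q).transpose(1, 2) + t.unsqueeze(1)
    return src, (mask_src, mask_tgt)
```

Because the masks produced in one pass re-weight the feature extraction of the next pass, mask prediction and transformation regression can promote each other, as described above.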
Further, the step S1 specifically includes:
S11, extracting the features of each point in the source point cloud and the target point cloud;
S12, weighting the features obtained in step S11 with the mask predicted in the previous iteration;
S13, performing a maximum pooling operation on the point features weighted in step S12 to obtain the respective global features of the source point cloud and the target point cloud;
and S14, copying the global features obtained in step S13 and combining the copies with the intermediate layer features of the points in the source point cloud and the target point cloud respectively, to obtain the features of each point in the source point cloud and the target point cloud.
This allows information to be exchanged between the source point cloud and the target point cloud, gives the subsequent overlap-region classification network the ability to distinguish whether a point lies in the overlapping area, and thus facilitates accurate classification of the point cloud overlapping region.
Further, the step S2 specifically includes:
S21, performing multilayer convolution on the features of each point obtained in step S1 to obtain dimension-reduced point cloud features, and extracting the intermediate layer features after each dimension reduction;
S22, passing the dimension-reduced point cloud features through a Softmax function to obtain the probability that each point belongs to the overlapping area, and obtaining the classification result through an argmax function;
and S23, obtaining an overlapping area of the source point cloud and the target point cloud according to the classification result of the step S22.
Further, the calculation manner of obtaining the classification result through the argmax function in step S22 is as follows:
y_k = argmax_{c ∈ C} softmax(z_k)_c,   k = 1, 2, …, N
wherein y represents the classification result of the kth point in the point cloud, z represents the confidence value of the network prediction, C represents two categories of belonging to an overlapping area and not belonging to the overlapping area, and N represents the number of coordinate points in the point cloud.
This makes comprehensive use of both the global features and the per-point features of the source and target point clouds, so that whether each point belongs to the overlapping area can be judged accurately, which ultimately prevents the features of non-overlapping points from interfering with the registration process.
Further, the step S3 specifically includes:
S31, combining the features of each point in the source point cloud and the target point cloud obtained in step S1 with the intermediate layer features obtained in step S21;
S32, performing a maximum pooling calculation on the combined features obtained in step S31 to obtain the overall features of the source point cloud and the target point cloud;
S33, performing multilayer perceptron regression on the overall features obtained in step S32 to obtain a quaternion and a translation distance, and normalizing the quaternion to obtain the 3D rigid body transformation of one iteration;
and S34, applying the 3D rigid body transformation obtained in the step S33 to the source point cloud, and repeating the steps S31-S33 on the transformed source point cloud to obtain the 3D target point cloud after registration.
Further, the quaternion and the translation distance are obtained by the multilayer perceptron regression in step S33 as follows:
{q, t} = r_θ(cat[f_X, f_Y])
wherein q represents the quaternion, t represents the translation distance, r_θ(·) represents the multilayer perceptron regression network, cat[·] indicates the concatenation operation, and f_X and f_Y represent the global features of the source point cloud and the target point cloud.
This scheme fuses the intermediate layer features and the global features produced during point cloud feature extraction, enriching the feature information available for registration. In the iterative registration process it reduces the feature learning difficulty of the whole registration network, lets the network in different iterations attend to rigid body transformation information of different distributions, and ultimately improves registration accuracy.
Further, the loss function of the overlapping region of the source point cloud and the target point cloud calculated in step S2 is expressed as:
L_mask^i = −(1/N) · Σ_{k=1}^{N} [ (1 − α) · g_k · log(p_k^i) + α · (1 − g_k) · log(1 − p_k^i) ]
wherein g and p respectively represent the label of the point cloud overlapping area and the probability of the point cloud overlapping area predicted by the network, i represents the ith iteration process, alpha represents the proportion of the overlapping area of the source point cloud and the target point cloud, and M represents the overlapping area mask.
Weighting the classification loss by the proportion coefficient helps balance the contributions of positive and negative samples, which further improves classification accuracy.
Further, the regression loss of the rigid body transformation in step S33 is represented as:
L_reg^i = ||q^i − q_g||_1 + λ · ||t^i − t_g||_2
where g denotes the true label and λ denotes the coefficient that balances the two loss terms.
Directly constraining the quaternion avoids gimbal lock, and the proportional coefficient added to balance the two losses helps the network optimize the quaternion and the translation distance uniformly.
Drawings
Fig. 1 is a schematic flow chart of a 3D point cloud registration method based on overlapping region mask prediction according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. For those skilled in the art, various changes may be made without departing from the spirit and scope of the invention as defined in the appended claims, and all matters produced using the inventive concept are protected.
A 3D point cloud registration method based on overlapping region mask prediction comprises the following steps:
S1, extracting the features of the source point cloud and the target point cloud to obtain the features of each point.
Specifically:
S11, extracting the features of each point in the source point cloud and the target point cloud. In this embodiment, the point cloud pair is input into a feature extraction network (Feature Extractor), which first extracts features for each point in the point clouds.
S12, weighting the features obtained in step S11 with the mask predicted in the previous iteration.
S13, performing a maximum pooling operation on the point features weighted in step S12 to obtain the respective global features of the source point cloud and the target point cloud; that is, the features are weighted with the mask predicted in the last iteration and then max-pooled into the global features of the two clouds.
S14, copying the global features obtained in step S13 and combining the copies with the intermediate layer features of the points in the source point cloud and the target point cloud respectively, fusing them to obtain features of each point that contain information from both the source point cloud and the target point cloud. A minimal sketch of this masked pooling and fusion is given below.
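The following is a minimal sketch of steps S11-S14, assuming a PointNet-style shared MLP over points; the layer widths, the module name FeatureExtractor, and the choice to concatenate both clouds' global features onto every point are illustrative assumptions rather than details specified by the invention.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Sketch of S11-S14: per-point features, mask-weighted max pooling, and
    fusion of both clouds' global features back onto every point."""
    def __init__(self, in_dim=3, mid_dim=64, out_dim=256):
        super().__init__()
        self.point_mlp = nn.Sequential(                  # S11: per-point feature extraction
            nn.Conv1d(in_dim, mid_dim, 1), nn.ReLU(),
            nn.Conv1d(mid_dim, out_dim, 1), nn.ReLU(),
        )

    def _per_cloud(self, points, mask):
        mid = self.point_mlp(points.transpose(1, 2))     # (B, C, N) intermediate features
        weighted = mid * mask.unsqueeze(1)               # S12: weight by the previous mask
        glob = torch.max(weighted, dim=2).values         # S13: masked max pooling -> (B, C)
        return mid, glob

    def forward(self, src, tgt, mask_src, mask_tgt):
        mid_src, glob_src = self._per_cloud(src, mask_src)
        mid_tgt, glob_tgt = self._per_cloud(tgt, mask_tgt)
        # S14: copy (tile) the global features onto every point of each cloud so
        # that information from the other cloud is available per point
        tile = lambda g, n: g.unsqueeze(2).expand(-1, -1, n)
        feat_src = torch.cat([mid_src, tile(glob_src, src.shape[1]), tile(glob_tgt, src.shape[1])], dim=1)
        feat_tgt = torch.cat([mid_tgt, tile(glob_tgt, tgt.shape[1]), tile(glob_src, tgt.shape[1])], dim=1)
        return feat_src, feat_tgt
```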
S2, performing multilayer convolution operation on the features of each point obtained in the step S1 to obtain global features of the source point cloud and the target point cloud at the same latitude, and calculating the mutual overlapping area of the source point cloud and the target point cloud;
and (3) sending the characteristics of each point into a Mask Predictor, and predicting the overlapped area of the source point cloud and the target point cloud, wherein the overlapped area is represented as 1, and the non-overlapped area is represented as 0.
The specific operation is as follows:
s21, performing multilayer convolution calculation on the features of each point obtained in the step S1 to obtain point cloud features after dimension reduction, and extracting the intermediate layer features after dimension reduction each time; and after the feature of each point is subjected to the multilayer convolution layer, continuously reducing the dimension of the feature, and finally obtaining the feature of each point with the dimension of 2.
S22, calculating the point cloud characteristics after dimensionality reduction through a Softmax function to obtain the probability that each point belongs to an overlapping area, and calculating through an argmax function to obtain a classification result, wherein the specific calculation mode is as follows:
y_k = argmax_{c ∈ C} softmax(z_k)_c,   k = 1, 2, …, N
wherein y represents the classification result of the kth point in the point cloud, z represents the confidence value of the network prediction, C represents two categories of belonging to an overlapping area and not belonging to the overlapping area, and N represents the number of coordinate points in the point cloud.
The predicted mask is applied in two places: weighting the point cloud features during feature extraction, and weighting the intermediate layer features of the mask prediction module during the regression of the 3D rigid body transformation parameters, so as to shield the interference of outliers.
And S23, obtaining the overlapping area of the source point cloud and the target point cloud according to the classification result of step S22. A sketch of such a mask prediction module is given below.
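One plausible form of the mask prediction module in steps S21-S23, assuming 1x1 convolutions that reduce the per-point feature to dimension 2, followed by Softmax and argmax; the layer widths and the decision to also return the intermediate features (which step S31 reuses) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MaskPredictor(nn.Module):
    """Sketch of S21-S23: reduce per-point features to dimension 2, then classify
    each point as overlapping (1) or non-overlapping (0)."""
    def __init__(self, in_dim=768, mid_dim=128):
        super().__init__()
        self.reduce = nn.Conv1d(in_dim, mid_dim, 1)      # S21: dimension reduction, kept as intermediate features
        self.classify = nn.Conv1d(mid_dim, 2, 1)         # S21: final per-point dimension is 2

    def forward(self, feat):
        mid = torch.relu(self.reduce(feat))              # intermediate layer features (B, mid_dim, N)
        logits = self.classify(mid)                      # (B, 2, N)
        prob = torch.softmax(logits, dim=1)[:, 1, :]     # S22: probability of belonging to the overlap
        mask = torch.argmax(logits, dim=1).float()       # S22: hard 0/1 classification via argmax
        return mask, prob, mid                           # S23: the mask marks the overlapping area
```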
And S3, combining the features of each point obtained in step S1 and the intermediate layer features obtained in step S2, calculating the 3D rigid body transformation parameters, and iterating.
In this embodiment, the step S3 specifically includes:
S31, combining the features of each point in the source point cloud and the target point cloud obtained in step S1 with the intermediate layer features obtained in step S21. In this embodiment, the per-point features of the source and target point clouds and the intermediate layer features of the mask prediction module are combined and then sent to a 3D rigid body transformation parameter regression module (Transformation Regression).
S32, performing a maximum pooling calculation on the combined features obtained in step S31 to obtain the overall features of the source point cloud and the target point cloud;
S33, performing multilayer perceptron regression on the overall features obtained in step S32 to obtain a quaternion and a translation distance, and normalizing the quaternion to obtain the 3D rigid body transformation of this iteration. The quaternion and the translation distance are obtained by the multilayer perceptron regression as follows:
{q, t} = r_θ(cat[f_X, f_Y])
where q represents the quaternion, t represents the translation distance, r_θ(·) represents the multilayer perceptron regression network, cat[·] indicates the concatenation operation, and f_X and f_Y represent the global features of the source point cloud and the target point cloud.
And S34, applying the 3D rigid body transformation obtained in step S33 to the source point cloud, and repeating steps S31-S33 on the transformed source point cloud to obtain the 3D target point cloud after registration. A sketch of such a transformation regression module is given below.
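A possible sketch of the transformation regression module in steps S31-S33, assuming mask-weighted max pooling of the combined features and a small multilayer perceptron with 7 outputs (4 quaternion values plus 3 translation values); the hidden sizes and input dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TransformationRegression(nn.Module):
    """Sketch of S31-S33: combine S1 and S21 features, masked max pooling, and an
    MLP that outputs a quaternion (4 values) and a translation (3 values)."""
    def __init__(self, in_dim=896, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7),                        # 4 quaternion + 3 translation values
        )

    def _global(self, feat, mid, mask):
        combined = torch.cat([feat, mid], dim=1)         # S31: combine S1 features with S21 intermediate features
        combined = combined * mask.unsqueeze(1)          # suppress non-overlapping points with the mask
        return torch.max(combined, dim=2).values         # S32: max pooling -> global feature

    def forward(self, feat_src, feat_tgt, mid_src, mid_tgt, mask_src, mask_tgt):
        f_x = self._global(feat_src, mid_src, mask_src)
        f_y = self._global(feat_tgt, mid_tgt, mask_tgt)
        out = self.mlp(torch.cat([f_x, f_y], dim=1))     # S33: {q, t} = r_theta(cat[f_X, f_Y])
        q = nn.functional.normalize(out[:, :4], dim=1)   # normalize to a unit quaternion
        t = out[:, 4:]
        return q, t
```

Normalizing the four raw outputs onto the unit sphere keeps the quaternion valid, so no Euler-angle representation (and hence no gimbal lock) is involved.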
Further, the loss function of the overlapping region of the source point cloud and the target point cloud calculated in step S2 is expressed as:
L_mask^i = −(1/N) · Σ_{k=1}^{N} [ (1 − α) · g_k · log(p_k^i) + α · (1 − g_k) · log(1 − p_k^i) ]
wherein g and p respectively represent the label of the point cloud overlapping area and the probability of the point cloud overlapping area predicted by the network, i represents the ith iteration process, alpha represents the proportion of the overlapping area of the source point cloud and the target point cloud, and M represents the overlapping area mask.
Further, the regression loss of the rigid body transformation in step S33 is represented as:
L_reg^i = ||q^i − q_g||_1 + λ · ||t^i − t_g||_2
where i denotes the ith iteration, the subscript g denotes the ground-truth label, and λ denotes the coefficient that balances the two loss terms. The quaternion loss uses the L1 distance and the translation loss uses the L2 distance. The final total loss function is the sum of the two losses over all iterations:
L = Σ_i ( L_mask^i + L_reg^i )
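Reading the loss terms as reconstructed above, a training objective could be sketched as follows; the placement of the balance coefficient α in the cross-entropy, the use of a per-point mean, and the tuple layout of the per-iteration outputs are assumptions made for this illustration.

```python
import torch

def mask_loss(prob, gt_mask, alpha):
    """Class-balanced cross-entropy on the predicted overlap probabilities.
    prob, gt_mask: (B, N); alpha is the proportion of overlapping points
    (the direction of the weighting is an assumption)."""
    eps = 1e-7
    pos = (1 - alpha) * gt_mask * torch.log(prob.clamp(min=eps))
    neg = alpha * (1 - gt_mask) * torch.log((1 - prob).clamp(min=eps))
    return -(pos + neg).mean()

def regression_loss(q, t, q_gt, t_gt, lam=1.0):
    """L1 distance on the quaternion plus lambda-weighted L2 distance on the translation."""
    l_q = (q - q_gt).abs().sum(dim=1).mean()
    l_t = (t - t_gt).norm(dim=1).mean()
    return l_q + lam * l_t

def total_loss(per_iter_outputs, gt_mask, q_gt, t_gt, alpha=0.5, lam=1.0):
    """Sum of the mask loss and the regression loss over all iterations.
    per_iter_outputs: list of (prob, q, t) tuples, one per iteration."""
    loss = 0.0
    for prob, q, t in per_iter_outputs:
        loss = loss + mask_loss(prob, gt_mask, alpha)
        loss = loss + regression_loss(q, t, q_gt, t_gt, lam)
    return loss
```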
it will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (8)

1. A 3D point cloud registration method based on overlapping region mask prediction is characterized by comprising the following steps:
S1, extracting the features of the source point cloud and the target point cloud to obtain the features of each point;
S2, performing a multilayer convolution operation on the features of each point obtained in step S1 to obtain global features of the same dimension for the source point cloud and the target point cloud, and calculating the mutual overlapping area of the source point cloud and the target point cloud;
and S3, combining the features of each point obtained in step S1 and the global features obtained in step S2, calculating the 3D rigid body transformation parameters, and iterating.
2. The 3D point cloud registration method based on overlap region mask prediction according to claim 1, wherein the step S1 specifically comprises:
S11, extracting the features of each point in the source point cloud and the target point cloud;
S12, weighting the features obtained in step S11 with the mask predicted in the previous iteration;
S13, performing a maximum pooling operation on the point features weighted in step S12 to obtain the respective global features of the source point cloud and the target point cloud;
and S14, copying the global features obtained in step S13 and combining the copies with the intermediate layer features of the points in the source point cloud and the target point cloud respectively, to obtain the features of each point in the source point cloud and the target point cloud.
3. The 3D point cloud registration method based on overlap region mask prediction according to claim 2, wherein the step S2 specifically includes:
S21, performing multilayer convolution on the features of each point obtained in step S1 to obtain dimension-reduced point cloud features, and extracting the intermediate layer features after each dimension reduction;
S22, passing the dimension-reduced point cloud features through a Softmax function to obtain the probability that each point belongs to the overlapping area, and obtaining the classification result through an argmax function;
and S23, obtaining the overlapping area of the source point cloud and the target point cloud according to the classification result of step S22.
4. The 3D point cloud registration method based on overlap region mask prediction according to claim 3, wherein the calculation manner of the classification result obtained by the argmax function calculation in step S22 is as follows:
y_k = argmax_{c ∈ C} softmax(z_k)_c,   k = 1, 2, …, N
wherein y represents the classification result of the kth point in the point cloud, z represents the confidence value of the network prediction, C represents two categories of belonging to an overlapping area and not belonging to the overlapping area, and N represents the number of coordinate points in the point cloud.
5. The 3D point cloud registration method based on overlap region mask prediction according to claim 4, wherein the step S3 specifically comprises:
S31, combining the features of each point in the source point cloud and the target point cloud obtained in step S1 with the intermediate layer features obtained in step S21;
S32, performing a maximum pooling calculation on the combined features obtained in step S31 to obtain the overall features of the source point cloud and the target point cloud;
S33, performing multilayer perceptron regression on the overall features obtained in step S32 to obtain a quaternion and a translation distance, and normalizing the quaternion to obtain the 3D rigid body transformation of one iteration;
and S34, applying the 3D rigid body transformation obtained in the step S33 to the source point cloud, and repeating the steps S31-S33 on the transformed source point cloud to obtain the 3D target point cloud after registration.
6. The 3D point cloud registration method based on overlap region mask prediction of claim 5, wherein the quaternion and the translation distance are obtained by the multilayer perceptron regression in step S33 as follows:
{q, t} = r_θ(cat[f_X, f_Y])
where q represents the quaternion, t represents the translation distance, r_θ(·) represents the multilayer perceptron regression network, cat[·] indicates the concatenation operation, and f_X and f_Y represent the global features of the source point cloud and the target point cloud.
7. The overlap region mask prediction-based 3D point cloud registration method according to claim 6, wherein the loss function of the mutual overlap region of the source point cloud and the target point cloud calculated in the step S2 is represented as:
L_mask^i = −(1/N) · Σ_{k=1}^{N} [ (1 − α) · g_k · log(p_k^i) + α · (1 − g_k) · log(1 − p_k^i) ]
wherein g and p respectively represent the label of the point cloud overlapping area and the probability of the point cloud overlapping area predicted by the network, i represents the ith iteration process, alpha represents the proportion of the overlapping area of the source point cloud and the target point cloud, and M represents the overlapping area mask.
8. The overlap region mask prediction-based 3D point cloud registration method of claim 7, wherein the regression loss of the rigid body transformation in the step S33 is represented as:
L_reg^i = ||q^i − q_g||_1 + λ · ||t^i − t_g||_2
where g denotes the true label and λ denotes the coefficient that balances the two loss terms.
CN202110521230.2A 2021-05-13 2021-05-13 3D point cloud registration method based on overlapping region mask prediction Pending CN113139991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110521230.2A CN113139991A (en) 2021-05-13 2021-05-13 3D point cloud registration method based on overlapping region mask prediction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110521230.2A CN113139991A (en) 2021-05-13 2021-05-13 3D point cloud registration method based on overlapping region mask prediction

Publications (1)

Publication Number Publication Date
CN113139991A true CN113139991A (en) 2021-07-20

Family

ID=76817861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110521230.2A Pending CN113139991A (en) 2021-05-13 2021-05-13 3D point cloud registration method based on overlapping region mask prediction

Country Status (1)

Country Link
CN (1) CN113139991A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004871A (en) * 2022-01-04 2022-02-01 山东大学 Point cloud registration method and system based on point cloud completion

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015090591A (en) * 2013-11-06 2015-05-11 株式会社パスコ Generation device and generation method for road surface ortho-image
CN109493372A (en) * 2018-10-24 2019-03-19 华侨大学 The product point cloud data Fast global optimization method for registering of big data quantity, few feature
CN111860520A (en) * 2020-07-21 2020-10-30 南京航空航天大学 Large airplane point cloud model self-supervision semantic segmentation method based on deep learning
CN111915658A (en) * 2020-09-30 2020-11-10 浙江智慧视频安防创新中心有限公司 Registration method and system for point cloud
CN112017225A (en) * 2020-08-04 2020-12-01 华东师范大学 Depth image matching method based on point cloud registration
WO2021009258A1 (en) * 2019-07-15 2021-01-21 Promaton Holding B.V. Object detection and instance segmentation of 3d point clouds based on deep learning
CN112365511A (en) * 2020-11-14 2021-02-12 重庆邮电大学 Point cloud segmentation method based on overlapped region retrieval and alignment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015090591A (en) * 2013-11-06 2015-05-11 株式会社パスコ Generation device and generation method for road surface ortho-image
CN109493372A (en) * 2018-10-24 2019-03-19 华侨大学 The product point cloud data Fast global optimization method for registering of big data quantity, few feature
WO2021009258A1 (en) * 2019-07-15 2021-01-21 Promaton Holding B.V. Object detection and instance segmentation of 3d point clouds based on deep learning
CN111860520A (en) * 2020-07-21 2020-10-30 南京航空航天大学 Large airplane point cloud model self-supervision semantic segmentation method based on deep learning
CN112017225A (en) * 2020-08-04 2020-12-01 华东师范大学 Depth image matching method based on point cloud registration
CN111915658A (en) * 2020-09-30 2020-11-10 浙江智慧视频安防创新中心有限公司 Registration method and system for point cloud
CN112365511A (en) * 2020-11-14 2021-02-12 重庆邮电大学 Point cloud segmentation method based on overlapped region retrieval and alignment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAO XU: "OMNet: Learning Overlapping Mask for Partial-to-Partial Point Cloud Registration", 《ARXIV》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114004871A (en) * 2022-01-04 2022-02-01 山东大学 Point cloud registration method and system based on point cloud completion

Similar Documents

Publication Publication Date Title
CN112288011B (en) Image matching method based on self-attention deep neural network
CN114863573B (en) Category-level 6D attitude estimation method based on monocular RGB-D image
CN112750198B (en) Dense correspondence prediction method based on non-rigid point cloud
CN113160287B (en) Complex component point cloud splicing method and system based on feature fusion
CN110490915B (en) Point cloud registration method based on convolution-limited Boltzmann machine
CN110443849B (en) Target positioning method for double-current convolution neural network regression learning based on depth image
CN112084895B (en) Pedestrian re-identification method based on deep learning
CN110310305A (en) A kind of method for tracking target and device based on BSSD detection and Kalman filtering
CN117252928B (en) Visual image positioning system for modular intelligent assembly of electronic products
Chen et al. Full transformer framework for robust point cloud registration with deep information interaction
CN113139991A (en) 3D point cloud registration method based on overlapping region mask prediction
CN114119690A (en) Point cloud registration method based on neural network reconstruction Gaussian mixture model
CN114565953A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111832399A (en) Attention mechanism fused cross-domain road navigation mark registration algorithm
CN111578956A (en) Visual SLAM positioning method based on deep learning
CN115994933A (en) Partial point cloud registration method based on consistency learning
CN108469729B (en) Human body target identification and following method based on RGB-D information
CN116310128A (en) Dynamic environment monocular multi-object SLAM method based on instance segmentation and three-dimensional reconstruction
CN115829942A (en) Electronic circuit defect detection method based on non-negative constraint sparse self-encoder
CN115239776A (en) Point cloud registration method, device, equipment and medium
CN112069997B (en) Unmanned aerial vehicle autonomous landing target extraction method and device based on DenseHR-Net
CN114998630A (en) Ground-to-air image registration method from coarse to fine
De Giacomo et al. Guided sonar-to-satellite translation
CN113705731A (en) End-to-end image template matching method based on twin network
Sun et al. Accurate deep direct geo-localization from ground imagery and phone-grade gps

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210720

RJ01 Rejection of invention patent application after publication