CN113313200A - Point cloud fine matching method based on normal constraint - Google Patents

Point cloud fine matching method based on normal constraint

Info

Publication number: CN113313200A
Application number: CN202110685466.XA
Authority: CN (China)
Prior art keywords: point, model, point cloud, scene, matching
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN113313200B (granted publication)
Inventors: Li Jun (李俊), Peng Silong (彭思龙), Wang Xuelin (汪雪林), Gu Qingyi (顾庆毅)
Current and original assignees: Suzhou Zhongke Whole Elephant Intelligent Technology Co., Ltd.; Suzhou Research Institute, Institute of Automation, Chinese Academy of Sciences
Filing and priority date: 2021-06-21
Publication date: 2021-08-27 (CN113313200A); granted 2024-04-16 (CN113313200B)
Application filed by Suzhou Zhongke Whole Elephant Intelligent Technology Co., Ltd. and Suzhou Research Institute, Institute of Automation, Chinese Academy of Sciences
Current legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757: Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud fine matching method based on normal constraint, in the field of machine vision, comprising the following steps. S1: obtain the pose from coarse point cloud matching and take the model point cloud pose as the initial value. S2: take a model point, search for several nearby scene points, and pair the model point with them. S3: compute the point-distance sequence and the normal-angle sequence of the pairs. S4: compute the matching probability of each pair from the distance and angle sequences. S5: multiply each point-distance matching probability by the corresponding normal-angle matching probability and divide by the sum of all such products to obtain the joint probability. S6: combine ICP with the joint probabilities to obtain the coordinate transformation from model to scene. S7: if the change between the resulting model point cloud coordinates and the previous ones is below a given threshold, end the iteration; otherwise repeat S2 to S7. The method improves the accuracy of the pose transformation to similar regions of the scene.

Description

Point cloud fine matching method based on normal constraint
Technical Field
The invention relates to the technical field of machine vision, in particular to a point cloud fine matching method based on normal constraint.
Background
Point cloud fine matching further adjusts the model pose so that it fits a similar region of the scene more closely, starting from a rough alignment of the model point cloud with that region of the scene point cloud. The fine matching method in widest use is the iterative closest point (ICP) algorithm, which repeatedly finds the closest scene point for each point of the model point cloud and updates the model pose by iterative computation. On this iterative framework, variants such as point-to-point ICP, point-to-plane ICP, and plane-to-plane ICP have been derived; they differ in the optimization index: point-to-point ICP minimizes the distance between model points and scene points, while point-to-plane and plane-to-plane ICP minimize the normal deviation between them. However, when selecting matching point pairs between model and scene, all of these methods pair each model point with its nearest scene point, which is unreasonable in many cases. For example, when the upper and lower surfaces of the model point cloud are close together, the upper surface's normals point upward and the lower surface's normals point downward, whereas a scene point cloud captured by a 3D camera shooting from above has normals that all point upward. After coarse matching of the model and scene point clouds, the matching precision is still low, so the model point closest to a scene point may lie on the lower surface of the model rather than the upper surface; if such matches are fed into the ICP iteration, the resulting closest-match pose is clearly inaccurate.
Chinese patent CN109767463B provides an automatic registration method for three-dimensional point clouds, aimed at the computational efficiency, precision, and noise-sensitivity problems of the traditional ICP algorithm. That invention first builds KD-trees for the source and target point clouds to accelerate nearest-point search; it then performs coarse registration with a method based on normal vectors and feature histograms, improving the feature-extraction stage so that feature points are extracted effectively without losing large amounts of point cloud information with unobvious features. To further improve registration precision, the fine registration uses an improved multi-resolution iterative closest point algorithm that computes the point cloud resolution from the density of the feature points and improves the key-point sampling method. Although that patent discloses a scheme involving normal vectors and the iterative closest point algorithm, it does not disclose fine matching that also incorporates a point-distance sequence, so the problem of inaccurate matching remains.
Chinese patent CN111932628A discloses a pose determination method and apparatus, an electronic device, and a storage medium that can improve the efficiency of pose determination. The method comprises: generating a scene point cloud image from a real scene image and extracting a set of scene coordinate points from it; obtaining the model point cloud image corresponding to the scene point cloud image and, in parallel, selecting from it the set of model coordinate points corresponding to the scene coordinate points; obtaining the spatial transformation between the scene and model point cloud images by parallel iterative computation over the two coordinate point sets; and determining the relative pose between the scene and model point cloud images from that spatial transformation. Although that patent discloses iterative computation and the use of point distances, it does not disclose fine matching that also incorporates a normal-angle sequence, so the problem of inaccurate matching remains.
Disclosure of Invention
The invention provides a point cloud fine matching method based on normal constraint, aimed at the low precision of point-pair matching between model and scene point clouds that results from the single matching criterion used in the prior art.
To achieve this purpose, the invention adopts the following technical scheme:
a point cloud fine matching method based on normal constraint comprises the following steps:
step S1: coarsely match the model point cloud to the scene point cloud using point-pair feature matching, obtain the pose given by the coarse matching, and take it as the initial value;
step S2: select a model point in the model point cloud, search the scene point cloud with a KD-tree for the several scene points closest to the selected model point, and pair the model point with these scene points;
step S3: compute the point-distance sequence and the normal-angle sequence between the model point and its paired scene points, delete every pair whose point distance exceeds the specified distance limit or whose normal angle exceeds the specified angle limit, and take the remaining pairs as candidate matching point pairs;
step S4: for the candidate pairs of each model point, compute from the point-distance sequence and the normal-angle sequence the probability that each scene point is the matching point of the model point;
step S5: for the candidate pairs of each model point, multiply the point-distance matching probability by the corresponding normal-angle matching probability and divide by the sum of these products over all candidate pairs, giving the joint probability that each scene point matches the model point when point distance and normal angle are considered together;
step S6: from all candidate matching pairs of the model and scene point clouds, establish the system of coordinate-transformation equations used by the ICP algorithm, one equation per candidate pair, with both sides of each equation multiplied by the pair's joint probability as its weight coefficient; solving the system yields the optimal rotation matrix and translation from the model point cloud to the scene point cloud for the current iteration;
step S7: if the average deviation between the model point cloud coordinates after the current iteration and their initial values before the iteration is not below the given threshold, repeat steps S2 to S7 with the updated coordinates as the initial value for the next iteration; if the average deviation is below the given threshold, end the iteration.
Further, the pose in step S1 is a six-dimensional vector formed by the three rotation angles of the rotation matrix of the coordinate transformation that maps the model point cloud coordinates onto the scene point cloud coordinates of the aligned region, together with the three-dimensional translation vector.
Further, the normal angle in step S3 is the angle between the normal of the model point and the normal of the paired scene point.
Further, the method in step S4 for computing the probability that a scene point in the point-distance sequence is the matching point of the model point comprises the steps of:
step 1: compute the point distances between the selected model point and each of its candidate scene points, and sum them to obtain the total of all point distances;
step 2: divide the point distance of each scene point to the model point by the total from step 1 to obtain the point-distance ratios;
step 3: subtract each ratio from step 2 from 1 to obtain the matching probability of each pair when only the point distance is considered.
Further, the method in step S4 for computing the probability that a scene point in the normal-angle sequence is the matching point of the model point comprises the steps of:
step A: compute the normal angles between the selected model point and each of its candidate scene points, and sum them to obtain the total of all normal angles;
step B: divide the normal angle of each scene point to the model point by the total from step A to obtain the normal-angle ratios;
step C: subtract each ratio from step B from 1 to obtain the matching probability of each pair when only the normal angle is considered.
Further, the coordinate transformation from the model point cloud to the scene point cloud in step S6 is obtained by:
step a: for all candidate matching pairs of the model and scene point clouds, obtain the plane that passes through the model point and is perpendicular to the model point's normal direction, and establish the point-to-plane ICP optimization equation system from the model points to these planes;
step b: multiply both sides of each equation of the ICP optimization system established in step a by the corresponding joint probability;
step c: solve the ICP optimization system weighted by the joint probabilities in step b to obtain the coordinate transformation from the model point cloud to the scene point cloud.
Further, the coordinate transformation in step S6 consists of two parts: a three-dimensional rotation matrix and a three-dimensional translation vector.
Further, in step S7, during the iterative alignment of the model with an object point cloud in the scene, the model point cloud coordinates are updated after each iteration by the coordinate transformation computed in that iteration, so that the position and attitude of the model points continually approach one model point cloud in the scene.
Further, the quantity compared with the threshold in step S7 is the average of the distances by which the model point coordinates change from before the iteration to after it.
Compared with the prior art, the invention has the following beneficial effects:
(1) Unlike existing point cloud fine matching methods, when selecting matching points between the model to be aligned and the scene, the method considers both the point-distance factor and the normal-angle factor rather than judging by point distance alone; this agrees better with the real situation when aligning the model with a similar region of the scene, and the alignment is more accurate.
(2) The method effectively weakens the influence on the matching computation of obviously wrong pairings, namely model points and scene points that are close in distance but opposite in normal direction, so that the model computes an accurate pose transformation to the similar scene region under the guidance of correct matches.
(3) The method establishes a pose-transformation optimization equation for every candidate pair in the neighborhood of each model point, multiplies both sides of each equation by the probability that the pair is a correct match, solves for the pose that minimizes the residual between the left and right sides of the whole system, and iterates continually, thereby obtaining a precise matching pose.
Drawings
FIG. 1 is a block diagram of the process of the point cloud fine matching method according to the present invention.
Detailed Description
To make the purpose and technical solution of the present invention clearer, the technical solution is described below clearly and completely with reference to the embodiments.
Example 1
The point cloud fine matching method based on the normal constraint shown in FIG. 1 includes:
step S1: coarsely match the model point cloud to the scene point cloud using point-pair feature matching, obtain the pose given by the coarse matching, and take it as the initial value;
step S2: select a model point in the model point cloud, search the scene point cloud with a KD-tree for the several scene points closest to the selected model point, and pair the model point with these scene points;
step S3: compute the point-distance sequence and the normal-angle sequence between the model point and its paired scene points, delete every pair whose point distance exceeds the specified distance limit or whose normal angle exceeds the specified angle limit, and take the remaining pairs as candidate matching point pairs;
step S4: for the candidate pairs of each model point, compute from the point-distance sequence and the normal-angle sequence the probability that each scene point is the matching point of the model point;
step S5: for the candidate pairs of each model point, multiply the point-distance matching probability by the corresponding normal-angle matching probability and divide by the sum of these products over all candidate pairs, giving the joint probability that each scene point matches the model point when point distance and normal angle are considered together;
step S6: from all candidate matching pairs of the model and scene point clouds, establish the system of coordinate-transformation equations used by the ICP algorithm, one equation per candidate pair, with both sides of each equation multiplied by the pair's joint probability as its weight coefficient; solving the system yields the optimal rotation matrix and translation from the model point cloud to the scene point cloud for the current iteration;
step S7: if the average deviation between the corresponding point pairs of the model point cloud after the current iteration and their initial values before the iteration is not below the given threshold, repeat steps S2 to S7 with the updated coordinates as the initial value for the next iteration; if the average deviation is below the given threshold, end the iteration.
Specifically, the pose in step S1 refers to the coordinate transformation that maps the model point cloud coordinates onto the point cloud coordinates of the aligned region in the scene, consisting of the three rotation angles of the rotation matrix and the three-dimensional translation vector; the pose is therefore a six-dimensional vector of three displacements and three rotation angles about the coordinate axes. The initial value is the pose obtained from coarse matching.
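As an illustration, the following sketch packs such a pose into a six-dimensional vector and unpacks it again. It assumes XYZ Euler angles and the SciPy library; the patent does not fix an angle convention, and the function names are hypothetical.

```python
# Hypothetical sketch of the 6-D pose encoding: three Euler angles plus
# a 3-D translation. The XYZ angle convention is an assumption.
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_vector(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation matrix and a translation into a 6-vector."""
    angles = Rotation.from_matrix(R).as_euler("xyz")
    return np.concatenate([angles, t])

def vector_to_pose(pose: np.ndarray):
    """Unpack a 6-vector into (rotation matrix, translation vector)."""
    return Rotation.from_euler("xyz", pose[:3]).as_matrix(), pose[3:]

# The pose from coarse matching becomes the initial value of step S1.
R0, t0 = np.eye(3), np.zeros(3)
pose0 = pose_to_vector(R0, t0)
```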
Specifically, the KD-tree search in step S2 builds a KD-tree over the scene point cloud; then, for each model point, the several scene points closest to it can be found quickly through the tree.
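A minimal sketch of this nearest-neighbour search, assuming SciPy's cKDTree and illustrative array names:

```python
# Sketch: pair each model point with its k closest scene points via a
# KD-tree built over the scene cloud (step S2). Data are stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
scene_pts = rng.random((1000, 3))   # stand-in scene point cloud, N x 3
model_pts = rng.random((200, 3))    # stand-in model point cloud, M x 3

tree = cKDTree(scene_pts)           # built once over the scene cloud
k = 5                               # number of nearby scene points per model point
dists, idx = tree.query(model_pts, k=k)   # both arrays have shape (M, k)
# model_pts[i] is paired with scene_pts[idx[i]], at distances dists[i].
```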
Specifically, the point distance in step S3 is the Euclidean distance between the two points; the normal angle is obtained by taking the dot product of the two normals and applying the arccosine function. The normal angle is the angle between the normal of the model point and the normal of the paired scene point.
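The two quantities, and the gating of step S3, might be computed as in the sketch below (illustrative data and thresholds; normals are assumed to be unit length):

```python
# Sketch: point distances, normal angles, and the step-S3 gating for one
# model point and its candidate scene points. Thresholds are illustrative.
import numpy as np

m_pt = np.array([0.0, 0.0, 0.0])          # model point coordinate
m_n  = np.array([0.0, 0.0, 1.0])          # model point unit normal
s_pts = np.array([[0.01, 0.00, 0.00],     # candidate scene points
                  [0.00, 0.02, 0.00]])
s_ns  = np.array([[0.00, 0.00, 1.00],     # their unit normals
                  [0.00, 1.00, 0.00]])

d = np.linalg.norm(s_pts - m_pt, axis=1)            # Euclidean distances
theta = np.arccos(np.clip(s_ns @ m_n, -1.0, 1.0))   # normal angles, radians

d_max, theta_max = 0.05, np.deg2rad(60)             # specified limits
keep = (d <= d_max) & (theta <= theta_max)          # candidate matching pairs
```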
Specifically, the scene point cloud is the output of a 3D camera shot: a set of 3D coordinates of the scene surface, which may contain several placed models as well as other background objects. The model point cloud can be obtained by shooting a single model with a 3D camera and deleting the background points; it is the set of 3D coordinates on one model. During the iterative alignment of the model with an object point cloud in the scene, the model point cloud coordinates are updated after each iteration by the coordinate transformation computed in that iteration, so that the position and attitude of the model points continually approach one model point cloud in the scene.
Specifically, the coordinate transformation in step S6 consists of a three-dimensional rotation matrix and a three-dimensional translation vector, obtained by the point-to-plane ICP algorithm. The transformation multiplies each point of the model point cloud by the rotation matrix obtained from the ICP algorithm and adds the resulting translation vector, giving the model's new coordinates after one iteration. Throughout the transformation, the model and scene point clouds are both expressed in the scene coordinate system. Initially the model point cloud may be far from the scene point cloud; the matching iterations move it gradually closer until the two coincide as closely as possible.
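Applying one iteration's transformation might look like the sketch below, where R and t stand for the rotation and translation just solved and the names are illustrative:

```python
# Sketch: update the model point cloud with one iteration's
# transformation, p' = R p + t for every point p; normals rotate by R.
import numpy as np

rng = np.random.default_rng(1)
model_pts = rng.random((200, 3))                 # stand-in model points
model_ns = np.tile([0.0, 0.0, 1.0], (200, 1))    # stand-in unit normals

R = np.eye(3)                                    # rotation from the ICP solve
t = np.array([0.0, 0.0, 0.1])                    # translation from the ICP solve

model_pts = model_pts @ R.T + t                  # new coordinates after the iteration
model_ns = model_ns @ R.T                        # normals follow the rotation only
```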
Specifically, the quantity compared with the threshold in step S7 is the average of the distances by which the same model points move between before and after an iteration. The threshold is chosen according to the required degree of coincidence between the model and scene point clouds: the smaller the threshold, the closer the final model point cloud is to the scene point cloud, but too small a threshold increases the number of iterations and reduces matching efficiency. The threshold therefore needs to be set experimentally according to actual requirements.
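The stopping test of step S7 then reduces to a mean-displacement check, sketched below with a hypothetical threshold value:

```python
# Sketch: end the iteration when the average per-point displacement
# between successive iterations falls below the threshold tau.
import numpy as np

def converged(pts_before: np.ndarray, pts_after: np.ndarray, tau: float) -> bool:
    """Mean coordinate-change distance of the model points vs. threshold."""
    return float(np.mean(np.linalg.norm(pts_after - pts_before, axis=1))) < tau

# tau = 1e-4  # hypothetical value; chosen experimentally, per the discussion above
```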
Specifically, the method for computing the probability that a scene point in the point-distance sequence is the matching point of the model point comprises the following steps (a sketch covering both this computation and the normal-angle computation is given after the next set of steps):
step 1: compute the point distances between the selected model point and each of its candidate scene points, and sum them to obtain the total of all point distances;
step 2: divide the point distance of each scene point to the model point by the total from step 1 to obtain the point-distance ratios;
step 3: subtract each ratio from step 2 from 1 to obtain the matching probability of each pair when only the point distance is considered.
Specifically, the method for computing the probability that a scene point in the normal-angle sequence is the matching point of the model point comprises the following steps (the combined sketch follows step C):
step A: compute the normal angles between the selected model point and each of its candidate scene points, and sum them to obtain the total of all normal angles;
step B: divide the normal angle of each scene point to the model point by the total from step A to obtain the normal-angle ratios;
step C: subtract each ratio from step B from 1 to obtain the matching probability of each pair when only the normal angle is considered.
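A combined sketch of both probability computations and the joint probability of step S5, for the candidate pairs of a single model point (illustrative values; it assumes at least two candidates, since with a single candidate the ratios degenerate):

```python
# Sketch: distance-only and angle-only matching probabilities (steps 1-3
# and A-C) combined into the normalised joint probability of step S5.
import numpy as np

def joint_probabilities(d: np.ndarray, theta: np.ndarray) -> np.ndarray:
    """d, theta: point distances and normal angles of one model point's
    candidate pairs (each of length k >= 2). Returns the joint weights."""
    p_dist = 1.0 - d / d.sum()            # steps 1-3: distance factor
    p_ang = 1.0 - theta / theta.sum()     # steps A-C: angle factor
    prod = p_dist * p_ang                 # both factors together
    return prod / prod.sum()              # normalise over the candidates

d = np.array([0.010, 0.020, 0.040])       # distances to three candidates
theta = np.array([0.10, 0.30, 0.60])      # normal angles in radians
w = joint_probabilities(d, theta)         # weights used in step S6
```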
Specifically, the method for the coordinate transformation from the model point cloud to the scene point cloud in step S6 comprises:
step a: for all candidate matching pairs of the model and scene point clouds, obtain the plane that passes through the model point and is perpendicular to the model point's normal direction, and establish the point-to-plane ICP optimization equation system from the model points to these planes;
step b: multiply both sides of each equation of the ICP optimization system established in step a by the corresponding joint probability;
step c: solve the ICP optimization system weighted by the joint probabilities in step b to obtain the coordinate transformation from the model point cloud to the scene point cloud.
Specifically, the point-to-plane ICP equations are a system of coordinate-transformation equations established from the model points and the scene points to which they are aligned. Each equation is established as follows: multiply the 3D coordinates of the model point by the 3D rotation matrix, add the 3D translation vector, subtract the 3D coordinates of the scene point to which the model point is aligned, and set the dot product of the resulting vector with the normal of the scene point equal to 0; that is, n · (R·m + t - s) = 0, where m and s are the model and scene points, R is the rotation matrix, t is the translation vector, and n is the scene point's normal. The unknowns are six: the three rotation angles forming the three-dimensional rotation matrix and the three components of the translation vector.
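The sketch below assembles and solves such a weighted system under the standard small-angle linearisation of point-to-plane ICP (R approximated as I plus a skew of the angle vector), which is one common way to solve it; the patent does not prescribe a particular solver, and the toy data are illustrative:

```python
# Sketch: one weighted point-to-plane ICP solve. Each equation
# n . (R m + t - s) = 0 is scaled on both sides by the pair's joint
# probability w, then the 6 unknowns (3 angles, 3 translations) are
# found by linear least squares under a small-angle approximation.
import numpy as np
from scipy.spatial.transform import Rotation

def weighted_point_to_plane(m, s, n, w):
    """m: model points (K,3); s: matched scene points (K,3);
    n: scene unit normals (K,3); w: joint probabilities (K,)."""
    A = np.hstack([np.cross(m, n), n])     # row i: [(m_i x n_i), n_i]
    b = np.einsum("ij,ij->i", n, s - m)    # row i: n_i . (s_i - m_i)
    x, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    R = Rotation.from_euler("xyz", x[:3]).as_matrix()  # small angles
    return R, x[3:]

# Toy check: recover a small known motion from exact correspondences.
rng = np.random.default_rng(2)
m = rng.random((100, 3))
R_true = Rotation.from_euler("xyz", [0.01, -0.02, 0.015]).as_matrix()
t_true = np.array([0.02, -0.01, 0.03])
s = m @ R_true.T + t_true
n = rng.normal(size=(100, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # random unit normals
w = np.full(100, 1.0 / 100)                     # uniform joint weights
R_est, t_est = weighted_point_to_plane(m, s, n, w)
```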
The above are merely embodiments of the present invention, described concretely and in detail, but they should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various changes and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.

Claims (9)

1. A point cloud fine matching method based on normal constraint, characterized by comprising the following steps:
step S1: coarsely matching the model point cloud to the scene point cloud using point-pair feature matching, obtaining the pose given by the coarse matching, and taking it as the initial value;
step S2: selecting a model point in the model point cloud, searching the scene point cloud with a KD-tree for the several scene points closest to the selected model point, and pairing the model point with these scene points;
step S3: computing the point-distance sequence and the normal-angle sequence between the model point and its paired scene points, deleting every pair whose point distance exceeds the specified distance limit or whose normal angle exceeds the specified angle limit, and taking the remaining pairs as candidate matching point pairs;
step S4: for the candidate pairs of each model point, computing from the point-distance sequence and the normal-angle sequence the probability that each scene point is the matching point of the model point;
step S5: for the candidate pairs of each model point, multiplying the point-distance matching probability by the corresponding normal-angle matching probability and dividing by the sum of these products over all candidate pairs, giving the joint probability that each scene point matches the model point when point distance and normal angle are considered together;
step S6: establishing, from all candidate matching pairs of the model and scene point clouds, the system of coordinate-transformation equations used by the ICP algorithm, one equation per candidate pair, with both sides of each equation multiplied by the joint probability of the scene point matching the model point as the equation's weight coefficient, and solving the system to obtain the optimal rotation matrix and translation from the model point cloud to the scene point cloud for the current iteration;
step S7: if the average deviation between the model point cloud coordinates after the current iteration and their initial values before the iteration is not below the given threshold, repeating steps S2 to S7 with the updated coordinates as the initial value for the next iteration; and if the average deviation is below the given threshold, ending the iteration.
2. The point cloud fine matching method based on normal constraint according to claim 1, wherein the pose in step S1 is a six-dimensional vector formed by the three rotation angles of the rotation matrix of the coordinate transformation that maps the model point cloud coordinates onto the scene point cloud coordinates of the aligned region, together with the three-dimensional translation vector.
3. The point cloud fine matching method based on normal constraint according to claim 1, wherein the normal angle in step S3 is the angle between the normal of the model point and the normal of the paired scene point.
4. The point cloud fine matching method based on normal constraint according to claim 1, wherein the method in step S4 for computing the probability that a scene point in the point-distance sequence is the matching point of the model point comprises the steps of:
step 1: computing the point distances between the selected model point and each of its candidate scene points, and summing them to obtain the total of all point distances;
step 2: dividing the point distance of each scene point to the model point by the total from step 1 to obtain the point-distance ratios;
step 3: subtracting each ratio from step 2 from 1 to obtain the matching probability of each pair when only the point distance is considered.
5. The point cloud fine matching method based on normal constraint according to claim 1, wherein the method in step S4 for computing the probability that a scene point in the normal-angle sequence is the matching point of the model point comprises the steps of:
step A: computing the normal angles between the selected model point and each of its candidate scene points, and summing them to obtain the total of all normal angles;
step B: dividing the normal angle of each scene point to the model point by the total from step A to obtain the normal-angle ratios;
step C: subtracting each ratio from step B from 1 to obtain the matching probability of each pair when only the normal angle is considered.
6. The point cloud fine matching method based on normal constraint according to claim 1, wherein the method in step S6 for obtaining the optimal rotation matrix and translation from the model point cloud to the scene point cloud comprises:
step a: for all candidate matching pairs of the model and scene point clouds, obtaining the plane that passes through the model point and is perpendicular to the model point's normal direction, and establishing the point-to-plane ICP optimization equation system from the model points to these planes;
step b: multiplying both sides of each equation of the ICP optimization system established in step a by the corresponding joint probability;
step c: solving the ICP optimization system weighted by the joint probabilities in step b to obtain the coordinate transformation from the model point cloud to the scene point cloud.
7. The point cloud fine matching method based on normal constraint according to claim 1, wherein the coordinate transformation in step S6 consists of two parts: a three-dimensional rotation matrix and a three-dimensional translation vector.
8. The point cloud fine matching method based on normal constraint according to claim 1, wherein in step S7, during the iterative alignment of the model with an object point cloud in the scene, the model point cloud coordinates are updated after each iteration by the coordinate transformation computed in that iteration, so that the position and attitude of the model points continually approach one model point cloud in the scene.
9. The point cloud fine matching method based on normal constraint according to claim 1, wherein the quantity compared with the threshold in step S7 is the average of the distances by which the model point coordinates change from before the iteration to after it.
CN202110685466.XA · priority 2021-06-21 · filed 2021-06-21 · Point cloud precision matching method based on normal constraint · Active · granted as CN113313200B

Priority Applications (1)

CN202110685466.XA · priority date 2021-06-21 · filing date 2021-06-21 · Point cloud precision matching method based on normal constraint (granted as CN113313200B)

Applications Claiming Priority (1)

CN202110685466.XA · priority date 2021-06-21 · filing date 2021-06-21 · Point cloud precision matching method based on normal constraint (granted as CN113313200B)

Publications (2)

Publication Number Publication Date
CN113313200A · 2021-08-27
CN113313200B · 2024-04-16

Family

ID=77379685

Family Applications (1)

CN202110685466.XA · Active · granted as CN113313200B (priority and filing date 2021-06-21)

Country Status (1)

Country Link
CN (1) CN113313200B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642681A (en) * 2021-10-13 2021-11-12 中国空气动力研究与发展中心低速空气动力研究所 Matching method of aircraft model surface mark points
CN114442101A (en) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463894A (en) * 2014-12-26 2015-03-25 山东理工大学 Overall registering method for global optimization of multi-view three-dimensional laser point clouds
CN110276790A (en) * 2019-06-28 2019-09-24 易思维(杭州)科技有限公司 Point cloud registration method based on shape constraining
CN111815686A (en) * 2019-04-12 2020-10-23 四川大学 Coarse-to-fine point cloud registration method based on geometric features
US20200342614A1 (en) * 2019-04-24 2020-10-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for point cloud registration, and computer readable medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463894A (en) * 2014-12-26 2015-03-25 山东理工大学 Overall registering method for global optimization of multi-view three-dimensional laser point clouds
CN111815686A (en) * 2019-04-12 2020-10-23 四川大学 Coarse-to-fine point cloud registration method based on geometric features
US20200342614A1 (en) * 2019-04-24 2020-10-29 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for point cloud registration, and computer readable medium
CN110276790A (en) * 2019-06-28 2019-09-24 易思维(杭州)科技有限公司 Point cloud registration method based on shape constraining

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642681A (en) * 2021-10-13 2021-11-12 中国空气动力研究与发展中心低速空气动力研究所 Matching method of aircraft model surface mark points
CN114442101A (en) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar
CN114442101B (en) * 2022-01-28 2023-11-14 南京慧尔视智能科技有限公司 Vehicle navigation method, device, equipment and medium based on imaging millimeter wave radar

Also Published As

Publication number Publication date
CN113313200B · 2024-04-16

Similar Documents

Publication Publication Date Title
CN108509848B (en) The real-time detection method and system of three-dimension object
CN109960402B (en) Virtual and real registration method based on point cloud and visual feature fusion
CN104933755B (en) A kind of stationary body method for reconstructing and system
CN112669359B (en) Three-dimensional point cloud registration method, device, equipment and storage medium
CN112017220A (en) Point cloud accurate registration method based on robust constraint least square algorithm
CN113313200B (en) Point cloud precision matching method based on normal constraint
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112200915B (en) Front-back deformation detection method based on texture image of target three-dimensional model
CN107492107B (en) Object identification and reconstruction method based on plane and space information fusion
CN112132752B (en) Fine splicing method for multi-view scanning point cloud of large complex curved surface
CN111612728A (en) 3D point cloud densification method and device based on binocular RGB image
CN111815686A (en) Coarse-to-fine point cloud registration method based on geometric features
CN113269094A (en) Laser SLAM system and method based on feature extraction algorithm and key frame
CN112257722A (en) Point cloud fitting method based on robust nonlinear Gaussian-Hummer model
CN110796691A (en) Heterogeneous image registration method based on shape context and HOG characteristics
CN111820545A (en) Method for automatically generating sole glue spraying track by combining offline and online scanning
CN117132630A (en) Point cloud registration method based on second-order spatial compatibility measurement
CN111429571B (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
CN114088081A (en) Map construction method for accurate positioning based on multi-segment joint optimization
CN116309026A (en) Point cloud registration method and system based on statistical local feature description and matching
CN113706381A (en) Three-dimensional point cloud data splicing method and device
JPH07103715A (en) Method and apparatus for recognizing three-dimensional position and attitude based on visual sense
CN111553410B (en) Point cloud identification method based on key point local curved surface feature histogram and spatial relationship
Makovetskii et al. An algorithm for rough alignment of point clouds in three-dimensional space
Hernandez et al. Puzzling engine: a digital platform to aid the reassembling of fractured fragments

Legal Events

Code · Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant