CN113362463A - Workpiece three-dimensional reconstruction method based on Gaussian mixture model - Google Patents

Workpiece three-dimensional reconstruction method based on Gaussian mixture model

Info

Publication number
CN113362463A
CN113362463A (application CN202110535760.2A)
Authority
CN
China
Prior art keywords
point cloud
model
gaussian mixture
point
workpiece
Prior art date
Legal status
Pending
Application number
CN202110535760.2A
Other languages
Chinese (zh)
Inventor
禹鑫燚
张毅凯
欧林林
程兆赢
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN202110535760.2A priority Critical patent/CN113362463A/en
Publication of CN113362463A publication Critical patent/CN113362463A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

A workpiece three-dimensional reconstruction method based on a Gaussian mixture model comprises four stages: multi-view point cloud data acquisition, point cloud preprocessing, point cloud registration, and curved surface reconstruction. A depth camera collects point cloud data, and a rotating platform is used to obtain multi-view information of the workpiece. The acquired data then undergo preprocessing such as target extraction. Finally, the workpiece point clouds are registered into a full-view point cloud model, on which curved surface reconstruction produces a three-dimensional model close to the real object. The invention has the advantages of simple implementation and high reconstruction speed.

Description

Workpiece three-dimensional reconstruction method based on Gaussian mixture model
Technical Field
The invention relates to the technical field of 3D vision, in particular to a workpiece three-dimensional reconstruction method based on a Gaussian mixture model.
Background
Demand for three-dimensional reconstruction technology keeps growing in fields such as data visualization, medical technology, and industry. Taking an industrial production scenario as an example: on a spray-painting line, a three-dimensional reconstruction model of the workpiece to be processed can be used to automatically plan the spraying trajectory, which is then handed to a robot for execution. This reduces manual operation and improves working efficiency and the degree of automation while making the spraying operation safer.
Driven by advances in visual sensor technology, various three-dimensional reconstruction methods have developed rapidly. However, three-dimensional reconstruction from monocular two-dimensional images has large errors and is easily affected by camera parameters, while reconstruction based on deep learning requires a large number of samples and high-performance equipment for model training. The workpiece three-dimensional reconstruction method based on a Gaussian mixture model instead starts from point cloud data obtained by a 3D vision sensor and performs registration and curved surface reconstruction on multi-view point clouds of the target workpiece to obtain a three-dimensional model.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a workpiece three-dimensional reconstruction method based on a Gaussian mixture model.
A three-dimensional reconstruction method of a workpiece based on a Gaussian mixture model comprises the following steps:
Step 1: multi-view point cloud data acquisition, namely acquiring scene point cloud data containing the target workpiece at multiple angles. This step collects point cloud data with a fused binocular depth camera, which obtains depth data based on the binocular stereo imaging principle and the infrared structured light ranging principle. The target workpiece to be scanned is placed on a rotating platform with a controllable rotation angle, with a fixed relative position between the depth camera and the platform. Starting from an initial pose, the platform rotates in steps of a fixed angle to expose the target workpiece from multiple angles. The depth camera scans the scene at each angle and transmits the data back to the computer, where the point clouds are stored chronologically as PCD files named View1, View2, View3, ..., ViewN.
Step 2: point cloud preprocessing, namely removing irrelevant data from the multi-view scene point clouds View1, View2, ..., ViewN acquired in step 1 and extracting the target workpiece point cloud data. The specific steps are as follows:
step 2-1: setting ROI parameters according to the relative positions of the depth camera and the rotary platform in the step 1, and carrying out ROI region segmentation screening on space points in the complex scene point clouds View1, View2 and View 3. And preliminarily segmenting small scene point clouds only comprising three parts of the ground, a rotating platform and a workpiece.
Step 2-2: the point cloud data set in the small scene point cloud obtained in the previous step is expressed as:
A = \{a_1, a_2, a_3, \ldots, a_n\}
Perform plane fitting in the point cloud set A with the random sample consensus (RANSAC) algorithm: RANSAC fits the plane in the small scene and its parameters, dividing the points of A into in-plane points and out-of-plane points. The subscript indexes of both groups are recorded, and the in-plane points are removed according to these indexes. Once RANSAC yields the plane parameters, the position of the plane in the small scene point cloud is determined. The height of the rotating platform is measured as H, and all points within height H above the plane are removed as well; this eliminates the rotating-platform point cloud and yields preliminary target workpiece data.
Step 2-3: the preliminary target workpiece data obtained in step 2-2 may still contain outliers left by imperfections of the preceding algorithms, as well as surface burr noise and edge noise introduced when the 3D vision sensor acquires data. Therefore, the StatisticalOutlierRemoval filter in the PCL point cloud library is applied to this result to remove outliers and surface noise. Finally, the multi-view target workpiece point clouds Obj1, Obj2, Obj3, ..., ObjN are extracted from View1, View2, ..., ViewN.
Step 3: point cloud registration based on the Gaussian mixture model. Obj1, Obj2, ..., ObjN obtained in step 2-3 are registered pairwise to obtain transformation matrices, and global stitching yields a complete point cloud model.
Step 3-1: establish a Gaussian mixture model for the point clouds of two adjacent viewing angles. Select two adjacent-view point clouds to be registered from Obj1, Obj2, ..., ObjN. The Gaussian continuous probability density function is:
N(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^{T} \Sigma^{-1} (x - \mu)\right) \quad (1)
where μ is the mean vector, Σ is the covariance matrix, and d is the dimensionality of the data.
The Gaussian mixture model is established according to the following criteria:
S1, the number of Gaussian components in the Gaussian mixture model is equal to the number of points in the point cloud data set.
S2, the mean vector of each Gaussian component is set according to the spatial position of its point.
S3, all Gaussian components in the Gaussian mixture model share the same covariance matrix.
Finally, all gaussian components as described above are added with the same weight, which results in:
gmm(x) = \sum_{i=1}^{n} w_i \, N(x; \mu_i, \Sigma) \quad (2)
where w_i are the weight coefficients of the Gaussian mixture model. Gaussian mixture models gmm(S) and gmm(M) are established for the Scene and the Model according to the above rules, where gmm denotes the functional relation of equation (2), the input S denotes the Scene point cloud, and M denotes the Model point cloud.
Step 3-2: establishing a transformation matrix between two point clouds with a parameter theta, wherein the Model point cloud after parameter transformation is expressed as Transform (M, theta), and a Gaussian mixture Model of the Model point cloud can be expressed as gmm (M, theta), wherein the Transform represents a function for performing corresponding rigid body transformation according to the transformation matrix with the parameter theta;
step 3-3: and performing difference square integration on the two Gaussian mixture models to establish a differentiation objective function:
\int \left( gmm(S) - gmm(\mathrm{Transform}(M, \theta)) \right)^2 dx \quad (3)
The fixed rotation angle used in step 1 is taken as the initial value of the parameter theta; iterative optimization with the Gauss-Newton algorithm minimizes the objective function, and the resulting value of theta is recorded. The transformation matrix T is then calculated from the parameter values.
Step 3-4: Obj1 is taken as the registration reference, and the coordinate system of Obj1 as the reference coordinate system. Following step 3-3, registering Obj1 and Obj2 (Obj1 taken as Scene, Obj2 as Model) yields the transformation matrix T12, and Obj2 is transformed into the reference coordinate system by T12. Gaussian-mixture-model registration of Obj2 and Obj3 yields T23, and Obj3 is transformed into the reference coordinate system by the matrix T12*T23. Registration is computed for each pair of adjacent views in turn, every view is transformed into the reference coordinate system by the chained matrices, and the multi-view point clouds are stitched into a full-view point cloud model.
Step 4: curved surface reconstruction with the greedy projection triangulation algorithm, reconstructing a curved surface from the scattered point cloud on the model surface.
Step 4-1: establish a kd-tree spatial structure index for the full-view point cloud model to accelerate point cloud queries, and search the K neighborhood of a target point with the kd-tree structure.
Step 4-2: selecting an initial triangle from the point cloud set, acquiring a K neighborhood of the midpoint of a growing side of the triangle, and projecting points in the neighborhood to a two-dimensional plane.
Step 4-3: and selecting a projection point of an included angle with the minimum cosine value formed by the growth side in the two-dimensional plane as an optimal expansion point.
Step 4-4: and mapping the optimal expansion points back to the three-dimensional space to form a new triangle in the triangular mesh.
Step 4-5: repeat steps 4-2 to 4-4, continuously generating new triangles until all points on the surface of the point cloud model form a complete triangular mesh surface.
The invention has the advantages that:
1. the used data acquisition system is simple to build and obtains comprehensive information.
2. The algorithm flow is clear and concise.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
FIG. 2-a is a small scene point cloud of ROI preliminary segmentation of the present invention.
FIG. 2-b is the target extraction result point cloud data of the present invention.
Fig. 3 is a working process of the point cloud registration step of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
A workpiece three-dimensional reconstruction method based on a Gaussian mixture model, whose workflow is shown in FIG. 1, comprises 4 steps: multi-view point cloud data acquisition, point cloud preprocessing, point cloud registration, and curved surface reconstruction. FIG. 2-a shows the small scene point cloud data after ROI segmentation, and FIG. 2-b shows the extracted target workpiece point cloud data after removing the plane. FIG. 3 shows the three-view registration process in point cloud registration.
Referring to fig. 1-3, and taking a fender workpiece of a front wheel of an electric vehicle as an example, the specific embodiment of the invention is as follows:
Step 1: multi-view point cloud data acquisition. Working scene point cloud data were acquired with an Intel RealSense D435 depth camera, placed on a 1.8 m tripod and connected to a USB 3.0 port of the computer with an extended USB 3.0 data cable. The workpiece is placed on the rotating platform (its initial pose may directly face the depth camera), and the ROI parameters between the platform and the camera are measured and recorded. The platform's rotation step (45 degrees in this example) is set and controlled by a motor; each time the platform advances 45 degrees, the depth camera captures the scene point cloud at that angle and sends it to the computer, until the workpiece has rotated a full 360 degrees.
The depth camera and the computer exchange images and point cloud data over the USB connection. The computer uses Ubuntu 16.04 + ROS Kinetic + PCL as the software platform for data collection, processing, and presentation.
The community-supported realsense-ros package publishes the point cloud information transmitted from the RealSense depth camera to the computer. The corresponding topic name can be looked up with rostopic and subscribed to; the published sensor_msgs/PointCloud2 messages are converted with the fromROSMsg function provided by pcl_conversions into PCL-format point clouds, which are exported to PCD files through PCL IO operations and stored on disk as View1, View2, ..., ViewN.
Step 2: point cloud preprocessing is carried out, and target workpiece point cloud data are extracted.
Step 2-1: set ROI parameters according to the relative positions of the depth camera and the rotating platform in step 1, and perform ROI region segmentation and screening on the spatial points in the complex scene point clouds View1, View2, ..., ViewN, i.e. select the three-dimensional points whose coordinates (x, y, z) lie inside the ROI region. This preliminarily segments a small scene point cloud containing only three parts: the ground, the rotating platform, and the workpiece. The segmentation effect is shown in FIG. 2-a.
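The ROI screening above reduces to a box test on each point's coordinates. A minimal pure-Python sketch; the box bounds are illustrative values, not the parameters measured in this embodiment:

```python
# Sketch of the ROI segmentation in step 2-1: keep only the points whose
# (x, y, z) coordinates fall inside an axis-aligned box measured relative
# to the rotating platform. Bounds below are made-up illustration values.

def roi_crop(points, x_range, y_range, z_range):
    """Return the points lying inside the axis-aligned ROI box."""
    (x0, x1), (y0, y1), (z0, z1) = x_range, y_range, z_range
    return [p for p in points
            if x0 <= p[0] <= x1 and y0 <= p[1] <= y1 and z0 <= p[2] <= z1]

scene = [(0.1, 0.2, 0.5), (2.0, 0.0, 0.5), (0.0, 0.1, 3.0)]
small_scene = roi_crop(scene, (-1, 1), (-1, 1), (0, 2))
print(small_scene)  # only the first point lies inside the box
```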
Step 2-2: and (4) carrying out surface removing treatment on the basis of 2-1, and further removing the surface and the rotating platform. And expressing the point cloud data set in the small scene point cloud obtained in the step 2-1 as follows:
A = \{a_1, a_2, a_3, \ldots, a_n\}
Perform plane fitting in the point cloud set A with the random sample consensus (RANSAC) algorithm: RANSAC fits the plane in the small scene and its parameters, dividing the points of A into points on the plane and points off the plane. The subscripts of both groups are recorded, and the plane points are removed according to these subscripts. Once RANSAC yields the plane parameters, the position of the plane in the small scene point cloud is determined. The height of the rotating platform is measured as H, and all points within height H above the plane are removed as well; this eliminates the rotating-platform data and yields preliminary target workpiece data.
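The de-planing of step 2-2 can be sketched in pure Python. This is a stand-in for PCL's RANSAC segmentation, not the patent's implementation; the tolerance, iteration count, and platform height H below are illustrative values:

```python
import random

# Sketch of step 2-2: RANSAC fits a plane n.p + d = 0 from random point
# triples, then the plane inliers and every point within height H of the
# plane (the rotating platform) are discarded.

def fit_plane(p1, p2, p3):
    # Plane through three points: normal = (p2 - p1) x (p3 - p1).
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0.0:
        return None  # degenerate (collinear) sample
    n = tuple(c / norm for c in n)
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_deplane(points, H, iters=100, tol=0.01, seed=0):
    rng = random.Random(seed)
    best_plane, best_count = None, 0
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        count = sum(abs(sum(n[i] * p[i] for i in range(3)) + d) < tol
                    for p in points)
        if count > best_count:
            best_plane, best_count = plane, count
    n, d = best_plane
    # Keep only points farther than H from the fitted plane (workpiece points).
    return [p for p in points
            if abs(sum(n[i] * p[i] for i in range(3)) + d) > H]

ground = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
workpiece = [(0.2, 0.2, 0.5), (0.3, 0.2, 0.6)]
print(ransac_deplane(ground + workpiece, H=0.2))  # only the workpiece points remain
```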
Step 2-3: the preliminary target workpiece data obtained in step 2-2 contain outliers left by imperfections of the preceding algorithms, as well as surface burr noise and edge noise introduced when the 3D vision sensor acquires data, which would cause errors in the subsequent steps. Therefore, the StatisticalOutlierRemoval filter in the PCL library is applied to this result to remove outliers and surface noise. Finally, the multi-view target workpiece point clouds Obj1, Obj2, Obj3, ..., ObjN are extracted from View1, View2, ..., ViewN. The extracted target workpiece data is shown in FIG. 2-b.
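PCL's StatisticalOutlierRemoval follows a mean-distance criterion; a minimal pure-Python sketch of the same behaviour (k and the standard-deviation multiplier are illustrative choices):

```python
import math

# Sketch of the statistical outlier removal in step 2-3: for each point,
# compute the mean distance to its k nearest neighbours, then drop points
# whose mean distance exceeds (global mean + mult * global std deviation).

def statistical_outlier_removal(points, k=3, mult=1.0):
    mean_d = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_d.append(sum(ds[:k]) / k)
    mu = sum(mean_d) / len(mean_d)
    sigma = (sum((d - mu) ** 2 for d in mean_d) / len(mean_d)) ** 0.5
    threshold = mu + mult * sigma
    return [p for p, d in zip(points, mean_d) if d <= threshold]

cluster = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
           (0.1, 0.1, 0.0), (0.0, 0.0, 0.1)]
outlier = (5.0, 5.0, 5.0)
print(statistical_outlier_removal(cluster + [outlier]))  # the far point is dropped
```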
Step 3: point cloud registration. Obj1, Obj2, ..., ObjN obtained in step 2-3 are registered pairwise and globally stitched into a complete model.
Step 3-1: register the point clouds of two adjacent viewing angles, Obj1 and Obj2, where Obj1 is the target point cloud Scene and Obj2 is the point cloud Model to be registered. The Gaussian continuous probability density function is:
N(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^{T} \Sigma^{-1} (x - \mu)\right) \quad (1)
where μ is the mean vector, Σ is the covariance matrix, and d is the dimensionality of the data.
The Gaussian mixture models of Obj1 and Obj2 were established according to the following criteria:
1) the number of Gaussian components in the Gaussian mixture model is equal to the number of points in the point cloud data set.
2) For each gaussian component in the gaussian mixture model, its mean vector is set according to the spatial position of the point.
3) All gaussian components in the gaussian mixture model share the same covariance matrix.
Finally, all gaussian components as described above are added with the same weight, which results in:
gmm(x) = \sum_{i=1}^{n} w_i \, N(x; \mu_i, \Sigma) \quad (2)
where w_i is the weight coefficient of each Gaussian component in the mixture. Gaussian mixture models gmm(S) and gmm(M) are established for the Scene and the Model according to the above rules, where gmm denotes the functional relation of equation (2), the input S denotes the Scene point cloud, and M denotes the Model point cloud.
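The mixture construction of step 3-1 (equations 1 and 2) can be sketched directly: one component per point, mean at the point, a shared isotropic covariance sigma^2 * I, and equal weights 1/n. The value of sigma is an illustrative choice, since the patent does not fix the shared covariance:

```python
import math

# Sketch of the Gaussian mixture of Eqs. (1)-(2): each point contributes one
# Gaussian component centred on it; all components share sigma^2 * I and
# weight 1/n. For Sigma = sigma^2 * I, |Sigma|^(1/2) = sigma^d.

def make_gmm(points, sigma=0.1):
    n = len(points)
    d = len(points[0])
    norm = 1.0 / ((2 * math.pi) ** (d / 2) * sigma ** d)
    def gmm(x):
        total = 0.0
        for mu in points:
            sq = sum((x[i] - mu[i]) ** 2 for i in range(d))
            total += norm * math.exp(-0.5 * sq / sigma ** 2)
        return total / n  # equal weights w_i = 1/n
    return gmm

density = make_gmm([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0)])
print(density((0.0, 0.0, 0.0)) > density((1.0, 1.0, 1.0)))  # higher near the points
```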
Step 3-2: establishing a transformation matrix between two point clouds with a parameter theta, wherein the Model point cloud after parameter transformation is expressed as Transform (M, theta), and a Gaussian mixture Model of the Model point cloud can be expressed as gmm (M, theta), wherein the Transform represents a function for performing corresponding rigid body transformation according to the transformation matrix with the parameter theta;
step 3-3: and performing difference square integration on the two Gaussian mixture models to establish a differentiation objective function:
\int \left( gmm(S) - gmm(\mathrm{Transform}(M, \theta)) \right)^2 dx \quad (3)
The fixed rotation angle used in step 1 (45 degrees in this example) is taken as the initial value of the parameter theta; iterative optimization with the Gauss-Newton algorithm finds the parameter value at which the objective function is minimal, and the transformation matrix T is calculated from the parameter values.
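To illustrate the objective of equation (3), the sketch below approximates the squared L2 distance between the two mixture densities on a sample grid and recovers a planar rotation by coarse search around the 45-degree initial value. The Gauss-Newton iterations of the patent are replaced here by this simple search for clarity; points, grid, and sigma are illustrative:

```python
import math

# Sketch of step 3-3: the registration objective is the squared difference
# of the Scene and transformed-Model mixture densities, integrated (here:
# summed over a coarse grid). Minimizing over theta recovers the rotation.

def gmm_density(points, x, sigma=0.3):
    s = sum(math.exp(-sum((x[i] - p[i]) ** 2 for i in range(2)) / (2 * sigma ** 2))
            for p in points)
    return s / (len(points) * 2 * math.pi * sigma ** 2)

def rotate(points, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def objective(scene, model, theta, grid):
    moved = rotate(model, theta)
    return sum((gmm_density(scene, x) - gmm_density(moved, x)) ** 2 for x in grid)

scene = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
model = rotate(scene, -math.pi / 4)          # scene rotated by -45 degrees
grid = [(i * 0.5, j * 0.5) for i in range(-4, 5) for j in range(-4, 5)]
thetas = [math.radians(t) for t in range(0, 91, 5)]
best = min(thetas, key=lambda t: objective(scene, model, t, grid))
print(round(math.degrees(best)))  # the 45-degree rotation realigns the clouds
```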
Step 3-4: Obj1 is taken as the registration reference, and its coordinate system as the reference coordinate system. Registration yields the transformation matrix T12 (the transformation from Obj2 into Obj1's frame), and Obj2 is transformed into the reference coordinate system by T12. Gaussian-mixture-model registration of Obj2 and Obj3 yields T23, and Obj3 is transformed into the reference coordinate system by the matrix T12*T23. Registration is computed for each pair of adjacent views in turn, every view is transformed into the reference coordinate system, and the multi-view point clouds are stitched into the full-view point cloud model.
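The chaining T12*T23 used in step 3-4 is ordinary composition of homogeneous transforms. A small sketch with made-up translations standing in for real registration results:

```python
# Sketch of the global stitching in step 3-4: pairwise 4x4 homogeneous
# transforms are chained, so Obj3 reaches Obj1's reference frame via
# T12 * T23. The translations below are illustration values only.

def matmul4(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply4(T, p):
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[i][k] * v[k] for k in range(4)) for i in range(3))

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

T12 = translation(1, 0, 0)   # maps Obj2 into Obj1's frame
T23 = translation(0, 2, 0)   # maps Obj3 into Obj2's frame
T13 = matmul4(T12, T23)      # maps Obj3 directly into Obj1's frame
print(apply4(T13, (0, 0, 0)))  # -> (1.0, 2.0, 0.0)
```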
Step 4: curved surface reconstruction, namely reconstructing a curved surface from the scattered point cloud on the model surface with the greedy projection triangulation algorithm.
Step 4-1: establish a kd-tree spatial structure index for the full-view point cloud model, and search the K neighborhood of a target point with the kd-tree structure, where the value of K can be adjusted according to the actual situation and the reconstruction effect.
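The K-neighbourhood query that the kd-tree accelerates can be defined by a brute-force reference implementation; PCL answers the same query with a kd-tree in logarithmic time per lookup, and the point set here is illustrative:

```python
import heapq
import math

# Brute-force reference for the K-neighbourhood query of step 4-1: return
# the k points closest to the target, nearest first. A kd-tree returns the
# same answer faster on large clouds.

def k_neighbors(points, target, k):
    return heapq.nsmallest(k, (p for p in points if p != target),
                           key=lambda p: math.dist(p, target))

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (5.0, 0.0, 0.0)]
print(k_neighbors(pts, (0.0, 0.0, 0.0), 2))  # the two nearest points on the line
```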
Step 4-2: selecting an initial triangle from the point cloud set, acquiring a K neighborhood of the midpoint of a growing side of the triangle, and projecting points in the neighborhood to a two-dimensional plane.
Step 4-3: and selecting a projection point of an included angle with the minimum cosine value formed by the growth side in the two-dimensional plane as an optimal expansion point.
Step 4-4: and mapping the optimal expansion points back to the three-dimensional space to form a new triangle in the triangular mesh.
Step 4-5: repeat steps 4-2 to 4-4, continuously generating new triangles until all points on the surface of the point cloud model form a complete triangular mesh surface.
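Step 4-3's choice of expansion point (the projected neighbour whose angle subtended by the growth edge has the smallest cosine, i.e. the widest angle) can be sketched in 2D, taking the neighbourhood as already projected onto the local plane; the candidate points are illustrative:

```python
import math

# Sketch of step 4-3: among the projected neighbours, pick the candidate c
# whose angle (a, c, b) subtended by the growth edge ab is widest, i.e.
# whose cosine is smallest.

def best_expansion_point(a, b, candidates):
    def cos_angle(c):
        ux, uy = a[0] - c[0], a[1] - c[1]
        vx, vy = b[0] - c[0], b[1] - c[1]
        return (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return min(candidates, key=cos_angle)

a, b = (0.0, 0.0), (1.0, 0.0)
cands = [(0.5, 0.1), (0.5, 1.0), (2.0, 2.0)]
print(best_expansion_point(a, b, cands))  # the point seeing edge ab at the widest angle
```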
It should be emphasized that the embodiments described herein are illustrative and not restrictive, and thus the present invention includes, but is not limited to, the embodiments described in the detailed description, and that other embodiments similar to those described herein may be made by those skilled in the art without departing from the scope of the present invention.

Claims (2)

1. A three-dimensional reconstruction method of a workpiece based on a Gaussian mixture model comprises the following steps:
step 1: multi-view point cloud data acquisition, namely collecting scene point cloud data containing the target workpiece at multiple angles; point cloud data are collected with a fused binocular depth camera, which obtains depth data based on the binocular stereo imaging principle and the infrared structured light ranging principle; the target workpiece to be scanned is placed on a rotating platform with a controllable rotation angle, with a fixed relative position between the depth camera and the platform; starting from an initial pose, the platform rotates in steps of a fixed angle to obtain multi-angle information of the target workpiece; the depth camera scans the scene at each angle and transmits the data back to the computer, and the point clouds are stored chronologically as PCD files named View1, View2, View3, ..., ViewN;
step 2: point cloud preprocessing, namely removing irrelevant data from the multi-view scene point clouds View1, View2, ..., ViewN collected in step 1 and extracting the target workpiece; the specific steps are as follows:
step 2-1: setting ROI parameters according to the relative positions of the depth camera and the rotating platform in step 1, and performing ROI region segmentation and screening on the spatial points in the complex scene point clouds View1, View2, ..., ViewN, i.e. selecting the three-dimensional points whose coordinates (x, y, z) lie inside the ROI region; preliminarily segmenting a small scene point cloud containing only three parts: the ground, the rotating platform, and the workpiece;
step 2-2: expressing the point cloud data set in the small scene point cloud obtained in the previous step as:
A = \{a_1, a_2, a_3, \ldots, a_n\}
performing plane fitting in the point cloud set A with the random sample consensus (RANSAC) algorithm: RANSAC fits the plane in the small scene and its parameters, dividing the points of A into in-plane points and out-of-plane points; recording the subscript indexes of both groups and removing the in-plane points according to these indexes; once RANSAC yields the plane parameters, the position of the plane in the small scene point cloud is determined; the height of the rotating platform is measured as H, and all points within height H above the plane are removed as well, eliminating the rotating-platform point cloud and yielding preliminary target workpiece point cloud data;
step 2-3: the preliminary target workpiece point cloud data obtained in step 2-2 contain outliers left by imperfections of the preceding algorithms, as well as surface burr noise and edge noise introduced when the 3D vision sensor acquires data; therefore, the StatisticalOutlierRemoval filter in the PCL point cloud library is applied to this result to remove outliers and surface noise; the multi-view target workpiece point clouds Obj1, Obj2, Obj3, ..., ObjN are extracted from View1, View2, ..., ViewN;
step 3: point cloud registration based on the Gaussian mixture model; Obj1, Obj2, ..., ObjN obtained in step 2-3 are registered pairwise to obtain transformation matrices, and global stitching yields a complete point cloud model;
step 3-1: establishing a Gaussian mixture model for the point clouds of two adjacent viewing angles; selecting two adjacent-view point clouds to be registered from Obj1, Obj2, ..., ObjN, setting the target point cloud as Scene and the point cloud to be registered as Model; the Gaussian continuous probability density function is:
N(x; \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^{T} \Sigma^{-1} (x - \mu)\right) \quad (1)
wherein mu is a mean vector, sigma is a covariance matrix, and d is a data dimension;
the Gaussian mixture model is established according to the following criteria:
S1, the number of Gaussian components in the Gaussian mixture model is equal to the number of points in the point cloud data set;
S2, the mean vector of each Gaussian component is set according to the spatial position of its point;
S3, all Gaussian components in the Gaussian mixture model share the same covariance matrix;
finally, all gaussian components as described above are added with the same weight, which results in:
gmm(x) = \sum_{i=1}^{n} w_i \, N(x; \mu_i, \Sigma) \quad (2)
where w_i are the weight coefficients of the Gaussian mixture model; Gaussian mixture models gmm(S) and gmm(M) are established for the Scene and the Model according to the above rules, where gmm denotes the functional relation of equation (2), the input S denotes the Scene point cloud, and M denotes the Model point cloud;
step 3-2: establishing a transformation matrix between two point clouds with a parameter theta, wherein the Model point cloud after parameter transformation is expressed as Transform (M, theta), and a Gaussian mixture Model of the Model point cloud can be expressed as gmm (M, theta), wherein the Transform represents a function for performing corresponding rigid body transformation according to the transformation matrix with the parameter theta;
step 3-3: and performing difference square integration on the two Gaussian mixture models to establish a differentiation objective function:
\int \left( gmm(S) - gmm(\mathrm{Transform}(M, \theta)) \right)^2 dx \quad (3)
taking the fixed rotation angle used in step 1 as the initial value of the parameter theta; performing iterative optimization with the Gauss-Newton algorithm to minimize the objective function and recording the resulting value of theta; calculating the transformation matrix T from the parameter values;
step 3-4: taking Obj1 as the registration reference and the coordinate system of Obj1 as the reference coordinate system; following step 3-3, registering Obj1 and Obj2 (Obj1 taken as Scene, Obj2 as Model) yields the transformation matrix T12, and Obj2 is transformed into the reference coordinate system by T12; Gaussian-mixture-model registration of Obj2 and Obj3 yields T23, and Obj3 is transformed into the reference coordinate system by the matrix T12*T23; registration is computed for each pair of adjacent views in turn, every view is transformed into the reference coordinate system by the transformation matrices, and the multi-view point clouds are stitched into a full-view point cloud model;
step 4: curved surface reconstruction with the greedy projection triangulation algorithm; performing curved surface reconstruction on the scattered point cloud of the model surface;
step 4-1: establishing a kd-tree spatial structure index for the full view point cloud model, and accelerating the point cloud query speed; searching a K neighborhood of the target point by using a kd-tree structure;
step 4-2: selecting an initial triangle from the point cloud set, obtaining the K neighborhood of the midpoint of a growing edge of the triangle, and projecting the points in that neighborhood onto a two-dimensional plane;
step 4-3: among the projected points in the two-dimensional plane, selecting as the optimal expansion point the one whose angle subtended by the growing edge has the minimum cosine value (i.e., the largest angle);
step 4-4: mapping the optimal expansion point back into three-dimensional space to form a new triangle of the triangular mesh;
step 4-5: repeating steps 4-2 to 4-4, continually generating new triangles until all points on the surface of the point cloud model form a complete triangular mesh surface.
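The core of steps 4-2 and 4-3 (projecting the edge neighborhood onto a plane and choosing the minimum-cosine, i.e. largest-angle, candidate) can be sketched as below; the local plane normal is assumed to be given, whereas in practice it would be estimated from the neighborhood:

```python
import numpy as np

def best_expansion_point(a, b, candidates, normal):
    """Given growing edge (a, b), project the K-neighbourhood `candidates`
    onto the plane through `a` with the given `normal`, then return the
    index of the candidate whose angle subtended by the edge has the
    minimum cosine (the largest angle)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    def project(p):
        # Remove the component of (p - a) along the normal.
        p = np.asarray(p, float)
        return p - np.outer((p - a) @ n, n)
    a2 = np.asarray(a, float)                 # a lies on the plane by construction
    b2 = project(np.asarray(b, float)[None])[0]
    cand = project(np.asarray(candidates, float))
    u, v = a2 - cand, b2 - cand               # vectors from candidate to endpoints
    cos = (u * v).sum(-1) / (np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    return int(np.argmin(cos))
```

A candidate close to the edge sees the edge under a wide angle (cosine near -1) and wins over a distant candidate that sees it under a narrow angle.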
2. The Gaussian-mixture-model-based workpiece three-dimensional reconstruction method according to claim 1, wherein the multi-view point cloud acquisition in step 1 uses only one depth camera, and multi-view information of the workpiece is obtained by means of a rotating platform.
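The single-camera, rotating-platform setup of claim 2 is what supplies the fixed initial rotation used in step 3-3: the known platform step angle gives an initial guess for the pairwise transform. A sketch (a z-axis platform orientation and zero translation are assumptions of this illustration):

```python
import numpy as np

def turntable_initial_transform(step_deg):
    """Known turntable step angle -> initial 4x4 rigid transform guess
    (rotation about the assumed z-axis of the platform, no translation)."""
    a = np.deg2rad(step_deg)
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T
```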
CN202110535760.2A 2021-05-17 2021-05-17 Workpiece three-dimensional reconstruction method based on Gaussian mixture model Pending CN113362463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110535760.2A CN113362463A (en) 2021-05-17 2021-05-17 Workpiece three-dimensional reconstruction method based on Gaussian mixture model

Publications (1)

Publication Number Publication Date
CN113362463A true CN113362463A (en) 2021-09-07

Family

ID=77526778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110535760.2A Pending CN113362463A (en) 2021-05-17 2021-05-17 Workpiece three-dimensional reconstruction method based on Gaussian mixture model

Country Status (1)

Country Link
CN (1) CN113362463A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264567A (en) * 2019-06-19 2019-09-20 南京邮电大学 A kind of real-time three-dimensional modeling method based on mark point
US20190319851A1 (en) * 2018-04-11 2019-10-17 Nvidia Corporation Fast multi-scale point cloud registration with a hierarchical gaussian mixture
CN112308961A (en) * 2020-11-05 2021-02-02 湖南大学 Robot rapid robust three-dimensional reconstruction method based on layered Gaussian mixture model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAN LIU et al.: "3-D Point Cloud Registration Algorithm Based on Greedy Projection Triangulation", Applied Sciences, 30 September 2018 (2018-09-30) *
LIN Guichao et al.: "Point cloud registration fusing Gaussian mixture model and point-to-plane distance", Journal of Computer-Aided Design & Computer Graphics, vol. 30, no. 4, 30 April 2018 (2018-04-30) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888612A (en) * 2021-09-18 2022-01-04 北京市农林科学院信息技术研究中心 Animal point cloud multi-view real-time acquisition and 3D reconstruction method, device and system
CN114299079A (en) * 2021-12-07 2022-04-08 北京航空航天大学 Dense point cloud data-oriented engine blade section line data acquisition method
CN114299079B (en) * 2021-12-07 2024-05-28 北京航空航天大学 Dense point cloud data-oriented engine blade section line data acquisition method
WO2024114038A1 (en) * 2022-12-02 2024-06-06 先临三维科技股份有限公司 Scanned data processing method and apparatus, device, and medium
CN117974741A (en) * 2024-04-01 2024-05-03 北京理工大学长三角研究院(嘉兴) 360-Degree point cloud depth zone triangulation composition method, device and system

Similar Documents

Publication Publication Date Title
CN111325843B (en) Real-time semantic map construction method based on semantic inverse depth filtering
CN109816664B (en) Three-dimensional point cloud segmentation method and device
CN113362463A (en) Workpiece three-dimensional reconstruction method based on Gaussian mixture model
CN110340891B (en) Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
CN107063228B (en) Target attitude calculation method based on binocular vision
CN107886528B (en) Distribution line operation scene three-dimensional reconstruction method based on point cloud
CN112862878B (en) Mechanical arm blank repairing method based on 3D vision
CN111696210A (en) Point cloud reconstruction method and system based on three-dimensional point cloud data characteristic lightweight
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN109272524B (en) Small-scale point cloud noise denoising method based on threshold segmentation
CN102411779B (en) Image-based object model matching posture measurement method
CN108428255A (en) A kind of real-time three-dimensional method for reconstructing based on unmanned plane
CN112801977B (en) Assembly body part relative pose estimation and monitoring method based on deep learning
CN111724433A (en) Crop phenotype parameter extraction method and system based on multi-view vision
CN106709950A (en) Binocular-vision-based cross-obstacle lead positioning method of line patrol robot
CN113628263A (en) Point cloud registration method based on local curvature and neighbor characteristics thereof
CN114332348B (en) Track three-dimensional reconstruction method integrating laser radar and image data
CN112669385A (en) Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics
CN111476841A (en) Point cloud and image-based identification and positioning method and system
CN112712589A (en) Plant 3D modeling method and system based on laser radar and deep learning
CN111340834B (en) Lining plate assembly system and method based on laser radar and binocular camera data fusion
CN111798453A (en) Point cloud registration method and system for unmanned auxiliary positioning
CN113536959A (en) Dynamic obstacle detection method based on stereoscopic vision
CN116476070B (en) Method for adjusting scanning measurement path of large-scale barrel part local characteristic robot
CN110634160B (en) Method for constructing target three-dimensional key point extraction model and recognizing posture in two-dimensional graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination