CN115512137A - Random stacked workpiece positioning method based on point cloud pose estimation - Google Patents

Random stacked workpiece positioning method based on point cloud pose estimation

Info

Publication number
CN115512137A
Authority
CN
China
Prior art keywords
point
model
pose
points
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211183631.2A
Other languages
Chinese (zh)
Inventor
王化明
杨雪峰
郝琳博
徐轲
沈颖
曹文卓
叶剑锋
陈亚卿
洪颖
吕享
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202211183631.2A priority Critical patent/CN115512137A/en
Publication of CN115512137A publication Critical patent/CN115512137A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern by matching or filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a random stacked workpiece positioning method based on point cloud pose estimation, comprising an offline training part and an online matching part. The model point cloud is obtained by digital modeling, and the scene point cloud is acquired by a depth camera. In the offline training stage, normal reorientation, quantitative sampling and weight calculation are performed on the model data to improve feature discriminability and dispersion. Online matching is divided into coarse registration and fine registration: in the coarse registration stage, scene point feature descriptors are computed, matching relations with model points are searched in the pre-training result, and the coarse poses of multiple instances in the scene are computed by voting; in the fine registration stage, poses belonging to different instances are screened, and an ICP (Iterative Closest Point) algorithm combined with overlap-rate hypothesis verification is proposed to compute the fine poses, improving registration speed and accuracy.

Description

Randomly stacked workpiece positioning method based on point cloud pose estimation
Technical Field
The invention belongs to the technical field of workpiece positioning, and particularly relates to a randomly stacked workpiece positioning method based on point cloud pose estimation.
Background
In automated workpiece assembly, workpieces must be identified and grasped by a manipulator. Vision-based industrial robots raise the level of automation and production efficiency, and can be used for workpiece identification, workpiece positioning and similar tasks. However, for small workpieces stacked in arbitrary poses, existing visual positioning methods offer limited positioning accuracy, low pose recognition rates and high false-alarm rates, and cannot meet actual production requirements.
Disclosure of Invention
The invention aims to provide a random stacked workpiece positioning method based on point cloud pose estimation, so as to solve the pose estimation problem encountered when workpieces are grasped from a random pile.
The invention is realized in such a way that a randomly stacked workpiece positioning method based on point cloud pose estimation comprises the following steps:
S1, acquiring 3D point cloud data of the workpiece with digital modeling software, and removing part of the redundant data;
S2, calculating the normals of the workpiece point cloud data from step S1, constraining the normal directions to uniformly point towards the outside of the model, down-sampling the point cloud with normals, and calculating voting weights from the shape index feature and the visible probability of each point;
S3, in the offline training stage, extracting point pair features and storing the features into 16 neighboring bins of a lookup table; calculating a local reference frame for each model point for later hypothesis verification;
S4, in the online matching stage, calculating the point pair features of the scene point cloud, searching for matching relations in the pre-trained lookup table, and casting votes in a voting table with the pre-computed voting weights; verifying the pose matrices generated by voting using the local reference frames of the scene points; clustering the verified pose matrices to obtain a more accurate pose estimation result;
S5, performing Euclidean clustering on the pose hypotheses within the top percentage of vote counts, and extracting the poses belonging to different instances; extracting from the clustering result the model points visible under each coarse pose, and performing accurate registration with an improved ICP (Iterative Closest Point) algorithm to position the target; calculating the overlap rate between the model points and the scene points, and screening the unoccluded instances as the objects to be grasped.
Preferably, step S2 specifically includes the steps of:
s2-1, calculating the resolution of the point cloud;
S2-2, calculating the normals of all points using principal component analysis;
S2-3, reorienting the calculated normals so that they uniformly point towards the outside of the model;
S2-4, performing uniform voxel-grid sampling on the model point cloud, and calculating the shape index feature and visible probability of the down-sampled point cloud data;
and S2-5, calculating the voting weights of the point pairs.
Preferably, the step S2-3 includes the following specific steps:
S2-31, constructing a Euclidean-distance minimum spanning tree of the model point cloud;
S2-32, adding neighborhood information to the Euclidean minimum spanning tree to construct a Riemannian graph;
S2-33, calculating the minimum spanning tree of the Riemannian graph;
S2-34, selecting an initial point and traversing the Riemannian minimum spanning tree to propagate the normal orientation, taking the normal of that point as the reference.
Preferably, step S3 comprises the following specific steps:
S3-1, calculating point pair features:
The point pair feature is computed from two points in the model and their normals. Point m1 with normal n1 and point m2 with normal n2 form the point pair feature F(m1, m2), computed as:
F(m1, m2) = (||d||_2, ∠(n1, d), ∠(n2, d), ∠(n1, n2))    (8)
where d is the vector connecting the two points, and ∠(n1, d), ∠(n2, d), ∠(n1, n2) are the pairwise angles between the two normals and the connecting vector;
S3-2, calculating the local reference frame of the model points:
The neighboring points of a model point p are searched within a neighborhood of radius R. Assuming there are n neighboring points, and taking the point p approximately as the center of gravity, the covariance matrix M is computed:
M = (1 / Σ_i (R - d_i)) Σ_i (R - d_i)(p_i - p)(p_i - p)^T    (9)
where d_i is the Euclidean distance from the neighboring point p_i to the center-of-gravity point p;
The eigenvalues and eigenvectors of the covariance matrix M are computed by singular value decomposition, and the vector directions are constrained so that they can serve as the local reference frame of point p. The constraint method is as follows:
The eigenvector corresponding to the smallest eigenvalue is taken as the Z axis of the local coordinate frame and constrained to be consistent with the direction of the normal n_p:
Z = Z+ if Z+ · n_p ≥ 0, otherwise Z = Z-    (10)
where Z+ is the direction of the eigenvector corresponding to the smallest eigenvalue, Z- is its opposite direction, and n_p is the normal at point p;
The direction of the X axis of the local reference frame is constrained as follows:
X = X+ if |S_x+| ≥ |S_x-|, otherwise X = X-    (11)
where X+ is the direction of the eigenvector corresponding to the largest eigenvalue and X- is its opposite direction; the sets S_x+ and S_x- are computed as:
S_x+ = { i : (p_i - p) · X+ ≥ 0 },  S_x- = { i : (p_i - p) · X- > 0 }    (12)
With the Z-axis and X-axis directions fixed, the Y-axis direction is computed as the cross product of Z and X:
Y = Z × X    (13).
preferably, step S4 comprises the steps of:
S4-1, first, the scene is down-sampled, and the point pair features of the scene and their hash values are computed. Using its hash value, each scene point pair feature can quickly find matching pairs in the pre-trained hash table. For each hash-matched pair, the similarity between the corresponding point pair feature vectors is computed, and if the similarity requirement is met, a vote is cast in the voting table using the pre-computed voting weight;
S4-2, performing hypothesis verification using the local reference frame:
Assume that, after voting is completed for the point pairs formed with the scene reference point s_r (the first point), the model point m_r obtains the highest number of votes; the model-to-scene transformation T_m->s can then be computed from the voting result. Let the local reference frame corresponding to the scene point s_r be R_s and the local reference frame corresponding to the model point m_r be R_m. First, the local reference frame of the model point is transformed into the scene coordinate system, and the transformed local reference frame of the model point is denoted R'_m:
R'_m = R_(m->s) R_m    (14)
where R_(m->s) is the rotation part of T_m->s.
According to the Rodrigues formula, the geodesic distance D between the two coordinate frames is computed:
D = arccos((trace(R_s^T R'_m) - 1) / 2)    (15)
If the geodesic distance satisfies the constraint condition, the pose hypothesis T_m->s is accepted as a candidate pose; otherwise it is rejected;
S4-3, pose hypothesis clustering:
Assume that the voting poses, sorted from the highest to the lowest number of votes, are {T_1, T_2, ..., T_k}. Greedy clustering is performed with each candidate pose as a cluster center of gravity, and during clustering one pose is allowed to join several cluster centers. The pose clustering condition is that both the translational deviation and the geodesic distance (rotational deviation) are within set thresholds.
Preferably, step S5 comprises the steps of:
S5-1, Euclidean clustering of candidate poses:
Euclidean clustering is performed on the top percentage of candidate poses ranked by vote count. By setting the Euclidean distance threshold, it is ensured that after clustering the candidate poses of each category belong to different instances in the scene, and the pose with the highest number of votes in each category is taken as the coarse pose estimation result of one instance;
S5-2, extracting the model points visible from the scene viewpoint under the coarse pose:
The visible points of the model are computed by a fast three-dimensional grid traversal method. The model is first divided into grid cells, rays are emitted from the viewpoint, the first cell on a ray path that contains model points is regarded as a visible cell, the points inside it are regarded as visible points, and the points in the other cells along the path are invisible points.
S5-3, accurately registering the visible points of the model coarse pose against the scene using a coarse-to-fine ICP algorithm to position the target;
and S5-4, calculating the overlap rate and screening the unoccluded instances as the targets to be grasped.
Preferably, step S5-3 comprises the following specific steps:
S5-31, down-sampling the model data;
S5-32, searching for matching relations and, in one-to-many pairings, keeping only the matching relation with the smallest distance;
S5-33, calculating the standard deviation of the matched point-pair distances and using it as a threshold to further reject pairings with excessive distance;
S5-34, constructing the point-to-plane error matrix, minimizing the error by QR decomposition, and solving for the corresponding rotation-translation transformation matrix;
and S5-35, updating the model pose with the transformation matrix calculated in step S5-34, and then repeating steps S5-31 to S5-34 to iteratively refine the rigid-body transformation matrix until the maximum number of iterations is reached or the error requirement is met.
The invention overcomes the shortcomings of the prior art by providing a random stacked workpiece positioning method based on point cloud pose estimation, divided into an offline training part and an online matching part. The model point cloud is obtained by digital modeling, and the scene point cloud is acquired by a depth camera. In the offline training stage, normal reorientation, quantitative sampling and weight calculation are performed on the model data to improve feature discriminability and dispersion. Online matching is divided into coarse registration and fine registration: in the coarse registration stage, scene point feature descriptors are computed, matching relations with model points are searched in the pre-training result, and the coarse poses of multiple instances in the scene are computed by voting; in the fine registration stage, poses belonging to different instances are screened, and an ICP (Iterative Closest Point) algorithm combined with overlap-rate hypothesis verification is proposed to compute the fine poses, improving registration speed and accuracy.
Compared with the defects and shortcomings of the prior art, the invention has the following beneficial effects: it realizes pose recognition of industrial parts in a random stacking scene, can robustly extract multiple target parts from the scene with a single shot, offers a high workpiece recognition rate, strong noise resistance and good resistance to occlusion, and can meet the practical requirements of industrial applications.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention discloses a randomly-stacked workpiece positioning method based on point cloud pose estimation, which comprises the following steps:
S1, acquiring 3D point cloud data of the workpiece with digital modeling software, and removing part of the redundant data
The workpieces are randomly stacked on the carrying platform, and the 3D camera is fixed above them for data acquisition. Because the camera's field of view extends beyond the carrying platform, the acquired data contain both workpiece point clouds and background point clouds. To facilitate the subsequent positioning algorithm, the redundant data in the scene must first be removed.
The step S1 specifically includes the following processes:
S1-1, background interference point clouds are generally distributed at the boundary of the camera's field of view and at a distance from the camera, so a pass-through filter is used to remove this kind of interference, i.e. only the point cloud within the limit range of each dimension is retained:
x1 ≤ x ≤ x2,  y1 ≤ y ≤ y2,  z1 ≤ z ≤ z2,
where (x1, x2), (y1, y2), (z1, z2) are the range thresholds in the X, Y and Z directions, respectively; a filtering sketch is given below.
S1-2, because the target objects are the stacked workpieces, the point cloud of the loading platform that remains after pass-through filtering is also redundant data.
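As an illustration of the pass-through filtering in S1-1, a minimal NumPy sketch follows. The per-axis limits are assumed, user-chosen values for the work cell; the removal of the loading-platform points mentioned in S1-2 is not shown, since the patent does not specify its mechanism here.

```python
import numpy as np

def pass_through_filter(points, x_lim, y_lim, z_lim):
    """Keep only points whose coordinates lie inside the given per-axis ranges (S1-1).

    points: (N, 3) array; x_lim/y_lim/z_lim: (min, max) tuples chosen for the work cell.
    """
    mask = ((points[:, 0] >= x_lim[0]) & (points[:, 0] <= x_lim[1]) &
            (points[:, 1] >= y_lim[0]) & (points[:, 1] <= y_lim[1]) &
            (points[:, 2] >= z_lim[0]) & (points[:, 2] <= z_lim[1]))
    return points[mask]

# Example usage with assumed limits (metres):
# scene = pass_through_filter(scene, x_lim=(-0.3, 0.3), y_lim=(-0.3, 0.3), z_lim=(0.4, 0.9))
```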
S2, calculating the normals of the workpiece point cloud data from step S1, constraining the normal directions to uniformly point towards the outside of the model, down-sampling the point cloud with normals, and calculating voting weights from the shape index feature and the visible probability of each point
Step S2 specifically includes the following processes:
S2-1, calculating the resolution of the point cloud:
Assuming that the model point cloud consists of n points, the nearest neighbor of each point is found and the distance d_i is calculated. The average of the distances d_i over all points is taken as the point cloud resolution, as in the following equation:
d_avg = (1/n) Σ_{i=1..n} d_i    (1)
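A minimal sketch of equation (1) follows; it uses a k-d tree for the nearest-neighbor search, which is an implementation choice not prescribed by the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_resolution(points):
    """Average distance from each point to its nearest neighbor, per equation (1)."""
    tree = cKDTree(points)
    # k=2 because the closest hit for each query point is the point itself (distance 0).
    dists, _ = tree.query(points, k=2)
    return float(np.mean(dists[:, 1]))
```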
S2-2, calculating the normals of the template point cloud: the normals of all points are computed using principal component analysis, as follows. For any point p, the neighboring points {p_1, p_2, ..., p_k} within its R-neighborhood are searched and the covariance matrix M is computed:
M = (1/k) Σ_{i=1..k} (p_i - p_mean)(p_i - p_mean)^T    (2)
where k is the number of neighboring points in the R-neighborhood, p_mean is the centroid of the neighboring points, and T denotes the matrix transpose.
Singular value decomposition is applied to the covariance matrix to obtain its eigenvalues and eigenvectors, and the eigenvector corresponding to the smallest eigenvalue is taken as the normal.
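The following sketch illustrates S2-2 for a single query point: the covariance matrix of the R-neighborhood is built as in the reconstructed equation (2) and its smallest-eigenvalue eigenvector is taken as the normal. The radius value and the use of SciPy for neighbor search are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def pca_normal(points, query, radius):
    """Estimate the (unoriented) normal at `query` from its radius neighborhood (S2-2).

    The eigenvector of the neighborhood covariance matrix with the smallest
    eigenvalue is returned; its sign is fixed later by the reorientation step S2-3.
    """
    tree = cKDTree(points)
    idx = tree.query_ball_point(query, r=radius)
    nbrs = points[idx]
    centered = nbrs - nbrs.mean(axis=0)
    cov = centered.T @ centered / len(nbrs)   # covariance matrix M of equation (2)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return eigvecs[:, 0]                      # eigenvector of the smallest eigenvalue
```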
S2-3, reorienting the normals calculated in step S2-2 so that they uniformly point towards the outside of the model, specifically:
S2-31, constructing a Euclidean-distance minimum spanning tree of the model point cloud;
S2-32, adding neighborhood information to the Euclidean minimum spanning tree to construct a Riemannian graph;
S2-33, calculating the minimum spanning tree of the Riemannian graph;
S2-34, selecting an initial point and traversing the Riemannian minimum spanning tree to propagate the normal orientation, taking the normal of that point as the reference; a propagation sketch follows.
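The sketch below propagates a consistent normal orientation over a minimum spanning tree, in the spirit of S2-31 to S2-34. It simplifies the construction by building a k-nearest-neighbor graph directly instead of augmenting a Euclidean MST, and the edge weight 1 - |n_i · n_j| and the choice of the topmost point as seed are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, breadth_first_order

def reorient_normals(points, normals, k=10):
    """Flip normals to a consistent orientation by propagating along an MST (cf. S2-3)."""
    n = len(points)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    rows, cols, weights = [], [], []
    for i in range(n):
        for j in idx[i, 1:]:
            rows.append(i)
            cols.append(j)
            # Penalize normal disagreement; the small offset keeps zero-cost edges in the sparse graph.
            weights.append(1.0 - abs(np.dot(normals[i], normals[j])) + 1e-6)
    graph = csr_matrix((weights, (rows, cols)), shape=(n, n))
    mst = minimum_spanning_tree(graph)
    undirected = mst + mst.T
    seed = int(np.argmax(points[:, 2]))      # assumed seed: the topmost point
    order, pred = breadth_first_order(undirected, seed, directed=False)
    out = normals.copy()
    for v in order[1:]:
        parent = pred[v]
        if np.dot(out[parent], out[v]) < 0:  # flip if inconsistent with the parent's normal
            out[v] = -out[v]
    return out
```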
S2-4, performing uniform voxel-grid sampling on the model point cloud from S2-3, and calculating the shape index feature and visible probability of the down-sampled point cloud data, specifically:
S2-41, calculating the shape index feature S_I(p):
S_I(p) = 1/2 - (1/π) arctan((k1(p) + k2(p)) / (k1(p) - k2(p)))    (3)
where k1(p) and k2(p) are the principal curvatures at point p;
S2-42, calculating the visible probability of the point cloud: the model point cloud is observed from m viewpoints uniformly distributed on a sphere, the number of occurrences k of each point is counted, and the ratio of the total number of occurrences to the number of viewpoints is defined as the visible probability V(p) of the point, as in the following equation:
V(p) = k/m    (4)
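A short sketch of equations (3) and (4) as reconstructed above; principal-curvature estimation and the rendering of the m viewpoints are outside its scope, and the epsilon guarding the division is an added assumption.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index S_I(p) from principal curvatures, per the reconstructed equation (3)."""
    k_max, k_min = np.maximum(k1, k2), np.minimum(k1, k2)
    # The small epsilon guards against k_max == k_min (umbilical points).
    return 0.5 - (1.0 / np.pi) * np.arctan((k_max + k_min) / (k_max - k_min + 1e-12))

def visible_probability(hit_count, num_views):
    """Fraction of the m spherical viewpoints from which a point was observed, per equation (4)."""
    return hit_count / float(num_views)
```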
S2-5, calculating the voting weight W(p_i, p_j) of a point pair (p_i, p_j), as in the following equation:
W(p_i, p_j) = Vis(p_i, p_j) + λ Dis(p_i, p_j)    (5)
where Vis(p_i, p_j) denotes the visibility weight of the point pair, defined in equation (6), Dis(p_i, p_j) denotes the discriminability weight of the point pair, defined in equation (7), and λ balances the two terms.
S3, in the offline training stage, extracting point pair features and storing the features into 16 neighboring bins of a lookup table; calculating a local reference frame for each model point for later hypothesis verification
Step S3 specifically includes the following processes:
S3-1, calculating point pair features
The point pair feature is computed from two points in the model and their normals. Point m1 with normal n1 and point m2 with normal n2 form the point pair feature F(m1, m2), computed as:
F(m1, m2) = (||d||_2, ∠(n1, d), ∠(n2, d), ∠(n1, n2))    (8)
where d is the vector connecting the two points, and ∠(n1, d), ∠(n2, d), ∠(n1, n2) are the pairwise angles between the two normals and the connecting vector.
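A minimal sketch of the point pair feature of equation (8); the angle helper is an added convenience.

```python
import numpy as np

def angle(a, b):
    """Unsigned angle between two vectors, in [0, pi]."""
    cos_ang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

def point_pair_feature(m1, n1, m2, n2):
    """F(m1, m2) = (||d||, ang(n1, d), ang(n2, d), ang(n1, n2)), per equation (8)."""
    d = m2 - m1
    return (float(np.linalg.norm(d)), angle(n1, d), angle(n2, d), angle(n1, n2))
```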
S3-2, calculating the local reference frame of the model points:
The neighboring points of a model point p are searched within a neighborhood of radius R. Assuming there are n neighboring points, and taking the point p approximately as the center of gravity, the covariance matrix M is computed:
M = (1 / Σ_i (R - d_i)) Σ_i (R - d_i)(p_i - p)(p_i - p)^T    (9)
where d_i is the Euclidean distance from the neighboring point p_i to the center-of-gravity point p; each neighboring point closer to p is thus assigned a greater weight, which combats noise interference in the scene.
Then, the eigenvalues and eigenvectors of the covariance matrix M are computed by singular value decomposition, and the vector directions are constrained so that they can serve as the local reference frame of point p. The constraint method is as follows:
The eigenvector corresponding to the smallest eigenvalue is taken as the Z axis of the local coordinate frame and constrained to be consistent with the direction of the normal n_p:
Z = Z+ if Z+ · n_p ≥ 0, otherwise Z = Z-    (10)
where Z+ is the direction of the eigenvector corresponding to the smallest eigenvalue, Z- is its opposite direction, and n_p is the normal at point p.
The direction of the X axis of the local reference frame is constrained as follows:
X = X+ if |S_x+| ≥ |S_x-|, otherwise X = X-    (11)
where X+ is the direction of the eigenvector corresponding to the largest eigenvalue and X- is its opposite direction; the sets S_x+ and S_x- are computed as:
S_x+ = { i : (p_i - p) · X+ ≥ 0 },  S_x- = { i : (p_i - p) · X- > 0 }    (12)
With the Z-axis and X-axis directions fixed, the Y-axis direction is computed as the cross product of Z and X:
Y = Z × X    (13)
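The sketch below assembles the local reference frame of S3-2 under the equations as reconstructed above: distance-weighted covariance for equation (9) (an assumed form), the normal-sign test for Z (equation (10)), a majority vote of neighbor projections for the X sign (equations (11)-(12)), and Y = Z × X (equation (13)).

```python
import numpy as np
from scipy.spatial import cKDTree

def local_reference_frame(points, p, n_p, radius):
    """Build a 3x3 local reference frame (columns X, Y, Z) at point p, per S3-2."""
    tree = cKDTree(points)
    nbrs = points[tree.query_ball_point(p, r=radius)]
    diff = nbrs - p
    d = np.linalg.norm(diff, axis=1)
    w = radius - d                                   # closer points get larger weight (assumed form of eq. (9))
    M = (diff * w[:, None]).T @ diff / np.sum(w)
    eigvals, eigvecs = np.linalg.eigh(M)             # eigenvalues in ascending order
    z = eigvecs[:, 0]                                # smallest eigenvalue -> Z axis
    if np.dot(z, n_p) < 0:                           # equation (10): align Z with the point normal
        z = -z
    x = eigvecs[:, 2]                                # largest eigenvalue -> X axis
    # Equations (11)-(12) as reconstructed: keep the X sign agreeing with the majority of neighbors.
    if np.sum(diff @ x >= 0) < np.sum(diff @ (-x) > 0):
        x = -x
    x = x - np.dot(x, z) * z                         # re-orthogonalize X against Z
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                               # equation (13)
    return np.column_stack([x, y, z])
```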
s4, in an online matching stage, point pair characteristics of the scene point cloud are calculated, a matching relation is searched in a pre-training lookup table, and the pre-calculated voting weight is utilized to vote in a voting table; calculating a pose matrix generated by voting verification by using a scene point local reference system; clustering the verified pose matrix to obtain a pose estimation result with higher precision
The specific process of step S4 is:
S4-1, first, the scene is down-sampled, and the point pair features of the scene and their hash values are computed. Using its hash value, each scene point pair feature can quickly find matching pairs in the pre-trained hash table. For each hash-matched pair, the similarity between the corresponding point pair feature vectors is computed, and if the similarity requirement is met, a vote is cast in the voting table using the pre-computed voting weight.
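The hashed lookup of S4-1 can be sketched as below. The quantization step sizes, the key layout and the Euclidean similarity test are assumptions, and the 16-neighboring-bin storage of the training stage is not reproduced.

```python
import numpy as np
from collections import defaultdict

def quantize_ppf(feature, d_step, a_step):
    """Map a continuous point pair feature to a hashable integer key (step sizes are assumed)."""
    dist, a1, a2, a3 = feature
    return (int(dist / d_step), int(a1 / a_step), int(a2 / a_step), int(a3 / a_step))

def build_lookup(model_ppfs, d_step, a_step):
    """Offline: store model point pair features in a hash table keyed by their quantized value.

    model_ppfs is assumed to be a list of (feature_tuple, (model_ref_idx, model_other_idx)).
    """
    table = defaultdict(list)
    for feature, pair in model_ppfs:
        table[quantize_ppf(feature, d_step, a_step)].append((feature, pair))
    return table

def match_scene_ppf(table, scene_feature, d_step, a_step, tol):
    """Online: retrieve model pairs hashed to the same bin and keep the sufficiently similar ones."""
    matches = []
    for model_feature, pair in table.get(quantize_ppf(scene_feature, d_step, a_step), []):
        if np.linalg.norm(np.asarray(model_feature) - np.asarray(scene_feature)) < tol:
            matches.append(pair)
    return matches
```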
S4-2, performing hypothesis verification using the local reference frame:
Assume that, after voting is completed for the point pairs formed with the scene reference point s_r (the first point), the model point m_r obtains the highest number of votes; the model-to-scene transformation T_m->s can then be computed from the voting result. Let the local reference frame corresponding to the scene point s_r be R_s and the local reference frame corresponding to the model point m_r be R_m. First, the local reference frame of the model point is transformed into the scene coordinate system, and the transformed local reference frame of the model point is denoted R'_m:
R'_m = R_(m->s) R_m    (14)
where R_(m->s) is the rotation part of T_m->s.
According to the Rodrigues formula, the geodesic distance D between the two coordinate frames is computed:
D = arccos((trace(R_s^T R'_m) - 1) / 2)    (15)
If the geodesic distance satisfies the constraint condition, the pose hypothesis T_m->s is accepted as a candidate pose; otherwise it is rejected.
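A one-function sketch of the geodesic distance of equation (15) as reconstructed above.

```python
import numpy as np

def geodesic_distance(R_a, R_b):
    """Rotation angle, in radians, separating two 3x3 rotation matrices (cf. equation (15))."""
    cos_theta = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```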
S4-3, pose hypothesis clustering:
Assume that the voting poses, sorted from the highest to the lowest number of votes, are {T_1, T_2, ..., T_k}. Greedy clustering is performed with each candidate pose as a cluster center of gravity, and during clustering one pose is allowed to join several cluster centers. The pose clustering condition is that both the translational deviation and the geodesic distance (rotational deviation) are within set thresholds.
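A sketch of the greedy clustering in S4-3; the translation and rotation thresholds are assumed, user-chosen values, and the geodesic_distance helper from the previous sketch is repeated so the snippet runs on its own.

```python
import numpy as np

def geodesic_distance(R_a, R_b):
    """Rotation angle between two rotation matrices (same helper as in the previous sketch)."""
    return float(np.arccos(np.clip((np.trace(R_a.T @ R_b) - 1.0) / 2.0, -1.0, 1.0)))

def cluster_poses(poses, votes, t_thresh, r_thresh):
    """Greedy clustering of vote-sorted 4x4 pose hypotheses (S4-3).

    A pose joins every existing cluster whose center is within both thresholds
    (one pose may join several clusters); otherwise it seeds a new cluster.
    """
    clusters = []                                    # each cluster: {"center": 4x4 pose, "members": [...]}
    for i in np.argsort(votes)[::-1]:
        T = poses[i]
        joined = False
        for c in clusters:
            dt = np.linalg.norm(T[:3, 3] - c["center"][:3, 3])
            dr = geodesic_distance(T[:3, :3], c["center"][:3, :3])
            if dt < t_thresh and dr < r_thresh:
                c["members"].append(i)
                joined = True
        if not joined:
            clusters.append({"center": T, "members": [i]})
    return clusters
```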
S5, performing Euclidean clustering on the pose hypotheses within the top percentage of vote counts, and extracting the poses belonging to different instances; extracting from the clustering result the model points visible under each coarse pose, and performing accurate registration with an improved ICP (Iterative Closest Point) algorithm to position the target; calculating the overlap rate between the model points and the scene points, and screening the unoccluded instances as the objects to be grasped
The step S5 comprises the following specific processes:
S5-1, Euclidean clustering of candidate poses:
Euclidean clustering is performed on the top percentage of candidate poses ranked by vote count. By setting the Euclidean distance threshold, it is ensured that after clustering the candidate poses of each category belong to different instances in the scene, and the pose with the highest number of votes in each category is taken as the coarse pose estimation result of one instance.
S5-2, extracting the model points visible from the scene viewpoint under the coarse pose:
The visible points of the model are computed by a fast three-dimensional grid traversal method. The model is first divided into grid cells, rays are emitted from the viewpoint, the first cell on a ray path that contains model points is regarded as a visible cell, the points inside it are regarded as visible points, and the points in the other cells along the path are invisible points.
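The visibility test of S5-2 is sketched below. For brevity it replaces the exact fast grid-traversal scheme with fixed-step ray marching through the voxel grid, so it is an approximation for illustration only; the voxel size is an assumed parameter.

```python
import numpy as np

def visible_points(points, viewpoint, voxel_size):
    """Approximate the model points visible from `viewpoint` (cf. S5-2).

    The model is voxelized; a point is kept if its own voxel is reached before
    any other occupied voxel when marching from the viewpoint towards it.
    """
    origin = points.min(axis=0)
    vox = np.floor((points - origin) / voxel_size).astype(int)
    occupied = set(map(tuple, vox))
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        direction = p - viewpoint
        dist = np.linalg.norm(direction)
        direction = direction / dist
        target = tuple(vox[i])
        blocked = False
        # March along the ray in half-voxel steps until the target voxel is reached.
        for t in np.arange(0.0, dist, voxel_size * 0.5):
            cell = tuple(np.floor((viewpoint + t * direction - origin) / voxel_size).astype(int))
            if cell == target:
                break
            if cell in occupied:
                blocked = True
                break
        keep[i] = not blocked
    return points[keep]
```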
S5-3, accurately registering the visible points of the model coarse pose against the scene using a coarse-to-fine ICP algorithm; the ICP registration specifically comprises:
S5-31, down-sampling the model data;
S5-32, searching for matching relations and, in one-to-many pairings, keeping only the matching relation with the smallest distance;
S5-33, calculating the standard deviation of the matched point-pair distances and using it as a threshold to further reject pairings with excessive distance;
S5-34, constructing the point-to-plane error matrix, minimizing the error by QR decomposition, and solving for the corresponding rotation-translation transformation matrix;
and S5-35, updating the model pose with the transformation matrix calculated in step S5-34, and then repeating steps S5-31 to S5-34 to iteratively refine the rigid-body transformation matrix until the maximum number of iterations is reached or the error requirement is met.
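A compact point-to-plane ICP iteration in the spirit of S5-31 to S5-35 is sketched below. The 2-sigma pair-rejection rule, the small-angle linearization, and the use of a generic least-squares solver in place of an explicit QR decomposition are assumptions; the one-to-many pairing resolution of S5-32 and the coarse-to-fine down-sampling schedule are omitted for brevity.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotation_from_angles(rx, ry, rz):
    """Rotation matrix from the three linearized angles (Z * Y * X convention)."""
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def icp_point_to_plane(model_pts, scene_pts, scene_normals, max_iter=30, tol=1e-6):
    """Refine the model pose against the scene with point-to-plane ICP (cf. S5-3)."""
    T = np.eye(4)
    src = model_pts.copy()
    tree = cKDTree(scene_pts)
    for _ in range(max_iter):
        dists, idx = tree.query(src)                        # S5-32: nearest-neighbor pairing
        keep = dists < dists.mean() + 2.0 * dists.std()     # S5-33: reject distant pairs (assumed 2-sigma rule)
        p, q, n = src[keep], scene_pts[idx[keep]], scene_normals[idx[keep]]
        # S5-34: linearized point-to-plane system A x = b with x = (rx, ry, rz, tx, ty, tz).
        A = np.hstack([np.cross(p, n), n])
        b = np.einsum('ij,ij->i', q - p, n)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        T_inc = np.eye(4)
        T_inc[:3, :3] = rotation_from_angles(x[0], x[1], x[2])
        T_inc[:3, 3] = x[3:]
        src = (T_inc[:3, :3] @ src.T).T + T_inc[:3, 3]      # S5-35: update the model pose
        T = T_inc @ T
        if np.linalg.norm(x) < tol:                         # stop when the increment is negligible
            break
    return T
```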
S5-4, calculating the overlap rate and screening the unoccluded instances as the objects to be grasped:
A multi-dimensional tree (k-d tree) query structure is built over the scene point cloud, the number of model points that overlap the scene under the accurate target pose calculated in step S5-3 is counted, the ratio of the number of overlapping points to the total number of model points is taken as the overlap rate, and the instances whose overlap rate is greater than the set threshold are taken as the instances to be grasped.
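The overlap-rate screening of S5-4 reduces to a nearest-neighbor count, as sketched below; the distance threshold deciding whether a model point is overlapping is an assumed parameter (for example, a small multiple of the cloud resolution).

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_rate(model_pts_in_scene, scene_pts, dist_thresh):
    """Fraction of transformed model points that have a scene point within dist_thresh (cf. S5-4)."""
    tree = cKDTree(scene_pts)
    dists, _ = tree.query(model_pts_in_scene)
    return float(np.mean(dists < dist_thresh))

# Instances whose overlap rate exceeds the set threshold are treated as unoccluded grasp candidates.
```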
The above description is intended to be illustrative of the preferred embodiment of the present invention and should not be taken as limiting the invention, but rather, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.

Claims (7)

1. A randomly stacked workpiece positioning method based on point cloud pose estimation, characterized by comprising the following steps:
S1, acquiring 3D point cloud data of the workpiece with digital modeling software, and removing part of the redundant data;
S2, calculating the normals of the workpiece point cloud data from step S1, constraining the normal directions to uniformly point towards the outside of the model, down-sampling the point cloud with normals, and calculating voting weights from the shape index feature and the visible probability of each point;
S3, in the offline training stage, extracting point pair features and storing the features into 16 neighboring bins of a lookup table; calculating a local reference frame for each model point for later hypothesis verification;
S4, in the online matching stage, calculating the point pair features of the scene point cloud, searching for matching relations in the pre-trained lookup table, and casting votes in a voting table with the pre-computed voting weights; verifying the pose matrices generated by voting using the local reference frames of the scene points; clustering the verified pose matrices to obtain a more accurate pose estimation result;
S5, performing Euclidean clustering on the pose hypotheses within the top percentage of vote counts, and extracting the poses belonging to different instances; extracting from the clustering result the model points visible under each coarse pose, and performing accurate registration with an improved ICP (Iterative Closest Point) algorithm to position the target; calculating the overlap rate between the model points and the scene points, and screening the unoccluded instances as the objects to be grasped.
2. The method according to claim 1, wherein step S2 comprises in particular the steps of:
s2-1, calculating the resolution of the point cloud;
S2-2, calculating the normals of all points using principal component analysis;
S2-3, reorienting the calculated normals so that they uniformly point towards the outside of the model;
S2-4, performing uniform voxel-grid sampling on the model point cloud, and calculating the shape index feature and visible probability of the down-sampled point cloud data;
and S2-5, calculating the voting weight of the point pair.
3. The method according to claim 2, wherein the step S2-3 comprises the specific steps of:
S2-31, constructing a Euclidean-distance minimum spanning tree of the model point cloud;
S2-32, adding neighborhood information to the Euclidean minimum spanning tree to construct a Riemannian graph;
S2-33, calculating the minimum spanning tree of the Riemannian graph;
S2-34, selecting an initial point and traversing the Riemannian minimum spanning tree to propagate the normal orientation, taking the normal of that point as the reference.
4. The method according to claim 1, wherein step S3 comprises the specific steps of:
S3-1, calculating point pair features:
The point pair feature is computed from two points in the model and their normals. Point m1 with normal n1 and point m2 with normal n2 form the point pair feature F(m1, m2), computed as:
F(m1, m2) = (||d||_2, ∠(n1, d), ∠(n2, d), ∠(n1, n2))    (8)
where d is the vector connecting the two points, and ∠(n1, d), ∠(n2, d), ∠(n1, n2) are the pairwise angles between the two normals and the connecting vector;
S3-2, calculating the local reference frame of the model points:
The neighboring points of a model point p are searched within a neighborhood of radius R. Assuming there are n neighboring points, and taking the point p approximately as the center of gravity, the covariance matrix M is computed:
M = (1 / Σ_i (R - d_i)) Σ_i (R - d_i)(p_i - p)(p_i - p)^T    (9)
where d_i is the Euclidean distance from the neighboring point p_i to the center-of-gravity point p;
The eigenvalues and eigenvectors of the covariance matrix M are computed by singular value decomposition, and the vector directions are constrained so that they can serve as the local reference frame of point p. The constraint method is as follows:
The eigenvector corresponding to the smallest eigenvalue is taken as the Z axis of the local coordinate frame and constrained to be consistent with the direction of the normal n_p:
Z = Z+ if Z+ · n_p ≥ 0, otherwise Z = Z-    (10)
where Z+ is the direction of the eigenvector corresponding to the smallest eigenvalue, Z- is its opposite direction, and n_p is the normal at point p;
The direction of the X axis of the local reference frame is constrained as follows:
X = X+ if |S_x+| ≥ |S_x-|, otherwise X = X-    (11)
where X+ is the direction of the eigenvector corresponding to the largest eigenvalue and X- is its opposite direction; the sets S_x+ and S_x- are computed as:
S_x+ = { i : (p_i - p) · X+ ≥ 0 },  S_x- = { i : (p_i - p) · X- > 0 }    (12)
With the Z-axis and X-axis directions fixed, the Y-axis direction is computed as the cross product of Z and X:
Y = Z × X    (13).
5. the method of claim 1, wherein step S4 comprises the steps of:
S4-1, first, the scene is down-sampled, and the point pair features of the scene and their hash values are computed. Using its hash value, each scene point pair feature can quickly find matching pairs in the pre-trained hash table. For each hash-matched pair, the similarity between the corresponding point pair feature vectors is computed, and if the similarity requirement is met, a vote is cast in the voting table using the pre-computed voting weight;
S4-2, performing hypothesis verification using the local reference frame:
Assume that, after voting is completed for the point pairs formed with the scene reference point s_r (the first point), the model point m_r obtains the highest number of votes; the model-to-scene transformation T_m->s can then be computed from the voting result. Let the local reference frame corresponding to the scene point s_r be R_s and the local reference frame corresponding to the model point m_r be R_m. First, the local reference frame of the model point is transformed into the scene coordinate system, and the transformed local reference frame of the model point is denoted R'_m:
R'_m = R_(m->s) R_m    (14)
where R_(m->s) is the rotation part of T_m->s.
According to the Rodrigues formula, the geodesic distance D between the two coordinate frames is computed:
D = arccos((trace(R_s^T R'_m) - 1) / 2)    (15)
If the geodesic distance satisfies the constraint condition, the pose hypothesis T_m->s is accepted as a candidate pose; otherwise it is rejected;
S4-3, pose hypothesis clustering:
Assume that the voting poses, sorted from the highest to the lowest number of votes, are {T_1, T_2, ..., T_k}. Greedy clustering is performed with each candidate pose as a cluster center of gravity, and during clustering one pose is allowed to join several cluster centers. The pose clustering condition is that both the translational deviation and the geodesic distance (rotational deviation) are within set thresholds.
6. The method of claim 1, wherein step S5 comprises the steps of:
S5-1, Euclidean clustering of candidate poses:
Euclidean clustering is performed on the top percentage of candidate poses ranked by vote count. By setting the Euclidean distance threshold, it is ensured that after clustering the candidate poses of each category belong to different instances in the scene, and the pose with the highest number of votes in each category is taken as the coarse pose estimation result of one instance;
S5-2, extracting the model points visible from the scene viewpoint under the coarse pose:
The visible points of the model are computed by a fast three-dimensional grid traversal method. The model is first divided into grid cells, rays are emitted from the viewpoint, the first cell on a ray path that contains model points is regarded as a visible cell, the points inside it are regarded as visible points, and the points in the other cells along the path are invisible points.
S5-3, accurately registering the visible points of the model coarse pose against the scene using a coarse-to-fine ICP algorithm;
and S5-4, calculating the overlap rate and screening the unoccluded instances as the targets to be grasped.
7. The method according to claim 6, wherein step S5-3 comprises the specific steps of:
S5-31, down-sampling the model data;
S5-32, searching for matching relations and, in one-to-many pairings, keeping only the matching relation with the smallest distance;
S5-33, calculating the standard deviation of the matched point-pair distances and using it as a threshold to further reject pairings with excessive distance;
S5-34, constructing the point-to-plane error matrix, minimizing the error by QR decomposition, and solving for the corresponding rotation-translation transformation matrix;
and S5-35, updating the model pose with the transformation matrix calculated in step S5-34, and then repeating steps S5-31 to S5-34 to iteratively refine the rigid-body transformation matrix until the maximum number of iterations is reached or the error requirement is met.
CN202211183631.2A 2022-09-27 2022-09-27 Random stacked workpiece positioning method based on point cloud pose estimation Pending CN115512137A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211183631.2A CN115512137A (en) 2022-09-27 2022-09-27 Random stacked workpiece positioning method based on point cloud pose estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211183631.2A CN115512137A (en) 2022-09-27 2022-09-27 Random stacked workpiece positioning method based on point cloud pose estimation

Publications (1)

Publication Number Publication Date
CN115512137A true CN115512137A (en) 2022-12-23

Family

ID=84505194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211183631.2A Pending CN115512137A (en) 2022-09-27 2022-09-27 Random stacked workpiece positioning method based on point cloud pose estimation

Country Status (1)

Country Link
CN (1) CN115512137A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597082A (en) * 2023-05-17 2023-08-15 杭州电子科技大学 Hub workpiece digitizing method based on implicit three-dimensional reconstruction
CN117788539A (en) * 2024-02-28 2024-03-29 菲特(天津)检测技术有限公司 Point cloud data registration method and system and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data
CN108009557A (en) * 2017-12-25 2018-05-08 浙江工业大学 Three-dimensional model local feature description method based on shared weight convolution network
CN108830902A (en) * 2018-04-19 2018-11-16 江南大学 A kind of workpiece identification at random and localization method based on points cloud processing
CN110930456A (en) * 2019-12-11 2020-03-27 北京工业大学 Three-dimensional identification and positioning method of sheet metal part based on PCL point cloud library

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data
CN108009557A (en) * 2017-12-25 2018-05-08 浙江工业大学 Three-dimensional model local feature description method based on shared weight convolution network
CN108830902A (en) * 2018-04-19 2018-11-16 江南大学 A kind of workpiece identification at random and localization method based on points cloud processing
CN110930456A (en) * 2019-12-11 2020-03-27 北京工业大学 Three-dimensional identification and positioning method of sheet metal part based on PCL point cloud library

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
郭交通: "Point cloud-based object recognition and pose estimation method", China Master's Theses Full-text Database, Information Science and Technology, 15 February 2020 (2020-02-15), pages 34-56 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597082A (en) * 2023-05-17 2023-08-15 杭州电子科技大学 Hub workpiece digitizing method based on implicit three-dimensional reconstruction
CN117788539A (en) * 2024-02-28 2024-03-29 菲特(天津)检测技术有限公司 Point cloud data registration method and system and electronic equipment

Similar Documents

Publication Publication Date Title
CN115512137A (en) Random stacked workpiece positioning method based on point cloud pose estimation
CN109887015B (en) Point cloud automatic registration method based on local curved surface feature histogram
CN109272523B (en) Random stacking piston pose estimation method based on improved CVFH (continuously variable frequency) and CRH (Crh) characteristics
CN111553409B (en) Point cloud identification method based on voxel shape descriptor
CN106600639B (en) The ICP pose location technology of genetic algorithm combining adaptive threshold value constraint
CN106373118B (en) The complex curved surface parts point cloud compressing method of border and local feature can be effectively retained
CN109241901B (en) A kind of detection and recognition methods to the three-dimensional point cloud with hole
CN110942515A (en) Point cloud-based target object three-dimensional computer modeling method and target identification method
CN114972377B (en) 3D point cloud segmentation method and device based on mobile least square method and super-voxel
CN112669385A (en) Industrial robot workpiece identification and pose estimation method based on three-dimensional point cloud characteristics
CN108876852B (en) Online real-time object identification and positioning method based on 3D vision
CN105046694A (en) Quick point cloud registration method based on curved surface fitting coefficient features
CN113034600A (en) Non-texture planar structure industrial part identification and 6D pose estimation method based on template matching
CN112712589A (en) Plant 3D modeling method and system based on laser radar and deep learning
WO2022237225A1 (en) Online real-time registration method for incomplete three-dimensional scanning point cloud having plane reference
WO2021082380A1 (en) Laser radar-based pallet recognition method and system, and electronic device
CN115861397A (en) Point cloud registration method based on improved FPFH-ICP
CN115409886A (en) Part geometric feature measuring method, device and system based on point cloud
CN114358166B (en) Multi-target positioning method based on self-adaptive k-means clustering
CN110633749B (en) Three-dimensional point cloud identification method based on improved viewpoint feature histogram
CN114648445B (en) Multi-view high-resolution point cloud splicing method based on feature point extraction and fine registration optimization
CN100553349C (en) Determine the method for target topological relation and the camera calibration target that can put arbitrarily
JP3101674B2 (en) 3D recognition method and apparatus using CAD information
CN115147433A (en) Point cloud registration method
Wang et al. 6D pose estimation from point cloud using an improved point pair features method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination