CN117710396A - 3D point cloud-based recognition method for nonstandard parts in light steel industry - Google Patents

3D point cloud-based recognition method for nonstandard parts in light steel industry

Info

Publication number
CN117710396A
CN117710396A
Authority
CN
China
Prior art keywords
point cloud
contour
dimensional
circumscribed rectangle
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311724140.9A
Other languages
Chinese (zh)
Other versions
CN117710396B (en)
Inventor
潘铭洪
王伟昌
韦超凡
钱宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Gongbu Zhizao Industrial Technology Co ltd
Original Assignee
Anhui Gongbu Zhizao Industrial Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Gongbu Zhizao Industrial Technology Co ltd filed Critical Anhui Gongbu Zhizao Industrial Technology Co ltd
Priority to CN202311724140.9A priority Critical patent/CN117710396B/en
Publication of CN117710396A publication Critical patent/CN117710396A/en
Application granted granted Critical
Publication of CN117710396B publication Critical patent/CN117710396B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a 3D point cloud-based method for identifying nonstandard parts in the light steel industry, relating to the technical field of the light steel industry and comprising the following steps: step one: carrying out a single 3D point cloud acquisition of the material table with a 3D camera; step two: projecting all the point clouds onto a two-dimensional plane through a projection formula to obtain a two-dimensional depth map; step three: preprocessing the projected two-dimensional depth map to obtain an image to be processed. Point clouds of the nonstandard parts stacked on the material table are captured and processed by the 3D camera; the contour of each part is extracted from the point cloud data and converted to the two-dimensional plane through the projection formula; the image on the two-dimensional plane is convolved with four custom operators to obtain a complete contour image; and the four points of the minimum circumscribed rectangle corresponding to each contour are converted from the two-dimensional image into coordinates in three-dimensional space, so that the accurate contour of each material piece is obtained.

Description

3D point cloud-based recognition method for nonstandard parts in light steel industry
Technical Field
The invention relates to the technical field of light steel industry, in particular to a 3D point cloud-based method for identifying nonstandard parts in the light steel industry.
Background
Intelligence and automation are the inevitable development direction of the manufacturing industry. With the rapid development of intelligent welding, intelligent assembly-welding robots have been widely applied. For an assembly robot to assemble non-standard parts automatically, the key information of those parts must first be identified automatically, so that the robot can grasp the required materials and assemble them at the specified positions;
however, when a nonstandard part is identified automatically through a 3D camera, temperature drift in the camera easily causes the original coordinate system to shift, so that the difference between the foreground and the background of the nonstandard-part image originally acquired by the 3D camera is wiped out or amplified. The discharged part then cannot be identified accurately, or the identified part information is inaccurate. On this basis, a 3D point cloud-based method for identifying nonstandard parts in the light steel industry is provided.
Disclosure of Invention
The invention aims to provide a 3D point cloud-based recognition method for nonstandard parts in the light steel industry, solving the technical problem that temperature drift in a 3D camera easily shifts the original coordinate system, so that the difference between the foreground and the background of a nonstandard-part image originally acquired by the camera is wiped out or amplified and the discharged part cannot be recognized accurately.
The aim of the invention can be achieved by the following technical scheme:
A 3D point cloud-based recognition method for nonstandard parts in the light steel industry comprises the following steps:
step one: placing various stacked parts of different types on a fixed material table, and carrying out one-time 3D point cloud acquisition on the material table through a 3D camera;
step two: extracting a maximum plane in point cloud information by adopting a random sampling consistency algorithm to serve as a material table plane, then acquiring all point clouds within a preset distance above the plane, and projecting all point clouds onto a two-dimensional plane through a projection formula to acquire a two-dimensional depth map;
step three: preprocessing the projected two-dimensional depth map to obtain an image to be processed;
step four: carrying out convolution operation on the image to be processed with four custom operators through filter2D to obtain four images to be synthesized, and carrying out weighted synthesis of the four images through the addWeighted function to obtain a complete contour image;
step five: screening out the required material contours from the complete contour image according to area and perimeter information, solving the minimum circumscribed rectangle of each material contour, and converting the positions of the four points of each minimum circumscribed rectangle in the complete contour image coordinate system into coordinates in three-dimensional space.
Step six: and (3) cutting out the point cloud of each material piece according to the corresponding range of the circumscribed rectangle of each material piece in the three-dimensional space through spatial direct filtering, searching the maximum plane for the point cloud of each material piece by adopting a random sampling consistency algorithm, obtaining the material piece plane of each material piece, and projecting the material piece plane of each material piece onto the two-dimensional plane in the same mode as in the step two, so as to obtain the accurate profile of each material piece.
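The maximum-plane search used in steps two and six is the random sampling consistency (RANSAC) algorithm. For illustration only — the patent discloses no implementation, and the function name below is invented — a minimal numpy sketch of extracting the largest plane from a point cloud could look like:

```python
import numpy as np

def ransac_largest_plane(points, dist_thresh=0.005, iters=200, rng=None):
    """Randomly sample 3 points, fit a plane, and keep the model with the
    most inliers within dist_thresh -- the 'maximum plane' of step two."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # degenerate (collinear) sample, skip
            continue
        normal /= norm
        dist = np.abs((points - p0) @ normal)   # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

In practice a library routine (e.g. a point-cloud library's plane segmentation) would replace this loop; the sketch only shows the sampling-and-consensus idea the method relies on.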
As a further scheme of the invention: the specific mode of obtaining the two-dimensional depth map by projecting all the point clouds onto a two-dimensional plane through a projection formula is as follows:
through the projection formula Xc = Xw*fx/Zw + u; Yc = Yw*fy/Zw + v, all point clouds are projected onto a two-dimensional plane, wherein fx, fy, u and v are fixed internal references of the 3D camera, (Xw, Yw, Zw) are the coordinates of the point clouds in the world coordinate system, (Xc, Yc) are the corresponding coordinates on the two-dimensional plane, w (w ≥ 1) indexes the point clouds, and c (c ≥ 1) indexes the coordinate points projected onto the two-dimensional plane.
As a further scheme of the invention: the specific mode for converting the four points of the minimum circumscribed rectangle corresponding to each material profile into the coordinates in the three-dimensional space is as follows:
firstly, selecting a minimum circumscribed rectangle corresponding to a material piece outline as a circumscribed rectangle of a target material piece, acquiring coordinates of four points of the circumscribed rectangle of the target material piece in a three-dimensional space through a conversion formula, and repeating the above mode to acquire the coordinates of the four points of the circumscribed rectangle of each material piece in a world coordinate system.
As a further scheme of the invention: the specific conversion formula for converting four points of the circumscribed rectangle of the target material piece into coordinates in the three-dimensional space is as follows:
Xw1=(Xc1-u)*Zw1/fx;
Yw1=(Yc1-v)*Zw1/fy;
Zw1=Zc1;
wherein c1 indexes the four points of the circumscribed rectangle of the target material piece (c ≥ 4 ≥ c1 ≥ 1, w ≥ 4 ≥ w1 ≥ 1), (Xc1, Yc1) is the position of a point of the circumscribed rectangle of the target material piece in the complete contour image coordinate system, and (Xw1, Yw1, Zw1) is the coordinate of that point in the world coordinate system.
As a further scheme of the invention: the specific mode for selecting the required material profile from the complete profile image through the area and perimeter information is as follows:
calculating the area and the perimeter of each material contour in the complete contour image, and selecting as the required material contours those whose contour area and contour perimeter fall within a preset area interval and a preset perimeter interval, the specific values of which are preset.
The invention has the beneficial effects that:
according to the invention, point clouds of the nonstandard parts stacked on the material table are captured and processed by a 3D camera, and the contour and shape information of each part is extracted from the point cloud data. All point clouds are converted to a two-dimensional plane through a projection formula; the image on the two-dimensional plane is convolved with four custom operators and the results are weighted and synthesized into a complete contour image; the required material contours are selected from the complete contour image by area, perimeter and similar information; and the four points of the minimum circumscribed rectangle corresponding to each material contour are converted into coordinates in three-dimensional space, giving the accurate position of each part in space and the accurate contour of each material piece. This reduces the distortion of the difference between the foreground and the background of the nonstandard-part image originally acquired by the 3D camera, facilitates separating and sorting the nonstandard parts, improves production efficiency, reduces the production error rate, and provides accurate data for subsequent production and processing operations.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of a method framework of the present invention;
fig. 2 is a schematic diagram of the present invention for weighting four images to be synthesized to obtain a complete contour image.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1-2, the invention discloses a 3D point cloud-based method for identifying nonstandard parts in the light steel industry, which comprises the following steps:
step one: placing various stacked parts of different types on a fixed material table, arranging a 3D camera above the fixed material table, and capturing point clouds of the parts of different types on the material table through the 3D camera;
it should be noted that a 3D camera with a large field of view is selected by default, so that after the camera is placed its field of view covers the whole material table;
step two: carrying out one-time 3D point cloud acquisition on a material table through a 3D camera, extracting the maximum plane in point cloud information by adopting a random sampling consistency algorithm, marking the maximum plane as a material table plane, regarding the material table plane as a plane for placing a material, acquiring all point clouds within a preset distance G above the material table plane, converting all acquired point clouds onto a two-dimensional plane through a projection formula, and further obtaining a projected two-dimensional depth map, wherein the specific mode for acquiring the projected two-dimensional depth map is as follows:
a1: acquiring fixed internal references fx, fy, u and v of the 3D camera;
a2: acquiring the coordinates, in the world coordinate system, of all point clouds within the preset distance G above the material table plane, and marking them as (Xw, Yw, Zw);
a3: combining the (Xw, Yw, Zw) of all point clouds with the fixed internal references fx, fy, u and v of the 3D camera, all point clouds are projected onto the two-dimensional plane through the projection formula to obtain their positions (Xc, Yc) in the two-dimensional plane coordinate system, the projection formula being:
Xc = Xw*fx/Zw + u;
Yc = Yw*fy/Zw + v;
meanwhile, the position depth of every point cloud is recorded through Zc = Zw, where w (w ≥ 1) indexes the point clouds and c (c ≥ 1) indexes the corresponding coordinate points after projection onto the two-dimensional plane;
when the camera captures a point cloud, it records the three-dimensional position of each point under the camera coordinate system; the Zc coordinate represents the distance between the corresponding point and the camera, i.e. the position depth of the point — the vertical distance between each point and the camera in three-dimensional space;
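Steps A1–A3 can be sketched directly from the formulas above. The following numpy fragment is an illustrative reading only (the helper name is invented, not part of the disclosure): each point (Xw, Yw, Zw) is mapped to pixel coordinates with the camera intrinsics, and its depth Zc = Zw is carried along.

```python
import numpy as np

def project_to_depth_map(points, fx, fy, u, v):
    """Project 3D points (Xw, Yw, Zw) onto the image plane using
    Xc = Xw*fx/Zw + u, Yc = Yw*fy/Zw + v, recording depth Zc = Zw."""
    Xw, Yw, Zw = points[:, 0], points[:, 1], points[:, 2]
    Xc = Xw * fx / Zw + u
    Yc = Yw * fy / Zw + v
    return np.stack([Xc, Yc, Zw], axis=1)   # rows of (Xc, Yc, depth)
```

For example, with fx = fy = 500 and principal point (u, v) = (320, 240), a point on the optical axis at depth 1 projects to (320, 240), as expected.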
step three: preprocessing the projected two-dimensional depth map to obtain the image to be processed, where the preprocessing includes mean filtering, binarization, closing and the like; these are existing, mature techniques, so their details are omitted. The preprocessing reduces the noise of the two-dimensional depth map and highlights the contours of the workpieces, so that accurate part contours are obtained;
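As an illustrative stand-in for this preprocessing (the closing operation is omitted here, and the function name is invented — a real pipeline would use library routines for all three steps), a 3x3 mean filter followed by binarization could be sketched as:

```python
import numpy as np

def preprocess_depth_map(depth, thresh):
    """3x3 mean filter, then binarize to 0/255 -- a minimal stand-in for
    the mean filtering / binarization preprocessing of step three."""
    h, w = depth.shape
    padded = np.pad(depth, 1, mode='edge')
    # 3x3 mean via nine shifted copies of the padded map
    mean = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return (mean > thresh).astype(np.uint8) * 255
```

The mean filter suppresses isolated depth noise before thresholding, which is the stated aim of the preprocessing.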
step four: convolving the image to be processed on the two-dimensional plane with four custom operators through filter2D to obtain four images to be synthesized, and weighting and synthesizing the four images through the addWeighted function to obtain a complete contour image;
the four custom operators are respectively:
operator 1: {-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1};
operator 2: {1, 0, -1}, {1, 0, -1}, {1, 0, -1};
operator 3: {-1, -1, -1}, {0, 0, 0}, {1, 1, 1};
operator 4: {1, 1, 1}, {0, 0, 0}, {-1, -1, -1};
it should be noted that these operators are all used for convolution of the image to be processed; convolution is a filter-based image processing technique that obtains a new image by multiplying each pixel neighborhood of the image with the filter and summing the products;
the specific using method comprises the following steps: for each operator, carrying out pixel-by-pixel convolution operation on the operator and the image to be processed, for each pixel position in the image to be processed, carrying out product sum and summation on the operator and pixel values around the corresponding position to obtain a new pixel value, wherein the new pixel value can be used for enhancing certain characteristics of the image or extracting required image information, and carrying out convolution operation on the image is an existing and mature technology, so detailed operation methods are not described herein, and the operator is used for carrying out convolution operation, so that the information of the horizontal and vertical edges of the image to be processed can be enhanced, and the contour characteristics of each material can be extracted better;
step five: selecting the required material contours from the complete contour image through area and perimeter information, acquiring the minimum circumscribed rectangle of each material contour, i.e. the smallest rectangular frame that can surround the contour, and converting the positions of the four points of each minimum circumscribed rectangle in the complete contour image coordinate system into coordinates in three-dimensional space, thereby reducing the precision loss of the two-dimensional image;
the specific mode for converting the four points of the minimum circumscribed rectangle corresponding to each material profile into the coordinates in the three-dimensional space is as follows:
s1: firstly, selecting a minimum circumscribed rectangle corresponding to a material piece outline as a circumscribed rectangle of a target material piece, and acquiring coordinates of four points of the circumscribed rectangle of the target material piece in a three-dimensional space through the following conversion formula:
Xw1=(Xc1-u)*Zw1/fx;
Yw1=(Yc1-v)*Zw1/fy;
Zw1=Zc1;
wherein fx, fy, u and v are internal parameters of the camera, c1 indexes the four points of the circumscribed rectangle of the target material piece (c ≥ 4 ≥ c1 ≥ 1, w ≥ 4 ≥ w1 ≥ 1), (Xc1, Yc1) is the position of a point of the circumscribed rectangle of the target material piece in the complete contour image coordinate system, (Xw1, Yw1, Zw1) is the coordinate of that point in the world coordinate system, and the value of Zw1 is obtained from step A3;
s2: the step S1 is repeated to obtain coordinates of four points of each material circumscribed rectangle in a world coordinate system;
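The S1 conversion formula inverts the step-A3 projection for each rectangle corner. A minimal sketch (helper name invented for illustration):

```python
def backproject(Xc1, Yc1, Zc1, fx, fy, u, v):
    """Recover world coordinates of a rectangle corner from its image
    position (Xc1, Yc1) and recorded depth Zc1, per the S1 formulas:
    Xw1 = (Xc1-u)*Zw1/fx, Yw1 = (Yc1-v)*Zw1/fy, Zw1 = Zc1."""
    Zw1 = Zc1
    Xw1 = (Xc1 - u) * Zw1 / fx
    Yw1 = (Yc1 - v) * Zw1 / fy
    return Xw1, Yw1, Zw1
```

Applying this to a projected point round-trips it back to its world coordinates, which is exactly the precision-preserving conversion step five aims at.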
the specific mode for selecting the required material profile from the complete profile image through the area and perimeter information is as follows:
calculating the area and the perimeter of each material contour in the complete contour image, and, according to the preset area interval [M1, M2] and the preset perimeter interval [L1, L2], selecting as the required material contours those whose area and perimeter fall within these intervals. The area of each contour can be estimated from the number of pixels it encloses and the perimeter from the number of edge pixels on the contour; M1, M2, L1 and L2 are preset values to be determined by the relevant staff according to actual requirements. This makes it possible to select from the complete contour image the required material contours that meet the conditions;
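The interval screening above reduces to a simple filter. For illustration (function name invented; with OpenCV the (area, perimeter) pairs would come from cv2.contourArea and cv2.arcLength):

```python
def screen_contours(contours, area_iv, perim_iv):
    """Keep only the contours whose area and perimeter fall inside the
    preset intervals [M1, M2] and [L1, L2] of step five. Each contour
    is represented here as an (area, perimeter) pair."""
    (M1, M2), (L1, L2) = area_iv, perim_iv
    return [c for c in contours
            if M1 <= c[0] <= M2 and L1 <= c[1] <= L2]
```

Contours that are too small (noise) or too large (the table edge) fall outside the intervals and are discarded.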
step six: using spatial pass-through (direct) filtering, the range corresponding to each material piece is obtained from the coordinates of the four points of its circumscribed rectangle in three-dimensional space, and the point cloud data of each material piece is cropped out according to that range. The maximum plane in the point cloud data of each material piece is then found by the random sampling consistency algorithm and taken as the material piece plane, which represents the surface shape of the piece. The point cloud of each material piece plane is projected onto the two-dimensional plane in the same manner as in step two, yielding a two-dimensional depth map of each material piece after projection. This projection gives the accurate contour of each material piece, representing its shape and edges more accurately than the earlier processing;
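The pass-through cropping of step six keeps only the points inside the range spanned by a piece's rectangle corners. A minimal sketch (function name and the optional margin are illustrative assumptions; the patent only states that each piece's point cloud is cut out by its rectangle's range):

```python
import numpy as np

def passthrough_crop(points, rect_corners_3d, margin=0.0):
    """Spatial pass-through filter: keep points whose (x, y) fall inside
    the axis-aligned range spanned by the four circumscribed-rectangle
    corners (world coordinates)."""
    corners = np.asarray(rect_corners_3d, dtype=float)[:, :2]
    lo = corners.min(axis=0) - margin
    hi = corners.max(axis=0) + margin
    xy = points[:, :2]
    mask = np.all((xy >= lo) & (xy <= hi), axis=1)
    return points[mask]
```

Each cropped sub-cloud is then fed to the RANSAC plane search and re-projected as in step two to obtain the piece's accurate contour.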
the method comprises the steps of carrying out point cloud capturing and processing on nonstandard parts stacked on a material table through a 3D camera, extracting contour and shape information of each part from point cloud data, converting all acquired point clouds to a two-dimensional plane through a projection formula through fixed internal parameters of the camera, convolving images on the two-dimensional plane through four custom operators to obtain four images to be synthesized, carrying out weighted synthesis on the four images to be synthesized to obtain a complete contour image, selecting a required material contour from the complete contour image through information such as area, perimeter and the like, acquiring a minimum external rectangle corresponding to each material contour, converting positions of four points of the minimum external rectangle in a two-dimensional image coordinate system into coordinates in a three-dimensional space to obtain accurate positions of the parts in the space, converting the points on the two-dimensional image into actual coordinates in the three-dimensional space to obtain accurate positions of the parts in the space, and carrying out accurate contour of each material, facilitating separation and material dividing operation on nonstandard parts, improving production efficiency, reducing production errors, providing accurate data and realizing accurate production errors and accurate data dividing and processing of the non-standard parts, and realizing the fact that the error D can exist between the accurate data and the non-standard images and the accurate data of the non-standard parts can be obtained.
Example two
As the second embodiment of the present invention, the technical solution of this embodiment differs from Embodiment 1 only in the following;
in the implementation, a convex hull is extracted by a rolling method, and for each material contour four straight lines of the contour edge are fitted by random sampling;
the minimum circumscribed rectangle corresponding to a material contour is selected as the target rectangular contour, the convex hull is extracted by the rolling method, and four straight lines are then fitted to the edge of the target rectangular contour by random sampling;
wherein the straight line equation 1 is:
(XB-XB1)/m1=(YB-YB1)/n1=(ZB-ZB1)/p1;
wherein m1, n1 and p1 are constants, and (XB1, YB1, ZB1) is any point on the straight line;
the straight line equation 2 is:
(XB-XB2)/m2=(YB-YB2)/n2=(ZB-ZB2)/p2;
wherein m2, n2 and p2 are constants, and (XB2, YB2, ZB2) is any point on the straight line;
The parametric equations of line 1 are:
XB = XB1 + m1*t;
YB = YB1 + n1*t;
ZB = ZB1 + p1*t;
substituting the parametric equations of line 1 into the equation of line 2 gives:
(XB1+m1*t-XB2)/m2=(YB1+n1*t-YB2)/n2=(ZB1+p1*t-ZB2)/p2;
and then obtaining:
t = [m2*(YB1-YB2) - n2*(XB1-XB2)] / (m1*n2 - m2*n1);
then t is substituted into the equation of line 1 to calculate the intersection point of lines 1 and 2, the coordinates of which are: (XB1 + m1*t, YB1 + n1*t, ZB1 + p1*t);
then the above steps are repeated to obtain, in turn, the spatial coordinates of the four points of the target rectangular contour, thereby converting the four points of the minimum circumscribed rectangle corresponding to the material piece into coordinates in three-dimensional space;
by defining the three direction parameters, expressing the two given straight-line equations with them, setting the equations equal and solving for the parameter t, and finally substituting t into either straight-line equation, the intersection point coordinates are obtained; repeating these steps yields in turn the spatial coordinates of the four points of the target rectangular contour, completing the conversion of the four points of the minimum circumscribed rectangle corresponding to the material piece into coordinates in three-dimensional space.
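The intersection computation of Embodiment 2 can be sketched as follows (illustrative function name; the formula for t is the one derived above, and it assumes the two lines are coplanar and non-parallel, i.e. m1*n2 - m2*n1 is non-zero):

```python
def line_intersection(P1, d1, P2, d2):
    """Intersection of line 1 (point P1 = (XB1, YB1, ZB1), direction
    d1 = (m1, n1, p1)) and line 2 (point P2, direction d2 = (m2, n2, p2)),
    using t = [m2*(YB1-YB2) - n2*(XB1-XB2)] / (m1*n2 - m2*n1)."""
    m1, n1, p1 = d1
    m2, n2, p2 = d2
    t = (m2 * (P1[1] - P2[1]) - n2 * (P1[0] - P2[0])) / (m1 * n2 - m2 * n1)
    # substitute t back into the parametric equations of line 1
    return (P1[0] + m1 * t, P1[1] + n1 * t, P1[2] + p1 * t)
```

Running this for each pair of adjacent fitted edge lines yields the four corner points of the target rectangular contour in turn.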
Example III
As the third embodiment of the present invention, the technical solution of this embodiment combines the solutions of Embodiment 1 and Embodiment 2 above.
The above formulas are all dimensionless numerical formulas; they are the formulas closest to the real situation, obtained by software simulation from a large amount of collected data, and the preset parameters and threshold choices in the formulas are set by those skilled in the art according to the actual situation.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. The method for identifying the nonstandard parts in the light steel industry based on the 3D point cloud is characterized by comprising the following steps of:
step one: placing various stacked parts of different types on a fixed material table, and carrying out one-time 3D point cloud acquisition on the material table through a 3D camera;
step two: extracting a maximum plane in point cloud information by adopting a random sampling consistency algorithm to serve as a material table plane, then acquiring all point clouds within a preset distance above the plane, and projecting all point clouds onto a two-dimensional plane through a projection formula to acquire a two-dimensional depth map;
step three: preprocessing the projected two-dimensional depth map to obtain an image to be processed;
step four: carrying out convolution operation on the image to be processed by using four custom operators through a Filter2D to obtain four images to be synthesized, and carrying out weighted synthesis on the four images to be synthesized through an AddWei ghted function to obtain a complete contour image;
step five: and screening out the required material contour from the complete contour image according to the area and circumference information, solving the minimum circumscribed rectangle of each material contour, and converting the position of four points of the minimum circumscribed rectangle corresponding to each material contour in the complete contour image coordinate system into coordinates in a three-dimensional space.
Step six: cropping out the point cloud of each material piece by spatial pass-through filtering according to the range of its circumscribed rectangle in three-dimensional space, searching each cropped point cloud for the maximum plane with the random sampling consistency algorithm to obtain each material piece's plane, and projecting each material piece's plane onto the two-dimensional plane in the same manner as in step two, thereby obtaining the accurate profile of each material piece.
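As an illustrative sketch of step four and claim 8 (a pure-NumPy stand-in for OpenCV's filter2D and addWeighted; the helper names, zero padding, and equal synthesis weights are assumptions, not from the patent):

```python
import numpy as np

def filter2d(img, kernel):
    """Same-size 2D correlation with zero padding, mimicking cv2.filter2D."""
    k = np.asarray(kernel, float)
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    padded = np.pad(np.asarray(img, float), ((ph, ph), (pw, pw)))
    out = np.zeros(np.asarray(img).shape, dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

# The four custom operators from claim 8 (horizontal/vertical edge responses):
OPS = [
    [[-1, 0, 1]] * 3,                       # operator 1
    [[1, 0, -1]] * 3,                       # operator 2
    [[-1, -1, -1], [0, 0, 0], [1, 1, 1]],   # operator 3
    [[1, 1, 1], [0, 0, 0], [-1, -1, -1]],   # operator 4
]

def full_contour(img, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted synthesis of the four responses (the addWeighted step)."""
    responses = [np.abs(filter2d(img, op)) for op in OPS]
    return sum(w * r for w, r in zip(weights, responses))

binary = np.zeros((8, 8))
binary[2:6, 2:6] = 1.0          # a square "material piece" on the depth map
edges = full_contour(binary)    # responds on the contour, zero in the interior
```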
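Step six can likewise be sketched in NumPy, assuming a simple pass-through crop over the rectangle's x/y range and a basic random-sample-consensus plane search (function names and the synthetic cloud are illustrative, not from the patent):

```python
import numpy as np

def pass_through(points, x_rng, y_rng):
    """Crop the cloud to the circumscribed rectangle's x/y range
    (the spatial pass-through filtering of step six)."""
    m = ((points[:, 0] >= x_rng[0]) & (points[:, 0] <= x_rng[1]) &
         (points[:, 1] >= y_rng[0]) & (points[:, 1] <= y_rng[1]))
    return points[m]

def ransac_plane(points, n_iter=200, tol=0.01, seed=0):
    """Random-sample-consensus search for the maximum plane: repeatedly fit
    a plane to 3 random points and keep the one with the most inliers."""
    rng = np.random.default_rng(seed)
    best_mask = None
    for _ in range(n_iter):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-9:       # skip degenerate samples
            continue
        n = n / np.linalg.norm(n)
        mask = np.abs((points - a) @ n) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask = mask
    return points[best_mask]

# Synthetic scene: a table plane at z = 0 with a part lying on it at z = 0.05.
data = np.random.default_rng(1)
table = np.column_stack([data.uniform(0, 1, 300), data.uniform(0, 1, 300), np.zeros(300)])
part = np.column_stack([data.uniform(0.4, 0.6, 100), data.uniform(0.4, 0.6, 100), np.full(100, 0.05)])
cloud = np.vstack([table, part])

cropped = pass_through(cloud, (0.4, 0.6), (0.4, 0.6))  # rectangle range in 3D
plane = ransac_plane(cropped)                          # the material piece plane
```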
2. The 3D point cloud-based recognition method for nonstandard parts in the light steel industry according to claim 1, characterized in that the specific steps of projecting all point clouds onto a two-dimensional plane through the projection formula to obtain a two-dimensional depth map are:
all point clouds are projected onto a two-dimensional plane through the projection formula: Xc = u + Xw*fx/Zw; Yc = v + Yw*fy/Zw; wherein fx, fy, u and v are fixed intrinsic parameters of the 3D camera, (Xw, Yw, Zw) are the coordinates of the point cloud points in the world coordinate system, (Xc, Yc) are the corresponding coordinates on the two-dimensional plane, w is the index of the point cloud points with w not less than 1, c is the index of the coordinate points projected onto the two-dimensional plane with c not less than 1, and the position depth of every point cloud point is simultaneously recorded through Zw = Zc.
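A minimal sketch of the claim-2 projection; the intrinsic values fx, fy, u, v below are placeholders, not the patent's calibration:

```python
import numpy as np

def project_to_depth_map(cloud, fx, fy, u, v):
    """Project world points (Xw, Yw, Zw) to pixel coordinates with the
    claim-2 formula Xc = u + Xw*fx/Zw, Yc = v + Yw*fy/Zw, while recording
    each point's position depth through Zc = Zw."""
    Xw, Yw, Zw = cloud[:, 0], cloud[:, 1], cloud[:, 2]
    Xc = u + Xw * fx / Zw
    Yc = v + Yw * fy / Zw
    return np.column_stack([Xc, Yc, Zw])   # rows of (Xc, Yc, Zc)

pts = np.array([[0.1, -0.2, 1.0],
                [0.5,  0.5, 2.0]])
proj = project_to_depth_map(pts, fx=600.0, fy=600.0, u=320.0, v=240.0)
```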
3. The method for identifying nonstandard parts in the light steel industry based on 3D point cloud according to claim 1, wherein the preprocessing operation in step three comprises mean filtering, binarization and a closing operation.
4. The 3D point cloud-based recognition method for nonstandard parts in the light steel industry according to claim 1, characterized in that the specific manner of converting the four points of the minimum circumscribed rectangle corresponding to each material contour into coordinates in three-dimensional space is as follows:
first, the minimum circumscribed rectangle corresponding to one material piece contour is selected as the circumscribed rectangle of the target material piece, and the coordinates of its four points in three-dimensional space are obtained through a conversion formula; this procedure is repeated to obtain the coordinates of the four points of each material piece's circumscribed rectangle in the world coordinate system.
5. The 3D point cloud-based recognition method for nonstandard parts in the light steel industry according to claim 4, characterized in that the specific conversion formula for converting the four points of the target material piece's circumscribed rectangle into coordinates in three-dimensional space is:
Xw1 = (Xc1 - u) * Zw1 / fx;
Yw1 = (Yc1 - v) * Zw1 / fy;
Zw1 = Zc1;
wherein c1 indexes the four points of the circumscribed rectangle of the target material piece, with c ≥ 4 ≥ c1 ≥ 1 and w ≥ 4 ≥ w1 ≥ 1; (Xc1, Yc1) is the position of each of the four points of the target material piece's circumscribed rectangle in the complete contour image coordinate system, and (Xw1, Yw1, Zw1) is the corresponding coordinate of that point in the world coordinate system.
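A minimal sketch of the claim-5 back-projection of one rectangle corner (helper name and intrinsic values are illustrative):

```python
import numpy as np

def pixel_to_world(Xc1, Yc1, Zc1, fx, fy, u, v):
    """Claim-5 conversion of a rectangle corner from the complete contour
    image coordinate system back to the world coordinate system:
    Xw1 = (Xc1-u)*Zw1/fx, Yw1 = (Yc1-v)*Zw1/fy, Zw1 = Zc1."""
    Zw1 = Zc1
    return np.array([(Xc1 - u) * Zw1 / fx,
                     (Yc1 - v) * Zw1 / fy,
                     Zw1])

# Inverts the claim-2 projection: pixel (380, 120) at depth 1.0
world_pt = pixel_to_world(380.0, 120.0, 1.0, fx=600.0, fy=600.0, u=320.0, v=240.0)
```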
6. The 3D point cloud-based method for identifying nonstandard parts in the light steel industry according to claim 1, wherein the specific manner of screening out the required material contours from the complete contour image through area and perimeter information is as follows:
calculating the area and the perimeter of each material contour in the complete contour image, and screening out those contours whose area and perimeter fall within a preset area interval and a preset perimeter interval as the required material contours, wherein the specific values of the preset area interval and the preset perimeter interval are preset.
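An illustrative sketch of the claim-6 screening; computing area by the shoelace formula and perimeter as the sum of edge lengths is one possible choice (the patent itself leaves the computation to pixel counting per claim 7), and the function names and intervals are hypothetical:

```python
import numpy as np

def shoelace_area(contour):
    """Polygon area of a closed contour given as an (N, 2) point array."""
    x, y = contour[:, 0], contour[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def perimeter(contour):
    """Sum of the edge lengths around the closed contour."""
    diffs = np.roll(contour, -1, axis=0) - contour
    return float(np.sum(np.linalg.norm(diffs, axis=1)))

def screen_contours(contours, area_rng, perim_rng):
    """Keep only contours whose area and perimeter both fall inside the
    preset intervals (the claim-6 screening step)."""
    return [c for c in contours
            if area_rng[0] <= shoelace_area(c) <= area_rng[1]
            and perim_rng[0] <= perimeter(c) <= perim_rng[1]]

square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], float)  # area 100, perimeter 40
speck = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)       # noise: area 1, perimeter 4
kept = screen_contours([square, speck], area_rng=(50, 200), perim_rng=(20, 100))
```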
7. The method for identifying nonstandard parts in the light steel industry based on 3D point cloud according to claim 6, wherein the area of each material contour in the complete contour image can be estimated from the number of pixels enclosed by the contour, and the perimeter can be calculated from the number of edge pixels on the contour.
8. The 3D point cloud-based recognition method for nonstandard parts in the light steel industry according to claim 1, characterized in that the four custom operators are respectively:
operator 1: {-1,0,1}, {-1,0,1}, {-1,0,1};
operator 2: {1,0,-1}, {1,0,-1}, {1,0,-1};
operator 3: {-1,-1,-1}, {0,0,0}, {1,1,1};
operator 4: {1,1,1}, {0,0,0}, {-1,-1,-1}.
CN202311724140.9A 2023-12-14 2023-12-14 3D point cloud-based recognition method for nonstandard parts in light steel industry Active CN117710396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311724140.9A CN117710396B (en) 2023-12-14 2023-12-14 3D point cloud-based recognition method for nonstandard parts in light steel industry

Publications (2)

Publication Number Publication Date
CN117710396A true CN117710396A (en) 2024-03-15
CN117710396B CN117710396B (en) 2024-06-14

Family

ID=90160128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311724140.9A Active CN117710396B (en) 2023-12-14 2023-12-14 3D point cloud-based recognition method for nonstandard parts in light steel industry

Country Status (1)

Country Link
CN (1) CN117710396B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108109174A (en) * 2017-12-13 2018-06-01 上海电气集团股份有限公司 A kind of robot monocular bootstrap technique sorted at random for part at random and system
CN108555908A (en) * 2018-04-12 2018-09-21 同济大学 A kind of identification of stacking workpiece posture and pick-up method based on RGBD cameras
CN109903327A (en) * 2019-03-04 2019-06-18 西安电子科技大学 A kind of object dimension measurement method of sparse cloud
CN112070838A (en) * 2020-09-07 2020-12-11 洛伦兹(北京)科技有限公司 Object identification and positioning method and device based on two-dimensional-three-dimensional fusion characteristics
CN112102397A (en) * 2020-09-10 2020-12-18 敬科(深圳)机器人科技有限公司 Method, equipment and system for positioning multilayer part and readable storage medium
CN113192054A (en) * 2021-05-20 2021-07-30 清华大学天津高端装备研究院 Method and system for detecting and positioning complex parts based on 2-3D vision fusion
DE102021103726A1 (en) * 2020-03-13 2021-09-16 Omron Corporation Measurement parameter optimization method and device as well as computer control program
CN114140486A (en) * 2021-12-09 2022-03-04 易思维(杭州)科技有限公司 Point cloud depth obtaining method based on normal projection
CN115187556A (en) * 2022-07-19 2022-10-14 中航沈飞民用飞机有限责任公司 Method for positioning parts and acquiring point cloud on production line based on machine vision
CN116385356A (en) * 2023-02-17 2023-07-04 中国重汽集团济南动力有限公司 Method and system for extracting regular hexagonal hole features based on laser vision
CN116778288A (en) * 2023-06-19 2023-09-19 燕山大学 Multi-mode fusion target detection system and method
CN116843631A (en) * 2023-06-20 2023-10-03 安徽工布智造工业科技有限公司 3D visual material separating method for non-standard part stacking in light steel industry

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Qingda; QUAN Yanming: "Depth image point cloud segmentation using spatial projection", Acta Optica Sinica, no. 18, 31 December 2020 (2020-12-31) *

Also Published As

Publication number Publication date
CN117710396B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN107818303B (en) Unmanned aerial vehicle oil and gas pipeline image automatic contrast analysis method, system and software memory
CN110717872B (en) Method and system for extracting characteristic points of V-shaped welding seam image under laser-assisted positioning
CN109543665B (en) Image positioning method and device
CN109911481B (en) Cabin frame target visual identification and positioning method and system for metallurgical robot plugging
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN106952262B (en) Ship plate machining precision analysis method based on stereoscopic vision
CN114677601A (en) Dam crack detection method based on unmanned aerial vehicle inspection and combined with deep learning
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN109781737A (en) A kind of detection method and its detection system of hose surface defect
CN113822810A (en) Method for positioning workpiece in three-dimensional space based on machine vision
CN111652844B (en) X-ray defect detection method and system based on digital image region growing
CN111340833B (en) Power transmission line extraction method for least square interference-free random Hough transformation
CN111046782B (en) Quick fruit identification method for apple picking robot
Chaloeivoot et al. Building detection from terrestrial images
CN117710396B (en) 3D point cloud-based recognition method for nonstandard parts in light steel industry
CN114241436A (en) Lane line detection method and system for improving color space and search window
CN113570587A (en) Photovoltaic cell broken grid detection method and system based on computer vision
CN110899147B (en) Laser scanning-based online stone sorting method for conveyor belt
CN110349129B (en) Appearance defect detection method for high-density flexible IC substrate
CN116645418A (en) Screen button detection method and device based on 2D and 3D cameras and relevant medium thereof
CN116797757A (en) Urban building group data acquisition method based on live-action three-dimension
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
CN113744263B (en) Method for rapidly detecting surface defects of small-size mosaic ceramic
CN115187556A (en) Method for positioning parts and acquiring point cloud on production line based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant