CN115526851A - Weld joint detection method based on 4D space-time convolution model - Google Patents

Weld joint detection method based on 4D space-time convolution model

Info

Publication number
CN115526851A
CN115526851A (application CN202211150681.0A)
Authority
CN
China
Prior art keywords
space
dimensional
point cloud
data
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211150681.0A
Other languages
Chinese (zh)
Inventor
蒋琦
朱勐
陈西北
王君侠
焦俊勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yixing Zijia Intelligent Technology Co ltd
Original Assignee
Yixing Zijia Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yixing Zijia Intelligent Technology Co ltd filed Critical Yixing Zijia Intelligent Technology Co ltd
Priority to CN202211150681.0A
Publication of CN115526851A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30152Solder
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a weld joint detection method based on a 4D space-time convolution model. A convolutional neural network with four dimensions is established, and the offset set of the generalized sparse convolution is used to form the basic framework of a hybrid kernel. A high-dimensional network is built with generalized sparse convolution, semantic segmentation is performed, a high-dimensional conditional random field is created, a data set is produced, the three-dimensional point cloud data are denoised, and the acquired original point cloud data are converted into a sparse tensor. The beneficial effect of the invention is that the point-cloud-based deep learning model is trained end to end on multi-layer multi-pass welds: training samples are fed to the input of the model for learning, the features of multi-layer multi-pass welds in the groove are extracted, and the error between the training samples and the ground truth is reduced by adjusting the convolution kernels and applying a conditional random field and a high-dimensional convolution kernel in the segmentation module, so that the model identifies the weld seams of multi-layer multi-pass welding more accurately.

Description

Weld joint detection method based on 4D space-time convolution model
Technical Field
The invention relates to the technical field of weld joint detection, in particular to a weld joint detection method based on a 4D space-time convolution model.
Background
Traditional point cloud processing algorithms can only handle groove-weld workpieces with few welding layers and simple weld seam shapes; when faced with workpieces whose weld regions are complex and whose weld features are not obvious, they are prone to over-segmentation or under-segmentation, which does not meet current complex welding requirements. When a traditional point cloud extraction algorithm faces multi-layer multi-pass welding, the extracted region is incomplete and therefore cannot fully reflect the features of the weld seam. Meanwhile, conventional unsupervised point cloud segmentation algorithms are time-consuming when faced with large batches of data.
Disclosure of Invention
The invention aims to provide a weld joint detection method based on a 4D space-time convolution model, so as to solve the problems in the background technology.
In order to achieve the purpose, the invention provides the following technical scheme: a weld joint detection method based on a 4D space-time convolution model comprises the following steps:
Step one: on the basis of the three-dimensional point cloud data, add a dimension representing time and establish a convolutional neural network with four dimensions;
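By way of illustration only (the patent discloses no code), the following Python sketch shows one way of forming the 4D input of step one by appending a scan-time index to each 3D scan; the function name `add_time_dimension`, the array shapes, and the use of NumPy are assumptions of this sketch.

```python
import numpy as np

def add_time_dimension(scans):
    """Stack per-pass 3D scans into one 4D point set with columns (x, y, z, t).

    `scans` is assumed to be a list of (N_i, 3) float arrays, one per scan time.
    """
    stamped = []
    for t, pts in enumerate(scans):
        t_col = np.full((pts.shape[0], 1), t, dtype=pts.dtype)
        stamped.append(np.hstack([pts, t_col]))  # append the time index as a 4th column
    return np.vstack(stamped)

# usage sketch: two scans taken after two successive weld passes
scan_pass1 = np.random.rand(1000, 3)
scan_pass2 = np.random.rand(1200, 3)
coords_4d = add_time_dimension([scan_pass1, scan_pass2])  # shape (2200, 4)
```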
Step two: use a hybrid kernel as the convolution kernel, computed from the offset set of the generalized sparse convolution, to form the basic framework of the hybrid kernel; the hybrid kernel is a combination of a cross kernel and a hypercube kernel, where the hypercube kernel covers the spatial dimensions and is used to capture the geometric features of the space, and the cross kernel covers the time dimension and is used to connect the same spatial point across different time periods;
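As a minimal sketch of the hybrid-kernel idea (not the patent's exact definition), the offsets can be enumerated as a full hypercube over the spatial dimensions plus a cross over the time dimension; the kernel sizes and the ordering of offsets below are assumptions.

```python
import itertools
import numpy as np

def hybrid_kernel_offsets(spatial_size=3, time_size=3):
    """Enumerate offsets of a hybrid kernel: hypercube over (x, y, z), cross over t."""
    r_s, r_t = spatial_size // 2, time_size // 2
    # hypercube part: the full spatial neighbourhood at time offset 0
    cube = [(dx, dy, dz, 0)
            for dx, dy, dz in itertools.product(range(-r_s, r_s + 1), repeat=3)]
    # cross part: pure time offsets at the spatial centre (same point, other times)
    cross = [(0, 0, 0, dt) for dt in range(-r_t, r_t + 1) if dt != 0]
    return np.array(cube + cross, dtype=np.int64)

offsets = hybrid_kernel_offsets()
print(offsets.shape)  # (29, 4): 27 hypercube offsets plus 2 temporal cross offsets
```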
Step three: create a high-dimensional network with generalized sparse convolution: modify the original two-dimensional convolutional network ResNet, add several strided sparse convolutions and transposed convolutions to the basic residual network, retain the skip-connection structure of the residual network, and use U-Net-style feature fusion between modules;
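The sketch below shows one possible generalized-sparse-convolution residual block for step three, assuming the open-source MinkowskiEngine library as the sparse-convolution backend (the patent does not name a library); the block and its hyperparameters are illustrative assumptions, not the disclosed architecture.

```python
import torch.nn as nn
import MinkowskiEngine as ME  # assumed sparse-convolution backend; not named in the patent

class SparseResidualBlock(nn.Module):
    """Minimal 4D sparse residual block sketch (channel count kept constant)."""
    def __init__(self, channels, dimension=4):
        super().__init__()
        self.conv1 = ME.MinkowskiConvolution(channels, channels, kernel_size=3, dimension=dimension)
        self.bn1 = ME.MinkowskiBatchNorm(channels)
        self.conv2 = ME.MinkowskiConvolution(channels, channels, kernel_size=3, dimension=dimension)
        self.bn2 = ME.MinkowskiBatchNorm(channels)
        self.relu = ME.MinkowskiReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection retained from the residual network
```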
Step four: perform semantic segmentation experiments with several variants of the same architecture, and refine the semantic segmentation results with a high-dimensional conditional random field;
Step five: the high-dimensional conditional random field prevents information from leaking into different regions by creating gaps between spatially adjacent points of different colors. Let a node of the conditional random field in the seven-dimensional space be $x_i$, let its unary potential be $\varphi_u(x_i)$, and let the pairwise potential between adjacent points be $\varphi_p(x_i, x_j)$; the conditional random field on this domain can then be defined as:

$$P(X)=\frac{1}{Z}\exp\sum_{i}\Big(\varphi_u(x_i)+\sum_{j\in\mathcal{N}}\varphi_p(x_i,x_j)\Big)$$

where $Z$ is the partition function (a normalization factor) representing the sum over the potential-function states of all nodes; $X$ is the set of all point cloud points in the space; $\mathcal{N}$ denotes the neighborhood in the space; $P$ is the probability of the random field; the pairwise potential $\varphi_p$ must satisfy the stationarity condition $\varphi_p(u,v)=\varphi_p(u+\tau_u,\,v+\tau_v)$ with $\tau_u,\tau_v\in\mathbb{R}^D$, where $\mathbb{R}$ denotes the real numbers and $\mathbb{R}^D$ the $D$-dimensional real space; $(u,v)$ denotes a pair of nodes, $\varphi_p(u,v)$ the pairwise potential of that pair, and $\tau_u$, $\tau_v$ are node increments.
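A small Python sketch of the un-normalized log-potential in the formula above follows; how the 7D neighborhood pairs are built and the helper names are assumptions added for illustration.

```python
import numpy as np

def crf_log_potential(unary, pairwise, neighbor_pairs):
    """Un-normalised log-potential of the conditional random field defined above.

    unary:          (N,) array holding phi_u(x_i) for every node
    pairwise:       callable phi_p(i, j) for a pair of node indices
    neighbor_pairs: iterable of (i, j) pairs inside the 7D (xyz, t, rgb)
                    neighbourhood; building that neighbourhood is an assumption
                    of this sketch, not part of the patent text.
    """
    energy = float(np.sum(unary))
    for i, j in neighbor_pairs:
        energy += pairwise(i, j)
    # P(X) = exp(energy) / Z, with Z the partition function summing over all states
    return energy
```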
Step six: manufacturing a data set, namely scanning the welding seam of the current pass by using a three-dimensional scanner after the welding seam is welded, and acquiring three-dimensional point cloud data of the current welding seam;
Step seven: denoise the three-dimensional point cloud data; mark the contour of the added-material region in the denoised point cloud with point cloud processing software to produce the ground-truth values for the network model; to obtain a complete and smooth feature-extraction line, unpack the packaged point cloud data and extract the contour line of the welding region with a boundary-creation tool in the three-dimensional point cloud processing software as the feature of the input data;
step eight: carrying out sparse tensor processing on the acquired original point cloud data, and finally obtaining a sparse tensor containing all original data coordinates and characteristics thereof;
step nine: and taking the sparse tensor as the input of the 4D space-time convolution model to obtain the 3D segmentation result of the welding seam.
Further, in the first step, the training samples are sent to the input end of the model for learning, the learning rate is set to be 0.1-0.3, and the generalization capability of the network is improved by random scaling, rotation around a gravity axis, spatial translation, spatial elastic distortion, and chrominance translation and jitter.
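For illustration, a Python sketch of the listed augmentations follows; the numeric ranges are assumptions, and spatial elastic distortion is omitted for brevity.

```python
import numpy as np

def augment(xyz, rgb, scale_range=(0.9, 1.1), max_shift=5.0, jitter=0.02):
    """Apply the augmentations named above to one training sample.

    `xyz` is the (N, 3) spatial part of the coordinates and `rgb` the (N, 3)
    colors in [0, 1]; all numeric ranges here are illustrative assumptions.
    """
    xyz = xyz * np.random.uniform(*scale_range)                  # random scaling
    theta = np.random.uniform(0.0, 2.0 * np.pi)                  # rotation about the gravity (z) axis
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    xyz = xyz @ rot.T
    xyz = xyz + np.random.uniform(-max_shift, max_shift, size=(1, 3))  # spatial translation
    rgb = np.clip(rgb + np.random.uniform(-jitter, jitter, size=rgb.shape), 0.0, 1.0)  # chroma jitter
    return xyz, rgb
```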
Furthermore, in the third step, for the initial convolutional layer, the original 7 × 7 convolutional kernel is replaced by a sparse convolutional kernel of 5 × 5 × 5 × 1, and for the rest of the network, the same structure as the original residual network is maintained.
Furthermore, in step four, the same architecture used is embodied as the ResNet structure, and the variants represent combinations of different hypercube kernels and cross kernels.
Further, in step five, the high-dimensional conditional random field includes seven dimensions of spatial coordinates xyz, time t and color space rgb.
Furthermore, in the sixth step, in order to test how well the network extracts multi-layer multi-pass welds, the weldment used has two welding layers and two weld passes, and the scanner scans the seam of each pass immediately after that pass is finished.
Further, in the eighth step, the feature map is marked by manually setting colors so as to highlight the color difference between the original data and the feature contour lines; the input data are then augmented: the data are voxelized, flipped and transformed, and the colors are jitter-transformed; a sparse convolution network then compiles the processed, voxel-down-sampled point cloud into a sparse tensor: the coordinates of the point cloud data are quantized into a batch-processing format, an index is prepended to the point cloud coordinates to distinguish point clouds with different local properties, and a sparse tensor conversion function is called to convert the original floating-point coordinates into integer coordinates; the features of the data and the original coordinates are fused and integrated into one input variable, the input data are pruned while keeping the general structure of the point cloud, and finally a sparse tensor containing all the original data coordinates and their features is obtained.
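A minimal Python sketch of this quantization and batching step follows; the voxel size, the first-point-per-voxel down-sampling rule, and the function name are assumptions, since the patent only states that floating-point coordinates are converted to integers and fused with the features.

```python
import numpy as np

def to_sparse_tensor_inputs(coords, feats, voxel_size=0.5, batch_index=0):
    """Quantize float coordinates and build the batched sparse-tensor input."""
    int_coords = np.floor(coords / voxel_size).astype(np.int32)   # float -> integer voxel coords
    _, keep = np.unique(int_coords, axis=0, return_index=True)    # prune duplicates, keep structure
    int_coords, feats = int_coords[keep], feats[keep]
    batch_col = np.full((int_coords.shape[0], 1), batch_index, dtype=np.int32)
    batched_coords = np.hstack([batch_col, int_coords])           # prepend the batch index
    return batched_coords, feats                                  # pair fed to a sparse-tensor constructor
```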
Compared with the prior art, the invention has the beneficial effects that:
1. To address the fact that three-dimensional point cloud data are high-dimensional and difficult to process, the input point cloud data are converted into a sparse tensor fused with their own features; compared with the original data the number of parameters is reduced, and at the same time a computational method is provided for extracting three-dimensional and even higher-dimensional data;
2. To address the problem that the traditional cross-entropy loss function cannot learn the relationship between adjacent points, a stationary conditional random field is added and extended from space to the time and color spaces, realizing fine-grained segmentation and improving accuracy;
3. The contour features of the target region are obtained, and the extracted features are colored through the related point cloud processing. During the experiments, the performance of the network is tuned by changing the shape of the convolution kernel and by replacing the loss function with a conditional random field; the improved model is evaluated by intersection-over-union and accuracy, and the method achieves good segmentation on the data set, reaching the best F1-score for both single-layer single-pass and multi-layer multi-pass welding.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 illustrates the morphology of various convolution kernels of the present invention in the time and space domains;
FIG. 2 is a comparison between the residual network-based sparse convolution network and the residual network ResNet18 according to the present invention;
FIG. 3 is a diagram of a UNet 32-based sparse convolutional network architecture of the present invention;
FIG. 4 is a profile view of a standard drawing weld at different angular viewing angles in accordance with the present invention;
FIG. 5 is a diagram of the effect of the 3D MinkowskiNet18 of the present invention extracted at different angles;
FIG. 6 is a diagram of the extraction effect of the 4D Tesseract MinkowskiNet18 of the present invention at different angle views;
FIG. 7 shows the extraction effect of 4D MinkNet18+ TS-CRF of the invention at different angles;
FIG. 8 shows the extraction effect of 4D Tesseract MinkNet18+ TS-CRF of the present invention at different angles;
FIG. 9 is the actual value of the manually marked weld after segmentation according to the present invention;
FIG. 10 is a diagram of the extraction effect of 3D MinkowskiNet18 on the weld joint after segmentation;
FIG. 11 is a diagram showing the extraction effect of the 4D Tesseract MinkowskiNet18 of the divided weld joint of the present invention;
FIG. 12 is a diagram of the extraction effect of 4D MinkNet18+ TS-CRF on the weld joint after segmentation according to the present invention;
FIG. 13 shows the extraction effect of 4D Tesseract MinkNet18+ TS-CRF on the weld after segmentation.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 13, in an embodiment of the present invention, a weld detection method based on a 4D space-time convolution model is characterized by including the following steps:
Step one: on the basis of the three-dimensional point cloud data, add a dimension representing time and establish a convolutional neural network with four dimensions;
Step two: use a hybrid kernel as the convolution kernel, computed from the offset set of the generalized sparse convolution, to form the basic framework of the hybrid kernel; the hybrid kernel is a combination of a cross kernel and a hypercube kernel, where the hypercube kernel covers the spatial dimensions and is used to capture the geometric features of the space, and the cross kernel covers the time dimension and is used to connect the same spatial point across different time periods;
Step three: create a high-dimensional network with generalized sparse convolution: modify the original two-dimensional convolutional network ResNet, add several strided sparse convolutions and transposed convolutions to the basic residual network, and retain the skip-connection structure of the residual network, as shown in FIG. 2; feature fusion between modules is performed with U-Net;
Step four: perform semantic segmentation experiments with several variants of the same architecture, and refine the semantic segmentation results with a high-dimensional conditional random field;
Step five: the high-dimensional conditional random field prevents information from leaking into different regions by creating gaps between spatially adjacent points of different colors. Let a node of the conditional random field in the seven-dimensional space be $x_i$, let its unary potential be $\varphi_u(x_i)$, and let the pairwise potential between adjacent points be $\varphi_p(x_i, x_j)$; the conditional random field on this domain can then be defined as:

$$P(X)=\frac{1}{Z}\exp\sum_{i}\Big(\varphi_u(x_i)+\sum_{j\in\mathcal{N}}\varphi_p(x_i,x_j)\Big)$$

where $Z$ is the partition function (a normalization factor) representing the sum over the potential-function states of all nodes; $X$ is the set of all point cloud points in the space; $\mathcal{N}$ denotes the neighborhood in the space; $P$ is the probability of the random field; the pairwise potential $\varphi_p$ must satisfy the stationarity condition $\varphi_p(u,v)=\varphi_p(u+\tau_u,\,v+\tau_v)$ with $\tau_u,\tau_v\in\mathbb{R}^D$, where $\mathbb{R}$ denotes the real numbers and $\mathbb{R}^D$ the $D$-dimensional real space; $(u,v)$ denotes a pair of nodes, $\varphi_p(u,v)$ the pairwise potential of that pair, and $\tau_u$, $\tau_v$ are node increments.
Step six: manufacturing a data set, namely scanning the welding seam of the current pass by using a three-dimensional scanner after the welding seam is welded, and acquiring three-dimensional point cloud data of the current welding seam;
Step seven: denoise the three-dimensional point cloud data; mark the contour of the added-material region in the denoised point cloud with point cloud processing software to produce the ground-truth values for the network model; to obtain a complete and smooth feature-extraction line, unpack the packaged point cloud data and extract the contour line of the welding region with a boundary-creation tool in the three-dimensional point cloud processing software as the feature of the input data;
step eight: carrying out sparse tensor processing on the acquired original point cloud data, and finally obtaining a sparse tensor containing all original data coordinates and characteristics thereof;
step nine: and taking the sparse tensor as the input of the 4D space-time convolution model to obtain the 3D segmentation result of the welding seam.
The embodiment is as follows:
The data set produced in this embodiment is augmented by rotation, translation, scaling, color dithering and similar transformations to improve the generalization capability of the network. Eighty data sets are divided into six groups: the first five groups are used to train the network and the last group is used to test it. The tests are run on the hybrid-kernel sparse network, and the weld-seam segmentation results of the network are evaluated;
To obtain the best experimental results, the base network model 3D MinkowskiNet18 and three of its variants are selected. 3D MinkowskiNet18 is analogous to a traditional three-dimensional point cloud convolutional network, differing only in the type of input data (a sparse tensor rather than a raw point cloud). Its variants are 4D Tesseract MinkowskiNet18, based on the hypercube convolution kernel; 4D MinkNet18+TS-CRF, which adds a conditional random field and a four-dimensional convolution kernel; and 4D Tesseract MinkNet18+TS-CRF, which uses the hypercube convolution kernel and the conditional random field simultaneously. Adding one more dimension of information describes the weld seam features more specifically, and on top of the global features the relations between adjacent points in the point cloud are used to further refine them, sharpening the segmentation result. To evaluate the weld extraction objectively, two evaluation indices are used to measure the experimental performance of these networks, and the optimal segmentation model is obtained by comparing the network models on these indices: the F1-score and the point distance error (PDE) between point cloud points. The test results of each network model on the first-layer first-pass and second-layer second-pass welds are shown in Tables 1 and 2, where Table 1 uses F1-score as the evaluation index and Table 2 uses PDE.
TABLE 1 test results of different model tests on groove weldments (F1-score)
[Table 1 is provided as an image in the original publication; the best values are quoted in the discussion below.]
TABLE 2 test results (PDE/mm) of different models tested on groove weldments
[Table 2 is provided as an image in the original publication; the best values are quoted in the discussion below.]
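For reference, a Python sketch of the two evaluation indices used in Tables 1 and 2 follows; the exact formula of the PDE metric is not given in the patent, so the nearest-neighbour reading below is an assumption.

```python
import numpy as np

def f1_score(pred_mask, gt_mask):
    """F1-score of a binary weld/non-weld point labelling (boolean arrays)."""
    tp = np.sum(pred_mask & gt_mask)
    fp = np.sum(pred_mask & ~gt_mask)
    fn = np.sum(~pred_mask & gt_mask)
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2.0 * precision * recall / (precision + recall + 1e-9)

def point_distance_error(pred_pts, gt_pts):
    """Mean nearest-neighbour distance from predicted weld points to ground truth.

    One plausible reading of the PDE metric; the patent does not spell out its formula.
    """
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())
```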
As can be seen from Table 1, the sparse network that combines the conditional random field and the high-dimensional convolution kernel adapts well and achieves good segmentation on the constructed data set; its F1-score reaches the best values for single-layer single-pass and multi-layer multi-pass welding, 91.22% and 87.64% respectively, and its point distance errors are 0.10 mm and 0.07 mm, approaching the resolution of the laser scanner and thus the maximum attainable precision. The experiments prove that the method can be used to guide multi-layer multi-pass welding;
The results of the different network models on single-layer single-pass welding are shown in FIGS. 4-8, and on multi-layer multi-pass welding in FIGS. 9-13. The models are not affected by the multi-layer multi-pass weld region; each model can find the contour of the region where the weld seam is located and thus achieves high precision.
Different from a conventional three-dimensional point cloud convolutional network, the 4D space-time convolutional network extracts higher-dimensional features by converting three-dimensional point cloud data into sparse tensors, improves segmentation precision, and solves the problem that existing weld seam detection techniques are prone to over-segmentation or under-segmentation when facing workpieces with unobvious weld features; meanwhile, it overcomes the drawback that traditional unsupervised point cloud segmentation algorithms are time-consuming on large batches of data.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description is set out in terms of embodiments, not every embodiment contains only a single technical solution; this manner of description is merely for clarity, and those skilled in the art should take the description as a whole, the embodiments being combinable as appropriate to form other embodiments understood by those skilled in the art.

Claims (7)

1. A weld joint detection method based on a 4D space-time convolution model is characterized by comprising the following steps:
Step one: on the basis of the three-dimensional point cloud data, add a dimension representing time and establish a convolutional neural network with four dimensions;
Step two: use a hybrid kernel as the convolution kernel, computed from the offset set of the generalized sparse convolution, to form the basic framework of the hybrid kernel; the hybrid kernel is a combination of a cross kernel and a hypercube kernel, where the hypercube kernel covers the spatial dimensions and is used to capture the geometric features of the space, and the cross kernel covers the time dimension and is used to connect the same spatial point across different time periods;
Step three: create a high-dimensional network with generalized sparse convolution: modify the original two-dimensional convolutional network ResNet, add several strided sparse convolutions and transposed convolutions to the basic residual network, retain the skip-connection structure of the residual network, and use U-Net-style feature fusion between modules;
Step four: perform semantic segmentation experiments with several variants of the same architecture, and refine the semantic segmentation results with a high-dimensional conditional random field;
Step five: the high-dimensional conditional random field prevents information from leaking into different regions by creating gaps between spatially adjacent points of different colors. Let a node of the conditional random field in the seven-dimensional space be $x_i$, let its unary potential be $\varphi_u(x_i)$, and let the pairwise potential between adjacent points be $\varphi_p(x_i, x_j)$; the conditional random field on this domain can then be defined as:

$$P(X)=\frac{1}{Z}\exp\sum_{i}\Big(\varphi_u(x_i)+\sum_{j\in\mathcal{N}}\varphi_p(x_i,x_j)\Big)$$

where $Z$ is the partition function (a normalization factor) representing the sum over the potential-function states of all nodes; $X$ is the set of all point cloud points in the space; $\mathcal{N}$ denotes the neighborhood in the space; $P$ is the probability of the random field; the pairwise potential $\varphi_p$ must satisfy the stationarity condition $\varphi_p(u,v)=\varphi_p(u+\tau_u,\,v+\tau_v)$ with $\tau_u,\tau_v\in\mathbb{R}^D$, where $\mathbb{R}$ denotes the real numbers and $\mathbb{R}^D$ the $D$-dimensional real space; $(u,v)$ denotes a pair of nodes, $\varphi_p(u,v)$ the pairwise potential of that pair, and $\tau_u$, $\tau_v$ are node increments.
Step six: manufacturing a data set, namely scanning the welding seam of the current pass by using a three-dimensional scanner after the welding seam is welded, and acquiring three-dimensional point cloud data of the current welding seam;
Step seven: denoise the three-dimensional point cloud data; mark the contour of the added-material region in the denoised point cloud with point cloud processing software to produce the ground-truth values for the network model; to obtain a complete and smooth feature-extraction line, unpack the packaged point cloud data and extract the contour line of the welding region with a boundary-creation tool in the three-dimensional point cloud processing software as the feature of the input data;
step eight: carrying out sparse tensor processing on the acquired original point cloud data, and finally obtaining a sparse tensor containing all original data coordinates and characteristics thereof;
step nine: and taking the sparse tensor as the input of the 4D space-time convolution model to obtain the 3D segmentation result of the welding seam.
2. The weld detection method based on the 4D space-time convolution model according to claim 1, characterized in that: in the first step, training samples are sent to an input end of the model for learning, a learning rate is set to be 0.1-0.3, and the generalization capability of the network is improved by random scaling, rotation around a gravity axis, spatial translation, spatial elastic distortion, chrominance translation and jitter.
3. The weld detection method based on the 4D space-time convolution model according to claim 1, characterized in that: in step three, for the initial convolutional layer, the original 7 × 7 convolutional kernel is replaced by a sparse convolutional kernel of 5 × 5 × 5 × 1, and for the rest of the network, the same structure as the original residual network is maintained.
4. The weld detection method based on the 4D space-time convolution model according to claim 1, characterized in that: in step four, the same architecture used is embodied as the ResNet structure, the variants representing combinations of different hypercube kernels and cross kernels.
5. The weld detection method based on the 4D space-time convolution model according to claim 1, characterized in that: in step five, the high dimensional conditional random field comprises seven dimensions of spatial coordinates xyz, time t, color space rgb.
6. The weld detection method based on the 4D space-time convolution model according to claim 1, characterized in that: in the sixth step, in order to test how well the network extracts multi-layer multi-pass welds, the weldment used has two welding layers and two weld passes, and the scanner scans the seam of each pass immediately after that pass is finished.
7. The weld detection method based on the 4D space-time convolution model according to claim 1, characterized in that: in step eight, the feature map is marked by manually setting colors so as to highlight the color difference between the original data and the feature contour lines; the input data are then augmented: the data are voxelized, flipped and transformed, and the colors are jitter-transformed; a sparse convolution network then compiles the processed, voxel-down-sampled point cloud into a sparse tensor: the coordinates of the point cloud data are quantized into a batch-processing format, an index is prepended to the point cloud coordinates to distinguish point clouds with different local properties, and a sparse tensor conversion function is called to convert the original floating-point coordinates into integer coordinates; the features of the data and the original coordinates are fused and integrated into one input variable, the input data are pruned while keeping the general structure of the point cloud unchanged, and finally a sparse tensor containing all the original data coordinates and their features is obtained.
CN202211150681.0A 2022-09-21 2022-09-21 Weld joint detection method based on 4D space-time convolution model Pending CN115526851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211150681.0A CN115526851A (en) 2022-09-21 2022-09-21 Weld joint detection method based on 4D space-time convolution model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211150681.0A CN115526851A (en) 2022-09-21 2022-09-21 Weld joint detection method based on 4D space-time convolution model

Publications (1)

Publication Number Publication Date
CN115526851A true CN115526851A (en) 2022-12-27

Family

ID=84699522

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211150681.0A Pending CN115526851A (en) 2022-09-21 2022-09-21 Weld joint detection method based on 4D space-time convolution model

Country Status (1)

Country Link
CN (1) CN115526851A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117953167A (en) * 2024-03-27 2024-04-30 贵州道坦坦科技股份有限公司 Expressway auxiliary facility modeling method and system based on point cloud data
CN117953167B (en) * 2024-03-27 2024-05-28 贵州道坦坦科技股份有限公司 Expressway auxiliary facility modeling method and system based on point cloud data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination