CN116484921A - Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network - Google Patents

Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network

Info

Publication number
CN116484921A
CN116484921A
Authority
CN
China
Prior art keywords
neural network
physical
convolutional neural
crack
feature fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310741330.5A
Other languages
Chinese (zh)
Other versions
CN116484921B (en)
Inventor
王汉
李伟
袁新安
张西赫
田正磊
王运才
王志仁
朱挺
韩东辰
兴雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China filed Critical China University of Petroleum East China
Priority to CN202310741330.5A priority Critical patent/CN116484921B/en
Publication of CN116484921A publication Critical patent/CN116484921A/en
Application granted granted Critical
Publication of CN116484921B publication Critical patent/CN116484921B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of electromagnetic nondestructive testing, and particularly relates to a crack size accurate quantification method based on a multi-physical-quantity feature fusion convolutional neural network. The method realizes accurate quantification of crack length and depth through multi-physical-quantity feature fusion and a convolutional neural network, quantifies the crack angle with less computation and a smaller network scale, and provides technical support for crack propagation monitoring of in-service structures. The method specifically comprises the following steps: establishing a crack monitoring database; designing a convolutional neural network structure; designing a multi-physical-quantity feature fusion loss function; selecting an optimization algorithm for the multi-physical-quantity feature fusion convolutional neural network; training the multi-physical-quantity feature fusion convolutional neural network; evaluating the multi-physical-quantity feature fusion convolutional neural network; and deploying and applying the multi-physical-quantity feature fusion convolutional neural network.

Description

Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network
Technical Field
The invention belongs to the technical field of electromagnetic nondestructive testing, and particularly relates to a crack size accurate quantification method based on a multi-physical-quantity feature fusion convolutional neural network.
Background
In existing in-service structures, under the influence of the operating environment, corrosion defects readily form on the structure surface, and crack defects readily form at pipe nodes, weld seams and similar regions. Both types of defect can cause failure or damage of the workpiece and, in severe cases, threaten the safe operation of the whole equipment.
Alternating Current Field Measurement (ACFM) is a novel electromagnetic nondestructive testing technology that can detect crack defects on the surface of conductive materials. Specifically, ACFM detects and evaluates cracks using the spatially distorted magnetic field caused by the disturbance of an induced uniform current around a crack. It offers non-contact detection, quantitative measurement and low sensitivity to lift-off disturbance, and has been widely applied and developed for crack detection in structures.
However, the inventors found after further study that the related art still has a number of technical drawbacks. For example, under the influence of factors such as attachments on in-service structures, covering coatings, rough surfaces at weld seams and lift-off vibration, existing detection methods can only identify crack defects and cannot evaluate them quantitatively. A new method for accurately quantifying crack size is therefore needed.
Disclosure of Invention
The invention provides a crack size accurate quantification method based on a multi-physical-quantity feature fusion convolutional neural network. The method realizes accurate quantification of crack length and depth through the multi-physical-quantity feature fusion convolutional neural network, quantifies the crack angle with less computation and a smaller network scale, overcomes the drawbacks of large computation, complex network scale and high training cost in traditional crack quantification methods, acquires the multiple physical size parameters of a crack in a timely and effective manner, and provides technical support for crack propagation monitoring of in-service structures.
In order to solve the technical problems, the invention adopts the following technical scheme:
the crack size accurate quantification method of the multi-physical-quantity characteristic fusion convolutional neural network comprises the following steps of:
establishing a crack monitoring database;
designing a convolutional neural network structure;
designing a multi-physical-quantity feature fusion loss function;
selecting a convolutional neural network optimization algorithm fused with multiple physical quantity characteristics;
and training the multi-physical-quantity characteristic fusion convolutional neural network.
Preferably, after training to obtain the multi-physical-quantity feature fusion convolutional neural network, the method further comprises the following steps:
evaluating a multi-physical-quantity feature fusion convolutional neural network;
and deploying and applying the multi-physical-quantity feature fusion convolutional neural network.
Preferably, the step of creating the crack monitoring database may be specifically described as:
crack images containing the characteristic signal Bz are obtained through monitoring experiments on cracks of different lengths, depths and angles, and an experimental crack database is established;
after obtaining a plurality of groups of crack images, the crack images are manufactured into a data set required by neural network training.
Preferably, the step of designing the convolutional neural network structure may be specifically described as:
determining an interlayer structure of the convolutional neural network structure, and determining various parameters in the convolutional neural network structure;
the interlayer structure of the convolutional neural network structure obtained by determination comprises: an initialization layer, a cross fusion layer and a regression prediction layer;
the initialization layer is used for preprocessing crack data obtained by converting the crack image, and inputting the processed characteristics into a length and depth quantization network; the cross fusion layer is used for learning higher-level angle features through length and depth features in crack signals and fusing the features so as to provide feature representation for the regression prediction layer; the regression prediction layer is used for mapping the feature representation learned by the cross fusion layer into the space of the target variable to obtain a crack size prediction result.
Preferably, the step of designing the multi-physical-quantity feature fusion loss function may be specifically described as:
Assuming there are n samples, the multi-physical-quantity feature fusion loss function is designed to satisfy the following conditions:
The loss functions corresponding to length, angle and depth in the multi-physical-quantity feature fusion loss function are defined as Loss1, Loss2 and Loss3, respectively; Loss is the total loss function; N_D_i, N_A_i and N_L_i denote the model's predicted depth, angle and length of the i-th sample crack, respectively; R_D_i, R_A_i and R_L_i denote the actual depth, angle and length of the i-th sample crack, respectively;
taking a single-layer neural network as an example, assuming an input vector x, a weight vector w, a bias b and an output y, the multi-physical-quantity feature fusion loss function can be expressed in terms of the difference between y and the true label y_true, i.e. it satisfies: L = (y - y_true)^2;
using the chain rule, back propagation yields the derivative of the loss L with respect to each intermediate variable.
Preferably, the process of obtaining the derivative of each intermediate variable with respect to L by back propagation using the chain rule can be specifically described as:
calculating the derivative of the loss function L with respect to the output y:
wherein the derivative of the loss function L with respect to the output y satisfies: dL/dy = 2(y - y_true);
calculating the derivatives of y with respect to the weight w and the bias b:
wherein the derivative of y with respect to the weight w satisfies: dy/dw = x, i.e. the derivative of y with respect to w equals the input x;
the derivative of y with respect to the bias b satisfies: dy/db = 1, i.e. the derivative of y with respect to b is 1;
calculating the derivatives of the loss function L with respect to w and b:
wherein the derivative of the loss function L with respect to w satisfies: dL/dw = (dL/dy)(dy/dw) = 2(y - y_true)·x;
the derivative of the loss function L with respect to b satisfies: dL/db = (dL/dy)(dy/db) = 2(y - y_true)·1.
Preferably, the step of selecting the multi-physical-quantity feature fusion convolutional neural network optimization algorithm may be specifically described as:
the Adam optimization algorithm is used as the convolutional neural network optimization algorithm, with the learning rate set to 0.00001 and the number of iterations set to 5000; and the neural network parameters are updated based on the calculation results.
The invention provides a crack size accurate quantification method based on a multi-physical-quantity feature fusion convolutional neural network, comprising the following steps: establishing a crack monitoring database; designing a convolutional neural network structure; designing a multi-physical-quantity feature fusion loss function; selecting an optimization algorithm for the multi-physical-quantity feature fusion convolutional neural network; training the multi-physical-quantity feature fusion convolutional neural network; evaluating the multi-physical-quantity feature fusion convolutional neural network; and deploying and applying the multi-physical-quantity feature fusion convolutional neural network. With these steps, the method achieves accurate quantification of crack length and depth, quantifies the crack angle with less computation and a smaller network scale, and overcomes the drawbacks of large computation, complex network scale and high training cost in traditional angle quantification methods. Compared with traditional defect size quantification methods, in which the angle information of a defect is difficult to quantify, the computation load is large and the neural network model is complex, the proposed method fuses the length and depth features into the angle feature quantification network and fuses the multi-physical-quantity information into the loss function, finally achieving accurate quantification of defect and crack angles with a relatively simplified network scale and a smaller amount of computation.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
FIG. 1 is a schematic flow chart of a method for precisely quantifying the crack size of a multi-physical-quantity feature fusion convolutional neural network;
FIG. 2 is a detailed flow schematic of processing a crack image into a training crack data sample;
fig. 3 is a schematic diagram of an interlayer structure of the obtained convolutional neural network structure;
FIG. 4 is a test set length regression result of a neural network model;
FIG. 5 is a test set depth regression result of a neural network model;
FIG. 6 is a test set angle regression result of a neural network model;
FIG. 7 is a graph showing the overall regression error statistics for the training set and the test set of the neural network model;
fig. 8 is a schematic diagram of the whole training process of the multi-physical-quantity feature fusion convolutional neural network model.
Detailed Description
The invention provides a crack size accurate quantification method based on a multi-physical-quantity feature fusion convolutional neural network. The method realizes accurate quantification of crack length and depth through the multi-physical-quantity feature fusion convolutional neural network, quantifies the crack angle with less computation and a smaller network scale, overcomes the drawbacks of large computation, complex network scale and high training cost in traditional crack quantification methods, acquires the multiple physical size parameters of a crack in a timely and effective manner, and provides technical support for crack propagation monitoring of in-service structures.
The invention provides a crack size accurate quantification method of a multi-physical-quantity characteristic fusion convolutional neural network, which is shown in figure 1 and comprises the following steps:
and establishing a crack monitoring database.
In one preferred embodiment of the present invention, the step of creating the crack monitoring database may be specifically described as:
crack images containing the characteristic signal Bz are obtained through monitoring experiments on cracks of different lengths, depths and angles, and an experimental crack database is established;
after obtaining a plurality of groups of crack images, the crack images are manufactured into a data set required by neural network training.
As a supplementary note, taking the 800 groups of crack images obtained in the database-establishment step as an example: the 800 groups of crack images are processed into 800 groups of crack data samples, i.e. a database containing 800 groups of crack data samples is obtained, and this crack database is divided into a training set and a test set at a ratio of 3:1 for the subsequent steps.
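As a minimal sketch of this 3:1 split (NumPy; the array names X and y are assumptions for illustration):

```python
import numpy as np

# Shuffle the 800 sample indices, then split 3:1 -> 600 train / 200 test.
rng = np.random.default_rng(seed=42)
indices = rng.permutation(800)
split = int(800 * 3 / 4)
train_idx, test_idx = indices[:split], indices[split:]

# X would hold the processed crack images and y the (length, depth, angle)
# labels; both names are assumptions, not part of the patent text.
# X_train, y_train = X[train_idx], y[train_idx]
# X_test,  y_test  = X[test_idx],  y[test_idx]
```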
As shown in FIG. 2, the specific procedure for processing the crack images into training crack data samples is as follows:
1. Standardization. The simulated magnetic-field signal images and the experimental voltage signal images are made into original images of 2450×1050 pixels.
2. Normalization. To eliminate the order-of-magnitude difference between simulation and experimental data, the raw image data need to be normalized, for example using the min-max function shown below (see also the preprocessing sketch after step 3):

x_norm = (x - x_min) / (x_max - x_min)

where x_norm is the normalized data, x is the data before normalization, and x_max and x_min are the maximum and minimum values in the simulation and experimental data, respectively.
3. Graying. To reduce the computational load of the neural network, the normalized magnetic-field signal images and voltage signal images are converted into grayscale images. The simulated magnetic-field signal and experimental voltage signal RGB images are converted to grayscale using the currently common weighted-average method, whose standard form is: Gray = 0.299R + 0.587G + 0.114B.
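A minimal sketch of steps 2 and 3 (NumPy; the BT.601 weights instantiate the common weighted-average method named above, and the dummy image is an assumption):

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    # Step 2: min-max normalization, mapping raw data into [0, 1]
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min)

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    # Step 3: weighted-average grayscale conversion (BT.601 weights)
    return rgb[..., :3] @ np.array([0.299, 0.587, 0.114])

# Example on a dummy 2450x1050 RGB signal image
img = np.random.rand(1050, 2450, 3)
gray = rgb_to_gray(min_max_normalize(img))
print(gray.shape)  # (1050, 2450), single-channel
```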
Then, after the crack monitoring database has been built, the next step is to design the convolutional neural network structure.
As a preferred embodiment of the present invention, the steps for designing the convolutional neural network structure can be specifically described as follows:
and determining an interlayer structure of the convolutional neural network structure, and determining various parameters in the convolutional neural network structure.
It should be noted that, the interlayer structure of the convolutional neural network structure obtained by determination includes a three-layer interlayer structure, as shown in fig. 3: an initialization layer, a cross fusion layer and a regression prediction layer.
The initialization layer is used for preprocessing the crack data obtained by converting the crack images, and inputting the processed features into the length and depth quantization networks. For example, the initialization layer selects one convolutional layer to extract preliminary features from the 600 single-channel 121×121 input images, where Conv(64, 5) denotes a convolution operation using 64 convolution kernels of size 5×5. Here 64 is the number of output channels, i.e. 64 feature maps are obtained after the convolution, and the output feature map size after convolution is 121×121×64. The size remains 121 because the convolution kernel fills 4 pixels at the image edge (padding = 4), leaving the feature map size unchanged after convolution. Max-Pool denotes downsampling by maximum pooling: based on the convolution output, the dimensionality of the data is further reduced using a 2×2 window with a stride of 2, extracting the largest of the 4 pixel values in the window at each step. The initialization layer thus finally outputs a 60×60×64 feature image.
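A hedged Keras sketch of this initialization layer (TensorFlow functional API; "same" padding stands in for the 4-pixel edge fill described above, and the ReLU activation is an assumption):

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(121, 121, 1))               # single-channel 121x121 image
x = layers.Conv2D(64, kernel_size=5, padding="same",
                  activation="relu")(inputs)               # Conv(64, 5) -> 121x121x64
outputs = layers.MaxPool2D(pool_size=2, strides=2)(x)      # Max-Pool -> 60x60x64
init_layer = tf.keras.Model(inputs, outputs)
init_layer.summary()
```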
The cross fusion layer is used for learning higher-level angle features from the length and depth features in the crack signals and fusing these features to provide a feature representation for the regression prediction layer. Notably, as the core layer of the neural network, the cross fusion layer further comprises a plurality of convolution layers and pooling layers, whose purpose is to extract crack size features from the input data and fuse them together. A convolution layer (Conv) slides a convolution kernel over the input data, multiplying point by point and then summing and averaging, to extract local features of the crack signal. A pooling layer (Max-Pool) downsamples the output of the convolution layer: specifically, the maximum value is extracted within a 2×2 window, reducing the feature dimensionality further according to the convolution output. The maximum pooling operation reduces the model scale, improves computation speed, lowers the probability of overfitting and improves the robustness of the network's feature extraction. For example, the 60×60×64 feature image initially extracted by the initialization layer undergoes convolution and pooling downsampling operations to extract depth features and length features, yielding depth and length feature image outputs of size 15×15×64. A pixel-by-pixel weighted average of the depth and length feature images gives a 15×15×64 feature image, which is output to the angle feature extraction network; the angle features are extracted through two further convolution and pooling downsampling operations, finally giving an angle feature image output of size 3×3×64. The four-layer length and depth networks are fused into the angle network so that they share part of the network; through this combination of layers, the cross fusion layer can learn higher-level angle features from the length and depth features in the crack signals and fuse them, providing a richer feature representation for the regression prediction layer.
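A sketch of the cross fusion layer under the same assumptions (the 3×3 kernel size and the equal-weight average are assumptions; the text fixes only the pooling windows and the feature map sizes):

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_pool(x, filters=64):
    # One convolution + 2x2 max-pooling stage, as used throughout the text
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPool2D(2)(x)

features = tf.keras.Input(shape=(60, 60, 64))    # initialization-layer output

depth  = conv_pool(conv_pool(features))          # depth branch  -> 15x15x64
length = conv_pool(conv_pool(features))          # length branch -> 15x15x64

fused = layers.Average()([depth, length])        # pixel-wise (equal-weight) average
angle = conv_pool(conv_pool(fused))              # angle branch  -> 3x3x64

fusion = tf.keras.Model(features, [depth, length, angle])
```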
The regression prediction layer, as the last layer of the neural network, maps the feature representation learned by the cross fusion layer into the space of the target variables to obtain the crack size prediction results. It comprises flattening layers, fully connected layers, output layers and the multi-physical-quantity fusion loss function: the fully connected layers map the output of the cross fusion layer into a higher-dimensional space, and the output layer maps vectors in that space to a scalar or vector. Further, by means of the back propagation algorithm, the regression prediction layer can learn the optimal weights and biases to minimize the loss function and improve the model's crack size prediction accuracy. For example, a 15×15×64 depth or length feature image input into a flattening layer (Flatten) yields a feature vector of length 15×15×64 = 14400; an angle feature image of size 3×3×64 input into a flattening layer, which spreads the rectangular output of the convolutional network into a one-dimensional vector, yields a feature vector of length 3×3×64 = 576. The flattened feature vector is then input into a Dense(256) fully connected layer, where 256 is the number of neurons in the layer; this layer linearly transforms the input feature vector into a 256-dimensional output feature vector, with all input and output nodes connected. Dropout(0.2) means that during training 20% of the neurons are randomly and temporarily dropped from the neural network while the outputs of the remaining neurons are scaled so that the expected value of the overall output is unchanged.
It should be added that when the neural network is used for inference, the dropped neurons recover their links; because a complex neural network overfits when trained on simple quantities such as length and depth, dropping part of the neurons is used to relieve this overfitting. The output feature vector of the Dense(256) layer is then input into a Dense(128) fully connected layer, where 128 is the number of neurons of that layer; this layer likewise linearly transforms the input feature vector into a 128-dimensional output feature vector. Finally, the output feature vector of the Dense(128) layer is input into a Dense(1) fully connected layer, which has only one neuron and outputs the model's prediction of length, depth or angle.
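A matching sketch of one regression head (Flatten/Dense/Dropout follow the text; the ReLU activations are assumptions):

```python
from tensorflow.keras import layers

def regression_head(feature_map, name):
    x = layers.Flatten()(feature_map)              # e.g. 15x15x64 -> 14400
    x = layers.Dense(256, activation="relu")(x)    # Dense(256)
    x = layers.Dropout(0.2)(x)                     # drop 20% of neurons in training
    x = layers.Dense(128, activation="relu")(x)    # Dense(128)
    return layers.Dense(1, name=name)(x)           # Dense(1): scalar prediction

# One head per physical quantity, fed by the branches sketched above:
# outputs = [regression_head(length, "length"),
#            regression_head(depth, "depth"),
#            regression_head(angle, "angle")]
```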
Then, after the design of the convolutional neural network structure is completed, the next step is to design the multi-physical-quantity feature fusion loss function.
Notably, the loss function is used as part of the convolutional neural network model (typically its last stage) to calculate the error between the network's predicted values and the actual values. The back propagation calculation of the loss function is therefore a key link in the training of the neural network model: the aim of training is to continuously optimize the network parameters through back propagation so as to minimize the loss function and improve the model's prediction accuracy.
Specifically, as a preferred embodiment of the present invention, the step of designing the multi-physical-quantity feature fusion loss function may be specifically described as:
assuming there are n samples, the multi-physical-quantity feature fusion loss function is designed to satisfy the following conditions:
The loss functions corresponding to length, angle and depth in the multi-physical-quantity feature fusion loss function are defined as Loss1, Loss2 and Loss3, respectively; Loss is the total loss function; N_D_i, N_A_i and N_L_i denote the model's predicted depth, angle and length of the i-th sample crack, respectively; R_D_i, R_A_i and R_L_i denote the actual depth, angle and length of the i-th sample crack, respectively;
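A hedged sketch of this fused loss, assuming each of Loss1, Loss2 and Loss3 is a mean-squared-error term over the n samples and the total Loss is their sum (the printed formulas themselves are not reproduced here):

```python
import tensorflow as tf

def multi_quantity_loss(y_true, y_pred):
    # y_true / y_pred: shape (n, 3), columns = (length, angle, depth)
    loss1 = tf.reduce_mean(tf.square(y_pred[:, 0] - y_true[:, 0]))  # length
    loss2 = tf.reduce_mean(tf.square(y_pred[:, 1] - y_true[:, 1]))  # angle
    loss3 = tf.reduce_mean(tf.square(y_pred[:, 2] - y_true[:, 2]))  # depth
    return loss1 + loss2 + loss3                                    # total Loss
```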
taking a single-layer neural network as an example, assuming an input vector x, a weight vector w, a bias b and an output y, the multi-physical-quantity feature fusion loss function can be expressed in terms of the difference between y and the true label y_true, i.e. it satisfies: L = (y - y_true)^2;
using the chain rule, back propagation yields the derivative of the loss L with respect to each intermediate variable.
One point to supplement is that, to adjust the neural network parameters so as to reduce L, the derivatives of L with respect to w and b must be calculated. Using the chain rule, the derivative can be passed from the output y all the way back to the input x: the derivative of L with respect to each intermediate variable is calculated, and finally the derivatives of L with respect to w and b are obtained. The process of obtaining the derivative of each intermediate variable with respect to L by back propagation using the chain rule can thus be described in further detail as:
calculating the derivative of the loss function L with respect to the output y:
wherein the derivative of the loss function L with respect to the output y satisfies: dL/dy = 2(y - y_true);
calculating the derivatives of y with respect to the weight w and the bias b:
wherein the derivative of y with respect to the weight w satisfies: dy/dw = x, i.e. the derivative of y with respect to w equals the input x;
the derivative of y with respect to the bias b satisfies: dy/db = 1, i.e. the derivative of y with respect to b is 1;
calculating the derivatives of the loss function L with respect to w and b:
wherein the derivative of the loss function L with respect to w satisfies: dL/dw = (dL/dy)(dy/dw) = 2(y - y_true)·x;
the derivative of the loss function L with respect to b satisfies: dL/db = (dL/dy)(dy/db) = 2(y - y_true)·1.
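A numeric sketch of these derivatives for the scalar case, with hypothetical values of x, w, b and y_true:

```python
x, w, b, y_true = 2.0, 0.5, 0.1, 1.5

y = w * x + b                # forward pass: y = 1.1
L = (y - y_true) ** 2        # loss: L = 0.16

dL_dy = 2 * (y - y_true)     # dL/dy = -0.8
dL_dw = dL_dy * x            # chain rule: dL/dw = 2(y - y_true) * x = -1.6
dL_db = dL_dy * 1            # chain rule: dL/db = 2(y - y_true) * 1 = -0.8
print(dL_dw, dL_db)
```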
It can be seen that back propagation through the designed convolutional neural network structure proceeds in the same way, except that the chain rule must be applied repeatedly to propagate the gradients of the weights and biases from the output layer back through each layer. The back propagation algorithm is thus the core of the neural network training process; it makes effective use of the chain rule and the computational-graph idea, making the training of neural networks efficient and feasible.
Then, after the design of the multi-physical-quantity feature fusion loss function is completed, the next step is to select the multi-physical-quantity feature fusion convolutional neural network optimization algorithm.
When training a neural network, the optimization algorithm calculates the gradient of the loss function with respect to each parameter through the back propagation algorithm and updates the parameters so as to minimize the output value of the loss function; the smaller the loss value, the smaller the error between the model's predicted and true values and the better the model's performance. Selecting an appropriate optimization algorithm therefore has an important impact on the training and performance of the neural network. Specifically, the Adam optimization algorithm is used as the convolutional neural network optimization algorithm, with the learning rate set to 0.00001 and the number of iterations set to 5000, and the neural network parameters are updated based on the calculation results. Experiments prove that with this optimization algorithm and the loss function above, the neural network trains quickly and has good generalization performance.
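A minimal sketch of this configuration (Keras; model, multi_quantity_loss and the data arrays refer to the sketches above, and mapping the 5000 iterations onto epochs/steps is left as an assumption):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.00001)  # stated learning rate
# model.compile(optimizer=optimizer, loss=multi_quantity_loss)
# model.fit(X_train, y_train, epochs=..., validation_data=(X_test, y_test))
```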
Then, after the optimization algorithm of the multi-physical-quantity feature fusion convolutional neural network has been selected, the next step is to evaluate the multi-physical-quantity feature fusion convolutional neural network.
Specifically, the evaluation process of the neural network model is shown in FIGS. 4-7. FIGS. 4-6 are the test-set regression results of the multi-physical-quantity feature fusion convolutional neural network model on 200 groups of test images for crack length, depth and angle, respectively; the black solid lines in FIGS. 4-6 represent the true values of the crack length, depth and angle of the current sample image, and the discrete dots represent the network's predicted values. Comparing the scatter of the data points against the baseline shows that the network quantifies crack length, depth and angle with high accuracy. The average and overall average errors in length, depth and angle for the training and test sets are shown in FIG. 7. Specifically, the training set's average length error is 0.124 mm, its average depth error is 0.046 mm and its average angle error is 1.091°; the test set's average length error is 0.696 mm, its average depth error is 0.291 mm and its average angle error is 1.171°; the overall average length error is 0.41 mm, the overall average depth error is 0.169 mm and the overall average angle error is 1.131°.
Then, after the multi-physical-quantity feature fusion convolutional neural network has been evaluated, the final step is to deploy and apply the multi-physical-quantity feature fusion convolutional neural network.
Specifically, the application process of the neural network model is shown in FIG. 8, an intuitive schematic of the training process of the multi-physical-quantity feature fusion convolutional neural network model. An actual test is performed on a crack 13 mm long, 3 mm deep and oriented at 90°. The crack image is converted into a grayscale image and input into the network's initialization layer; after simple processing by the initialization layer, the features are input into the length and depth quantization networks, and after cross fusion of the length and depth features, deeper features are further extracted as the data for angle quantization. Finally, the three paths of the neural network output their prediction results through their respective fully connected layers. In this example the actual crack size is 13 mm long, 3 mm deep and 90°, while the network predicts 12.911 mm long, 2.963 mm deep and 89.819°: the absolute error in length is 0.089 mm (relative error 0.7%), the absolute error in depth is 0.037 mm (relative error 1.2%) and the absolute error in angle is 0.181° (relative error 0.2%), so the prediction results are accurate.
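A deployment sketch with hypothetical file and array names (the model path, input file and saved-model format are assumptions):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("crack_quantifier.h5", compile=False)

gray = np.load("crack_sample.npy")           # one 121x121 grayscale crack image
batch = gray[np.newaxis, ..., np.newaxis]    # -> shape (1, 121, 121, 1)

length_pred, depth_pred, angle_pred = model.predict(batch)
print(f"length = {length_pred[0, 0]:.3f} mm, "
      f"depth = {depth_pred[0, 0]:.3f} mm, "
      f"angle = {angle_pred[0, 0]:.3f} deg")
```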
The whole process of the crack size accurate quantification method of the multi-physical-quantity feature fusion convolutional neural network provided by the invention has now been described in detail, and the application of the multi-physical-quantity feature fusion convolutional neural network has been verified, demonstrating that the proposed method is real and effective.
The invention provides a crack size accurate quantification method based on a multi-physical-quantity feature fusion convolutional neural network, comprising the following steps: establishing a crack monitoring database; designing a convolutional neural network structure; designing a multi-physical-quantity feature fusion loss function; selecting an optimization algorithm for the multi-physical-quantity feature fusion convolutional neural network; training the multi-physical-quantity feature fusion convolutional neural network; evaluating the multi-physical-quantity feature fusion convolutional neural network; and deploying and applying the multi-physical-quantity feature fusion convolutional neural network. With these steps, the method achieves accurate quantification of crack length and depth, quantifies the crack angle with less computation and a smaller network scale, and overcomes the drawbacks of large computation, complex network scale and high training cost in traditional angle quantification methods. Compared with traditional defect size quantification methods, in which the angle information of a defect is difficult to quantify, the computation load is large and the neural network model is complex, the proposed method fuses the length and depth features into the angle feature quantification network and fuses the multi-physical-quantity information into the loss function, finally achieving accurate quantification of defect and crack angles with a relatively simplified network scale and a smaller amount of computation.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. The crack size accurate quantification method of the multi-physical-quantity characteristic fusion convolutional neural network is characterized by comprising the following steps of:
establishing a crack monitoring database;
designing a convolutional neural network structure;
designing a multi-physical-quantity feature fusion loss function;
selecting a convolutional neural network optimization algorithm fused with multiple physical quantity characteristics;
training a multi-physical-quantity feature fusion convolutional neural network;
the step of creating the crack monitoring database may be described as:
crack images containing the characteristic signal Bz are obtained through monitoring experiments on cracks of different lengths, depths and angles, and an experimental crack database is established;
after obtaining a plurality of groups of crack images, making the crack images into a data set required by neural network training;
the step of designing the convolutional neural network structure can be specifically described as:
determining an interlayer structure of the convolutional neural network structure, and determining various parameters in the convolutional neural network structure;
the interlayer structure of the convolutional neural network structure obtained by determination comprises: an initialization layer, a cross fusion layer and a regression prediction layer;
the initialization layer is used for preprocessing crack data obtained by converting the crack image, and inputting the processed characteristics into a length and depth quantization network; the cross fusion layer is used for learning higher-level angle features through length and depth features in crack signals and fusing the features so as to provide feature representation for the regression prediction layer; the regression prediction layer is used for mapping the characteristic representation learned by the cross fusion layer into a space of a target variable to obtain a crack size prediction result;
the step of designing the multi-physical-quantity feature fusion loss function can be specifically described as:
assuming there are n samples, the multi-physical-quantity feature fusion loss function is designed to satisfy the following conditions:
the loss functions corresponding to length, angle and depth in the multi-physical-quantity feature fusion loss function are defined as Loss1, Loss2 and Loss3, respectively; Loss is the total loss function; N_D_i, N_A_i and N_L_i denote the model's predicted depth, angle and length of the i-th sample crack, respectively; R_D_i, R_A_i and R_L_i denote the actual depth, angle and length of the i-th sample crack, respectively;
taking a single-layer neural network as an example, assuming an input vector x, a weight vector w, a bias b and an output y, the multi-physical-quantity feature fusion loss function can be expressed in terms of the difference between y and the true label y_true, i.e. it satisfies: L = (y - y_true)^2;
using the chain rule, back propagation yields the derivative of each intermediate variable with respect to L;
the step of selecting the multi-physical-quantity feature fusion convolutional neural network optimization algorithm can be specifically described as follows:
using the Adam optimization algorithm as the convolutional neural network optimization algorithm, with the learning rate set to 0.00001 and the number of iterations set to 5000; and updating the neural network parameters based on the calculation results.
2. The method for precisely quantifying the crack size of the multi-physical-quantity feature fusion convolutional neural network according to claim 1, wherein after training, the multi-physical-quantity feature fusion convolutional neural network is obtained, the method further comprises the following steps:
evaluating a multi-physical-quantity feature fusion convolutional neural network;
and deploying and applying the multi-physical-quantity feature fusion convolutional neural network.
3. The method for precisely quantifying the crack size of the multi-physical-quantity feature fusion convolutional neural network according to claim 1, wherein the process of obtaining the derivative of each intermediate variable with respect to L by back propagation using the chain rule can be specifically described as:
calculating the derivative of the loss function L with respect to the output y:
wherein the derivative of the loss function L with respect to the output y satisfies: dL/dy = 2(y - y_true);
calculating the derivatives of y with respect to the weight w and the bias b:
wherein the derivative of y with respect to the weight w satisfies: dy/dw = x, i.e. the derivative of y with respect to w equals the input x;
the derivative of y with respect to the bias b satisfies: dy/db = 1, i.e. the derivative of y with respect to b is 1;
calculating the derivatives of the loss function L with respect to w and b:
wherein the derivative of the loss function L with respect to w satisfies: dL/dw = (dL/dy)(dy/dw) = 2(y - y_true)·x;
the derivative of the loss function L with respect to b satisfies: dL/db = (dL/dy)(dy/db) = 2(y - y_true)·1.
CN202310741330.5A 2023-06-21 2023-06-21 Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network Active CN116484921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310741330.5A CN116484921B (en) 2023-06-21 2023-06-21 Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310741330.5A CN116484921B (en) 2023-06-21 2023-06-21 Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network

Publications (2)

Publication Number Publication Date
CN116484921A (en) 2023-07-25
CN116484921B (en) 2023-08-18

Family

ID=87218113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310741330.5A Active CN116484921B (en) 2023-06-21 2023-06-21 Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network

Country Status (1)

Country Link
CN (1) CN116484921B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021032756A (en) * 2019-08-27 2021-03-01 株式会社東芝 Ultrasonic flaw detector and method, and in-furnace structure preservation method
CN114565596A (en) * 2022-03-04 2022-05-31 同济大学 Steel surface crack detection and prediction method based on deep learning and video understanding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021032756A (en) * 2019-08-27 2021-03-01 株式会社東芝 Ultrasonic flaw detector and method, and in-furnace structure preservation method
CN114565596A (en) * 2022-03-04 2022-05-31 同济大学 Steel surface crack detection and prediction method based on deep learning and video understanding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIANCHAO ZHAO: "A Surface Crack Assessment Method Unaffected by Lift-Off Based on ACFM", IEEE, pages 21942-21951 *
WAN ZHUO: "Motor end cover defect detection based on improved YOLOv4" (in Chinese), 《计算机***应用》, pages 79-87 *

Also Published As

Publication number Publication date
CN116484921B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
Hou et al. Deep features based on a DCNN model for classifying imbalanced weld flaw types
Wang et al. Automated crack severity level detection and classification for ballastless track slab using deep convolutional neural network
Wang et al. Semi-supervised semantic segmentation network for surface crack detection
CN106950276B (en) Pipeline defect depth inversion method based on convolutional neural network
Kim et al. Investigation of steel frame damage based on computer vision and deep learning
CN110969088A (en) Remote sensing image change detection method based on significance detection and depth twin neural network
CN109635763B (en) Crowd density estimation method
CN111899225A (en) Nuclear power pipeline defect detection method based on multi-scale pyramid structure
CN112950570B (en) Crack detection method combining deep learning and dense continuous central point
Savino et al. Automated classification of civil structure defects based on convolutional neural network
Gonthina et al. Deep CNN-based concrete cracks identification and quantification using image processing techniques
Zhang et al. Identification of concrete surface damage based on probabilistic deep learning of images
Jiang et al. Automatic pixel-level detection and measurement of corrosion-related damages in dim steel box girders using Fusion-Attention-U-net
CN116484921B (en) Crack size accurate quantification method for multi-physical-quantity feature fusion convolutional neural network
CN117593243A (en) Compressor appearance self-adaptive detection method guided by reliable pseudo tag
CN117132919A (en) Multi-scale high-dimensional feature analysis unsupervised learning video anomaly detection method
CN111238927A (en) Fatigue durability evaluation method and device, electronic equipment and computer readable medium
CN112861670B (en) Transmission line hardware detection method and system
CN112767380B (en) Prediction method for end shape of wide and thick plate
CN113177563B (en) Post-chip anomaly detection method integrating CMA-ES algorithm and sequential extreme learning machine
CN116030292A (en) Concrete surface roughness detection method based on improved ResNext
Ribeiro Machado da Silva et al. Convolutional Neural Networks Applied to Flexible Pipes for Fatigue Calculations
CN111241725B (en) Structure response reconstruction method for generating countermeasure network based on conditions
CN111241614B (en) Engineering structure load inversion method based on condition generation confrontation network model
CN113222919A (en) Industrial weld defect detection method based on multi-branch attention pyramid structure

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant