CN112446869A - Unsupervised industrial product defect detection method and device based on deep learning

Unsupervised industrial product defect detection method and device based on deep learning

Info

Publication number
CN112446869A
CN112446869A
Authority
CN
China
Prior art keywords
sample
training
self-encoder
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011377532.9A
Other languages
Chinese (zh)
Inventor
王汉凌
段经璞
汪漪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peng Cheng Laboratory
Southern University of Science and Technology
Original Assignee
Peng Cheng Laboratory
Southern University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peng Cheng Laboratory, Southern University of Science and Technology filed Critical Peng Cheng Laboratory
Priority to CN202011377532.9A priority Critical patent/CN112446869A/en
Publication of CN112446869A publication Critical patent/CN112446869A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an unsupervised industrial product defect detection method and device based on deep learning, and a computer-readable storage medium. The method comprises the following steps: training a first self-encoder with training samples, and obtaining the implicit expressions of the training samples in the training process; performing dimensionality reduction on the implicit expressions of the training samples to obtain the implicit expressions of normal samples; initializing a memory module in a second self-encoder with the implicit expressions of the normal samples, and training the second self-encoder with the training samples; inputting a test sample into the trained second self-encoder to obtain a reconstructed sample; and calculating a defect mask from the test sample and the reconstructed sample, and judging whether the test sample has a defect according to the defect mask. The method and the device detect defects effectively even though the model is trained only on defect-free image samples, and improve the defect detection performance.

Description

Unsupervised industrial product defect detection method and device based on deep learning
Technical Field
The present application relates to the field of defect detection technologies, and in particular to an unsupervised industrial product defect detection method and apparatus based on deep learning, and a computer-readable storage medium.
Background
In the production of industrial products, defective products are inevitably produced owing to the limitations of process capability and the influence of environmental factors. If these defective products are not detected as early as possible at the source of the defect, they flow into subsequent production steps, making later inspection more difficult and increasing repair costs. More seriously, if such products inadvertently reach the market, the reputation of the product is greatly damaged. To detect these defective products, the conventional technology deploys Automatic Optical Inspection (AOI) equipment on the production line for automatic defect detection. However, owing to limitations in the system design of AOI equipment, many false alarms often occur (i.e., products without defects are judged to be defective), so technical workers are required to re-inspect the flagged products, which greatly reduces detection efficiency.
In recent years, with the development of deep learning, convolutional neural networks, as end-to-end models capable of automatically extracting image features, have been increasingly researched and applied. To detect whether an industrial product image contains defects, conventional deep-learning-based defect detection methods generally adopt a supervised model (i.e., a large number of defective and non-defective image samples are collected for training). In practical applications, however, it is often difficult to collect enough defective image samples, owing to the high acquisition cost of defective samples, the continual emergence of new defect types, and so on, and defect-free image samples therefore make up the vast majority of the available images. This may leave the model insufficiently trained, or unable to cope at detection time with defect types that never appeared in the training set, greatly weakening the defect detection performance of the model.
Disclosure of Invention
The embodiments of the present application provide an unsupervised industrial product defect detection method and device based on deep learning, and a computer-readable storage medium, so as to solve the problem in the conventional technology that defect detection performs poorly because defective image samples are difficult to obtain; defects are detected effectively even though the model is trained only on non-defective image samples, and the defect detection performance is improved.
The embodiment of the application provides an unsupervised industrial product defect detection method based on deep learning, which comprises the following steps:
training a first self-encoder by using a training sample, and obtaining an implicit expression of the training sample in the training process; wherein the training sample is a non-defective image;
carrying out dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of a normal sample;
initializing a memory module in a second self-encoder by using the implicit expression of the normal sample, and training the second self-encoder by using the training sample;
inputting the test sample into a trained second self-encoder to obtain a reconstructed sample;
and calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask.
In an embodiment, the method further comprises:
acquiring a preset number of non-defective images;
and scaling the non-defective image to the same size, and carrying out normalization processing on the non-defective image to obtain a training sample.
In an embodiment, the training of the first self-encoder with the training samples and the obtaining of the implicit expression of the training samples in the training process include:
inputting training samples to a first self-encoder one by one to train the first self-encoder;
and after the training of the first self-encoder is finished, acquiring a feature vector output by the training sample through an encoder module of the first self-encoder, and taking the feature vector as an implicit expression of the training sample.
In an embodiment, the step of performing dimension reduction on the implicit expression of the training sample to obtain the implicit expression of the normal sample includes:
and performing dimensionality reduction on the implicit expression of the training sample by using principal component analysis to obtain the implicit expression subjected to dimensionality reduction, and taking the implicit expression as the implicit expression of a normal sample.
In an embodiment, the step of inputting the test samples to the trained second self-encoder to obtain the reconstructed samples includes:
inputting the test sample into a trained second self-encoder to obtain a reconstructed image, and calculating a mean square error between the test sample and the reconstructed image;
and if the mean square error between the test sample and the reconstructed image is less than or equal to a preset threshold value, taking the reconstructed image as a reconstructed sample.
In an embodiment, the step of inputting the test samples to the trained second self-encoder to obtain reconstructed samples further includes:
and if the mean square error between the test sample and the reconstructed image is greater than a preset threshold value, re-inputting the reconstructed image into the trained second self-encoder.
In an embodiment, the step of calculating by using the test sample and the reconstructed sample to obtain a defect mask, and determining whether the test sample has a defect according to the defect mask includes:
calculating an inter-pixel error weighted by structural similarity by using the test sample and the reconstructed sample, and taking the inter-pixel error as a defect mask;
and taking the sum of all pixel values of the defect mask as a defect index, and judging whether the test sample has defects according to the defect index.
In one embodiment, the step of determining whether the test sample has defects according to the defect index includes:
if the defect index is larger than a preset threshold value, judging that the test sample has defects;
and if the defect index is smaller than or equal to a preset threshold value, judging that the test sample has no defects.
The embodiment of the present application further provides an apparatus, which includes a processor, a memory, and a defect detection program stored in the memory and capable of running on the processor; when the defect detection program is executed by the processor, the steps of the above unsupervised industrial product defect detection method based on deep learning are implemented.
The embodiment of the application also provides a computer readable storage medium, wherein a defect detection program is stored on the computer readable storage medium, and when being executed by a processor, the defect detection program realizes the steps of the unsupervised industrial product defect detection method based on deep learning.
The technical scheme of the unsupervised industrial product defect detection method and device based on deep learning and the computer readable storage medium provided by the embodiment of the application at least has the following technical effects:
the method comprises the steps of training a first self-encoder by using a training sample, and obtaining the implicit expression of the training sample in the training process; wherein the training sample is a non-defective image; carrying out dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of a normal sample; initializing a memory module in a second self-encoder by using the implicit expression of the normal sample, and training the second self-encoder by using the training sample; inputting the test sample into a trained second self-encoder to obtain a reconstructed sample; and calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask. Therefore, the problem that the defect detection effect is poor due to the fact that the defect image samples are difficult to obtain in the traditional technology is effectively solved, the defect is effectively detected under the condition that the model is trained only by using the defect-free image samples, and the defect detection effect is improved.
Drawings
FIG. 1 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a first embodiment of the method for detecting defects in an unsupervised industrial product based on deep learning according to the present application;
FIG. 3 is a schematic flowchart of a second embodiment of the method for detecting defects in an unsupervised industrial product based on deep learning according to the present application;
FIG. 4 is a schematic flowchart of a third embodiment of the method for detecting defects of an unsupervised industrial product based on deep learning according to the present application.
Detailed Description
In order to solve the problem of poor defect detection caused by the difficulty of obtaining defective image samples in the conventional technology, the present application trains a first self-encoder with training samples and obtains the implicit expressions of the training samples in the training process, where the training samples are non-defective images; performs dimensionality reduction on the implicit expressions of the training samples to obtain the implicit expressions of normal samples; initializes a memory module in a second self-encoder with the implicit expressions of the normal samples and trains the second self-encoder with the training samples; inputs a test sample into the trained second self-encoder to obtain a reconstructed sample; and calculates a defect mask from the test sample and the reconstructed sample and judges from the defect mask whether the test sample has a defect. In this way, defects are detected effectively even though the model is trained only on defect-free image samples, and the defect detection performance is improved.
For a better understanding of the above technical solutions, exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Referring to fig. 1, it is a schematic diagram of a hardware structure of an apparatus involved in various embodiments of the present application. The apparatus may include: a processor 101, a memory 102, an input unit 103, an output unit 104, and the like. Those skilled in the art will appreciate that the hardware configuration of the apparatus shown in fig. 1 does not constitute a limitation of the apparatus, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The various components of the device are described in detail below with reference to fig. 1:
the processor 101 is a control center of the apparatus, connects various parts of the entire apparatus, and performs various functions of the apparatus or processes data by running or executing a program stored in the memory 102 and calling up the data stored in the memory 102, thereby monitoring the entire apparatus. Further, the processor 101 includes at least a graphics processor GPU.
The memory 102 may be used to store various programs of the apparatus as well as various data. The memory 102 mainly includes a program storage area and a data storage area, wherein the program storage area stores at least the program required for defect detection, and the data storage area may store various data of the apparatus. Further, the memory 102 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 103 may be used to input data required for self-encoder training and data required for defect detection from outside the apparatus.
The output unit 104 may be configured to output a defect detection result of the data requiring defect detection.
In the embodiment of the present application, the processor 101 may be configured to call a defect detection program stored in the memory 102, and perform the following operations:
training a first self-encoder by using a training sample, and obtaining an implicit expression of the training sample in the training process; wherein the training sample is a non-defective image;
carrying out dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of a normal sample;
initializing a memory module in a second self-encoder by using the implicit expression of the normal sample, and training the second self-encoder by using the training sample;
inputting the test sample into a trained second self-encoder to obtain a reconstructed sample;
and calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
acquiring a preset number of non-defective images;
and scaling the non-defective image to the same size, and carrying out normalization processing on the non-defective image to obtain a training sample.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
inputting training samples to a first self-encoder one by one to train the first self-encoder;
and after the training of the first self-encoder is finished, acquiring a feature vector output by the training sample through an encoder module of the first self-encoder, and taking the feature vector as an implicit expression of the training sample.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
and performing dimensionality reduction on the implicit expression of the training sample by using principal component analysis to obtain the implicit expression subjected to dimensionality reduction, and taking the implicit expression as the implicit expression of a normal sample.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
inputting the test sample into a trained second self-encoder to obtain a reconstructed image, and calculating a mean square error between the test sample and the reconstructed image;
and if the mean square error between the test sample and the reconstructed image is less than or equal to a preset threshold value, taking the reconstructed image as a reconstructed sample.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
and if the mean square error between the test sample and the reconstructed image is greater than a preset threshold value, re-inputting the reconstructed image into the trained second self-encoder.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
calculating an inter-pixel error weighted by structural similarity by using the test sample and the reconstructed sample, and taking the inter-pixel error as a defect mask;
and taking the sum of all pixel values of the defect mask as a defect index, and judging whether the test sample has defects according to the defect index.
In one embodiment, the processor 101 may be configured to invoke a defect detection program stored in the memory 102 and perform the following operations:
if the defect index is larger than a preset threshold value, judging that the test sample has defects;
and if the defect index is smaller than or equal to a preset threshold value, judging that the test sample has no defects.
According to the technical scheme, the first self-encoder is trained by using a training sample, and the implicit expression of the training sample is obtained in the training process; wherein the training sample is a non-defective image; carrying out dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of a normal sample; initializing a memory module in a second self-encoder by using the implicit expression of the normal sample, and training the second self-encoder by using the training sample; inputting the test sample into a trained second self-encoder to obtain a reconstructed sample; and calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask. Therefore, the problem that the defect detection effect is poor due to the fact that the defect image samples are difficult to obtain in the traditional technology is effectively solved, the defect is effectively detected under the condition that the model is trained only by using the defect-free image samples, and the defect detection effect is improved.
For better understanding of the above technical solutions, the following detailed descriptions will be provided in conjunction with the drawings and the detailed description of the embodiments.
Referring to fig. 2, in a first embodiment of the present application, a method for detecting defects of an unsupervised industrial product based on deep learning specifically includes the following steps:
step S110, training the first self-encoder by using a training sample, and obtaining the implicit expression of the training sample in the training process.
In this embodiment, the first self-encoder includes an encoder module and a decoder module, wherein the encoder and the decoder are convolutional neural networks, and the training samples are non-defective images. Assuming that the encoder is E, the decoder is D, and a training sample is x, inputting the training sample into the first self-encoder yields the output image x̂ = D(E(x)). The first self-encoder may be trained by optimizing its loss function. E(x) is the feature vector of the training sample x output by the encoder E, i.e. the implicit expression of the training sample x. After all training samples have been used to train the first self-encoder, the implicit expressions of all samples are obtained in the training process.
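As an illustration of this step, the following is a minimal PyTorch sketch of such a convolutional encoder-decoder pair and of extracting the implicit expression z = E(x); the class name, layer sizes, latent dimension and the assumed 64 × 64 input resolution are illustrative assumptions, not the network structure specified by this embodiment.

```python
# Minimal sketch of the first self-encoder (assumptions: PyTorch, 64x64 RGB inputs,
# illustrative layer sizes; not the structure given in the patent tables).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder E: image -> implicit expression (latent vector) z
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Decoder D: latent vector z -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)       # implicit expression of x
        x_hat = self.decoder(z)   # reconstruction D(E(x))
        return x_hat, z

# After training, the implicit expressions of all N training samples can be collected
# into an N x D_l matrix by running the encoder once more:
#     with torch.no_grad():
#         Z = torch.cat([model(x)[1] for x in loader])
```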
Step S120, performing dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of the normal sample.
In this embodiment, assuming that the number of training samples is N, the N implicit expressions of the N training samples may be obtained in step S110. To make the data easier to process and use and to remove data noise, these N implicit expressions need to undergo dimensionality reduction. After the dimensionality reduction is performed on them, a smaller set of implicit expressions is obtained, namely the implicit expressions of the normal samples.
Step S130, initializing a memory module in a second self-encoder using the implicit expression of the normal sample, and training the second self-encoder using the training sample.
In this embodiment, the second self-encoder includes an encoder module, a memory module, and a decoder module. Assuming that M implicit expressions of normal samples are obtained in step S120, the memory module in the second self-encoder is initialized with these M implicit expressions. After the memory module in the second self-encoder is initialized with the implicit expressions of the normal samples, the second self-encoder may be trained using the training samples.
In one embodiment, assuming the image size of the training samples is W × H × 3 and the dimension of the implicit expression is D_l, the network structure of the second self-encoder can be as shown in the following table, where the convolutional layers (Conv) and deconvolution layers (Deconv) are specified in the format out_channels × kernel_size × stride × padding.
[Table: network structure of the second self-encoder]
Assuming that the encoder is E, the decoder is D, and a training sample is x, the step of training the second self-encoder using the training samples may include:
Step a, forward propagation of the second self-encoder. The training sample is input into the second self-encoder, and the implicit expression z = E(x) of the training sample is obtained through forward propagation of the encoder. The cosine similarity between the implicit expression z and the implicit expression m_i of each normal sample in the memory module is calculated as d_i = (z · m_i) / (‖z‖‖m_i‖). The similarities are then normalized with a softmax function to obtain the weight of the implicit expression of each normal sample in the memory module, w_i = exp(d_i) / Σ_j exp(d_j). To reconstruct the training sample using only a small number of the most similar normal samples, the w_i are sorted from large to small, and the value at the q-quantile (0 < q < 1; the smaller q is, the fewer normal samples are used) is taken as a threshold λ for truncation, i.e. ŵ_i = w_i if w_i ≥ λ, and ŵ_i = 0 otherwise. The implicit expression then used by the decoder is ẑ = Σ_i ŵ_i m_i, and the reconstructed image finally obtained is x̂ = D(ẑ).
Step b, backward propagation of the second self-encoder. The loss function of the second self-encoder is L = ‖x − x̂‖² + β Σ_i (−ŵ_i log ŵ_i), where the first term is the reconstruction error and the second term is the entropy of ŵ, used to further promote the sparsity of ŵ. The parameters of the second self-encoder are optimized by optimizing this loss function.
In the training process, the memory module in the second self-encoder needs to be updated in real time, and the parameters of the memory module are fixed after the training is finished.
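The following minimal PyTorch sketch illustrates the memory-addressing forward pass and the loss described above; the class and function names, the renormalization of the truncated weights and the entropy weight beta are assumptions made for illustration rather than the patent's exact formulation.

```python
# Minimal sketch of the memory module of the second self-encoder (assumptions:
# PyTorch, renormalized truncated weights, illustrative entropy weight).
import torch
import torch.nn.functional as F

class MemoryModule(torch.nn.Module):
    def __init__(self, init_items):
        super().__init__()
        # init_items: M x D_l implicit expressions of normal samples (e.g. from PCA).
        # Updated during training, then frozen once training is finished.
        self.items = torch.nn.Parameter(init_items.clone())

    def forward(self, z, q=0.75):
        # Cosine similarity d_i between the query latent z (B x D_l) and each memory item m_i.
        d = F.cosine_similarity(z.unsqueeze(1), self.items.unsqueeze(0), dim=2)  # B x M
        w = F.softmax(d, dim=1)                      # addressing weights w_i
        # Keep only the most similar items: truncate at the q-quantile threshold lambda.
        lam = torch.quantile(w, q, dim=1, keepdim=True)
        w_hat = torch.where(w >= lam, w, torch.zeros_like(w))
        w_hat = w_hat / (w_hat.sum(dim=1, keepdim=True) + 1e-12)  # renormalize (assumption)
        z_hat = w_hat @ self.items                   # implicit expression passed to the decoder
        return z_hat, w_hat

def second_ae_loss(x, x_hat, w_hat, beta=2e-4):
    # Reconstruction error plus an entropy term that promotes sparsity of w_hat.
    recon = F.mse_loss(x_hat, x)
    entropy = (-w_hat * torch.log(w_hat + 1e-12)).sum(dim=1).mean()
    return recon + beta * entropy
```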
Step S140, inputting the test sample to the trained second self-encoder to obtain a reconstructed sample.
In the present embodiment, the test sample is a sample that may have defects, i.e., may be a non-defective image or may be a defective image. Given a test sample, after the test sample is input to the trained second self-encoder, the trained second self-encoder may be used to reconstruct the test sample by an iterative optimization method, so as to obtain a reconstructed sample corresponding to the test sample, that is, a corresponding defect-free sample. By adopting the iterative optimization method, the reconstructed sample only contains the normal sample.
Step S150, calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask.
In this embodiment, the defect mask refers to a segmentation result of a defect in an image, and different regions are represented by different pixel values. For example, the pixel value of the defective position may be 255, and the pixel value of the non-defective position may be 0. By using the test sample and the reconstructed sample, a defect mask of the test sample can be calculated. In one embodiment, the defect mask may be obtained by calculating the inter-pixel error weighted by the structural similarity of the test sample and the reconstructed sample. After the defect mask of the test sample is obtained, whether the test sample has defects or not can be judged by analyzing the defect mask of the test sample. In one embodiment, the presence of defects in a test sample may be determined by calculating the sum of pixel values of the defect mask.
The method has the advantages that the training of the first self-encoder is carried out by using the training sample, and the implicit expression of the training sample is obtained in the training process; wherein the training sample is a non-defective image; carrying out dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of a normal sample; initializing a memory module in a second self-encoder by using the implicit expression of the normal sample, and training the second self-encoder by using the training sample; inputting the test sample into a trained second self-encoder to obtain a reconstructed sample; and calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask. Therefore, the problem that the defect detection effect is poor due to the fact that the defect image samples are difficult to obtain in the traditional technology is effectively solved, the defect is effectively detected under the condition that the model is trained only by using the defect-free image samples, and the defect detection effect is improved.
Referring to fig. 3, in a second embodiment of the present application, the method for detecting defects of an unsupervised industrial product based on deep learning specifically includes the following steps:
in step S211, a preset number of non-defective images are acquired.
In this embodiment, in order to train the self-encoders sufficiently, a sufficient number of training samples need to be acquired. Since the training samples used for training the self-encoders in this application are non-defective images, a sufficient number of non-defective images need to be acquired as training samples, and the preset number is therefore set to a value large enough to provide them.
Step S212, scaling the non-defective image to the same size, and performing normalization processing on the non-defective image to obtain a training sample.
In this embodiment, after a sufficient number of non-defective images are acquired, the non-defective images also need to be preprocessed. First, the non-defective image needs to be scaled to the same size so that the size of the image input from the encoder can be unified. After the non-defective image is scaled, it needs to be further normalized, i.e. the data on each pixel of the non-defective image is mapped to a floating point number between 0 and 1. By normalizing the non-defective images, the data processing during the training of the self-encoder can be simplified. After the non-defective image is processed, the non-defective image may be used as a training sample for self-encoder training.
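A minimal sketch of this preprocessing is given below; the 256 × 256 target size and the use of the PIL and NumPy libraries are assumptions for illustration.

```python
# Minimal preprocessing sketch: scale all non-defective images to one size and map
# pixel values to floating point numbers in [0, 1].
import numpy as np
from PIL import Image

def load_training_samples(paths, size=(256, 256)):
    samples = []
    for p in paths:
        img = Image.open(p).convert("RGB").resize(size, Image.BILINEAR)
        x = np.asarray(img, dtype=np.float32) / 255.0   # normalize to [0, 1]
        samples.append(x)
    return np.stack(samples)   # shape: N x H x W x 3
```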
Step S221, inputting training samples to a first self-encoder one by one, so as to train the first self-encoder.
In this embodiment, assume that the image size of the training samples is W × H × 3 and the dimension of the implicit expression is D_l. The network structure of the first self-encoder can then be as shown in the following table, where the convolutional layers (Conv) and deconvolution layers (Deconv) are specified in the format out_channels × kernel_size × stride × padding.
[Table: network structure of the first self-encoder]
Assuming that the encoder is E, the decoder is D, and a training sample is x, inputting the training sample into the first self-encoder yields the output image x̂ = D(E(x)). In order to make the first self-encoder learn purer normal samples, after the output image x̂ is obtained, it can be used as the input of the first self-encoder and propagated forward again to obtain the output image x̂′ = D(E(x̂)), where the loss function is L₁ = ‖x − x̂‖² + ‖x̂ − x̂′‖².
Training the first self-encoder by optimizing the loss function. Training of the first self-encoder may be completed after all training samples are input to the first self-encoder one by one to train the first self-encoder.
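The following minimal PyTorch sketch illustrates such a two-pass training loop; the optimizer settings, the equal weighting of the two reconstruction terms, and the assumption that the model returns (reconstruction, implicit expression) as in the earlier sketch are illustrative.

```python
# Minimal sketch of the two-pass training of the first self-encoder (assumptions:
# PyTorch, Adam optimizer, model(x) returns (x_hat, z) as in the earlier sketch).
import torch
import torch.nn.functional as F

def train_first_autoencoder(model, loader, epochs=50, lr=1e-3, device="cuda"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.to(device).train()
    for _ in range(epochs):
        for x in loader:                      # x: batch of non-defective images
            x = x.to(device)
            x_hat, _ = model(x)               # first forward pass
            x_hat2, _ = model(x_hat)          # second pass on the reconstruction
            # Reconstruction error of the first pass plus consistency of the second pass.
            loss = F.mse_loss(x_hat, x) + F.mse_loss(x_hat2, x_hat)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```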
Step S222, after the training of the first self-encoder is completed, obtaining a feature vector output by the encoder module of the first self-encoder from the training sample, and using the feature vector as an implicit expression of the training sample.
In this embodiment, assuming the training sample is x, after the training of the first self-encoder is completed, the feature vector of the training sample x output by the encoder module of the first self-encoder is obtained as z = E(x), which is the implicit expression of the training sample x. Assuming the dimension of the implicit expression is D_l and the number of training samples is N, an implicit expression matrix composed of the implicit expressions of all the training samples is finally obtained, with dimension N × D_l.
Step S231, performing dimensionality reduction on the implicit expressions of the training samples by using principal component analysis to obtain the implicit expressions subjected to dimensionality reduction, and taking them as the implicit expressions of the normal samples.
In this embodiment, after the implicit expressions of all the training samples are obtained and the implicit expression matrix is formed, the implicit expression matrix may be transposed so that its dimension becomes D_l × N. Assuming that the number of normal samples finally obtained is M, performing dimensionality reduction on the transposed implicit expression matrix by principal component analysis yields an implicit expression matrix of dimension D_l × M. This matrix is then transposed again to obtain an implicit expression matrix of dimension M × D_l, which is the implicit expression matrix of the normal samples.
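A minimal scikit-learn sketch of this dimensionality-reduction step is given below; the function name and the choice of M are assumptions for illustration, and M may not exceed min(N, D_l).

```python
# Minimal sketch of reducing the N x D_l implicit expression matrix to M memory items.
import numpy as np
from sklearn.decomposition import PCA

def build_memory_init(Z, M=100):
    # Z: N x D_l matrix of implicit expressions of the training samples.
    # Transpose to D_l x N, reduce the N axis to M principal components,
    # then transpose back to obtain M implicit expressions of dimension D_l.
    reduced = PCA(n_components=M).fit_transform(Z.T)   # D_l x M
    return reduced.T                                    # M x D_l memory initialization
```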
Step S240, initializing a memory module in a second self-encoder using the implicit expression of the normal sample, and training the second self-encoder using the training sample.
Step S250, inputting the test sample to the trained second self-encoder to obtain a reconstructed sample.
Step S260, calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask.
The method has the beneficial effects that the steps of obtaining the trained second self-encoder are supplemented and refined on the basis of the first embodiment. Therefore, the problem that the defect detection effect is poor due to the fact that the defect image samples are difficult to obtain in the traditional technology is further effectively solved, the defect is effectively detected under the condition that the model is trained only by using the defect-free image samples, and the defect detection effect is improved.
Referring to fig. 4, in a third embodiment of the present application, the method for detecting defects of an unsupervised industrial product based on deep learning specifically includes the following steps:
step S310, training the first self-encoder by using a training sample, and obtaining the implicit expression of the training sample in the training process.
Step S320, performing dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of the normal sample.
Step S330, initializing a memory module in a second self-encoder using the implicit expression of the normal sample, and training the second self-encoder using the training sample.
Step S341, inputting the test sample to the trained second self-encoder to obtain a reconstructed image, and calculating a mean square error between the test sample and the reconstructed image.
In this embodiment, given a test sample x, after the test sample x is input into the trained second self-encoder, the reconstructed image x̂ can be obtained. However, after the reconstructed image is obtained, it is necessary to determine whether the reconstructed image contains only normal samples. At this time, the mean square error between the test sample and the reconstructed image may first be calculated, and then whether the reconstructed image contains only normal samples may be determined according to the calculated mean square error, wherein the mean square error reflects the degree of difference between the test sample and the reconstructed image.
Step S342, if the mean square error between the test sample and the reconstructed image is less than or equal to a preset threshold, the reconstructed image is used as the reconstructed sample.
In this embodiment, the preset threshold is a value that is preset according to actual needs and that corresponds to a mean square error that can determine that the difference between the test sample and the reconstructed image is small. If the mean square error between the test sample and the reconstructed image is less than or equal to a preset threshold, it means that the difference degree between the test sample and the reconstructed image is small, and at this time, it can be determined that the reconstructed image only contains normal samples. Therefore, the reconstructed image does not need to be reconstructed continuously, and can be directly used as a reconstructed sample.
Step S343, if the mean square error between the test sample and the reconstructed image is greater than the preset threshold, re-inputting the reconstructed image into the trained second self-encoder.
In this embodiment, if the mean square error between the test sample and the reconstructed image is greater than the preset threshold, it means that the difference between the test sample and the reconstructed image is still large, and at this time, it cannot be determined that the reconstructed image only includes a normal sample, and therefore, the reconstructed image needs to be reconstructed continuously, that is, the reconstructed image is input into the trained second self-encoder again, and then the mean square error between the input image and the reconstructed image is calculated continuously, so as to determine whether the reconstructed image can be used as the reconstructed sample.
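A minimal PyTorch sketch of this iterative reconstruction is given below; the threshold value, the iteration cap and the assumption that model2(x) directly returns the reconstructed image are illustrative.

```python
# Minimal sketch of iterative test-time reconstruction with the trained second self-encoder.
import torch
import torch.nn.functional as F

@torch.no_grad()
def iterative_reconstruct(model2, x, threshold=1e-3, max_iters=20):
    model2.eval()
    current, recon = x, x
    for _ in range(max_iters):
        recon = model2(current)                   # reconstruct the current input
        if F.mse_loss(recon, current) <= threshold:
            break                                 # difference small enough: accept
        current = recon                           # otherwise feed the reconstruction back in
    return recon                                  # reconstructed sample (normal content only)
```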
Step S351, calculating an inter-pixel error weighted by the structural similarity using the test sample and the reconstructed sample, and using the inter-pixel error as a defect mask.
In this embodiment, in order to eliminate the influence of small positioning errors in the reconstruction process and to improve the accuracy of the defect mask image, the inter-pixel error weighted by the structural similarity is used as the defect mask. Assuming that the test sample is x and the reconstructed sample is x̂, the structural similarity is calculated as SSIM(x, x̂) = ((2 μ_x μ_x̂ + c_1)(2 σ_{x x̂} + c_2)) / ((μ_x² + μ_x̂² + c_1)(σ_x² + σ_x̂² + c_2)), where μ_x and μ_x̂ are the means of x and x̂, σ_x² and σ_x̂² are their variances (σ_{x x̂} their covariance), and c_1 and c_2 are constants for maintaining stability. The error between the pixels at position (i, j) is calculated as e_{ij} = (1/C) Σ_c |x_{ijc} − x̂_{ijc}|, where C is the number of channels of the image. In summary, the structural-similarity-weighted inter-pixel error is calculated as M_{ij} = α · e_{ij} + (1 − α) · (1 − SSIM_{ij}), where α is a coefficient controlling the proportion of the inter-pixel error in the defect mask; the larger α is, the larger the weight of the inter-pixel error. Since the structural similarity is computed on image patches, the structural similarity index SSIM_{ij} of the pixel at position (i, j) is defined as the structural similarity between the small square patches of side length l pixels centered on that pixel (i.e., x′_{ij} and x̂′_{ij}); when the pixel lies at the edge of the image, the part extending beyond the image is padded with a pixel value of 0.
After calculation, the defect mask of the test sample can be obtained.
Step S352, using the sum of all pixel values of the defect mask as a defect index, and determining whether the test sample has a defect according to the defect index.
In this embodiment, whether the test sample x has a defect can be determined according to the defect mask M obtained in step S351. First, all pixel values in the defect mask M are summed to obtain the defect index I = Σ_ij M_ij of the defect mask, and whether the test sample has defects is then judged according to the defect index. The defect index indicates the likelihood that the test sample has a defect: the larger the defect index, the greater the possibility that the test sample has defects; the smaller the defect index, the less likely the test sample is to have defects. In an embodiment, the step of determining whether the test sample has defects according to the defect index may include: if the defect index is larger than a preset threshold value, judging that the test sample has defects; and if the defect index is smaller than or equal to the preset threshold value, judging that the test sample has no defects. The preset threshold is a value set in advance according to the actual situation and corresponds to a defect index above which the test sample can be judged to have defects.
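A minimal sketch of the defect mask and defect index computation is given below, using scikit-image (version 0.19 or later) for the per-pixel structural-similarity map; the weighting formula, alpha, the window size and the decision threshold are assumptions for illustration.

```python
# Minimal sketch of the structural-similarity-weighted defect mask and the defect index.
import numpy as np
from skimage.metrics import structural_similarity

def defect_mask_and_index(x, x_hat, alpha=0.5, win_size=11):
    # x, x_hat: H x W x 3 float images in [0, 1] (test sample and reconstructed sample).
    _, ssim_map = structural_similarity(
        x, x_hat, channel_axis=2, win_size=win_size, data_range=1.0, full=True)
    ssim_map = ssim_map.mean(axis=2)            # per-pixel SSIM, averaged over channels
    pixel_err = np.abs(x - x_hat).mean(axis=2)  # per-pixel error, averaged over channels
    # Structural-similarity-weighted inter-pixel error used as the defect mask.
    mask = alpha * pixel_err + (1.0 - alpha) * (1.0 - ssim_map)
    index = mask.sum()                          # defect index: sum of all mask pixel values
    return mask, index

# Decision rule: the test sample is judged defective if the index exceeds a preset threshold.
# is_defective = index > DEFECT_THRESHOLD
```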
The method has the beneficial effects that the step of detecting the test sample is supplemented and refined on the basis of the first embodiment. Therefore, the problem that the defect detection effect is poor due to the fact that the defect image samples are difficult to obtain in the traditional technology is further effectively solved, the defect is effectively detected under the condition that the model is trained only by using the defect-free image samples, and the defect detection effect is improved.
Based on the same inventive concept, an embodiment of the present application further provides a device, where the device includes a processor, a memory, and a defect detection program that is stored in the memory and can be run on the processor, and when the defect detection program is executed by the processor, the various processes of the above embodiment of the method for detecting defects of an unsupervised industrial product based on deep learning are implemented, and the same technical effects can be achieved, and are not described herein again to avoid repetition.
Since the apparatus provided in the embodiments of the present application is an apparatus used for implementing the method in the embodiments of the present application, based on the method described in the embodiments of the present application, a person skilled in the art can understand the specific structure and the variation of the apparatus, and thus details are not described herein again. All devices used in the methods of the embodiments of the present application are within the scope of the present application.
Based on the same inventive concept, an embodiment of the present application further provides a computer-readable storage medium, where a defect detection program is stored on the computer-readable storage medium, and when the defect detection program is executed by a processor, the processes of the above embodiment of the method for detecting defects of an unsupervised industrial product based on deep learning are implemented, and the same technical effect can be achieved, and in order to avoid repetition, details are not repeated here.
Since the computer-readable storage medium provided in the embodiments of the present application is a computer-readable storage medium used for implementing the method in the embodiments of the present application, based on the method described in the embodiments of the present application, those skilled in the art can understand the specific structure and modification of the computer-readable storage medium, and thus details are not described herein. Any computer-readable storage medium that can be used with the methods of the embodiments of the present application is intended to be within the scope of the present application.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An unsupervised industrial product defect detection method based on deep learning is characterized by comprising the following steps:
training a first self-encoder by using a training sample, and obtaining an implicit expression of the training sample in the training process; wherein the training sample is a non-defective image;
carrying out dimensionality reduction on the implicit expression of the training sample to obtain the implicit expression of a normal sample;
initializing a memory module in a second self-encoder by using the implicit expression of the normal sample, and training the second self-encoder by using the training sample;
inputting the test sample into a trained second self-encoder to obtain a reconstructed sample;
and calculating by using the test sample and the reconstructed sample to obtain a defect mask, and judging whether the test sample has defects according to the defect mask.
2. The deep learning-based unsupervised industrial product defect detection method of claim 1, wherein the method further comprises:
acquiring a preset number of non-defective images;
and scaling the non-defective image to the same size, and carrying out normalization processing on the non-defective image to obtain a training sample.
3. The method for detecting defects in an unsupervised industrial product based on deep learning of claim 1, wherein the step of training a first self-encoder with training samples and obtaining implicit expressions of the training samples during the training process comprises:
inputting training samples to a first self-encoder one by one to train the first self-encoder;
and after the training of the first self-encoder is finished, acquiring a feature vector output by the training sample through an encoder module of the first self-encoder, and taking the feature vector as an implicit expression of the training sample.
4. The method for detecting the defect of the unsupervised industrial product based on the deep learning as claimed in claim 1, wherein the step of performing the dimensionality reduction processing on the implicit expression of the training sample to obtain the implicit expression of the normal sample comprises:
and performing dimensionality reduction on the implicit expression of the training sample by using principal component analysis to obtain the implicit expression subjected to dimensionality reduction, and taking the implicit expression as the implicit expression of a normal sample.
5. The method for detecting defects in an unsupervised industrial product based on deep learning of claim 1, wherein the step of inputting the test samples to the trained second self-encoder to obtain the reconstructed samples comprises:
inputting the test sample into a trained second self-encoder to obtain a reconstructed image, and calculating a mean square error between the test sample and the reconstructed image;
and if the mean square error between the test sample and the reconstructed image is less than or equal to a preset threshold value, taking the reconstructed image as a reconstructed sample.
6. The method for detecting defects in an unsupervised industrial product based on deep learning of claim 5, wherein the step of inputting the test samples into the trained second self-encoder to obtain the reconstructed samples further comprises:
and if the mean square error between the test sample and the reconstructed image is greater than a preset threshold value, re-inputting the reconstructed image into the trained second self-encoder.
7. The method for detecting defects in an unsupervised industrial product based on deep learning of claim 1, wherein the step of calculating using the test sample and the reconstructed sample to obtain a defect mask and determining whether the test sample has defects according to the defect mask comprises:
calculating an inter-pixel error weighted by structural similarity by using the test sample and the reconstructed sample, and taking the inter-pixel error as a defect mask;
and taking the sum of all pixel values of the defect mask as a defect index, and judging whether the test sample has defects according to the defect index.
8. The method for detecting defects of unsupervised industrial products based on deep learning of claim 7, wherein the step of judging whether the test sample has defects according to the defect index comprises the steps of:
if the defect index is larger than a preset threshold value, judging that the test sample has defects;
and if the defect index is smaller than or equal to a preset threshold value, judging that the test sample has no defects.
9. An apparatus comprising a processor, a memory, and a defect detection program stored on the memory and executable on the processor, the defect detection program when executed by the processor implementing the steps of the deep learning based unsupervised industrial product defect detection method of any one of claims 1 to 8.
10. A computer readable storage medium, wherein a defect detection program is stored on the computer readable storage medium, and when executed by a processor, the defect detection program implements the steps of the deep learning based unsupervised industrial product defect detection method according to any one of claims 1 to 8.
CN202011377532.9A 2020-11-27 2020-11-27 Unsupervised industrial product defect detection method and device based on deep learning Pending CN112446869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011377532.9A CN112446869A (en) 2020-11-27 2020-11-27 Unsupervised industrial product defect detection method and device based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011377532.9A CN112446869A (en) 2020-11-27 2020-11-27 Unsupervised industrial product defect detection method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN112446869A true CN112446869A (en) 2021-03-05

Family

ID=74739077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011377532.9A Pending CN112446869A (en) 2020-11-27 2020-11-27 Unsupervised industrial product defect detection method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN112446869A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129272A (en) * 2021-03-30 2021-07-16 广东省科学院智能制造研究所 Defect detection method and device based on denoising convolution self-encoder
CN113205013A (en) * 2021-04-19 2021-08-03 重庆创通联达智能技术有限公司 Object identification method, device, equipment and storage medium
CN113256602A (en) * 2021-06-10 2021-08-13 中科云尚(南京)智能技术有限公司 Unsupervised fan blade defect detection method and system based on self-encoder
CN113269425A (en) * 2021-05-18 2021-08-17 北京航空航天大学 Quantitative evaluation method for health state of equipment under unsupervised condition and electronic equipment
CN114693685A (en) * 2022-06-02 2022-07-01 深圳市华汉伟业科技有限公司 Unsupervised defect detection model training method and defect detection method


Similar Documents

Publication Publication Date Title
CN112446869A (en) Unsupervised industrial product defect detection method and device based on deep learning
CN110189255B (en) Face detection method based on two-stage detection
CN110619618A (en) Surface defect detection method and device and electronic equipment
CN111275660B (en) Flat panel display defect detection method and device
WO2023116632A1 (en) Video instance segmentation method and apparatus based on spatio-temporal memory information
CN112036513A (en) Image anomaly detection method based on memory-enhanced potential spatial autoregression
CN110930378B (en) Emphysema image processing method and system based on low data demand
CN115619743A (en) Construction method and application of OLED novel display device surface defect detection model
CN114170184A (en) Product image anomaly detection method and device based on embedded feature vector
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN112884721A (en) Anomaly detection method and system and computer readable storage medium
TWI803243B (en) Method for expanding images, computer device and storage medium
JP2021143884A (en) Inspection device, inspection method, program, learning device, learning method, and trained dataset
CN115358952A (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN111914949B (en) Zero sample learning model training method and device based on reinforcement learning
KR102178238B1 (en) Apparatus and method of defect classification using rotating kernel based on machine-learning
CN115862119B (en) Attention mechanism-based face age estimation method and device
CN117058079A (en) Thyroid imaging image automatic diagnosis method based on improved ResNet model
CN116912144A (en) Data enhancement method based on discipline algorithm and channel attention mechanism
CN116563243A (en) Foreign matter detection method and device for power transmission line, computer equipment and storage medium
CN115937991A (en) Human body tumbling identification method and device, computer equipment and storage medium
Fan et al. EGFNet: Efficient guided feature fusion network for skin cancer lesion segmentation
CN116152191A (en) Display screen crack defect detection method, device and equipment based on deep learning
CN115994900A (en) Unsupervised defect detection method and system based on transfer learning and storage medium
CN115564702A (en) Model training method, system, device, storage medium and defect detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination