CN111402203A - Fabric surface defect detection method based on convolutional neural network

Info

Publication number
CN111402203A
Authority
CN
China
Prior art keywords
neural network
convolutional neural
image
fabric
network model
Prior art date
Legal status
Granted
Application number
CN202010112073.5A
Other languages
Chinese (zh)
Other versions
CN111402203B (en)
Inventor
郑小青
陈杰
郑松
孔亚广
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202010112073.5A
Publication of CN111402203A
Application granted
Publication of CN111402203B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/0004: Industrial image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/045: Combinations of networks
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/08: Learning methods
    • G06T7/10: Segmentation; Edge detection
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20221: Image fusion; Image merging
    • G06T2207/30124: Fabrics; Textile; Paper (G06T2207/30108 Industrial image inspection)
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention discloses a fabric surface defect detection method based on a convolutional neural network, which comprises the following steps: S1, collecting and labeling a data set; S2, producing the GroundTruth for each defect sample image in the data set; S3, constructing a convolutional neural network model; S4, training the convolutional neural network model to obtain an optimal model; S5, acquiring fabric images on line, inputting the image of the fabric to be detected into the trained convolutional neural network model for image segmentation, and performing on-line automatic detection through the model to identify defects on the fabric surface. The method avoids hand-crafted defect features: the convolutional neural network learns features directly from pre-labeled sample data, so segmentation is fast and accurate, fabric surface defects are detected automatically and precisely, manpower and material resources are saved, and the quality of fabric products is improved.

Description

Fabric surface defect detection method based on convolutional neural network
Technical Field
The invention relates to the technical field of fabric surface defect detection, in particular to a fabric surface defect detection method based on a convolutional neural network.
Background
With the rapid development of the textile industry, quality control of fabric cloth has become increasingly strict. Adverse factors such as human error, machine failure and yarn breakage easily cause fabric defects, degrade product quality and inflict heavy economic losses on enterprises; fabric defect detection is therefore one of the important links of quality control.
Defects come in many types and shapes. The traditional inspection process relies mainly on human visual recognition, and most textile enterprises still detect fabric defects manually. Manual inspection is easily affected by subjective and objective factors such as the inspector's eyesight, fatigue, mood and the lighting, so detection precision and accuracy cannot be guaranteed and efficiency is low. In particular, defects on fabrics with complex textures, variable patterns and small color differences cannot be recognized by the human eye, which does not meet the requirements of industrial production.
Defects have a decisive influence on the quality and price of end textile products, and if defective products are used in aviation, defense or medical applications, the losses can be incalculable and irretrievable, which makes fabric defect detection particularly important. However, the complex texture structures of the various fabrics and the high similarity between noise and fine defects greatly increase the difficulty of defect detection.
Traditional fabric defect detection methods are generally unsupervised, segmentation-based methods. Segmentation-based defect detection depends on image quality and contrast and requires a large difference between the defect and the textured fabric background; in other words, traditional methods work well only when the features of the background and the defect differ strongly. However, today's fabrics are of many types, and as production technology keeps improving, defects become more varied and smaller, so that defect and background are hard to distinguish even by eye. In this situation, traditional segmentation-based methods easily flag patterned image regions as defects, producing a high false-detection rate, and also miss small target defects, so they cannot handle all fabric image defect detection tasks.
The traditional methods mainly include the following:
1. threshold-based image segmentation techniques the basic idea of threshold-based image segmentation techniques is to compute one or more grayscale thresholds based on the grayscale characteristics of the image, and compare the grayscale value of each pixel in the image with the threshold, and finally classify the pixels into appropriate classes according to the results of the pixel comparisons. The threshold segmentation has the advantages of simple calculation, high operation efficiency and high speed. The limitation is that this method only considers the grey values of the pixels themselves, and generally does not consider spatial features, and is therefore very sensitive to noise. In practical applications, the threshold method is usually used in combination with other methods.
2. Edge-based image segmentation. An edge is a set of contiguous pixels on the boundary between two different regions of an image; it reflects the discontinuity of local image features and embodies abrupt changes in characteristics such as gray level, color and texture. Edge-based techniques perform edge detection on gray values to divide the image into different parts. Their drawback is that the continuity and completeness of the edges are hard to guarantee when segmenting complex images.
3. Region-based image segmentation. The image is divided into different regions according to a similarity criterion. The method mainly exploits local spatial information of the image and avoids the fragmented segmentation produced by some other algorithms. However, it is slow when large areas must be segmented, has poor noise immunity, and may segment meaningless regions or over-segment the image. It is generally combined with other methods so that their respective strengths yield a better segmentation result.
4. Image segmentation based on specific theories, including cluster analysis, fuzzy set theory, graph theory and the like; these theories have opened new directions for breaking through the difficulties of image segmentation. Traditional image segmentation techniques struggle to meet the precision and efficiency demands of practical applications, especially for real-time scene understanding and image information processing. Moreover, when semantic segmentation is required, a conventional segmentation algorithm used alone rarely achieves a good result.
All of these traditional methods rely on hand-crafted feature extraction; manual feature selection is time- and labor-consuming, requires long computation and cannot meet real-time requirements. The detection methods commonly adopted at present include structure-based, spectrum-based, statistics-based, model-based and learning-based methods. They all have limitations: execution is time- and labor-consuming, and their precision and speed struggle to meet the requirements. (A minimal illustration of the classical threshold- and edge-based approaches follows.)
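As a concrete illustration of the classical pipeline criticized above, the short script below applies Otsu thresholding and Canny edge detection to a fabric image with OpenCV. It is only a sketch of the prior-art approaches, not part of the invention; the file names are placeholders.

```python
# Minimal sketch of the classical, unsupervised pipeline discussed above
# (threshold- and edge-based segmentation) using OpenCV. Not part of the
# invention; the file names are placeholders.
import cv2

# Read a fabric image in grayscale.
image = cv2.imread("fabric_sample.png", cv2.IMREAD_GRAYSCALE)

# 1. Threshold-based segmentation: Otsu's method picks a single global
#    gray-level threshold from the histogram, so it is sensitive to noise
#    and to low contrast between defect and textured background.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2. Edge-based segmentation: Canny detects abrupt gray-level changes; on
#    complex textures the detected edges are often broken, which is the
#    continuity problem noted above.
edges = cv2.Canny(image, threshold1=50, threshold2=150)

cv2.imwrite("otsu_mask.png", binary)
cv2.imwrite("canny_edges.png", edges)
```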
Disclosure of Invention
To overcome the technical problems of the prior art, namely that visual inspection of fabric defects is easily influenced by subjective and objective factors such as the inspector's eyesight, fatigue, mood and the lighting, and that traditional detection methods have limitations, are time- and labor-consuming to execute, and struggle to meet the required precision and speed, the invention provides a fabric surface defect detection method based on a convolutional neural network.
In order to achieve the purpose, the invention adopts the following technical scheme:
a fabric surface defect detection method based on a convolutional neural network, the method comprising the steps of:
s1, collecting and marking a data set;
s2, manufacturing a GroudTruth of a data set defect sample image;
s3, constructing a convolutional neural network model;
s4, training a convolutional neural network model to obtain an optimal model;
s5, acquiring a defect image of the fabric on line, inputting the image of the fabric to be detected into the trained convolutional neural network model for image segmentation, and realizing on-line automatic detection through the convolutional neural network model so as to identify the defects on the surface of the fabric.
In this method, data set samples are fed into the constructed convolutional neural network model for iterative training, and an off-line training process yields a deep-learning segmentation model that separates defects from the complex background texture; fabric surface defect products are then further classified on the basis of the segmented images, which greatly reduces the classification difficulty and achieves accurate, automatic detection of fabric surface defects. This overcomes the drawbacks of the traditional approach, in which human visual inspection is easily affected by subjective and objective factors such as eyesight, fatigue, mood and lighting, so that detection precision and accuracy cannot be guaranteed and efficiency is low, and in which manual feature selection is time- and labor-consuming, requires long computation and cannot meet real-time requirements.
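As a rough illustration of the on-line detection step S5, the sketch below loads a trained segmentation model, segments a fabric image and flags the fabric as defective when defect pixels appear. The checkpoint and file names, the preprocessing and the defect-area threshold are all illustrative assumptions; the patent does not prescribe them.

```python
# Rough sketch of on-line detection (step S5): load a trained segmentation
# model, segment a fabric image, and flag the fabric as defective when
# defect pixels appear. The checkpoint and file names, the preprocessing
# and the defect-area threshold are illustrative assumptions only.
import cv2
import numpy as np
import torch

model = torch.load("fabric_segmenter.pt", map_location="cpu")  # assumed full-model checkpoint
model.eval()

image = cv2.imread("fabric_to_inspect.png", cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (512, 512)).astype(np.float32) / 255.0
tensor = torch.from_numpy(image)[None, None]                   # [1, 1, 512, 512]

with torch.no_grad():
    mask = model(tensor).argmax(dim=1)[0].numpy()              # per-pixel class map

defect_pixels = int((mask > 0).sum())                          # class 0 = background
print("defective" if defect_pixels > 50 else "defect-free")    # assumed area threshold
```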
Preferably, in step S1, images of several types of fabric surface defect samples are collected and labeled, a class of normal samples is collected and labeled as well, and the collected sample images are used as the data set.
Preferably, in step S2, all defect sample images in the data set are divided into a training set and a validation set at a ratio of 8:2.
Preferably, the convolutional neural network model is a multilayer convolutional neural network model, built as a bidirectional segmentation network based on a DepthWiseFire convolution module, an attention optimization module ARM, a channel attention module SE and a feature fusion module FFM.
Preferably, the method for constructing the convolutional neural network model in step S3 comprises the following steps:
step one, inputting an image;
step two, performing convolution operations on the image, which a bidirectional segmentation network splits into a Part A network structure and a Part B network structure;
step three, computing the Part A output features, where the convolution kernel sizes are chosen according to the image features to be classified: more suitable kernel-size parameters are designed manually, or as many kernel-size combinations as possible are selected to cover all possible feature types;
step four, computing the Part B output features;
step five, connecting the output features of Part A and Part B through the feature fusion module FFM;
step six, restoring the weighted feature map to the original resolution through up-sampling;
and step seven, outputting the segmented image.
Using the bidirectional segmentation network preserves both spatial resolution and receptive field, so the convolutional neural network model can accurately identify fabric surface defects; a minimal structural sketch of these steps is given below.
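The following PyTorch sketch mirrors the seven steps above: two convolutional branches (Part A and Part B), fusion of their outputs, and up-sampling back to the input resolution. The branch internals are plain placeholder convolutions rather than the DepthWiseFire, ARM, SE and FFM modules of the invention, and all names and channel counts are illustrative assumptions.

```python
# Minimal PyTorch sketch of the seven steps above: two convolutional
# branches (Part A, Part B), fusion of their outputs, and up-sampling back
# to the input resolution. The branch internals here are plain placeholder
# convolutions, not the DepthWiseFire/ARM/SE/FFM modules of the invention;
# names and channel counts are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralSegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Placeholder Part A: keeps spatial detail, downsamples to 1/8.
        self.part_a = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 96, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Placeholder Part B: downsamples further for a large receptive field.
        self.part_b = nn.Sequential(
            nn.Conv2d(1, 96, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(96, 96, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(96, 96, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(96, 96, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Placeholder fusion: concatenate and project (stands in for the FFM).
        self.fuse = nn.Conv2d(96 + 96, 96, 1)
        self.classifier = nn.Conv2d(96, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feat_a = self.part_a(x)                                  # 1/8 resolution
        feat_b = F.interpolate(self.part_b(x), size=feat_a.shape[2:],
                               mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([feat_a, feat_b], dim=1))    # step five
        logits = self.classifier(fused)
        # Step six: restore the fused map to the original resolution.
        return F.interpolate(logits, size=(h, w), mode="bilinear",
                             align_corners=False)

model = BilateralSegNet(num_classes=2)
out = model(torch.randn(1, 1, 512, 512))                         # [1, 2, 512, 512]
```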
Preferably, the training of the convolutional neural network model in step S4 includes the following steps:
s4.1, setting an iteration cycle;
s4.2, inputting a fabric defect sample image;
s4.3, calculating a loss function;
s4.4, carrying out iterative optimization calculation by using an optimizer to obtain the optimal model parameters of the updated convolutional neural network model;
and S4.5, repeating the operations of steps S4.2 to S4.4 in every iteration cycle, outputting the validation-set accuracy once per cycle, fine-tuning the parameters or modifying the number of iteration cycles to obtain the optimal model, and stopping training when the set number of iteration cycles is reached.
The training method provided by the invention transfers the knowledge that a conventional multilayer convolutional neural network learns from fabric surface images to the detection and identification of fabric surface defects, so the convolutional neural network model is trained efficiently and accurately.
Preferably, the Part A network structure comprises three layers: the first layer is a convolution layer, and the second and third layers adopt DepthWiseFire convolution modules in a parallel structure.
Preferably, the Part B network structure uses a DenseNet backbone to rapidly down-sample the feature map and obtain a large receptive field, and adds global average pooling so that global context information provides the maximum receptive field.
The invention has the following beneficial effects: 1. the method avoids hand-crafted defect features, since the convolutional neural network learns features directly from pre-labeled sample data, enabling fast and accurate segmentation with strong adaptability and robustness; 2. by applying the trained convolutional neural network model to fabric surface defect images, the proposed detection and identification method accurately recognizes whether the fabric surface contains defects.
Drawings
FIG. 1 is a block diagram of the convolutional neural network model of the invention.
FIG. 2 is a block diagram of the DepthWiseFire convolution module of the invention.
FIG. 3 is a block diagram of the attention optimization module ARM of the invention.
FIG. 4 is a block diagram of the feature fusion module FFM of the invention.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Example 1: the fabric surface defect detection method based on the convolutional neural network comprises the following steps:
s1, data set collection and labeling: collecting images of a plurality of types of fabric surface defect sample sets and marking the images, collecting a type of normal samples for marking, and taking the collected sample images as a data set;
s2, manufacturing a GroudTruth of a data set defect sample image: dividing all defect sample images in the data set into a training set and a verification set according to the proportion of 8: 2;
s3, constructing a convolutional neural network model;
s4, training a convolutional neural network model to obtain an optimal model;
s5, acquiring a defect image of the fabric on line, inputting the image of the fabric to be detected into the trained convolutional neural network model for image segmentation, and realizing on-line automatic detection through the convolutional neural network model so as to identify the defects on the surface of the fabric.
As shown in FIG. 1, the convolutional neural network model is a multilayer convolutional neural network model, constructed as a bidirectional segmentation network based on a DepthWiseFire convolution module, an attention optimization module ARM, a channel attention module SE and a feature fusion module FFM.
The construction method of the convolutional neural network model in step S3 comprises the following steps:
step one, inputting an image, where the size of the input image is 512 × 512 × 1;
step two, performing convolution operations on the image, which a bidirectional segmentation network splits into a Part A network structure and a Part B network structure;
step three, computing the Part A output features, where the convolution kernel sizes are chosen according to the image features to be classified: more suitable kernel-size parameters are designed manually, or as many kernel-size combinations as possible are selected to cover all possible feature types;
step four, computing the Part B output features;
step five, connecting the output features of Part A and Part B through the feature fusion module FFM, as shown in FIG. 4 (a sketch of the FFM follows these steps);
step six, restoring the weighted feature map to the original resolution through up-sampling, where in this embodiment the feature map is 64 × 64 and the resolution is 512 × 512;
and step seven, outputting the segmented image.
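The internals of the FFM in step five are shown only in FIG. 4, which is not reproduced here, so the sketch below follows the widely used BiSeNet-style feature fusion module: concatenate the two branch outputs, project them, then re-weight the channels with a small attention branch and add the weighted features back. Treat the internal layout as an assumption.

```python
# BiSeNet-style feature fusion module standing in for the FFM of FIG. 4:
# concatenate, project, re-weight channels with a small attention branch,
# and add the weighted features back. Internal layout is an assumption.
import torch
import torch.nn as nn

class FeatureFusionModule(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_channels, out_channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a, feat_b):
        fused = self.project(torch.cat([feat_a, feat_b], dim=1))
        weights = self.attention(fused)          # per-channel weights
        return fused + fused * weights           # weighted feature map

# Example: fuse the 96-channel outputs of Part A and Part B at 64 x 64.
ffm = FeatureFusionModule(96 + 96, 96)
out = ffm(torch.randn(1, 96, 64, 64), torch.randn(1, 96, 64, 64))
```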
Using the bidirectional segmentation network preserves both spatial resolution and receptive field, so the convolutional neural network model can accurately identify fabric surface defects.
In this embodiment the convolutional neural network model uses a bidirectional segmentation network, namely Part A and Part B, which guarantees both spatial resolution and receptive field.
Part A preserves the spatial scale of the original input image and encodes rich spatial information. This part has three layers, comprising one convolution with a stride of 2 and two pooling layers. Each layer halves the image resolution, so the network finally extracts an output feature map at 1/8 of the original image size.
The first layer of the Part A network structure is a convolution layer with 32 kernels of receptive field 3 × 3 and stride 2; its output is a 32-channel feature map of size 256 × 256.
The second layer adopts a DepthWiseFire convolution module with a parallel structure, as shown in FIG. 2. The module combines depthwise separable convolution, a channel attention module and a squeeze-expand mechanism, which reduces parameters while maintaining high accuracy; because it works on a larger-scale feature map, it can encode richer spatial information.
The module first applies 8 convolution kernels of receptive field 1 × 1 with stride 1, outputting an 8-channel feature map of size 256 × 256; this acts as the compression (squeeze) stage. It is followed by a two-branch expansion stage: the left branch is a convolution layer with 32 kernels of receptive field 1 × 1 and stride 1 (expansion); the right branch is two cascaded convolution layers, where the upper layer consists of 32 depthwise separable kernels of receptive field 3 × 3 with stride 1 and also uses a channel attention (SE) module, and the lower layer consists of 32 kernels of receptive field 1 × 1 with stride 1 (expansion). Finally the outputs of the left and right branches are concatenated and passed through a max-pooling layer, giving a 64-channel output with a 128 × 128 feature map.
The third layer also adopts a DepthWiseFire convolution module with a parallel structure, as shown in FIG. 2. It first applies 12 convolution kernels of receptive field 1 × 1 with stride 1, outputting a 12-channel feature map of size 128 × 128 (compression), followed by a two-branch expansion stage consisting of a left-branch convolution layer and two cascaded right-branch convolution layers.
The left branch is a convolution layer with 48 kernels of receptive field 1 × 1 and stride 1 (expansion); the right branch is two cascaded convolution layers, where the upper layer consists of 48 depthwise separable kernels of receptive field 3 × 3 with stride 1 and also uses a channel attention (SE) module, and the lower layer consists of 48 kernels of receptive field 1 × 1 with stride 1 (expansion). Finally the outputs of the left and right branches are concatenated and passed through a max-pooling layer, giving a 96-channel output with a 64 × 64 feature map.
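A PyTorch sketch of the DepthWiseFire block described for the second and third layers of Part A: a 1 × 1 squeeze convolution, a parallel expand stage whose left branch is a 1 × 1 convolution and whose right branch is a depthwise separable 3 × 3 convolution with an SE channel-attention module followed by a 1 × 1 convolution, then concatenation and max pooling. The placement of activations is an assumption; the demo line uses the second-layer channel counts of this embodiment.

```python
# Sketch of the DepthWiseFire block: 1x1 squeeze, parallel expand stage
# (left: 1x1 conv; right: depthwise separable 3x3 conv + SE + 1x1 conv),
# concatenation, max pooling. Activation placement is an assumption.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention (squeeze-and-excitation)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

class DepthWiseFire(nn.Module):
    def __init__(self, in_ch: int, squeeze_ch: int, expand_ch: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, 1)           # compression
        self.left = nn.Conv2d(squeeze_ch, expand_ch, 1)          # expansion
        self.right = nn.Sequential(                              # expansion
            nn.Conv2d(squeeze_ch, squeeze_ch, 3, padding=1,
                      groups=squeeze_ch),                        # depthwise 3x3
            nn.Conv2d(squeeze_ch, expand_ch, 1),                 # pointwise
            SEBlock(expand_ch),                                  # channel attention
            nn.Conv2d(expand_ch, expand_ch, 1),
        )
        self.relu = nn.ReLU(inplace=True)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        out = torch.cat([self.left(x), self.right(x)], dim=1)    # splice branches
        return self.pool(self.relu(out))

# Second layer of Part A: squeeze to 8 channels, expand to 2 x 32 = 64
# channels, 256 x 256 -> 128 x 128 after pooling.
layer2 = DepthWiseFire(in_ch=32, squeeze_ch=8, expand_ch=32)
print(layer2(torch.randn(1, 32, 256, 256)).shape)                # [1, 64, 128, 128]
```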
Part B uses a DenseNet backbone to rapidly down-sample the feature map and obtain a large receptive field, and adds global average pooling so that global context information provides the maximum receptive field. In addition, Part B uses an attention optimization module ARM, as shown in FIG. 3, to refine the features of each stage.
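The ARM of FIG. 3 is not spelled out in the text, so the sketch below follows the common BiSeNet-style attention refinement module: global average pooling gathers context, a 1 × 1 convolution with batch normalization and a sigmoid produce per-channel weights, and the stage features are re-weighted. Treat the internal layout as an assumption; a DenseNet backbone would supply the stage feature maps.

```python
# BiSeNet-style attention refinement module standing in for the ARM of
# FIG. 3 (not reproduced in the text): global average pooling, 1x1 conv,
# batch norm, sigmoid gate, then re-weight the stage features.
# Internal layout is an assumption.
import torch
import torch.nn as nn

class AttentionRefinementModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # global context
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.attention(x)           # re-weight the stage features

arm = AttentionRefinementModule(96)
refined = arm(torch.randn(2, 96, 32, 32))      # same shape, re-weighted
```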
The training of the convolutional neural network model in step S4 includes the following steps:
s4.1, setting an iteration cycle;
s4.2, inputting a fabric defect sample image;
s4.3, calculating a loss function, namely adopting a multi-classification L ovazs-softmax function;
s4.4, carrying out iterative optimization calculation by using an optimizer to obtain the optimal model parameters of the updated convolutional neural network model;
and S4.5, repeating the operations of steps S4.2 to S4.4 in every iteration cycle, outputting the validation-set accuracy once per cycle, fine-tuning the parameters or modifying the number of iteration cycles to obtain the optimal model, and stopping training when the set number of iteration cycles is reached.
In this embodiment an iteration-cycle count is set, for example 200; training stops when that value is reached, the operations shown in FIG. 1 are performed in every iteration cycle, and the validation-set accuracy is output once per cycle. The parameters may be fine-tuned, or the number of iteration cycles modified, to obtain the optimal model. The optimizer is RMSProp with a learning rate of 0.0001 and a decay coefficient of 0.995 (a condensed sketch of this training loop is given below).
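A condensed sketch of the training procedure of S4.1 to S4.5 in this embodiment: 200 iteration cycles, RMSProp with a learning rate of 1e-4, validation accuracy reported every cycle, and the best weights kept. Cross-entropy stands in for the multi-class Lovász-softmax loss, mapping the stated decay coefficient 0.995 to RMSProp's smoothing constant alpha is an assumption, and the tiny model and one-batch loaders are placeholders for the real network and fabric data set.

```python
# Condensed sketch of training steps S4.1-S4.5: 200 iteration cycles,
# RMSProp (lr 1e-4), per-cycle validation accuracy, best weights kept.
# Cross-entropy is a stand-in for the Lovasz-softmax loss; mapping the
# decay coefficient 0.995 to RMSProp's alpha is an assumption; model and
# loaders are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 2, 1))                      # placeholder segmenter
criterion = nn.CrossEntropyLoss()                               # stand-in loss
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4, alpha=0.995)

# Dummy loaders yielding (image, mask) batches in place of the fabric data.
train_loader = [(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 64, 64)))]
val_loader = [(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 64, 64)))]

best_acc, num_epochs = 0.0, 200                                 # S4.1: iteration cycles

for epoch in range(num_epochs):
    model.train()
    for images, masks in train_loader:                          # S4.2: defect samples
        optimizer.zero_grad()
        loss = criterion(model(images), masks)                  # S4.3: loss function
        loss.backward()
        optimizer.step()                                        # S4.4: optimizer step

    model.eval()
    correct = total = 0
    with torch.no_grad():                                       # S4.5: validation accuracy
        for images, masks in val_loader:
            pred = model(images).argmax(dim=1)
            correct += (pred == masks).sum().item()
            total += masks.numel()
    acc = correct / total
    if acc > best_acc:                                          # keep the best model so far
        best_acc = acc
        torch.save(model.state_dict(), "best_model.pth")
```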
The fabric surface defect detection method based on the convolutional neural network avoids hand-crafted defect features: the convolutional neural network learns features directly from pre-labeled sample data, so segmentation is fast and accurate, with strong adaptability and robustness.
As the textile industry keeps developing, cloth textures and defect types multiply, and the quantity of cloth required in China is very large; traditional detection methods basically cannot guarantee a high detection rate and a low false-detection rate while also detecting in real time.
Under these conditions, the invention combines convolutional neural network technology with a general fabric surface defect segmentation method so that defects of various printed fabrics can be segmented rapidly and accurately; this has important academic and application value and is one of the technologies necessary for fabric quality monitoring in the textile industry.
In this method, data set samples are fed into the constructed convolutional neural network model for iterative training; an off-line training process yields a deep-learning segmentation model that separates defects from the complex background texture, and fabric surface defect products are then further classified on the basis of the segmented images. This greatly reduces the classification difficulty, achieves accurate, automatic detection of fabric surface defects, and overcomes both the drawbacks of human visual inspection, which is easily affected by subjective and objective factors such as eyesight, fatigue, mood and lighting, cannot guarantee detection precision and accuracy, and is inefficient, and the drawbacks of manual feature selection, which is time- and labor-consuming, requires long computation and cannot meet real-time requirements.

Claims (8)

1. A fabric surface defect detection method based on a convolutional neural network is characterized by comprising the following steps:
s1, collecting and marking a data set;
s2, manufacturing a GroudTruth of a data set defect sample image;
s3, constructing a convolutional neural network model;
s4, training a convolutional neural network model to obtain an optimal model;
s5, acquiring a defect image of the fabric on line, inputting the image of the fabric to be detected into the trained convolutional neural network model for image segmentation, and realizing on-line automatic detection through the convolutional neural network model so as to identify the defects on the surface of the fabric.
2. The method for detecting fabric surface defects based on a convolutional neural network as claimed in claim 1, wherein in step S1, images of several types of fabric surface defect samples are collected and labeled, a class of normal samples is collected and labeled as well, and the collected sample images are used as the data set.
3. The method for detecting fabric surface defects based on a convolutional neural network as claimed in claim 1, wherein in step S2, all defect sample images in the data set are divided into a training set and a validation set at a ratio of 8:2.
4. The method for detecting the fabric surface defects based on the convolutional neural network as claimed in claim 1, wherein the convolutional neural network model is a multilayer convolutional neural network model, and is constructed by using a bidirectional segmentation network and based on a convolutional module DepthWiseFire, a unique attention optimization module ARM, a channel attention module SE and a feature fusion module FFM.
5. The method for detecting the fabric surface defects based on the convolutional neural network as claimed in claim 1, wherein the method for constructing the convolutional neural network model in the step S3 comprises the following steps:
step one, inputting an image;
step two, performing convolution operations on the image, which a bidirectional segmentation network splits into a Part A network structure and a Part B network structure;
step three, computing the Part A output features, where the convolution kernel sizes are chosen according to the image features to be classified: more suitable kernel-size parameters are designed manually, or as many kernel-size combinations as possible are selected to cover all possible feature types;
step four, computing the Part B output features;
step five, connecting the output features of Part A and Part B through the feature fusion module FFM;
step six, restoring the weighted feature map to the original resolution through up-sampling;
and step seven, outputting the segmented image.
6. The method for detecting the fabric surface defects based on the convolutional neural network as claimed in claim 1, wherein the training of the convolutional neural network model in step S4 includes the following steps:
s4.1, setting an iteration cycle;
s4.2, inputting a fabric defect sample image;
s4.3, calculating a loss function;
s4.4, carrying out iterative optimization calculation by using an optimizer to obtain the optimal model parameters of the updated convolutional neural network model;
and S4.5, repeating the operations of steps S4.2 to S4.4 in every iteration cycle, outputting the validation-set accuracy once per cycle, fine-tuning the parameters or modifying the number of iteration cycles to obtain the optimal model, and stopping training when the set number of iteration cycles is reached.
7. The fabric surface defect detection method based on the convolutional neural network as claimed in claim 5, wherein the Part A network structure comprises three layers, the first layer is a convolutional layer, and the second layer and the third layer adopt convolutional modules DepthWiseFire in parallel structure.
8. The method as claimed in claim 5, wherein the Part B network structure uses a DenseNet backbone to rapidly down-sample the feature map and obtain a large receptive field, and adds global average pooling so that global context information provides the maximum receptive field.
CN202010112073.5A 2020-02-24 2020-02-24 Fabric surface defect detection method based on convolutional neural network Active CN111402203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010112073.5A CN111402203B (en) 2020-02-24 2020-02-24 Fabric surface defect detection method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN111402203A (en) 2020-07-10
CN111402203B (en) 2024-03-01

Family

ID=71430406

Country Status (1)

Country Link
CN (1) CN111402203B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018140596A2 (en) * 2017-01-27 2018-08-02 Arterys Inc. Automated segmentation utilizing fully convolutional networks
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency
CN109559302A (en) * 2018-11-23 2019-04-02 北京市新技术应用研究所 Pipe video defect inspection method based on convolutional neural networks
CN109522966A (en) * 2018-11-28 2019-03-26 中山大学 A kind of object detection method based on intensive connection convolutional neural networks
CN110110692A (en) * 2019-05-17 2019-08-09 南京大学 A kind of realtime graphic semantic segmentation method based on the full convolutional neural networks of lightweight
CN110222792A (en) * 2019-06-20 2019-09-10 杭州电子科技大学 A kind of label defects detection algorithm based on twin network
CN110349146A (en) * 2019-07-11 2019-10-18 中原工学院 The building method of fabric defect identifying system based on lightweight convolutional neural networks
CN110660046A (en) * 2019-08-30 2020-01-07 太原科技大学 Industrial product defect image classification method based on lightweight deep neural network
CN110728657A (en) * 2019-09-10 2020-01-24 江苏理工学院 Annular bearing outer surface defect detection method based on deep learning
CN110766664A (en) * 2019-09-29 2020-02-07 杭州电子科技大学 Method for detecting appearance defective products of electronic components based on deep learning

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111929327A (en) * 2020-09-09 2020-11-13 深兰人工智能芯片研究院(江苏)有限公司 Cloth defect detection method and device
CN112184686A (en) * 2020-10-10 2021-01-05 深圳大学 Segmentation algorithm for detecting laser welding defects of safety valve of power battery
CN112257528A (en) * 2020-10-12 2021-01-22 南京工业大学 Wind power gear box fault diagnosis method based on wavelet transformation and dense connection expansion convolutional neural network
CN112257528B (en) * 2020-10-12 2023-07-18 南京工业大学 Wind power gear box fault diagnosis method based on wavelet transformation and dense connection expansion convolutional neural network
CN112150460A (en) * 2020-10-16 2020-12-29 上海智臻智能网络科技股份有限公司 Detection method, detection system, device, and medium
CN112150460B (en) * 2020-10-16 2024-03-15 上海智臻智能网络科技股份有限公司 Detection method, detection system, device and medium
CN112284736A (en) * 2020-10-23 2021-01-29 天津大学 Convolutional neural network fault diagnosis method based on multi-channel attention module
CN112365478A (en) * 2020-11-13 2021-02-12 上海海事大学 Motor commutator surface defect detection model based on semantic segmentation
CN112561866B (en) * 2020-12-04 2022-03-01 重庆忽米网络科技有限公司 Semiconductor substrate photoresist layer defect detection system based on AI and cloud computing technology
CN112561866A (en) * 2020-12-04 2021-03-26 重庆忽米网络科技有限公司 Semiconductor substrate photoresist layer defect detection system based on AI and cloud computing technology
CN112508816A (en) * 2020-12-09 2021-03-16 中国电子科技集团公司第三研究所 Infrared image sharpening method, sharpening processing system and terminal device
CN112508816B (en) * 2020-12-09 2023-09-08 中国电子科技集团公司第三研究所 Infrared image sharpening method, sharpening processing system and terminal equipment
CN112561892A (en) * 2020-12-22 2021-03-26 东华大学 Defect detection method for printed and jacquard fabric
CN112730437A (en) * 2020-12-28 2021-04-30 中国纺织科学研究院有限公司 Spinneret plate surface defect detection method and device based on depth separable convolutional neural network, storage medium and equipment
CN112730437B (en) * 2020-12-28 2023-01-10 中国纺织科学研究院有限公司 Spinneret plate surface defect detection method and device based on depth separable convolutional neural network, storage medium and equipment
CN112784718B (en) * 2021-01-13 2023-04-25 上海电力大学 Insulator state identification method based on edge calculation and deep learning
CN112784718A (en) * 2021-01-13 2021-05-11 上海电力大学 Insulator state identification method based on edge calculation and deep learning
CN113192018B (en) * 2021-04-23 2023-11-24 北京化工大学 Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
CN113192018A (en) * 2021-04-23 2021-07-30 北京化工大学 Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
CN113344888A (en) * 2021-06-17 2021-09-03 四川启睿克科技有限公司 Surface defect detection method and device based on combined model
CN114565607A (en) * 2022-04-01 2022-05-31 南通沐沐兴晨纺织品有限公司 Fabric defect image segmentation method based on neural network
CN114565607B (en) * 2022-04-01 2024-06-04 汕头市鼎泰丰实业有限公司 Fabric defect image segmentation method based on neural network
CN114708267A (en) * 2022-06-07 2022-07-05 浙江大学 Image detection processing method for corrosion defect of tower stay wire on power transmission line
CN117373121A (en) * 2023-10-16 2024-01-09 北京中科睿途科技有限公司 Gesture interaction method and related equipment in intelligent cabin environment
CN117576109A (en) * 2024-01-19 2024-02-20 成都数之联科技股份有限公司 Defect detection method, device, equipment and storage medium
CN117576109B (en) * 2024-01-19 2024-04-02 成都数之联科技股份有限公司 Defect detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111402203B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN111402203A (en) Fabric surface defect detection method based on convolutional neural network
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN107194559B (en) Workflow identification method based on three-dimensional convolutional neural network
CN111862064B (en) Silver wire surface flaw identification method based on deep learning
CN108074231B (en) Magnetic sheet surface defect detection method based on convolutional neural network
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN108918536B (en) Tire mold surface character defect detection method, device, equipment and storage medium
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
Gao et al. A multilevel information fusion-based deep learning method for vision-based defect recognition
CN107123131B (en) Moving target detection method based on deep learning
CN109636772A (en) The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning
CN107016664B (en) A kind of bad needle flaw detection method of large circle machine
CN111667455A (en) AI detection method for various defects of brush
CN110298297A (en) Flame identification method and device
CN111861990B (en) Method, system and storage medium for detecting bad appearance of product
CN113177924A (en) Industrial production line product flaw detection method
CN114612472A (en) SegNet improvement-based leather defect segmentation network algorithm
CN114255212A (en) FPC surface defect detection method and system based on CNN
CN114841957A (en) Steel plate surface defect detection method based on deep learning
CN111667465A (en) Metal hand basin defect detection method based on far infrared image
CN115018790A (en) Workpiece surface defect detection method based on anomaly detection
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN114155186A (en) Unsupervised learning-based defect detection system and method
CN114429445A (en) PCB defect detection and identification method based on MAIRNet
CN112464744A (en) Fish posture identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant