CN112036541A - Fabric defect detection method based on genetic algorithm optimization neural network - Google Patents

Fabric defect detection method based on genetic algorithm optimization neural network

Info

Publication number
CN112036541A
CN112036541A (application CN202011112620.6A; granted publication CN112036541B)
Authority
CN
China
Prior art keywords
fabric
genetic algorithm
defects
defect
neural network
Prior art date
Legal status (the legal status is an assumption and is not a legal conclusion)
Granted
Application number
CN202011112620.6A
Other languages
Chinese (zh)
Other versions
CN112036541B (en)
Inventor
余灵婕
陈梦琦
支超
祝双武
孙润军
郜仲元
王帅
柯真霞
周尤勇
朱梦秋
Current Assignee
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date
Filing date
Publication date
Application filed by Xian Polytechnic University
Priority to CN202011112620.6A
Publication of CN112036541A
Application granted
Publication of CN112036541B
Active legal status
Anticipated expiration legal status

Classifications

    • G06N 3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G01N 21/8851: Scan or image signal processing specially adapted for investigating the presence of flaws or contamination, e.g. for detecting different kinds of defects
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/084: Learning methods; backpropagation, e.g. using gradient descent
    • G06N 3/086: Learning methods using evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06T 7/0004: Image analysis; industrial image inspection
    • G01N 2021/8887: Scan or image signal processing based on image processing techniques
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • Y02P 90/30: Computing systems specially adapted for manufacturing


Abstract

The invention discloses a fabric defect detection method based on a genetic-algorithm-optimized neural network, belonging to the field of data processing. The method comprises the following steps: initializing Gabor filtering parameters; collecting fabric defect images; labelling the images to obtain defect categories and bounding boxes containing the defects, and building a Pascal VOC data set; feeding the Pascal VOC data set into a Faster-RCNN network for training and computing the mAP; using the mAP as the fitness function of a genetic algorithm, performing mutation, crossover and selection to obtain offspring Gabor parameters until the number of iterations reaches a set maximum, and outputting the optimal genotype, i.e. the optimal filtering parameters for the fabric defect image; and invoking the corresponding Faster-RCNN model to output the position, type and confidence of the fabric defects. The method separates defects from the background well, and the resulting fabric detection model achieves high defect-detection accuracy and good generality.

Description

Fabric defect detection method based on genetic algorithm optimization neural network
Technical Field
The invention belongs to the field of data processing, and particularly relates to a method for detecting fabric defects based on a genetic algorithm optimized neural network.
Background
Fabric defect detection still relies largely on manual inspection, which is highly repetitive, labour-intensive and slow; the result depends heavily on the inspector's proficiency, so accuracy, consistency and efficiency cannot be guaranteed. Because fabrics (including plain weaves, compound weaves, knits, twills, jacquards and the like) have complex textures and variable colours, the contrast between a fabric defect and the background texture is low, making it difficult to separate the defect from the background and to achieve a good recognition result. Fabric defect detection methods fall into four categories: statistical methods, structural methods, model-based methods and spectral methods. Statistical methods typically represent texture features with first- and second-order statistics; however, detecting subtle defects from grey-value statistics alone is very challenging. In statistical methods the grey values of an image are characterised by various representations, such as autocorrelation functions, co-occurrence matrices, mathematical morphology and fractal dimensions. Structural methods treat texture as a set of texture primitives and can segment defects effectively when the pattern is regular. Spectral methods are among the most widely used; they detect fabric defects from the spectral characteristics of the image, mainly via the Fourier transform, the wavelet transform and the Gabor transform. Among these, the Gabor transform is close to the human visual system's representation of frequency and orientation, is often used for texture filtering, and is well suited to extracting local spatial-domain and frequency-domain information from a target.
However, a Gabor filter has many parameters, and choosing appropriate values is critical to successful filtering. Existing fitness functions struggle to find Gabor parameter combinations that suppress the fabric background while highlighting the defects, so both the accuracy and the efficiency of fabric defect detection remain low.
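The parameter dependence of the Gabor transform discussed above can be made concrete. The sketch below builds a real-valued Gabor kernel from the usual five parameters (scale sigma, orientation theta, wavelength lambda, aspect ratio gamma, phase psi); it illustrates the filter family the background describes, not the patent's implementation, and the parameter values chosen are assumptions:

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope modulated by a cosine.
    Parameters mirror the ones a genetic algorithm would search over."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_r / lambd + psi)
    return envelope * carrier

# Filtering is then a 2-D convolution of the fabric image with this kernel;
# the GA in this patent searches for the (sigma, theta, lambd, gamma) values
# that best separate defect from background texture.
k = gabor_kernel(ksize=21, sigma=4.0, theta=np.pi / 4, lambd=10.0)
```

With psi = 0 the kernel is symmetric under 180-degree rotation and peaks at its centre, which is a quick sanity check on the construction.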
Disclosure of Invention
The invention aims to overcome the drawbacks that existing fitness functions struggle to find Gabor parameter combinations that suppress the fabric background while highlighting the defects, and that the accuracy and efficiency of fabric defect detection are low, and provides a fabric defect detection method based on a genetic-algorithm-optimized neural network.
To achieve this aim, the invention adopts the following technical scheme:
a fabric defect detection method based on a genetic algorithm optimization neural network comprises the following steps:
s1: establishing an initial population based on a genetic algorithm to obtain initial Gabor filtering parameters;
s2: collecting fabric defect images, and cutting and segmenting the fabric defect images;
s3: acquiring defect types and frames containing defects by using LabelImg marks, and establishing a Pascal VOC data set;
s4: sending the Pascal VOC data set into a Faster-RCNN network training model for training, and calculating mAP;
s5: taking the mAP as a fitness function of a genetic algorithm, carrying out variation, crossing and selection to obtain offspring Gabor parameters until the iteration times reach a set maximum value, and outputting an optimal genotype, namely optimal Gabor filtering parameters of the fabric defect image;
the corresponding fast-RCNN model is then invoked to output the location, type, and accuracy of the fabric defect.
Further, in step S1, establishing the initial population for the genetic algorithm consists of:
setting the population size, the number of iterations, and the fitness function for population evolution.
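The population setup and the mutation/crossover/selection loop of steps S1 and S5 can be sketched as follows. The parameter bounds, rates and the stand-in fitness are assumptions for illustration; in the patent the fitness is the mAP returned by the Faster-RCNN training of step S4:

```python
import random

random.seed(0)  # deterministic for the demonstration

# Assumed search ranges for the four Gabor parameters (not from the patent).
PARAM_BOUNDS = [(0.5, 10.0),   # sigma
                (0.0, 3.1416), # theta
                (2.0, 20.0),   # lambda
                (0.2, 1.0)]    # gamma

def init_population(size):
    # Step S1: random genotypes, one gene per Gabor parameter.
    return [[random.uniform(lo, hi) for lo, hi in PARAM_BOUNDS]
            for _ in range(size)]

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1):
    # Resample each gene within its bounds with a small probability.
    return [random.uniform(lo, hi) if random.random() < rate else v
            for v, (lo, hi) in zip(ind, PARAM_BOUNDS)]

def evolve(fitness, pop_size=20, generations=30):
    pop = init_population(pop_size)
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]        # selection: keep the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness with a peak at sigma=4, lambda=10; the real fitness
# trains a Faster-RCNN and returns its mAP (step S5).
best = evolve(lambda g: -((g[0] - 4.0) ** 2 + (g[2] - 10.0) ** 2))
```

Because the parents are carried over unchanged, the best fitness in the population is non-decreasing across generations, which matches the "iterate until the set maximum" stopping rule in step S5.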
Further, in step S2, the collected fabric defect images are:
JPEG-format images of hole-erasing defects, edge hole-pricking defects, or stain defects.
Further, the specific process of establishing the Pascal VOC data set in step S3 is: segment and crop the JPEG pictures containing defects, take part of the pictures as the training set, annotate the bounding boxes with the LabelImg software, and generate xml files; the label classes are Hole, Edgehole and Stain;
the pictures and xml files are then arranged in the VOC2007 data set format, and the test, train and val txt files are generated.
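The VOC2007 layout and the test/train/val txt files can be produced with a short script such as the following sketch; the directory names follow the standard VOC convention, while the helper name and split ratios are illustrative assumptions, not from the patent:

```python
import os
import random

def make_voc_splits(root, names, train=0.6, val=0.2, seed=0):
    """Create the VOC2007 directory skeleton and write the
    train/val/test split files under ImageSets/Main."""
    sets_dir = os.path.join(root, "VOC2007", "ImageSets", "Main")
    os.makedirs(os.path.join(root, "VOC2007", "JPEGImages"), exist_ok=True)
    os.makedirs(os.path.join(root, "VOC2007", "Annotations"), exist_ok=True)
    os.makedirs(sets_dir, exist_ok=True)
    rng = random.Random(seed)
    names = sorted(names)
    rng.shuffle(names)                       # deterministic shuffle
    n_train = int(len(names) * train)
    n_val = int(len(names) * val)
    splits = {"train": names[:n_train],
              "val": names[n_train:n_train + n_val],
              "test": names[n_train + n_val:]}
    for split, items in splits.items():
        with open(os.path.join(sets_dir, split + ".txt"), "w") as f:
            f.write("\n".join(items))
    return splits

splits = make_voc_splits("voc_demo", [f"img_{i:04d}" for i in range(10)])
```

The JPEG images and the LabelImg xml annotations would then be copied into `JPEGImages/` and `Annotations/` respectively, keyed by the same base names listed in the split files.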
Further, the process of step S4 is:
s4-1: inputting pictures with any size into a backbone network resnet50 for convolution, and outputting a Feature Map;
s4-2: the Feature Map generates a plurality of anchor boxes through an RPN module, cuts the anchor boxes, judges whether the anchor boxes belong to a foreground or a background through softmax, and corrects the anchors through frame regression to obtain propofol;
s4-3: the Rol Pooling layer obtains the propuls feature maps with fixed size by utilizing propulses generated by the RPN module and the feature maps obtained before;
s4-4: classification classifies the characteristic diagram of the suggestion frame, and specific Classification is carried out by utilizing a full connection layer and softmax; meanwhile, frame regression operation is completed by using L1 Loss to obtain the accurate position of the object, a Loss function is calculated, and parameters of the whole network are updated to obtain a training model.
Further, the backbone network resnet50 has four groups of blocks, with 3, 4, 6 and 3 blocks per group and three convolutional layers per block; together with the single convolutional layer at the start of the network, this gives (3+4+6+3) × 3 + 1 = 49 convolutional layers, plus 1 fully connected layer;
conv 1: 7 × 7 × 64, output size 112 × 112;
conv2_ x: 3 blocks, each block has 1 × 1 × 64, 3 × 3 × 64, 1 × 1 × 256, output size is 56 × 56;
conv3_ x: 4 blocks, each block has 1 × 1 × 128, 3 × 3 × 128, 1 × 1 × 512, output size is 28 × 28;
conv4_ x: 6 blocks, each block has 1 × 1 × 256, 3 × 3 × 256, 1 × 1 × 1024, output size is 14 × 14;
conv5_ x: 3 blocks, each block comprising 1 × 1 × 512, 3 × 3 × 512, 1 × 1 × 2048, output size being 7 × 7.
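The layer count and stage output sizes listed above can be verified arithmetically:

```python
# Four stages with 3, 4, 6 and 3 bottleneck blocks, three conv layers per
# block, plus the initial 7x7 conv: (3+4+6+3)*3 + 1 = 49 conv layers.
blocks = [3, 4, 6, 3]
conv_layers = sum(blocks) * 3 + 1
assert conv_layers == 49  # 49 conv layers + 1 fully connected layer

# Spatial sizes for a 224x224 input: halved at conv1, again at the pooling
# step before conv2_x, and once more at each subsequent stage.
sizes = [224 // (2 ** k) for k in range(1, 6)]
```

The resulting `sizes` list is [112, 56, 28, 14, 7], matching the conv1 through conv5_x output sizes stated in the text.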
Further, the training loss of the model comprises a classification loss and a regression loss; the total loss function is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$$

In the formula: i is the sample index; p_i is the predicted probability that the i-th anchor is a target, and p_i^* is the label of the i-th calibration box (p_i^* = 1 for a positive sample, p_i^* = 0 for a negative sample); t_i = {t_x, t_y, t_w, t_h} is the vector of the four parameterized coordinates of the predicted box, and t_i^* is the corresponding vector for the calibration box; N_{cls} is the normalisation size of the cls term; λ is a balancing weight; N_{reg} is the number of anchor positions normalising the reg term. The classification loss is L_{cls}(p_i, p_i^*) = -log[p_i^* p_i + (1 - p_i^*)(1 - p_i)], and the bounding-box regression loss is L_{reg}(t_i, t_i^*) = Smooth_{L1}(t_i - t_i^*), where the Smooth_{L1} function is defined as:

$$\mathrm{Smooth}_{L1}(x) = \begin{cases} 0.5\,x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise.} \end{cases}$$
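The loss above can be written out as a short NumPy sketch; the sample values and normalisation constants below are illustrative assumptions, not from the patent:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1.0, 0.5 * x**2, np.abs(x) - 0.5)

def rpn_loss(p, p_star, t, t_star, lam=10.0, n_cls=256, n_reg=2400):
    """Classification term plus lambda-weighted regression term, the latter
    only active for positive anchors (p_star = 1)."""
    p, p_star = np.asarray(p, float), np.asarray(p_star, float)
    cls = -np.log(p_star * p + (1 - p_star) * (1 - p))              # L_cls
    reg = p_star * smooth_l1(np.asarray(t, float)
                             - np.asarray(t_star, float)).sum(axis=1)
    return cls.sum() / n_cls + lam * reg.sum() / n_reg

# One positive and one negative anchor, with small coordinate offsets.
loss = rpn_loss(p=[0.9, 0.2], p_star=[1, 0],
                t=[[0.1, 0.1, 0.0, 0.0], [0, 0, 0, 0]],
                t_star=[[0.0, 0.0, 0.0, 0.0], [0, 0, 0, 0]])
```

Note how the p_star factor switches the regression term off for background anchors, exactly as in the formula above.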
compared with the prior art, the invention has the following beneficial effects:
the invention relates to a fabric defect detection method based on a genetic algorithm optimization neural network, which comprises the steps of collecting fabric defect sample images, establishing a data set, and establishing a fabric defect detection model based on Pascal VOC; cutting an image before training the model, and performing direct coordinate prediction, loss value calculation and back propagation during model training to obtain a network weight pre-parameter of the textile defect detection model; carrying out network weight calculation and updating for multiple times by utilizing the training set to obtain optimal network weight parameters and obtain a trained fabric defect detection model; and then optimizing a Gabor kernel by taking mAP of the evaluation model index as an objective function based on a multi-population genetic algorithm (MPGA) training method, and finally obtaining the optimal filtering parameters of the fabric defect image and the defect detection result of the fabric defect image. The defects can be well separated from the background by utilizing the optimized Gabor parameters, and the defects of the fabric detection model are high in accuracy and good in universality.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2(a) is a spot diagram of oil stains in example 1;
FIG. 2(b) is a graph showing the effect of the oil stain defect map of example 1 after filtering;
FIG. 2(c) is the location and type of oil stain detection of example 1;
FIG. 3(a) is a spot diagram of oil stains in example 2;
FIG. 3(b) is a graph showing the effect of the oil stain defect map of example 2 after filtering;
FIG. 3(c) is the location and type of oil stain detection of example 2;
FIG. 4(a) is a spot diagram of hole-erasing defects of example 3;
FIG. 4(b) is a graph showing the effect of the scrub hole defect map after filtering in example 3;
FIG. 4(c) is the location and type of scrub hole detection of example 3;
FIG. 5(a) is a spot diagram of hole-erasing defects in example 4;
FIG. 5(b) is a graph showing the effect of example 4 after filtering the hole-erasing defect map;
FIG. 5(c) is the location and type of scrub hole detection of example 4;
FIG. 6(a) is a point diagram of edge pricking hole defects in example 5;
FIG. 6(b) is an effect diagram after the edge pricking defect map of example 5 is filtered;
FIG. 6(c) is the location and type of edge hole detection in example 5;
FIG. 7(a) is a point diagram of edge pricking hole defects in example 6;
FIG. 7(b) is an effect diagram after the edge pricking defect map of example 6 is filtered;
FIG. 7(c) shows the location and type of edge hole detection in example 6.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The population-based genetic algorithm provides a universal framework for searching global optimization parameters, the problem of searching optimal parameter combinations can be solved, and the selection of fitness functions in the genetic process is the key for training Gabor parameters by using the genetic algorithm.
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 1, fig. 1 is a flow chart of the present invention; a fabric defect detection method based on a genetic algorithm optimization neural network comprises the steps of collecting fabric defect sample images, establishing a data set, and establishing a fabric defect detection model based on Pascal VOC; cutting an image before training the model, and performing direct coordinate prediction, loss value calculation and back propagation during model training to obtain a network weight pre-parameter of the textile defect detection model; then, carrying out multiple times of network weight calculation and updating by using the training set to obtain optimal network weight parameters, and obtaining a trained fabric defect detection model; and then optimizing a Gabor kernel by taking mAP of the evaluation model index as an objective function based on a multi-population genetic algorithm (MPGA) training method, and finally obtaining the optimal filtering parameters of the fabric defect image and the defect detection result of the fabric defect image. The defects can be well separated from the background by utilizing the optimized Gabor parameters, and the defects of the fabric detection model are high in accuracy and good in universality.
Example 1
FIG. 2(a) is a collected oil-stain defect image in which the stain has a large area and an unclear edge; FIG. 2(b) shows the image after filtering with the optimal filter parameters found for it by the genetic algorithm combined with the neural network; FIG. 2(c) shows the predicted stain position and type output after target detection.
Example 2
FIG. 3(a) is a collected oil-stain defect image in which the stain has a small area and lies near the fabric edge; FIG. 3(b) shows the image after filtering with the optimal filter parameters found for it by the genetic algorithm combined with the neural network; FIG. 3(c) shows the predicted stain position and type output after target detection.
Example 3
FIG. 4(a) is a collected hole-erasing defect image; the cloth surface around the defect is uncreased and the hole is clearly visible; FIG. 4(b) shows the image after filtering with the optimal filter parameters found for it; FIG. 4(c) shows the predicted hole position and type output after target detection.
Example 4
FIG. 5(a) is a collected hole-erasing defect image with obvious creases on the cloth surface, so the hole is not obvious; FIG. 5(b) shows the image after filtering with the optimal filter parameters found for it; FIG. 5(c) shows the predicted hole position and type output after target detection. Compared with Example 3, Example 4 achieves higher detection and localisation accuracy.
Example 5
FIG. 6(a) is a collected edge hole-pricking defect image; the cloth surface is uneven but without obvious creases, and the pricked edge holes are clearly visible; FIG. 6(b) shows the image after filtering with the optimal filter parameters found for it; FIG. 6(c) shows the predicted edge-hole position and type output after target detection.
Example 6
FIG. 7(a) is a collected edge hole-pricking defect image with obvious creases on the cloth surface, so the pricked holes are not obvious; FIG. 7(b) shows the image after filtering with the optimal filter parameters found for it; FIG. 7(c) shows the predicted edge-hole position and type output after target detection. Compared with Example 5, Example 6 achieves higher detection and localisation accuracy.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (7)

1. A fabric defect detection method based on a genetic algorithm optimization neural network is characterized by comprising the following steps:
s1: establishing an initial population based on a genetic algorithm to obtain initial Gabor filtering parameters;
s2: collecting fabric defect images, and cutting and segmenting the fabric defect images;
s3: acquiring defect types and frames containing defects by using LabelImg marks, and establishing a Pascal VOC data set;
s4: sending the Pascal VOC data set into a Faster-RCNN network training model for training, and calculating mAP;
s5: taking the mAP as a fitness function of a genetic algorithm, carrying out variation, crossing and selection to obtain offspring Gabor parameters until the iteration times reach a set maximum value, and outputting an optimal genotype, namely optimal Gabor filtering parameters of the fabric defect image;
the corresponding fast-RCNN model is then invoked to output the location, type, and accuracy of the fabric defect.
2. The method for detecting fabric defects based on a genetic-algorithm-optimized neural network according to claim 1, wherein in step S1, establishing the initial population for the genetic algorithm specifically comprises:
setting the population size, the number of iterations, and the fitness function for population evolution.
3. The method for detecting fabric defects based on a genetic-algorithm-optimized neural network according to claim 1, wherein in step S2, the collected fabric defect images are:
JPEG-format images of hole-erasing defects, edge hole-pricking defects, or stain defects.
4. The method for detecting fabric defects based on a genetic-algorithm-optimized neural network according to claim 1, wherein the specific process of establishing the Pascal VOC data set in step S3 is: segment and crop the JPEG images containing defects, take part of the images as the training set, annotate the bounding boxes with the LabelImg software, and generate xml files; the label classes are Hole, Edgehole and Stain;
the pictures and xml files are then arranged in the VOC2007 data set format, and the test, train and val txt files are generated.
5. The method for detecting fabric defects based on genetic algorithm optimized neural network as claimed in claim 1, wherein the process of step S4 is:
s4-1: inputting pictures with any size into a backbone network resnet50 for convolution, and outputting a Feature Map;
s4-2: the Feature Map generates a plurality of anchor boxes through an RPN module, cuts the anchor boxes, judges whether the anchor boxes belong to a foreground or a background through softmax, and corrects the anchors through frame regression to obtain propofol;
s4-3: the Rol Pooling layer obtains the propuls feature maps with fixed size by utilizing propulses generated by the RPN module and the feature maps obtained before;
s4-4: classification classifies the characteristic diagram of the suggestion frame, and specific Classification is carried out by utilizing a full connection layer and softmax; meanwhile, frame regression operation is completed by using L1 Loss to obtain the accurate position of the object, a Loss function is calculated, and parameters of the whole network are updated to obtain a training model.
6. The method according to claim 5, wherein the backbone network resnet50 has four groups of blocks, with 3, 4, 6 and 3 blocks per group and three convolutional layers per block; together with the single convolutional layer at the start of the network, this gives (3+4+6+3) × 3 + 1 = 49 convolutional layers, plus 1 fully connected layer;
conv 1: 7 × 7 × 64, output size 112 × 112;
conv2_ x: 3 blocks, each block has 1 × 1 × 64, 3 × 3 × 64, 1 × 1 × 256, output size is 56 × 56;
conv3_ x: 4 blocks, each block has 1 × 1 × 128, 3 × 3 × 128, 1 × 1 × 512, output size is 28 × 28;
conv4_ x: 6 blocks, each block has 1 × 1 × 256, 3 × 3 × 256, 1 × 1 × 1024, output size is 14 × 14;
conv5_ x: 3 blocks, each block comprising 1 × 1 × 512, 3 × 3 × 512, 1 × 1 × 2048, output size being 7 × 7.
7. The method according to claim 5, wherein the training loss of the model comprises a classification loss and a regression loss, and the total loss function is:

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*)$$

In the formula: i is the sample index; p_i is the predicted probability that the i-th anchor is a target, and p_i^* is the label of the i-th calibration box (p_i^* = 1 for a positive sample, p_i^* = 0 for a negative sample); t_i = {t_x, t_y, t_w, t_h} is the vector of the four parameterized coordinates of the predicted box, and t_i^* is the corresponding vector for the calibration box; N_{cls} is the normalisation size of the cls term; λ is a balancing weight; N_{reg} is the number of anchor positions normalising the reg term. The classification loss is L_{cls}(p_i, p_i^*) = -log[p_i^* p_i + (1 - p_i^*)(1 - p_i)], and the bounding-box regression loss is L_{reg}(t_i, t_i^*) = Smooth_{L1}(t_i - t_i^*), where the Smooth_{L1} function is defined as:

$$\mathrm{Smooth}_{L1}(x) = \begin{cases} 0.5\,x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise.} \end{cases}$$
CN202011112620.6A (filed 2020-10-16, priority 2020-10-16) Fabric defect detection method based on genetic algorithm optimization neural network; Active; granted as CN112036541B

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011112620.6A CN112036541B (en) 2020-10-16 2020-10-16 Fabric defect detection method based on genetic algorithm optimization neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011112620.6A CN112036541B (en) 2020-10-16 2020-10-16 Fabric defect detection method based on genetic algorithm optimization neural network

Publications (2)

Publication Number Publication Date
CN112036541A true CN112036541A (en) 2020-12-04
CN112036541B CN112036541B (en) 2023-11-17

Family

ID=73573675

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011112620.6A Active CN112036541B (en) 2020-10-16 2020-10-16 Fabric defect detection method based on genetic algorithm optimization neural network

Country Status (1)

Country Link
CN (1) CN112036541B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030081215A1 (en) * 2001-01-09 2003-05-01 Ajay Kumar Defect detection system for quality assurance using automated visual inspection
KR20060122147A (en) * 2005-05-25 2006-11-30 삼성전자주식회사 Gabor filter and filtering method thereof, and image processing method adopting the same
CN103955922A (en) * 2014-04-17 2014-07-30 西安工程大学 Method for detecting flaws of printed fabric based on Gabor filter
CN105205828A (en) * 2015-10-20 2015-12-30 江南大学 Warp knitted fabric flaw detection method based on optimal Gabor filter
CN106845556A (en) * 2017-02-09 2017-06-13 东华大学 A kind of fabric defect detection method based on convolutional neural networks
CN108520114A (en) * 2018-03-21 2018-09-11 华中科技大学 A kind of textile cloth defect detection model and its training method and application
CN109613006A (en) * 2018-12-22 2019-04-12 中原工学院 A kind of fabric defect detection method based on end-to-end neural network
CN110930387A (en) * 2019-11-21 2020-03-27 中原工学院 Fabric defect detection method based on depth separable convolutional neural network


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
BING WEI et al., "Fabric Defect Detection Based on Faster RCNN", Artificial Intelligence on Fashion and Textiles Conference, pages 45-51 *
ZHOU Wenming et al., "Woven fabric defect detection using a genetic-algorithm-optimized Gabor filter", Journal of Donghua University (Natural Science Edition), no. 04, pages 535-541 *
YU Miaomiao et al., "Defect detection of warp-knitted fabric using optimal Gabor filters", Journal of Textile Research, no. 11, pages 48-54 *
LI Ming et al., "Yarn-dyed fabric defect recognition using GAN and Faster R-CNN", Journal of Xi'an Polytechnic University, no. 06, pages 663-669 *
ZHU Shuangwu et al., "Fabric defect detection method based on texture periodicity analysis", Computer Engineering and Applications, no. 21, pages 163-166 *
DONG Amei, "Research on yarn-dyed fabric defect detection and classification algorithms based on convolutional neural networks", China Master's Theses Full-text Database (Information Science and Technology), no. 02, pages 138-2139 *
CHEN Zehong, "Design of real Gabor filters and their application in fabric defect detection", China Master's Theses Full-text Database (Engineering Science and Technology I), no. 04, pages 024-8 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116562358A (en) * 2023-03-16 2023-08-08 中国人民解放军战略支援部队航天工程大学士官学校 Construction method of image processing Gabor kernel convolutional neural network
CN116562358B (en) * 2023-03-16 2024-01-09 中国人民解放军战略支援部队航天工程大学士官学校 Construction method of image processing Gabor kernel convolutional neural network

Also Published As

Publication number Publication date
CN112036541B (en) 2023-11-17

Similar Documents

Publication Publication Date Title
CN110533672B (en) Chromosome sorting method based on strip recognition
CN112287899A (en) Unmanned aerial vehicle aerial image river drain detection method and system based on YOLO V5
CN109829914A (en) The method and apparatus of testing product defect
CN109448001B (en) Automatic picture clipping method
CN109615604B (en) Part appearance flaw detection method based on image reconstruction convolutional neural network
CN109460735A (en) Document binary processing method, system, device based on figure semi-supervised learning
CN107545571A (en) A kind of image detecting method and device
CN111178177A (en) Cucumber disease identification method based on convolutional neural network
CN108710893A (en) A kind of digital image cameras source model sorting technique of feature based fusion
CN107944403A (en) Pedestrian's attribute detection method and device in a kind of image
CN107480585A (en) Object detection method based on DPM algorithms
CN112669274B (en) Multi-task detection method for pixel-level segmentation of surface abnormal region
CN114005081A (en) Intelligent detection device and method for foreign matters in tobacco shreds
CN115082776A (en) Electric energy meter automatic detection system and method based on image recognition
CN112164030A (en) Method and device for quickly detecting rice panicle grains, computer equipment and storage medium
CN113537173B (en) Face image authenticity identification method based on face patch mapping
CN112036541A (en) Fabric defect detection method based on genetic algorithm optimization neural network
CN114299394A (en) Intelligent interpretation method for remote sensing image
CN115063679B (en) Pavement quality assessment method based on deep learning
CN117152729A (en) Method for detecting and identifying characters in power pole information card image
CN116597275A (en) High-speed moving target recognition method based on data enhancement
CN116416523A (en) Machine learning-based rice growth stage identification system and method
CN114445691A (en) Model training method and device, electronic equipment and storage medium
CN114066861A (en) Coal and gangue identification method based on cross algorithm edge detection theory and visual features
CN111461060A (en) Traffic sign identification method based on deep learning and extreme learning machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant