CN113362347B - Image defect region segmentation method and system based on super-pixel feature enhancement - Google Patents

Image defect region segmentation method and system based on super-pixel feature enhancement

Info

Publication number
CN113362347B
Authority
CN
China
Prior art keywords
super
image
pixel
features
feature
Prior art date
Legal status
Active
Application number
CN202110801975.4A
Other languages
Chinese (zh)
Other versions
CN113362347A (en)
Inventor
许亮
吴启荣
郑博远
向旺
Current Assignee
Guangzhou Deshidi Intelligent Technology Co ltd
Guangdong University of Technology
Original Assignee
Guangzhou Deshidi Intelligent Technology Co ltd
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangzhou Deshidi Intelligent Technology Co ltd, Guangdong University of Technology filed Critical Guangzhou Deshidi Intelligent Technology Co ltd
Priority to CN202110801975.4A priority Critical patent/CN113362347B/en
Publication of CN113362347A publication Critical patent/CN113362347A/en
Application granted granted Critical
Publication of CN113362347B publication Critical patent/CN113362347B/en

Classifications

    • G06T 7/11: Region-based segmentation (Image analysis; Segmentation; Edge detection)
    • G06N 3/045: Combinations of networks (Neural network architectures)
    • G06N 3/08: Learning methods (Neural networks)
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20221: Image fusion; Image merging
    • Y02P 90/30: Computing systems specially adapted for manufacturing


Abstract

The invention provides an image defect region segmentation method based on multi-scale super-pixel feature enhancement, comprising the following steps: S1: acquiring an image including a surface defect of a workpiece; S2: preprocessing the image; S3: extracting features from the preprocessed image; S4: inputting the extracted features into an S2pNet network, which outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel; S5: fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of a segmentation network used to segment the image defect region; S6: the segmentation network outputs the segmented defect regions. By extracting super-pixel prior knowledge at different scales and fusing it at multiple scales with the encoding features of the segmentation network, the invention enriches the feature information so that the segmentation network outputs finer predicted segmentation regions.

Description

Image defect region segmentation method and system based on super-pixel feature enhancement
Technical Field
The invention relates to the field of machine vision and deep learning, and in particular to an image defect region segmentation method and system based on super-pixel feature enhancement.
Background
Most current deep-learning-based surface defect detection relies on supervised representation learning. Because its goals coincide exactly with those of generic computer vision tasks, feature-learning-based defect detection can be regarded as an application of the relevant classical networks to the industrial field.
In industry, surface inspection of products is a key step in determining product quality. Owing to the influence of the processing environment, the processing technology and other factors, defects are irregular in shape, varied in size and random in position, cannot be predicted in advance, and may be inconspicuous (i.e., similar to the background). Traditional vision algorithms therefore struggle to handle multiple defect types at once, and inconspicuous defects in particular suffer a high miss rate.
Data-driven deep learning can effectively improve the generalization capability of a detection model, and the ability of convolutional neural networks to extract defect features effectively is a key factor in detecting defects. Most existing feature extraction methods adopt a downsampling-upsampling network structure in which defect features are extracted through convolution and pooling; the downsampling process inevitably loses feature information, and ordinary convolution and pooling cannot extract the feature information fully. For low-contrast, inconspicuous defects, the difficulty of extracting defect features poses a further challenge for deep learning.
The Chinese patent with publication number CN111445471A, published on 2020-07-24, discloses a product surface defect detection method and device based on deep learning and machine vision. Its technical scheme is as follows: a surface image of the product to be detected is acquired by an industrial camera in line-scanning mode; the acquired image is preprocessed for defect features in real time to quickly determine whether defects exist in the surface image of the product to be detected; images containing defects are then passed to a trained deep convolutional neural network model for defect severity identification and defect type classification, the model being obtained by transfer learning, transformation and training of the classical neural network model Inception-v3. Because that patent also extracts feature information insufficiently, detection of low-contrast, inconspicuous defects remains difficult.
Disclosure of Invention
The primary aim of the invention is to provide an image defect region segmentation method based on super-pixel feature enhancement which is suitable for detecting low-contrast and multi-scale defects.
It is a further object of the present invention to provide an image defect region segmentation system based on super-pixel feature enhancement.
In order to solve the technical problems, the technical scheme of the invention is as follows:
An image defect region segmentation method based on super-pixel feature enhancement comprises the following steps:
s1: acquiring an image including a surface defect of the workpiece;
s2: preprocessing the image;
s3: extracting features of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel;
s5: fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of a segmentation network used to segment the image defect region;
s6: the segmentation network outputs segmented defect regions.
Preferably, the preprocessing of the image in step S2 is specifically:
acquiring the image, preliminarily setting the image detection area, and cropping the image.
Preferably, in the step S3, feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as characteristics for describing each pixel point.
Preferably, in the step S4, before the extracted features are input into the S2pNet network, they are further processed as follows:
the extracted features are mean-pooled at different scales to obtain features f_α, f_β, f_η with downsampling multiples α, β, η, whose dimensions are respectively (H/α)×(W/α), (H/β)×(W/β) and (H/η)×(W/η), where H and W are the height and width of the image.
Preferably, the dimensions of the super-pixel domain association maps at the different scales in step S4 are 9×(H/α)×(W/α), 9×(H/β)×(W/β) and 9×(H/η)×(W/η). The first dimension is fixed at 9 and represents the nine directions of the current pixel within its neighborhood: upper left, up, upper right, left, center, right, lower left, down, lower right. The value stored at each position of the super-pixel domain association map represents the correlation between the current super-pixel and the 9 super-pixels in its neighborhood.
Preferably, the S2pNet network uses the extracted features to train a super-pixel neighborhood association model, and the model outputs the super-pixel domain association maps at the different scales. The training process of the super-pixel neighborhood association model is as follows:

For the features with downsampling multiple α, first initialize an s_m matrix of dimension 9×(H/α)×(W/α), and perform a weighted point-multiplication with the (H/α)×(W/α) feature region to obtain the (H/α)×(W/α) aggregated feature f_0:

f_0(p) = Σ_{j=1..9} s_m_j(p)*f(p_j)

where H and W denote the height and width of the image, s_m denotes the super-pixel domain association map to be learned, f denotes the extracted image features, p_j denotes the j-th of the nine neighborhood positions of pixel p, and α denotes the downsampling multiple;

then reassign through the super-pixel domain association map to obtain the (H/α)×(W/α) reconstructed feature f_rc:

f_rc(p) = Σ_{j=1..9} s_m_j(p)*f_0(p_j)

The similarity between the reconstructed features and the original features is taken as the measure of how well the super-pixel neighborhood association model has learned, and the objective function Loss_s_m is defined as:

Loss_s_m = ||f_0 - f_rc||^2

Update the model by back-propagation according to the objective function until the objective falls below a threshold, at which point training is complete;

for the features with downsampling multiples β and η, training is completed by the same method as above.
Preferably, in step S5 the super-pixel domain association maps at the different scales are fused and spliced with the feature layers of the corresponding scales of the segmentation network used to segment the image defect region; the operation comprises fusion followed by splicing. The fusion is specifically:

f_si = λ*s_m_i*f_i + (1-λ)*f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusion with the i-th super-pixel association matrix, and λ is a hyper-parameter that adjusts the relative importance of the aggregated feature map and the original feature map.
Preferably, the splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes the upsampling process and + denotes the splicing (concatenation) of feature maps.
Preferably, training of the segmentation network requires preprocessing of the workpiece images, which includes program-controlled stage motion with synchronized camera image acquisition, followed by random flipping, contrast stretching and random center cropping of the input images.
An image defect region segmentation system based on super-pixel feature enhancement, comprising:
the image acquisition module is used for acquiring an image comprising the surface defects of the workpiece;
the preprocessing module is used for preprocessing the image;
the feature extraction module is used for extracting features of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel;
the fusion splicing module is used for fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of the segmentation network used to segment the image defect region;
and the output module outputs the segmented defect areas by utilizing the segmentation network.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
according to the invention, the prior knowledge of the super pixels of the S2pNet network learning image is utilized, the prior knowledge of the super pixels under different scales is extracted, and multi-scale fusion is carried out with the coding features of the segmentation network, so that the feature layer information is more compact, the feature points in the feature layer are mutually influenced, the feature information is enriched, the defect of insufficient supervision information of the weak supervision network is overcome, the first-order or even multi-order information extraction of the image pixels by the weak supervision segmentation network is realized, and finally the segmentation network outputs finer prediction segmentation areas.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of a system module according to the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
for the purpose of better illustrating the embodiments, certain elements of the drawings may be omitted, enlarged or reduced and do not represent the actual product dimensions;
it will be appreciated by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
Example 1
The embodiment provides an image defect region segmentation method based on super-pixel feature enhancement, as shown in fig. 1, comprising the following steps:
s1: acquiring an image including a surface defect of the workpiece;
s2: preprocessing the image;
s3: extracting features of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel;
s5: fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of a segmentation network used to segment the image defect region;
s6: the segmentation network outputs segmented defect regions.
In step S2, the image is preprocessed, specifically:
the image is acquired, the image detection area is preliminarily set, and the image is cropped.
In the step S3, feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as characteristics for describing each pixel point.
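By way of illustration only (this sketch is not part of the claimed method), the per-pixel feature extraction of step S3 could be implemented as follows in Python, assuming OpenCV and NumPy; the function name, the coordinate normalization, and the Sobel-based gradient are illustrative assumptions:

```python
import cv2
import numpy as np

def extract_pixel_features(image_bgr: np.ndarray, label: int) -> np.ndarray:
    """Build the per-pixel feature set (x, y, l, a, b, h, t) described in step S3."""
    h_img, w_img = image_bgr.shape[:2]

    # Relative coordinates (x, y) of each pixel, normalized to [0, 1].
    ys, xs = np.mgrid[0:h_img, 0:w_img]
    xs = xs.astype(np.float32) / (w_img - 1)
    ys = ys.astype(np.float32) / (h_img - 1)

    # Three-dimensional information (l, a, b) in the LAB color space.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    # Gradient information h (here: Sobel gradient magnitude of the L channel).
    gx = cv2.Sobel(lab[..., 0], cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(lab[..., 0], cv2.CV_32F, 0, 1, ksize=3)
    grad = np.sqrt(gx ** 2 + gy ** 2)

    # Label information t of the current image, broadcast to every pixel.
    t = np.full((h_img, w_img), float(label), dtype=np.float32)

    # Stack into an (H, W, 7) feature map: (x, y, l, a, b, h, t).
    return np.stack([xs, ys, lab[..., 0], lab[..., 1], lab[..., 2], grad, t], axis=-1)
```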
In step S4, before the extracted features are input into the S2pNet network, they are further processed as follows:
the extracted features are mean-pooled at different scales to obtain features f_α, f_β, f_η with downsampling multiples α, β, η, whose dimensions are respectively (H/α)×(W/α), (H/β)×(W/β) and (H/η)×(W/η), where H and W are the height and width of the image.
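By way of illustration, this multi-scale mean pooling could be sketched as follows, assuming PyTorch and example downsampling multiples α=4, β=8, η=16 (the concrete multiples are not fixed by the patent):

```python
import torch
import torch.nn.functional as F

def multiscale_mean_pool(features: torch.Tensor, multiples=(4, 8, 16)):
    """Mean-pool an (N, C, H, W) feature map by each downsampling multiple.

    Returns [f_alpha, f_beta, f_eta] with spatial sizes (H/alpha, W/alpha), etc.
    """
    return [F.avg_pool2d(features, kernel_size=m, stride=m) for m in multiples]

# Example: a 7-channel per-pixel feature map for a 256x256 image.
f = torch.randn(1, 7, 256, 256)
f_alpha, f_beta, f_eta = multiscale_mean_pool(f)
print(f_alpha.shape, f_beta.shape, f_eta.shape)  # (1,7,64,64) (1,7,32,32) (1,7,16,16)
```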
The dimensions of the super-pixel domain association maps at the different scales in step S4 are 9×(H/α)×(W/α), 9×(H/β)×(W/β) and 9×(H/η)×(W/η). The super-pixel neighborhood association model can thus extract vertical and horizontal pixel relationships at multiple scales. The first dimension is fixed at 9 and represents the nine directions of the current pixel within its neighborhood: upper left, up, upper right, left, center, right, lower left, down, lower right. The value stored at each position of the super-pixel domain association map represents the correlation between the current super-pixel and the 9 super-pixels in its neighborhood: the larger the weight, the higher the probability that the two super-pixels belong to the same category; the smaller the weight, the weaker the correlation between the two super-pixels and the higher the probability that they are assigned different category labels.
The S2pNet network is a convolutional neural network composed of several convolutional layers. It is the core part of the algorithm and is mainly responsible for training the neighborhood association model between pixels inside and outside a super-pixel, which includes defining the output of the network model and designing the training strategy of the super-pixel neighborhood association model. S2pNet has an encoding-decoding network structure; its input and output differ in channel count but are identical in spatial size.
The S2pNet network uses the extracted features to train a super-pixel neighborhood association model, and the model outputs the super-pixel domain association maps at the different scales. The training process of the super-pixel neighborhood association model is as follows:

For the features with downsampling multiple α, first initialize an s_m matrix of dimension 9×(H/α)×(W/α), and perform a weighted point-multiplication with the (H/α)×(W/α) feature region to obtain the (H/α)×(W/α) aggregated feature f_0:

f_0(p) = Σ_{j=1..9} s_m_j(p)*f(p_j)

where H and W denote the height and width of the image, s_m denotes the super-pixel domain association map to be learned, f denotes the extracted image features, p_j denotes the j-th of the nine neighborhood positions of pixel p, and α denotes the downsampling multiple;

then reassign through the super-pixel domain association map to obtain the (H/α)×(W/α) reconstructed feature f_rc:

f_rc(p) = Σ_{j=1..9} s_m_j(p)*f_0(p_j)

The similarity between the reconstructed features and the original features is taken as the measure of how well the super-pixel neighborhood association model has learned, and the objective function Loss_s_m is defined as:

Loss_s_m = ||f_0 - f_rc||^2

Update the model by back-propagation according to the objective function until the objective falls below a threshold, at which point training is complete;

for the features with downsampling multiples β and η, training is completed by the same method as above.
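By way of illustration, one training step of this neighborhood association model could be sketched as follows, assuming PyTorch; gathering the nine directions with F.unfold, normalizing s_m with a softmax, and optimizing s_m directly as a free tensor (rather than predicting it with S2pNet's convolutional layers) are simplifying assumptions:

```python
import torch
import torch.nn.functional as F

def neighbor_stack(x: torch.Tensor) -> torch.Tensor:
    """Gather the 3x3 neighborhood of every location of an (N, C, H, W) map.

    Returns an (N, 9, C, H, W) tensor ordered: upper-left, up, upper-right,
    left, center, right, lower-left, down, lower-right.
    """
    n, c, h, w = x.shape
    patches = F.unfold(x, kernel_size=3, padding=1)   # (N, C*9, H*W)
    return patches.view(n, c, 9, h, w).permute(0, 2, 1, 3, 4)

def training_step(f: torch.Tensor, s_m: torch.Tensor, optimizer) -> float:
    """One update of the association map s_m (N, 9, H, W) against features f (N, C, H, W)."""
    w9 = torch.softmax(s_m, dim=1).unsqueeze(2)        # normalized 9-direction weights
    f0 = (w9 * neighbor_stack(f)).sum(dim=1)           # aggregated feature f_0
    f_rc = (w9 * neighbor_stack(f0)).sum(dim=1)        # reconstructed feature f_rc
    loss = ((f0 - f_rc) ** 2).mean()                   # Loss_s_m = ||f_0 - f_rc||^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: learn s_m for the alpha-scale features until the objective drops below a threshold.
f_alpha = torch.randn(1, 7, 64, 64)
s_m = torch.zeros(1, 9, 64, 64, requires_grad=True)
opt = torch.optim.Adam([s_m], lr=1e-2)
for _ in range(500):
    if training_step(f_alpha, s_m, opt) < 1e-4:
        break  # objective below threshold: training is complete
```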
In step S5, the super-pixel domain association maps at the different scales are fused and spliced with the feature layers of the corresponding scales of the segmentation network used to segment the image defect region; the operation comprises fusion followed by splicing. The fusion is specifically:

f_si = λ*s_m_i*f_i + (1-λ)*f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusion with the i-th super-pixel association matrix, and λ is a hyper-parameter that adjusts the relative importance of the aggregated feature map and the original feature map.
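By way of illustration, the fusion could be sketched as follows, reusing the neighbor_stack helper above; reading the product s_m_i*f_i as the same 9-direction weighted aggregation used during training is an interpretation (consistent with λ balancing the aggregated and original feature maps), not something the patent states explicitly:

```python
def fuse(f_i: torch.Tensor, s_m_i: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """f_si = lam * (s_m_i * f_i) + (1 - lam) * f_i, with s_m_i * f_i taken as
    the 9-direction weighted aggregation of f_i under the association map."""
    w9 = torch.softmax(s_m_i, dim=1).unsqueeze(2)      # (N, 9, 1, H, W)
    aggregated = (w9 * neighbor_stack(f_i)).sum(dim=1)  # aggregated feature map
    return lam * aggregated + (1.0 - lam) * f_i

# Example: fuse the alpha-scale features with their learned association map.
f_s1 = fuse(f_alpha, s_m.detach(), lam=0.5)
```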
The splicing is specifically as follows:
f out =up(up(up(f s3 )+f s2 )+f s1 )
where up represents the upsampling process and +' represents the stitching process of the feature map.
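By way of illustration, the splicing could be sketched as follows, assuming PyTorch, bilinear 2x upsampling between adjacent scales, and channel-wise concatenation for the + operation (both are assumptions; the patent fixes neither the interpolation mode nor the scale ratio):

```python
def splice(f_s1: torch.Tensor, f_s2: torch.Tensor, f_s3: torch.Tensor) -> torch.Tensor:
    """f_out = up(up(up(f_s3) + f_s2) + f_s1), with '+' read as channel concatenation."""
    def up(x):
        # Bilinear 2x upsampling; assumes adjacent scales differ by a factor of 2.
        return F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
    x = torch.cat([up(f_s3), f_s2], dim=1)   # splice the two coarsest scales
    x = torch.cat([up(x), f_s1], dim=1)      # splice in the finest scale
    return up(x)                             # final upsampling toward output resolution

# Example with channel counts 64/128/256 at scales H/4, H/8, H/16 of a 256x256 input.
out = splice(torch.randn(1, 64, 64, 64),
             torch.randn(1, 128, 32, 32),
             torch.randn(1, 256, 16, 16))
print(out.shape)  # torch.Size([1, 448, 128, 128])
```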
Training of the segmentation network requires preprocessing of the workpiece images, which includes program-controlled stage motion with synchronized camera image acquisition, followed by random flipping, contrast stretching and random center cropping of the input images.
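By way of illustration, the image-level augmentations named here could be sketched as follows, assuming NumPy; the flip probabilities, stretch percentiles and crop-ratio range are illustrative choices:

```python
import random
import numpy as np

def augment(image: np.ndarray) -> np.ndarray:
    """Random flip, contrast stretch and random center crop of a workpiece image."""
    # Random horizontal/vertical flip.
    if random.random() < 0.5:
        image = image[:, ::-1]
    if random.random() < 0.5:
        image = image[::-1, :]

    # Contrast stretching: rescale the 2nd..98th percentile range to [0, 255].
    lo, hi = np.percentile(image, (2, 98))
    image = np.clip((image.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1e-6), 0, 255)

    # Random center crop: keep a 70-100% window at a randomly jittered position.
    h, w = image.shape[:2]
    ratio = random.uniform(0.7, 1.0)
    ch, cw = int(h * ratio), int(w * ratio)
    y0 = random.randint(0, h - ch)
    x0 = random.randint(0, w - cw)
    return image[y0:y0 + ch, x0:x0 + cw].astype(np.uint8)
```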
Example 2
The present embodiment provides an image defect region segmentation system based on super-pixel feature enhancement, as shown in fig. 2, including:
the image acquisition module is used for acquiring an image comprising the surface defects of the workpiece;
the preprocessing module is used for preprocessing the image;
the feature extraction module is used for extracting features of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel;
the fusion splicing module is used for fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of the segmentation network used to segment the image defect region;
and the output module outputs the segmented defect areas by utilizing the segmentation network.
The same or similar reference numerals correspond to the same or similar components;
the terms describing the positional relationship in the drawings are merely illustrative, and are not to be construed as limiting the present patent;
it is to be understood that the above examples of the present invention are provided by way of illustration only and not by way of limitation of the embodiments of the present invention. Other variations or modifications of the above teachings will be apparent to those of ordinary skill in the art. It is not necessary here nor is it exhaustive of all embodiments. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the invention are desired to be protected by the following claims.

Claims (5)

1. An image defect region segmentation method based on super-pixel feature enhancement, characterized by comprising the following steps:
s1: acquiring an image including a surface defect of the workpiece;
s2: preprocessing the image;
s3: extracting features of the preprocessed image;
s4: inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel;
s5: fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of a segmentation network used to segment the image defect region;
s6: the segmentation network outputs segmented defect areas;
in the step S2, the image is preprocessed, specifically:
the image is acquired and the image detection area is preliminarily determined as the cropping area;
in the step S3, feature extraction is performed on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as characteristics for describing each pixel point;
in the step S4, before the extracted features are input into the S2pNet network, the extracted features are further processed as follows:
the extracted features are subjected to mean pooling at different scales to obtain features f_α, f_β, f_η with downsampling multiples α, β, η, whose dimensions are respectively (H/α)×(W/α), (H/β)×(W/β) and (H/η)×(W/η);
the dimensions of the super-pixel domain association maps at the different scales in step S4 are 9×(H/α)×(W/α), 9×(H/β)×(W/β) and 9×(H/η)×(W/η); the first dimension is fixed at 9 and represents the nine directions of the current pixel within its neighborhood: upper left, up, upper right, left, center, right, lower left, down, lower right; the value stored at each position of the super-pixel domain association map represents the correlation between the current super-pixel and the 9 super-pixels in its neighborhood;
the S2pNet network trains a super-pixel neighborhood association model by using the extracted features, the super-pixel domain association model outputs super-pixel domain association maps under different scales, and the training process of the super-pixel domain association model is as follows:
for the feature of down-sampling multiple a, first initialize one
Figure FDA0004040155160000021
S_m matrix of (2), and +.>
Figure FDA0004040155160000022
The feature area is subjected to weighted point multiplication calculation to obtain +.>
Figure FDA0004040155160000023
Is the aggregate characteristic f of (2) 0
Figure FDA0004040155160000024
In the formula, H, W represents the length and width of an image, s_m represents a super-pixel domain association map to be learned, f represents extracted image features, and alpha represents downsampling multiple;
reassigning by a superpixel domain association map
Figure FDA0004040155160000025
Is a reconstructed feature f of (2) rc
Figure FDA0004040155160000026
The similarity degree of the reconstructed features and the original features is used as the learning quality degree of the super-pixel neighborhood correlation model, and an objective function Loss is defined s_m
Loss s_m =|f 0 -f rc | 2
Reversely updating according to the objective function until the objective function is smaller than the threshold value, and finishing training;
for the features with the downsampling times of beta and eta, training is completed by adopting the same method as the training process of the features with the downsampling times of alpha.
2. The image defect region segmentation method based on super-pixel feature enhancement according to claim 1, wherein in step S5 the super-pixel domain association maps at the different scales are fused and spliced with the feature layers of the corresponding scales of the segmentation network used to segment the image defect region, the operation comprising fusion followed by splicing, the fusion being specifically:

f_si = λ*s_m_i*f_i + (1-λ)*f_i,  i ∈ {1, 2, 3}

where f_si denotes the feature map after fusion with the i-th super-pixel association matrix, and λ is a hyper-parameter that adjusts the relative importance of the aggregated feature map and the original feature map.
3. The image defect region segmentation method based on super-pixel feature enhancement according to claim 2, wherein the splicing is specifically:

f_out = up(up(up(f_s3) + f_s2) + f_s1)

where up denotes the upsampling process and + denotes the splicing (concatenation) of feature maps.
4. The image defect region segmentation method based on super-pixel feature enhancement according to claim 3, wherein training of the segmentation network requires preprocessing of the workpiece images, the preprocessing including program-controlled stage motion with synchronized camera image acquisition, followed by random flipping, contrast stretching and random center cropping of the input images.
5. An image defect region segmentation system based on super-pixel feature enhancement, comprising:
the image acquisition module is used for acquiring an image comprising the surface defects of the workpiece;
the preprocessing module is used for preprocessing the image;
the feature extraction module is used for extracting features of the preprocessed image;
the S2pNet module is used for inputting the extracted features into an S2pNet network, wherein the S2pNet network outputs super-pixel domain association maps at different scales, the maps representing the relationships between pixels inside and outside each super-pixel;
the fusion splicing module is used for fusing and splicing the super-pixel domain association maps at the different scales with the feature layers of the corresponding scales of the segmentation network used to segment the image defect region;
the output module outputs the segmented defect areas by utilizing the segmentation network;
the preprocessing module preprocesses the image, specifically:
the image is acquired and the image detection area is preliminarily determined as the cropping area;
the feature extraction module performs feature extraction on the preprocessed image, specifically:
extracting characteristic information of each pixel point in the preprocessed image, wherein the characteristic information comprises relative coordinate information (x, y) of the current pixel point, three-dimensional information (l, a, b) of the current pixel point in an LAB color space, gradient information h of the current pixel point and label information t of the current image, and taking a set (x, y, l, a, b, h, t) of all the information as characteristics for describing each pixel point;
the S2pNet module inputs the extracted features into the S2pNet network after the extracted features are further processed as follows:
the extracted features are subjected to mean pooling at different scales to obtain features f_α, f_β, f_η with downsampling multiples α, β, η, whose dimensions are respectively (H/α)×(W/α), (H/β)×(W/β) and (H/η)×(W/η);
the dimensions of the super-pixel domain association maps in the S2pNet module at the different scales are 9×(H/α)×(W/α), 9×(H/β)×(W/β) and 9×(H/η)×(W/η); the first dimension is fixed at 9 and represents the nine directions of the current pixel within its neighborhood: upper left, up, upper right, left, center, right, lower left, down, lower right; the value stored at each position of the super-pixel domain association map represents the correlation between the current super-pixel and the 9 super-pixels in its neighborhood;
the S2pNet network trains a super-pixel neighborhood association model by using the extracted features, the super-pixel domain association model outputs super-pixel domain association maps under different scales, and the training process of the super-pixel domain association model is as follows:
for the feature of down-sampling multiple a, first initialize one
Figure FDA0004040155160000042
S_m matrix of (2), and +.>
Figure FDA0004040155160000043
The feature area is subjected to weighted point multiplication calculation to obtain +.>
Figure FDA0004040155160000044
Is the aggregate characteristic f of (2) 0
Figure FDA0004040155160000045
In the formula, H, W represents the length and width of an image, s_m represents a super-pixel domain association map to be learned, f represents extracted image features, and alpha represents downsampling multiple;
reassigning by a superpixel domain association map
Figure FDA0004040155160000046
Is a reconstructed feature f of (2) rc
Figure FDA0004040155160000047
The similarity degree of the reconstructed features and the original features is used as the learning quality degree of the super-pixel neighborhood correlation model, and an objective function Loss is defined s_m
Loss s_m =|f 0 -f rc | 2
Reversely updating according to the objective function until the objective function is smaller than the threshold value, and finishing training;
for the features with the downsampling times of beta and eta, training is completed by adopting the same method as the training process of the features with the downsampling times of alpha.
CN202110801975.4A 2021-07-15 2021-07-15 Image defect region segmentation method and system based on super-pixel feature enhancement Active CN113362347B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110801975.4A CN113362347B (en) 2021-07-15 2021-07-15 Image defect region segmentation method and system based on super-pixel feature enhancement


Publications (2)

Publication Number Publication Date
CN113362347A CN113362347A (en) 2021-09-07
CN113362347B (en) 2023-05-26

Family

ID=77539672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110801975.4A Active CN113362347B (en) 2021-07-15 2021-07-15 Image defect region segmentation method and system based on super-pixel feature enhancement

Country Status (1)

Country Link
CN (1) CN113362347B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115063725B * 2022-06-23 2024-04-26 Civil Aviation University of China Aircraft skin defect identification system based on multi-scale self-adaptive SSD algorithm

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2016236323A1 (en) * 2015-03-20 2017-08-10 Ventana Medical Systems, Inc. System and method for image segmentation
CN110717354B (en) * 2018-07-11 2023-05-12 哈尔滨工业大学 Super-pixel classification method based on semi-supervised K-SVD and multi-scale sparse representation
CN112633416A (en) * 2021-01-16 2021-04-09 北京工业大学 Brain CT image classification method fusing multi-scale superpixels
CN112927235B (en) * 2021-02-26 2022-12-02 南京理工大学 Brain tumor image segmentation method based on multi-scale superpixel and nuclear low-rank representation
CN112991302B (en) * 2021-03-22 2023-04-07 华南理工大学 Flexible IC substrate color-changing defect detection method and device based on super-pixels

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019104767A1 (en) * 2017-11-28 2019-06-06 河海大学常州校区 Fabric defect detection method based on deep convolutional neural network and visual saliency

Also Published As

Publication number Publication date
CN113362347A (en) 2021-09-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20230512

Address after: 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province

Applicant after: GUANGDONG University OF TECHNOLOGY

Applicant after: Guangzhou Deshidi Intelligent Technology Co.,Ltd.

Address before: 510090 Dongfeng East Road 729, Yuexiu District, Guangzhou City, Guangdong Province

Applicant before: GUANGDONG University OF TECHNOLOGY