CN114820379A - Image rain-like layer removal method based on attention dual-residual generative adversarial network - Google Patents

Image rain-like layer removal method based on attention dual-residual generative adversarial network

Info

Publication number
CN114820379A
CN114820379A (application CN202210518394.4A)
Authority
CN
China
Prior art keywords
image
rain
layer
attention
generator
Prior art date
Legal status
Granted
Application number
CN202210518394.4A
Other languages
Chinese (zh)
Other versions
CN114820379B (en)
Inventor
罗旗舞
何汉东
刘可欣
阳春华
桂卫华
Current Assignee
Central South University
Original Assignee
Central South University
Priority date
Filing date
Publication date
Application filed by Central South University
Priority to CN202210518394.4A
Publication of CN114820379A
Application granted
Publication of CN114820379B
Active (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30136 Metal
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image rain-like layer removal method based on an attention dual-residual generative adversarial network, which comprises the following steps: constructing, with an improved method, an image database for rain-like layer removal on the hot-rolled strip steel surface, producing paired images that contain dispersed water drops, splashing water lines and fine white water drops; feeding each clean background original image and its corresponding rain-like layer image in pairs into the attention dual-residual generative adversarial network model, and introducing an attention scheme between the generator and the discriminator to form a self-optimizing closed loop that mines prior knowledge of the generalized rain-like layer; and locating and removing the rain-like false defects with the trained generator model. The method removes the mixed rain-like false defect clusters on the hot-rolled strip surface while preserving edge and texture details, the results are closer to real industrial images, and the false detection rate of AVI instruments is effectively reduced.

Description

Image rain-like layer removal method based on attention dual-residual generative adversarial network
Technical Field
The invention belongs to the technical field of digital image processing, and in particular relates to an image rain-like layer removal method based on an attention dual-residual generative adversarial network.
Background
Steel is one of the basic materials of the manufacturing industry, and its quality strongly affects many downstream industrial chains. Automated visual inspection (AVI) equipment is of great value in ensuring the quality of steel products, and it is typically installed immediately after the spray-cooling process. Under the influence of continuous industrial cooling water, large numbers of dispersed water drops, splashed water marks and tiny white water drops (essentially pseudo-defects) appear randomly on the steel surface and mix with real defects, so the AVI instrument inevitably raises false alarms. Worse still, some real defects are covered by the pseudo-defects, which drastically reduces the detection accuracy and efficiency of the AVI instrument.
Many studies have attempted to detect and classify defects directly from raw strip images acquired in the harsh environment described above, but they have limitations in practical applications. Hot-rolled strip defects can generally be divided into two categories: periodic defects and occasional defects. Occasional defects occur too rarely to accumulate enough samples for neural-network learning, yet they cannot be ignored in steel quality inspection. Algorithms based on statistical learning are therefore often used on real industrial production lines, but such algorithms are disturbed by rain-like artifacts.
Under-deraining and over-deraining are another major challenge for rain-like layer removal. With under-deraining, the rain-like false defects are removed insufficiently, which increases the false-alarm rate of the AVI instrument. Conversely, over-deraining erroneously removes some real defects and greatly degrades defect-detection accuracy. The primary task is therefore to construct a robust image-restoration method that prevents both under-deraining and over-deraining.
Building an image database for rain-like layer removal in automatic surface inspection of hot-rolled strip is also difficult. Cooling-water spray, mechanical vibration and high temperature frequently occur during on-site image acquisition, and these harsh conditions impose strict requirements on data collection, making acquisition expensive. Furthermore, because the strip moves quickly on the production line, it is difficult to obtain image pairs with an identical background for training. Unfortunately, public data sets for hot-rolled strip are very scarce, which greatly limits the development of rain-like layer removal algorithms for hot-rolled strip.
In summary, existing rain-streak and rain-line removal methods cannot prevent the under-deraining caused by locally high-density regions that arise from the uneven distribution of cooling water in the image, and these algorithms therefore cannot meet the needs of actual industrial practice.
Disclosure of the Invention
The invention aims to overcome the shortcomings of the prior art described above and provide a method for removing the rain-like layer from hot-rolled strip steel surface images based on an attention dual-residual generative adversarial network, which can remove mixed rain-like false defect clusters from the hot-rolled steel surface while preserving edge and texture details.
The technical solution adopted by the invention comprises a rain-like false defect construction method, a hot-rolled strip steel surface rain-like layer image database, and an attention dual-residual generative adversarial network. The construction method reproduces, on clean original strip images, the shapes and spatial distribution of the mixed rain-like false defect clusters found in a real steel mill and is used to build the hot-rolled strip steel surface rain-like layer image database. The database is used to train, test and analyse the attention dual-residual generative adversarial network model. The model removes the rain-like false defects from strip images while preserving real defect edges and texture details. The method is implemented in the following steps:
S10: for the three kinds of rain-like false defects (dispersed water drops, splashing water lines and fine white water drops), design corresponding construction methods and build a hot-rolled strip steel surface rain-like layer image database;
S20: construct an attention dual-residual generative adversarial network model consisting of a generator G, a director M and a discriminator D;
S30: divide the rain-like layer image database produced in S10 into a training set and a test set.
S40: feed the training set obtained in S30 in pairs into the attention dual-residual generative adversarial network model built in S20, introduce an attention scheme during training to mine prior knowledge of the generalized rain-like layer, and iteratively update the weights and losses of the generator and the discriminator to obtain generator models of multiple iteration versions.
S50: feed the rain-like layer test-set images obtained in S30 into each iteration version of the generator model obtained in S40 and evaluate them quantitatively and qualitatively, so that the generator G outputs a clean background image that preserves real defect edges and texture details, and select the globally optimal model.
Preferably, step S10 is implemented as follows:
First, a high-speed camera captures original strip surface images moving at high speed on the production line, yielding rain-like layer images and clean background images. Corresponding construction methods are then designed for the different rain-like false defects:
for dispersed water drops, to improve the generalization of the model, real water drops are first extracted from original images containing a rain-like layer, mixed with artificially generated raindrops, and then pasted onto a clean background image;
for splashing water lines, rain-like water lines with 4 slopes and 6 aspect ratios are synthesized from Gaussian noise and superimposed on the clean background image in a fixed proportion;
for fine white water drops, false defects of fine white water drops of different sizes are artificially simulated according to the pixel scale of the fine white drops observed in original images during high-speed rolling of the strip, and superimposed on the clean background image.
Through this process, a database containing 1450 pairs of 1000 × 1000 pixel hot-rolled strip surface images is obtained, in which one half are original clean images and the corresponding other half are artificially produced images with mixed rain-like false defects. From these, 1300 pairs are randomly extracted as the training set described in S30, leaving 150 pairs as the test set described in S30.
Preferably, step S20 comprises the following sub-steps:
Step S21: the training-set pairs described in step S30 are input to the generator, which adopts an attention dual-residual network model based on an encoder-decoder architecture. For the bottleneck layer a periodic combined structure is proposed, consisting of a DuRB-P, a DuRB-DS equipped with dual SE modules and another DuRB-P connected end to end. This structure is repeated three times in each iteration to progressively search for and recover the rain-like false defects, reducing the difference between the paired images.
Step S22: in the training phase a mask image is introduced between the generator and the discriminator as a director, forming a self-optimizing closed loop. The mask image is generated by a threshold-based binary classification strategy, which can be expressed as:
[Formula image BDA0003640690940000041: the mask pixel is set to 1 where Pixel_rain-like layer differs from Pixel_clean by more than a threshold, and to 0 otherwise]
where Pixel_rain-like layer is the pixel value of the rain-like false defect image and Pixel_clean is the pixel value of the clean background image.
The director guides the generator, through a weighted sum of L1 and SSIM losses, to restore the local details of the mask region while also taking global features into account, so that the generated image is free of distortion. The total L1 (or SSIM) loss is constructed as follows:
[Formula image BDA0003640690940000042: Loss_Total combines Loss_Mask and Loss_Overall]
where Loss_Mask is the loss computed on the rain-like layer region, Loss_Overall is the loss computed on the whole image, and Loss_Total is the total L1 or SSIM loss.
Step S23: a discriminator with granular-level multi-scale features, Res2Net, is constructed; after a convolution, the input feature map is divided into four blocks and the fidelity of the recovered features is discriminated within the different blocks. This improves multi-scale feature extraction without increasing the amount of computation, guiding the generator of step S21 to restore image texture more finely.
Preferably, the generator loss in step S40 is a weighted-sum loss based on a fusion strategy. Structural similarity (SSIM) addresses image distortion, and the SSIM loss function is expressed as:
[Formula image BDA0003640690940000051: SSIM loss of the generator, computed between the generated image and the clean background image with the mask applied]
where ∘ denotes element-wise multiplication, G denotes the generator network, R and I denote the clean background image and the image containing a rain-like layer respectively, M denotes the corresponding mask image, R ~ P_clean means that R is sampled from the distribution of clean background images, and I ~ P_rain-like layer means that I is sampled from the distribution of images containing a rain-like layer.
The L1 loss function in the generator is:
[Formula image BDA0003640690940000052: L1 loss of the generator]
Finally, the generator loss function based on the fusion strategy is:
[Formula image BDA0003640690940000053: combination of the adversarial objective V(G, D) with the SSIM and L1 losses weighted by r_1 and r_2]
where r_1 is set to 0.75, r_2 is set to 1.1, D is the discriminator, and V(G, D) computes the JS divergence between the real image and the generated image.
The method for removing the rain-like layer from hot-rolled strip steel surface images based on an attention dual-residual generative adversarial network provided by the invention combines channel attention and spatial attention, so that the rain-like false defects are accurately located and removed; at the same time, the director introduced in the training stage mines prior knowledge of the generalized rain-like layer to reduce the occasional misjudgments caused by an active search strategy. Compared with several well-known deraining algorithms, the images produced by the proposed method after rain-like layer removal are closest to clean background images from real industry, which effectively alleviates the serious false alarms that mixed rain-like false defect clusters cause in AVI instruments. In addition, the invention provides a high-resolution data set for rain-like layer removal in automatic steel surface inspection. This continuously published data set is expected to motivate more rain-like layer removal methods for inspecting industrial sheet surfaces in real-world scenarios.
Drawings
FIG. 1 is a flow chart of the method for removing the rain-like layer from hot-rolled strip steel surface images based on an attention dual-residual generative adversarial network provided by the invention;
FIG. 2 is a schematic diagram of the overall model structure of the method;
FIG. 3 is a schematic diagram of the generator model structure in the method;
FIG. 4 is a schematic diagram of the discriminator model structure in the method;
FIG. 5a is a real strip steel image containing a rain-like layer in the experimental example provided by the invention;
FIG. 5b is the image generated after removing the rain-like layer with Attentive GAN;
FIG. 5c is the image generated after removing the rain-like layer with Pix2Pix;
FIG. 5d is the image generated after removing the rain-like layer with PReNet;
FIG. 5e is the image generated after removing the rain-like layer with IADN;
FIG. 5f is the image generated after removing the rain-like layer with DuRN-S-P;
FIG. 5g is the image generated after removing the rain-like layer with PReGAN;
FIG. 5h is the image generated after removing the rain-like layer with the method provided by the invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples. The following experimental examples and embodiments are intended to further illustrate, but not to limit, the invention.
Referring to FIG. 1 to FIG. 4, the invention provides an image rain-like layer removal method based on an attention dual-residual generative adversarial network. The method combines channel attention and spatial attention, so that rain-like false defects can be accurately located and removed; at the same time, the director introduced in the training stage mines prior knowledge of the generalized rain-like layer to reduce the occasional misjudgments caused by an active search strategy. The method can generate clean background images closest to those of real industry.
Specifically, the image rain-like layer removal method based on an attention dual-residual generative adversarial network provided by the invention comprises the following steps:
S10: for the three kinds of rain-like false defects (dispersed water drops, splashing water lines and fine white water drops), design corresponding construction methods and build a hot-rolled strip steel surface rain-like layer image database. This step is implemented as follows:
firstly, a high-speed camera is used for capturing an original strip steel surface image moving at a high speed on a production line to obtain a rain-like layer image and a clean background image. Then, aiming at different rain pseudo-defects, designing a corresponding construction method:
for dispersed water drops, in order to improve the generalization degree of the model, firstly extracting real water drops from the original raindrop-containing layer image, mixing the real water drops with artificially made raindrops, and then pasting the real water drops and the artificially made raindrops into a clean background image;
for the splash waterline, 4 kinds of slope rainwater-like lines with the length-width ratio of 6 kinds are manufactured by Gaussian noise, and are overlapped with a clean background picture according to a certain proportion;
for the tiny white water drops, the false defects of the tiny white water drops with different sizes are artificially simulated and made according to the pixel level of the tiny white water drops generated in the high-speed rolling of the steel strip in an original image, and the tiny white water drops are overlapped with a clean background image.
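The patent publishes no code, so the following Python sketch only illustrates one plausible way to carry out the splash-waterline step above: Gaussian noise is stretched into oriented streaks with a rotated line kernel and blended onto the clean background. The function name, kernel construction, default slope, aspect ratio and blending weight alpha are assumptions, not values taken from the patent.

```python
import numpy as np
import cv2

def synthesize_waterline(clean, slope_deg=30.0, length=25, width=3, alpha=0.6, seed=None):
    """Blend a splash-waterline-like layer, built from Gaussian noise, onto a clean image.

    clean         : H x W uint8 grayscale strip image
    slope_deg     : streak orientation (one of several assumed slopes)
    length, width : size of the oriented line kernel, i.e. the streak aspect ratio
    alpha         : proportion of the clean background kept in the blend (assumed value)
    """
    rng = np.random.default_rng(seed)
    h, w = clean.shape
    noise = rng.normal(0.0, 1.0, (h, w)).astype(np.float32)

    # Oriented line kernel: filtering the noise with it stretches it into streaks.
    kernel = np.zeros((length, length), np.float32)
    half = width // 2
    kernel[length // 2 - half:length // 2 + half + 1, :] = 1.0
    center = (length / 2 - 0.5, length / 2 - 0.5)
    rot = cv2.getRotationMatrix2D(center, slope_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    kernel /= kernel.sum()

    streaks = cv2.filter2D(noise, -1, kernel)
    streaks = (streaks - streaks.min()) / (streaks.max() - streaks.min() + 1e-6)

    # Superimpose the rain-like layer on the clean background in a fixed proportion.
    rainy = alpha * clean.astype(np.float32) + (1.0 - alpha) * 255.0 * streaks
    return np.clip(rainy, 0, 255).astype(np.uint8)
```

Calling this once per chosen slope and aspect ratio, and compositing extracted or simulated water drops in the same way, yields the paired clean/rainy images of the database.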
Through this process, a database containing 1450 pairs of 1000 × 1000 pixel hot-rolled strip surface images is obtained, in which one half are original clean images and the corresponding other half are artificially produced images with mixed rain-like false defects. From these, 1300 pairs are randomly extracted as the training set described in S30, leaving 150 pairs as the test set described in S30.
S20: and constructing an attention dual residual error generation confrontation network model which consists of a generator G, a director M and a discriminator D. The specific substeps are as follows:
step S21: the training set pair described in step S30 is input into a generator that selects an attention-pair residual network model based on the encoder-decoder architecture. The input includes: the convolutional layer, the batch standardized BN layer and the ReLU layer are sequentially connected, the structure is repeated three times, and then one convolutional layer is connected. A periodic combined structure is provided for the bottleneck layer. The dual-SE module consists of a DuRB-P, a DuRB-DS configured with dual-SE modules and a DuRB-P which are connected end to end. This structure can be repeated three times in each iteration to progressively search for and recover rain-like false defects, reducing the difference between a pair of images. The receptive fields of the convolution layers are continuously increased in 6 DuRB-P so as to fully utilize the context information of the image and increase the capability of multi-scale feature representation. The dual rb-DS configured dual SE modules has better global attention during the down-sampling process, which helps to infer the distribution of rain-like false defects from real-world steel strip images. Referring to fig. 3, DuRB-P comprises two convolutional layers and two containers connected in series, the containers comprising a ConvLayers1 for the operation of the pair. The DuRB-DS includes two convolutional layers and two containers connected in series, the containers including a ConvLayers1 convolutional layer module and an SE compression and excitation module (Squeeze and excitation module).
Step S22: the mask image is imported between the generator and the discriminator as a director, taking into account the rainprint intensity information in the spatial dimension, thereby forming a self-optimizing closed loop during the training phase. The director utilizes the potential prior knowledge of the broad rain-like layer to solve the problems of edge blurring and detail loss caused by insufficient attention to spatial features in the generator. The generation method of the mask image is a binary classification strategy based on a threshold value, and an equation can be expressed as follows:
Figure BDA0003640690940000081
in which pixels rain-like layer Is the Pixel value, Pixel, of a rain-like false defect image clean Is the pixel value of the clean background image.
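A minimal sketch of the threshold-based mask generation described above; the threshold value and the use of an absolute difference are assumptions, since the published formula appears only as an image.

```python
import numpy as np

def make_mask(rain_img, clean_img, threshold=12):
    """Threshold-based binary classification used to build the director's mask image.

    rain_img, clean_img : paired uint8 images (H x W or H x W x 3)
    threshold           : assumed value; the text only states that a threshold is used
    Returns a float32 mask that is 1 where the rain-like false defect changed the pixel.
    """
    diff = np.abs(rain_img.astype(np.int16) - clean_img.astype(np.int16))
    if diff.ndim == 3:                      # collapse colour channels if present
        diff = diff.max(axis=2)
    return (diff > threshold).astype(np.float32)
```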
To ensure edge consistency of the recovered region, the director supervises the generator's removal of the rain-like layer by constructing a weighted sum of L1 and SSIM losses. It guides the generator to restore the local details of the mask region while also taking global features into account, so that the generated image is free of distortion. The total L1 (or SSIM) loss is constructed as follows:
[Formula image BDA0003640690940000082: Loss_Total combines Loss_Mask and Loss_Overall]
where Loss_Mask is the loss computed on the rain-like layer region, Loss_Overall is the loss computed on the whole image, and Loss_Total is the total L1 or SSIM loss.
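The following sketch shows how a total loss combining a mask-region term (Loss_Mask) and a whole-image term (Loss_Overall) could be written for the L1 case; the plain weighted sum and the default weights are assumptions.

```python
import torch.nn.functional as F

def total_l1(generated, target, mask, w_mask=1.0, w_overall=1.0):
    """Total L1 loss combining Loss_Mask (rain-like region) and Loss_Overall (whole image).

    The weights and the additive combination are assumptions; the SSIM version is built
    the same way with an SSIM term in place of each L1 term.
    """
    loss_mask = F.l1_loss(generated * mask, target * mask)   # local detail of the mask region
    loss_overall = F.l1_loss(generated, target)              # global appearance
    return w_mask * loss_mask + w_overall * loss_overall
```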
Step S23: and constructing a discriminator Res2Net with grain-level multi-scale features, dividing an input feature map into four blocks after convolution, and distinguishing the fidelity of the recovered features in different blocks. It can improve the multi-scale feature extraction capability without increasing the amount of computation, thereby directing the generator in step S21 to restore the image texture more finely. Referring to fig. 4, the discriminator includes 3 convolutional layers, 1 Res2Net layer, 1 fully-connected layer, and 1 Sigmoid layer, where the Res2Net layer divides the feature map into four blocks of x1, x2, x3, and x 4. Except for x1, all blocks xi, i 2-4 need to pass through a convolution layer, a batch normalization layer and a ReLU layer, and the obtained result is defined as Ki (namely yi). The method comprises the specific steps that x2 is processed by a convolutional layer, a batch normalization layer and a ReLU layer to obtain y2, the first results (K2) of the processing of the convolutional layer, the batch normalization layer and the ReLU layer and x3 are processed by the convolutional layer, the batch normalization layer and the ReLU layer to obtain y3, the second results (K3) of the processing of the convolutional layer, the batch normalization layer and the ReLU layer and x4 are processed by the convolutional layer, the batch normalization layer and the ReLU layer to obtain y4, and finally y1, y2, y3 and y4 are spliced and output.
S30: the raindrop-like image database produced in S10 is divided into a training set and a test set.
S40: and inputting the training set obtained in the step S30 into the attention dual residual generation confrontation network model built in the step S20 in pairs, introducing an attention scheme in the training process to mine the prior knowledge of the generalized rain-like layer, iteratively updating the weight and loss of the generator and the discriminator for multiple times, and obtaining the generation models of multiple iterative versions.
Specifically, the loss of the generator in step S40 is a weighted sum loss based on the fusion policy. Structural Similarity (SSIM) can solve the problem of image distortion, and the SSIM loss function is expressed as follows:
[Formula image BDA0003640690940000091: SSIM loss of the generator, computed between the generated image and the clean background image with the mask applied]
where ∘ denotes element-wise multiplication, G denotes the generator network, R and I denote the clean background image and the image containing a rain-like layer respectively, M denotes the corresponding mask image, R ~ P_clean means that R is sampled from the distribution of clean background images, and I ~ P_rain-like layer means that I is sampled from the distribution of images containing a rain-like layer.
The L1 loss function in the generator is:
[Formula image BDA0003640690940000101: L1 loss of the generator]
Finally, the generator loss function based on the fusion strategy is:
[Formula image BDA0003640690940000102: combination of the adversarial objective V(G, D) with the SSIM and L1 losses weighted by r_1 and r_2]
where r_1 is set to 0.75, r_2 is set to 1.1, D is the discriminator, and V(G, D) computes the JS divergence between the real image and the generated image.
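A hedged sketch of the fusion-strategy generator loss: r_1 = 0.75 and r_2 = 1.1 follow the text, but the exact way the adversarial term V(G, D) is combined with the SSIM and L1 terms is an assumption, since the formula is published only as an image. The pytorch_msssim package is used purely for illustration; any differentiable SSIM works.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim   # third-party SSIM; assumed choice of implementation

def generator_loss(G, D, rainy, clean, mask, r1=0.75, r2=1.1):
    """Fusion-strategy generator loss: adversarial term + r1 * SSIM term + r2 * L1 term.

    The additive combination and the non-saturating adversarial term are assumptions.
    SSIM and L1 are each evaluated on the masked region and on the whole image, as
    described above. Images are assumed to be scaled to [0, 1].
    """
    fake = G(rainy)
    d_fake = D(fake)
    # Adversarial term (the discriminator ends with a Sigmoid, so BCE is used here).
    adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    # SSIM term: 1 - SSIM so that higher similarity gives lower loss.
    ssim_term = (1 - ssim(fake * mask, clean * mask, data_range=1.0)) + \
                (1 - ssim(fake, clean, data_range=1.0))
    # L1 term on the mask region and on the whole image.
    l1_term = F.l1_loss(fake * mask, clean * mask) + F.l1_loss(fake, clean)
    return adv + r1 * ssim_term + r2 * l1_term
```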
S50: and inputting the image containing the rainlike layer test set obtained in the step S30 into the generation model of each iteration version obtained in the step S40, and carrying out quantitative and qualitative tests on the images to ensure that a generator G outputs a clean background image which keeps real defect edges and texture details so as to select a global optimal model.
As an experimental example of the invention, the self-synthesized database is used for training and testing, and several well-known deraining methods (Attentive GAN, Pix2Pix, PReNet, IADN, DuRN-S-P and PReGAN) are introduced as comparison groups to highlight the superiority of the proposed method. Qualitative results are shown in FIG. 5a to FIG. 5h.
The quantitative evaluation results of the different methods are summarized in Table 1. The two indicators show that the proposed method makes the generated image closer to a clean background image from real industry. This is mainly because the proposed method exploits both channel attention and spatial attention, which strongly influences the processing results without additional computational overhead and makes it more sensitive to the removal of the rain-like layer.
Table 1: quantitative evaluation results of the different methods.
[Table images BDA0003640690940000103 and BDA0003640690940000111: scores of the compared methods on the two indicators]
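Table 1 is reproduced only as images, and the text does not name the two indicators; PSNR and SSIM are the usual choices for deraining benchmarks and are assumed here. The sketch below shows how a derained output could be scored against its clean ground truth with scikit-image.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(derained, clean):
    """Score one derained result against its clean ground truth (uint8 grayscale arrays)."""
    psnr = peak_signal_noise_ratio(clean, derained, data_range=255)
    ssim = structural_similarity(clean, derained, data_range=255)
    return psnr, ssim
```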
FIG. 5a to FIG. 5h show the qualitative results.
In FIG. 5, the result of the invention is compared with those of Attentive GAN, Pix2Pix, PReNet, IADN, DuRN-S-P and PReGAN. FIG. 5a is a real hot-rolled strip image containing a rain-like layer. FIG. 5b shows the image generated by Attentive GAN, in which the large water drops are barely visible, but it fails to overcome the challenges of water lines and fine white water drops and also introduces global artifacts. As can be seen from FIG. 5c, although the resulting image has no colour deviation, Pix2Pix does not completely remove the rain-like layer; in addition, it removes some real defects, which would greatly affect the subsequent defect-detection accuracy. FIG. 5d is generated by PReNet, which has good global attention; however, when the background of the original image is unclear, it cannot remove some tiny water drops. As shown in FIG. 5e, IADN restores clearly visible image texture details while eliminating most rain streaks, but still leaves many tiny rain-like artifacts. Neither FIG. 5f, generated by DuRN-S-P, nor FIG. 5g, obtained by PReGAN, can remove the fine white water beads and water marks, which increases the false detection rate of AVI instruments. As can be seen from FIG. 5h, the method provided by the invention removes even extremely fine rain-like false defects, and the generated image is more realistic.
The method for removing the rain-like layer from hot-rolled strip steel surface images based on an attention dual-residual generative adversarial network provided by the invention combines channel attention and spatial attention, so that the rain-like false defects are accurately located and removed; at the same time, the director introduced in the training stage mines prior knowledge of the generalized rain-like layer to reduce the occasional misjudgments caused by an active search strategy. Compared with several well-known deraining algorithms, the images produced by the proposed method after rain-like layer removal are closest to clean background images from real industry, which effectively alleviates the serious false alarms that mixed rain-like false defect clusters cause in AVI instruments. In addition, the invention provides a high-resolution data set for rain-like layer removal in automatic steel surface inspection. This continuously published data set is expected to motivate more rain-like layer removal methods for inspecting industrial sheet surfaces in real-world scenarios.
The above is only a preferred embodiment of the invention, and the scope of protection of the invention is not limited to the above embodiments; all technical solutions falling within the idea of the invention belong to its scope of protection. It should be noted that several improvements and modifications made by those skilled in the art without departing from the principle of the invention should also be regarded as falling within the scope of protection of the invention.

Claims (4)

1. An image rain-like layer removal method based on an attention dual-residual generative adversarial network, comprising a rain-like false defect construction method, a hot-rolled strip steel surface rain-like layer image database and an attention dual-residual generative adversarial network; the rain-like false defect construction method is used to reproduce, on clean original strip images, the shapes and spatial distribution of the mixed rain-like false defect clusters found in a real steel mill and to build the hot-rolled strip steel surface rain-like layer image database; the hot-rolled strip steel surface rain-like layer image database is used to train, test and analyse the attention dual-residual generative adversarial network model; the attention dual-residual generative adversarial network model is used to remove the rain-like false defects from strip images while preserving real defect edges and texture details;
the method is implemented in the following steps:
S10: for the three kinds of rain-like false defects (dispersed water drops, splashing water lines and fine white water drops), design corresponding construction methods and build a hot-rolled strip steel surface rain-like layer image database;
S20: construct an attention dual-residual generative adversarial network model consisting of a generator G, a director M and a discriminator D;
S30: divide the rain-like layer image database produced in step S10 into a training set and a test set;
S40: feed the training set obtained in step S30 in pairs into the attention dual-residual generative adversarial network model built in step S20, introduce an attention scheme during training to mine prior knowledge of the generalized rain-like layer, and iteratively update the weights and losses of the generator and the discriminator to obtain generator models of multiple iteration versions;
S50: feed the rain-like layer test-set images obtained in step S30 into each iteration version of the generator model obtained in step S40 and evaluate them quantitatively and qualitatively, so that the generator G outputs a clean background image that preserves real defect edges and texture details, and select the globally optimal model.
2. The image rain-like layer removal method based on an attention dual-residual generative adversarial network according to claim 1, wherein the rain-like false defect construction method and the hot-rolled strip steel surface rain-like layer image database of step S10 are implemented in the following steps:
first, a high-speed camera captures original strip surface images moving at high speed on the production line, yielding rain-like layer images and clean background images; then, corresponding construction methods are designed for the different rain-like false defects:
for dispersed water drops, to improve the generalization of the model, real water drops are first extracted from original images containing a rain-like layer, mixed with artificially generated raindrops, and then pasted onto a clean background image;
for splashing water lines, rain-like water lines with 4 slopes and 6 aspect ratios are synthesized from Gaussian noise and superimposed on the clean background image in a fixed proportion;
for fine white water drops, false defects of fine white water drops of different sizes are artificially simulated according to the pixel scale of the fine white drops observed in original images during high-speed rolling of the strip, and superimposed on the clean background image;
through this process, a database containing 1450 pairs of 1000 × 1000 pixel hot-rolled strip surface images is obtained, in which one half are original clean images and the corresponding other half are artificially produced images with mixed rain-like false defects; from these, 1300 pairs are randomly extracted as the training set described in S30, leaving 150 pairs as the test set described in S30.
3. The image rain-like layer removal method based on an attention dual-residual generative adversarial network according to claim 1, wherein step S20 of constructing the attention dual-residual generative adversarial network model specifically comprises the following steps:
step S21: inputting the training-set pairs of step S30 into the generator, the generator adopting an attention dual-residual network model based on an encoder-decoder architecture; for the bottleneck layer, a periodic combined structure is proposed, consisting of a DuRB-P, a DuRB-DS equipped with dual SE modules and another DuRB-P connected end to end; this structure is repeated three times in each iteration to progressively search for and recover the rain-like false defects, reducing the difference between the paired images;
step S22: in the training phase, a mask image is introduced between the generator and the discriminator as a director, forming a self-optimizing closed loop; the mask image is generated by a threshold-based binary classification strategy, expressed as:
[Formula image FDA0003640690930000031: the mask pixel is set to 1 where Pixel_rain-like layer differs from Pixel_clean by more than a threshold, and to 0 otherwise]
where Pixel_rain-like layer is the pixel value of the rain-like false defect image and Pixel_clean is the pixel value of the clean background image;
the director guides the generator, through a weighted sum of L1 and SSIM losses, to restore the local details of the mask region while also taking global features into account, so that the generated image is free of distortion; the total L1 (or SSIM) loss is constructed as follows:
[Formula image FDA0003640690930000032: Loss_Total combines Loss_Mask and Loss_Overall]
where Loss_Mask is the loss computed on the rain-like layer region, Loss_Overall is the loss computed on the whole image, and Loss_Total is the total L1 or SSIM loss;
step S23: constructing a discriminator with granular-level multi-scale features, Res2Net, dividing the input feature map into four blocks after convolution and discriminating the fidelity of the recovered features within the different blocks, so as to improve the multi-scale feature extraction capability without increasing the amount of computation, thereby guiding the generator of step S21 to restore image texture more finely.
4. The image rain-like layer removal method based on an attention dual-residual generative adversarial network according to claim 1, wherein the generator loss of step S40 is a weighted-sum loss based on a fusion strategy; structural similarity (SSIM) addresses image distortion, and the SSIM loss function is expressed as:
[Formula image FDA0003640690930000033: SSIM loss of the generator, computed between the generated image and the clean background image with the mask applied]
where ∘ denotes element-wise multiplication, G denotes the generator network, R and I denote the clean background image and the image containing a rain-like layer respectively, M denotes the corresponding mask image, R ~ P_clean means that R is sampled from the distribution of clean background images, and I ~ P_rain-like layer means that I is sampled from the distribution of images containing a rain-like layer;
the L1 loss function in the generator is:
[Formula image FDA0003640690930000041: L1 loss of the generator]
finally, the generator loss function based on the fusion strategy is:
[Formula image FDA0003640690930000042: combination of the adversarial objective V(G, D) with the SSIM and L1 losses weighted by r_1 and r_2]
where r_1 is set to 0.75, r_2 is set to 1.1, D is the discriminator, and V(G, D) computes the JS divergence between the real image and the generated image.
CN202210518394.4A 2022-05-12 2022-05-12 Image rain-like layer removal method based on attention dual-residual generative adversarial network Active CN114820379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210518394.4A CN114820379B (en) Image rain-like layer removal method based on attention dual-residual generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210518394.4A CN114820379B (en) Image rain-like layer removal method based on attention dual-residual generative adversarial network

Publications (2)

Publication Number Publication Date
CN114820379A (en) 2022-07-29
CN114820379B CN114820379B (en) 2024-04-26

Family

ID=82513426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210518394.4A Active CN114820379B (en) Image rain-like layer removal method based on attention dual-residual generative adversarial network

Country Status (1)

Country Link
CN (1) CN114820379B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020100196A4 (en) * 2020-02-08 2020-03-19 Juwei Guan A method of removing rain from single image based on detail supplement
CN110992275A (en) * 2019-11-18 2020-04-10 天津大学 Refined single image rain removing method based on generation countermeasure network
CN111145112A (en) * 2019-12-18 2020-05-12 华东师范大学 Two-stage image rain removing method and system based on residual error countermeasure refinement network
CN112198170A (en) * 2020-09-29 2021-01-08 合肥公共安全技术研究院 Detection method for identifying water drops in three-dimensional detection of outer surface of seamless steel pipe
CN112258402A (en) * 2020-09-30 2021-01-22 北京理工大学 Dense residual generation countermeasure network capable of rapidly removing rain
CN113191969A (en) * 2021-04-17 2021-07-30 南京航空航天大学 Unsupervised image rain removing method based on attention confrontation generation network
KR102288645B1 (en) * 2020-08-26 2021-08-10 한국해양과학기술원 Machine learning method and system for restoring contaminated regions of image through unsupervised learning based on generative adversarial network
CN113450288A (en) * 2021-08-04 2021-09-28 广东工业大学 Single image rain removing method and system based on deep convolutional neural network and storage medium
CN113469913A (en) * 2021-07-06 2021-10-01 中南大学 Hot-rolled strip steel surface water drop removing method based on gradual cycle generation countermeasure network
CN114119382A (en) * 2021-09-09 2022-03-01 浙江工业大学 Image raindrop removing method based on attention generation countermeasure network
KR20220059881A (en) * 2020-11-03 2022-05-10 고려대학교 산학협력단 Progressive rain removal method and apparatus via a recurrent neural network

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992275A (en) * 2019-11-18 2020-04-10 天津大学 Refined single image rain removing method based on generation countermeasure network
CN111145112A (en) * 2019-12-18 2020-05-12 华东师范大学 Two-stage image rain removing method and system based on residual error countermeasure refinement network
AU2020100196A4 (en) * 2020-02-08 2020-03-19 Juwei Guan A method of removing rain from single image based on detail supplement
KR102288645B1 (en) * 2020-08-26 2021-08-10 한국해양과학기술원 Machine learning method and system for restoring contaminated regions of image through unsupervised learning based on generative adversarial network
CN112198170A (en) * 2020-09-29 2021-01-08 合肥公共安全技术研究院 Detection method for identifying water drops in three-dimensional detection of outer surface of seamless steel pipe
CN112258402A (en) * 2020-09-30 2021-01-22 北京理工大学 Dense residual generation countermeasure network capable of rapidly removing rain
KR20220059881A (en) * 2020-11-03 2022-05-10 고려대학교 산학협력단 Progressive rain removal method and apparatus via a recurrent neural network
CN113191969A (en) * 2021-04-17 2021-07-30 南京航空航天大学 Unsupervised image rain removing method based on attention confrontation generation network
CN113469913A (en) * 2021-07-06 2021-10-01 中南大学 Hot-rolled strip steel surface water drop removing method based on gradual cycle generation countermeasure network
CN113450288A (en) * 2021-08-04 2021-09-28 广东工业大学 Single image rain removing method and system based on deep convolutional neural network and storage medium
CN114119382A (en) * 2021-09-09 2022-03-01 浙江工业大学 Image raindrop removing method based on attention generation countermeasure network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DONGWEI REN等: "Progressive Image Deraining Networks: A Better and Simpler Baseline", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION(CVPR)》, 9 January 2020 (2020-01-09) *
DING MINGHANG; DENG RANRAN; SHAO HENG: "Image super-resolution reconstruction method based on attention generative adversarial network", Computer Systems & Applications (计算机系统应用), no. 02, 15 February 2020 (2020-02-15) *
MENG JIAHAO; WANG DONGJI; SHUAI TIANPING: "Removing raindrops from a single image based on a generative adversarial network", Software (软件), no. 05, 15 May 2020 (2020-05-15) *

Also Published As

Publication number Publication date
CN114820379B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN111768388B (en) Product surface defect detection method and system based on positive sample reference
CN112581463A (en) Image defect detection method and device, electronic equipment, storage medium and product
CN112070727B (en) Metal surface defect detection method based on machine learning
CN114627383B (en) Small sample defect detection method based on metric learning
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN111353983A (en) Defect detection and identification method and device, computer readable medium and electronic equipment
CN115205264A (en) High-resolution remote sensing ship detection method based on improved YOLOv4
CN112132196B (en) Cigarette case defect identification method combining deep learning and image processing
CN111126115A (en) Violence sorting behavior identification method and device
CN114742799B (en) Industrial scene unknown type defect segmentation method based on self-supervision heterogeneous network
CN112669274B (en) Multi-task detection method for pixel-level segmentation of surface abnormal region
CN116071327A (en) Workpiece defect detection method based on deep neural network
Guo et al. A novel transformer-based network with attention mechanism for automatic pavement crack detection
Zhao et al. Research on detection method for the leakage of underwater pipeline by YOLOv3
CN117557784B (en) Target detection method, target detection device, electronic equipment and storage medium
Xu et al. Multiple guidance network for industrial product surface inspection with one labeled target sample
CN114022586A (en) Defect image generation method based on countermeasure generation network
CN117372876A (en) Road damage evaluation method and system for multitasking remote sensing image
CN114820379B (en) Image rain-like layer removing method for generating countermeasure network based on attention dual residual error
CN105354833A (en) Shadow detection method and apparatus
Mi et al. Dense residual generative adversarial network for rapid rain removal
CN113192018B (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
CN115909378A (en) Document text detection model training method and document text detection method
CN114862755A (en) Surface defect detection method and system based on small sample learning
CN115223033A (en) Synthetic aperture sonar image target classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant