CN112508083A - Image rain and fog removing method based on unsupervised attention mechanism - Google Patents

Info

Publication number
CN112508083A
Authority
CN
China
Prior art keywords
image
rain
fog
model
unsupervised
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011398742.6A
Other languages
Chinese (zh)
Other versions
CN112508083B (en)
Inventor
马子凡
宋智颖
郭雨婷
汤若聪
刘林峰
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202011398742.6A priority Critical patent/CN112508083B/en
Publication of CN112508083A publication Critical patent/CN112508083A/en
Application granted granted Critical
Publication of CN112508083B publication Critical patent/CN112508083B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/90: Determination of colour characteristics


Abstract

The invention discloses an image rain and fog removal method based on an unsupervised attention mechanism, which comprises the following steps: S1, taking the unsupervised de-raining network CycleGAN as the basic framework of the model, constructing each part of the model to obtain a complete Cycle-Derain model; S2, inputting the single rainy image to be processed into the Cycle-Derain model, which restores and reconstructs it to obtain a clear single image. The parts of the model constructed in S1 comprise a rain-removal part and a fog-removal part. The invention trains on unpaired rainy and rain-free images using a bidirectional generative adversarial network and the cycle-consistency loss principle, introduces an attention mechanism under unsupervised conditions to detect whether fog is present in the image, and combines a cyclic search-and-localization algorithm to process the rain and fog details of a single rainy image efficiently.

Description

Image rain and fog removing method based on unsupervised attention mechanism
Technical Field
The invention relates to an image processing method, in particular to a rain and fog removal method for a single rainy image based on an unsupervised attention mechanism, and belongs to the technical field of image processing.
Background
Image recognition is an important field of artificial intelligence, referring to techniques that identify objects and targets of various kinds in images. In recent years, with the continuous development of the internet and artificial intelligence, image recognition technology has been widely applied.
However, in everyday application scenarios, the effectiveness of image recognition is directly tied to the quality of the acquired image. In a rainy environment, for example, rain streaks and raindrops often greatly reduce the clarity of the captured image, which not only hinders an operator from extracting information from the image but also strongly affects the subsequent series of image processing steps.
Because effective information for detecting and removing rain is lacking, rain removal is more difficult for a single rainy image than for a video or a continuous image sequence, and research on single-image rain removal has therefore become an industry focus in recent years.
At present, there are two main image de-raining approaches: learning based on sparse-coding dictionaries and learning based on convolutional neural networks. The former regularizes the single-image de-raining process over mutually exclusive over-complete learned dictionaries, so that local patches of the de-rained background layer and the rain layer can be sparsely modeled in the learned dictionary. Sparse codes learned in this way discriminate well between the de-rained background layer and the rain layer. However, this method cannot fully resolve the ambiguity in the low-frequency channel, and cannot separate the layers effectively when the image background resembles rain or when raindrops are magnified. The latter approach detects and removes rain with a fully convolutional network, but because training requires large numbers of paired rainy and rain-free images, synthetic datasets are usually needed, so the method generalizes poorly to real scenes.
In addition, none of the conventional schemes considers the influence of fog on image content in real application environments, so a skilled person may reasonably conclude that no conventional single-image de-raining scheme actually achieves the expected effect in practice.
As such, a completely new rain and fog removal method for single rainy images would likely be of great help to the subsequent development of image processing, image recognition, and related technologies.
Disclosure of Invention
In view of the above-mentioned drawbacks of the prior art, an object of the present invention is to provide an image rain and fog removal method for a single rainy image based on an unsupervised attention mechanism, specifically as follows.
An image rain and fog removing method based on an unsupervised attention mechanism comprises the following steps:
s1, respectively constructing each part of the model by taking the unsupervised rain-removing network cycleGAN as a model basic framework to construct a complete Cycle-Derain model;
s2, inputting the single rain image to be processed into a Cycle-Derain model, and completing the restoration and reconstruction of the single rain image to obtain a clear single image;
the respective portions of the build model described in S1 include a rain removal portion and a fog removal portion of the build model.
Preferably, S1 comprises,
s11, constructing a rain removing part in the Cycle-Derain model, and specifically comprising the following steps:
s111, selecting an unsupervised rain removal network cycleGAN as a basic framework of the model, and enabling the constructed model to meet the cycle consistency loss by utilizing a plurality of rain removal generators and a plurality of rain removal discriminators;
s112, training the rain removing generator by using a public data set in the Internet, wherein the training process comprises a process from a rain image to a no-rain image and a process from the no-rain image to the rain image;
s113, training the rain removing discriminator, wherein the training mode is that a rain-free image generated by a rain image and a rain image generated by the rain-free image are respectively input into the rain removing discriminator to judge whether the images are real images;
and S114, circularly iterating S112 to S113 until the rain removing generator and the rain removing discriminator reach a Nash equilibrium state.
Preferably, the rain-removal generators comprise G_Gr, G_Fr, G_Fn and G_Gn, which together implement the forward mapping of the model

r → n_r → r̂

and the backward mapping

n → r_n → n̂,

wherein G_Gr is used for generating a rain-free image n_r from a true rainy image r, G_Fr is used for reconstructing a rainy image r̂ from the rain-free image n_r, G_Fn is used for generating a rainy image r_n from a true rain-free image n, and G_Gn is used for reconstructing a rain-free image n̂ from the rainy image r_n.
Preferably, the rain-removal discriminators comprise D_Gr, D_Fr, D_Fn and D_Gn, which respectively judge whether the images generated by the rain-removal generators G_Gr and G_Fn are real and whether the forward and backward mapping processes of the model satisfy cycle consistency;

wherein D_Gr is used for judging whether the generated rain-free image n_r is real, D_Fr is used for judging whether the forward mapping of the model satisfies cycle consistency, D_Fn is used for judging whether the generated rainy image r_n is real, and D_Gn is used for judging whether the backward mapping of the model satisfies cycle consistency.
Preferably, in S11,

the loss function used to judge whether an image is real is

L_GAN(G, D, X, Y) = E_{y∼p_data(y)}[log D(y)] + E_{x∼p_data(x)}[log(1 − D(G(x)))],

wherein x denotes a real input image, G(x) is the image produced by the rain-removal generator, and x and y obey the data distributions, i.e. x ∼ p_data(x) and y ∼ p_data(y);

the loss function used to judge the cycle consistency of an image under the mappings is

L_cyc(G, F) = E_{x∼p_data(x)}[‖F(G(x)) − x‖_1] + E_{y∼p_data(y)}[‖G(F(y)) − y‖_1],

wherein x denotes the real image input to the forward mapping, y denotes the real image input to the backward mapping, G is the rain-removal generator that produces a rain-free image from a rainy image, F is the rain-removal generator that produces a rainy image from a rain-free image, and x and y obey the data distributions, i.e. x ∼ p_data(x) and y ∼ p_data(y).
Preferably, S1 further comprises,

S12, constructing the fog-removal part of the Cycle-Derain model, which specifically comprises the following steps:

S121, using the defogging generator G_{S→T} to generate a fog-free image from the rain-free image output by the rain-removal part of the Cycle-Derain model, multiplying the generated image element-wise by the pixel-weight map obtained from the attention network to obtain the preliminary defogged layer s_f, and multiplying the input de-rained image element-wise by the complementary weight map to obtain a fog-free background map;

S122, superimposing the preliminary defogged layer s_f on the fog-free background map to generate a preliminary defogged image s′;

S123, judging whether the preliminary defogged image s′ is completely defogged, i.e. whether the pixel-weight map tends to zero; if so, outputting the preliminary defogged image s′; otherwise, feeding the preliminary defogged layer s_f into the defogging generator G_{S→T} as a new input image, superimposing the newly obtained fog-free background map on the original fog-free background map to form the complete background map, and repeating the preceding operations of S123 until the pixel-weight map tends to zero;

S124, training the defogging generator G_{S→T} with a public Internet dataset, the training covering the learning and processing of fog-layer details;

S125, training the defogging discriminator D_{S→T} by inputting the generated fog-free images into D_{S→T} to judge whether they are real images;

S126, iterating S124 to S125 cyclically until the defogging generator G_{S→T} and the defogging discriminator D_{S→T} reach a Nash equilibrium state.
Compared with the prior art, the invention has the advantages that:
the invention provides an image rain and fog removing method based on an unsupervised attention mechanism, which trains unpaired rain images and non-rain images by utilizing a bidirectional generation countermeasure network and a cycle consistency loss principle, introduces the attention mechanism under the unsupervised condition to detect whether fog exists in the images, and combines a cycle search positioning algorithm to realize efficient processing of rain and fog details in a single rain image.
Tests show that the method effectively removes rain streaks and rain-fog details from a single rainy image, with high de-raining efficiency and good de-raining performance.
In addition, the technical idea of the invention can be used as a basis for technical personnel in the field to apply a similar method to the construction of other related image processing models, so that the overall application prospect of the scheme is very wide.
The following detailed description of the embodiments of the present invention is provided in connection with the accompanying drawings to make the technical solutions of the present invention easier to understand and master.
Drawings
FIG. 1 is a schematic diagram of the architecture of the Cycle-Derain model of the present invention;
FIG. 2 is a schematic flow chart of an attention mechanism and a circular search positioning algorithm according to the present invention;
FIG. 3 is a schematic diagram of a rain removal generator for use in the present invention;
FIG. 4 is a schematic diagram of a rain removal discriminator used in the present invention;
fig. 5 is a schematic diagram of a feature extractor used in the present invention.
Detailed Description
The invention provides an image rain and fog removal method based on an unsupervised attention mechanism, which trains on unpaired rainy and rain-free images using a bidirectional generative adversarial network and the cycle-consistency loss principle, introduces an attention mechanism under unsupervised conditions to detect whether fog is present in an image, and combines a cyclic search-and-localization algorithm to process the rain and fog details of a single rainy image efficiently. The specific scheme of the invention is as follows.
An image rain and fog removing method based on an unsupervised attention mechanism comprises the following steps:
s1, taking the unsupervised rain-removing network cycleGAN as a model basic framework, respectively constructing each part of the model, and constructing a complete Cycle-Derain model.
Considering that heavy rainfall is usually accompanied by rain fog in real life, the scheme further divides the rainy image into a background layer, a rain layer and a fog layer. Correspondingly, a rain-removal part and a fog-removal part are designed separately in the Cycle-Derain model; the separately constructed parts of the model thus comprise a rain-removal part and a fog-removal part. The Cycle-Derain model architecture is shown in FIG. 1.
And S2, inputting the single rain image to be processed into a Cycle-Derain model, and completing the restoration and reconstruction of the single rain image to obtain a clear single image.
Further, S1 includes S11, constructing the rain-removal part of the Cycle-Derain model, which specifically comprises the following steps:
and S111, selecting an unsupervised rain removal network cycleGAN as a basic framework of the model, and enabling the constructed model to meet the cycle consistency loss by utilizing a plurality of rain removal generators and a plurality of rain removal discriminators.
Here the rain-removal generators comprise G_Gr, G_Fr, G_Fn and G_Gn, which together implement the forward mapping of the model

r → n_r → r̂

and the backward mapping

n → r_n → n̂,

wherein G_Gr generates a rain-free image n_r from a true rainy image r, G_Fr reconstructs a rainy image r̂ from the rain-free image n_r, G_Fn generates a rainy image r_n from a true rain-free image n, and G_Gn reconstructs a rain-free image n̂ from the rainy image r_n.
The rain-removal discriminators comprise D_Gr, D_Fr, D_Fn and D_Gn, which respectively judge whether the images generated by the rain-removal generators G_Gr and G_Fn are real and whether the forward and backward mappings of the model satisfy cycle consistency. Here D_Gr judges whether the generated rain-free image n_r is real, D_Fr judges whether the forward mapping of the model satisfies cycle consistency, D_Fn judges whether the generated rainy image r_n is real, and D_Gn judges whether the backward mapping of the model satisfies cycle consistency.
And S112, training the rain removing generator by using the public data set in the Internet, wherein the training process comprises a process from a rain image to a no-rain image and a process from the no-rain image to the rain image.
And S113, training the rain removing discriminator, wherein the training mode is that the rain-free image generated by the rain image and the rain image generated by the rain-free image are respectively input into the rain removing discriminator to judge whether the images are real images.
And S114, circularly iterating S112 to S113 until the rain removing generator and the rain removing discriminator reach a nash equilibrium state, so that the model can output a high-quality rain-free image.
It should be added that, for the operations in S11, the input image is first scaled to the target size of 256 × 256 × 3 with a resize command. The 256 × 256 × 3 input is then processed with 9 residual blocks, using stride-2 convolutions followed by instance normalization, which considers all elements of a single channel of a single sample, to improve the sharpness of the input image. After this processing, the rain-removal generator applies the ReLU activation function and then uses a stride-2 deconvolution network to generate a rain-free image that is still 256 × 256 × 3. For the mapping from the rain-free domain to the rainy domain, the generated rain-free image is processed by a stride-2 convolutional network, passed through the Tanh activation function, and then a stride-2 deconvolution network generates a rainy image of the same size; the backward mapping works in the same way. Throughout this process, the four generators constrain the loss function with the least-squares method so that it tends to a minimum; λ1 in the loss function is set to 10, and an Adam optimizer with batch size 1 is used to produce better results. The rain-removal generator structure is shown in fig. 3.
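The instance-normalization step mentioned above normalizes each channel of each sample over its spatial positions. A minimal numpy sketch (the shapes and the eps value here are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: normalize each (sample, channel) plane
    over its spatial dimensions, as done after the generator's
    stride-2 convolutions. x has shape (N, C, H, W)."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(2.0, 3.0, size=(1, 3, 8, 8))
y = instance_norm(x)
# each channel plane of y now has near-zero mean and near-unit variance
```

Because the statistics are per sample and per channel, the result is independent of batch composition, which suits the batch size of 1 used here.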
Correspondingly, the four rain-removal discriminators adopt a patch-level discriminator architecture with 70 × 70 PatchGANs, and constrain the loss function with the least-squares method in order to judge the authenticity and cycle consistency of the generated pictures. During processing, the image produced by the rain-removal generator is passed through a stride-2 convolutional network, fed into the Leaky ReLU activation function, passed through another stride-2 convolutional network with instance normalization, and after a further Leaky ReLU operation yields a 16 × 16 × 3 map used for judgment. The structure of the rain-removal discriminator is shown in fig. 4.
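As a quick sanity check on the sizes quoted above, four stride-2 convolution stages reduce a 256 × 256 input to 16 × 16; the 4 × 4 kernel and padding of 1 are assumed here (a common PatchGAN choice), since the text specifies only the stride:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Spatial output size of one convolution stage
    (kernel/padding values are an assumed PatchGAN-style setting)."""
    return (size + 2 * pad - kernel) // stride + 1

size = 256
for _ in range(4):          # four stride-2 stages
    size = conv_out(size)   # 256 -> 128 -> 64 -> 32 -> 16
```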
The loss function in S11 for judging whether an image is real is

L_GAN(G, D, X, Y) = E_{y∼p_data(y)}[log D(y)] + E_{x∼p_data(x)}[log(1 − D(G(x)))],

wherein x denotes a real input image, G(x) is the image generated by the rain-removal generator, and x and y obey the data distributions, i.e. x ∼ p_data(x) and y ∼ p_data(y).

In general, the more realistic the image produced by the generator, the smaller this loss function becomes; the more accurately the discriminator judges, the larger it becomes. In the training of this scheme, therefore, the rain-removal generator and discriminator must reach a Nash equilibrium state, at which point a high-quality de-rained image can be output. For any image, after being converted from the source domain to the target domain and finally back to the source domain, the result should be as close as possible to the original image; the difference between the finally generated image and the input real image is called the cycle-consistency loss, with loss function

L_cyc(G, F) = E_{x∼p_data(x)}[‖F(G(x)) − x‖_1] + E_{y∼p_data(y)}[‖G(F(y)) − y‖_1].

To minimize the difference between the final generated image and the input image, this loss function should be minimized.
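The cycle-consistency term above can be computed directly, weighted by λ1 = 10 as stated earlier. In this sketch the lambdas G and F are hypothetical invertible stand-ins, purely to make the arithmetic concrete; trained networks would replace them:

```python
import numpy as np

def cycle_loss(G, F, x, y):
    """L1 cycle-consistency loss: ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

# Toy stand-ins for the rain-removal generators (hypothetical).
G = lambda img: img + 1.0     # "rainy -> rain-free"
F = lambda img: img - 1.0     # "rain-free -> rainy", exact inverse of G

x = np.zeros((4, 4))          # stand-in rainy image
y = np.ones((4, 4))           # stand-in rain-free image
lam1 = 10.0                   # weight on the cycle term, per the text
total = lam1 * cycle_loss(G, F, x, y)   # adversarial terms omitted here
```

Since F inverts G exactly, the cycle loss is zero; any mismatch between the two mappings would make it positive.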
Further, S1 also includes S12, constructing the fog-removal part of the Cycle-Derain model, which specifically comprises the following steps:

S121, using the defogging generator G_{S→T} to generate a fog-free image from the rain-free image output by the rain-removal part of the Cycle-Derain model, multiplying the generated image element-wise by the pixel-weight map obtained from the attention network to obtain the preliminary defogged layer s_f, and multiplying the input de-rained image element-wise by the complementary weight map to obtain a fog-free background map.

S122, superimposing the preliminary defogged layer s_f on the fog-free background map to generate a preliminary defogged image s′.

S123, judging whether the preliminary defogged image s′ is completely defogged, i.e. whether the pixel-weight map tends to zero; if so, outputting the preliminary defogged image s′; otherwise, feeding the preliminary defogged layer s_f into the defogging generator G_{S→T} as a new input image, superimposing the newly obtained fog-free background map on the original fog-free background map to form the complete background map, and repeating the preceding operations of S123 until the pixel-weight map tends to zero.

S124, training the defogging generator G_{S→T} with a public Internet dataset, the training covering the learning and processing of fog-layer details.

S125, training the defogging discriminator D_{S→T} by inputting the generated fog-free images into D_{S→T} to judge whether they are real images.

S126, iterating S124 to S125 cyclically until the defogging generator G_{S→T} and the defogging discriminator D_{S→T} reach a Nash equilibrium state, so that the model can complete the restoration and reconstruction of the rainy image.
The specific operation of S12 will be described in detail below in correspondence with the above steps.
First, the input to the attention mechanism is the rain-free image s after preliminary rain removal; the generator G_{S→T} produces a fog-layer prediction map G_{S→T}(s). At the same time, s passes through the attention network A_s to obtain an attention map s_a, which reflects the weight of each pixel. Multiplying s_a and the prediction map G_{S→T}(s) element-wise over the RGB channels yields the preliminary defogged layer of the image, defined as s_f = s_a ⊙ G_{S→T}(s). The background layer s_b, needed once the fog layer has been processed, weights the remaining pixels by (1 − s_a) and multiplies element-wise with the original image: s_b = (1 − s_a) ⊙ s. The defogged layer and the background layer are then added to obtain the defogged image s′ = s_f + s_b = s_a ⊙ G_{S→T}(s) + (1 − s_a) ⊙ s.
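The layer decomposition just described can be sketched in a few lines of numpy; the shapes and random inputs are illustrative only:

```python
import numpy as np

def compose(s, g_s, s_a):
    """Recombine fog prediction and background with the attention map:
    s' = s_a * G(s) + (1 - s_a) * s, element-wise per RGB channel."""
    return s_a * g_s + (1.0 - s_a) * s

rng = np.random.default_rng(1)
s   = rng.uniform(size=(8, 8, 3))   # preliminary de-rained image
g_s = rng.uniform(size=(8, 8, 3))   # fog-layer prediction G_{S->T}(s)
a0  = np.zeros((8, 8, 1))           # attention ~ 0: no fog detected
out = compose(s, g_s, a0)           # input passes through unchanged
```

When the attention map is all zeros the composition returns the input untouched, matching the stopping criterion of S123; when it is all ones the whole image is replaced by the prediction.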
After s_a has been obtained by localization, the fog layer thus located is removed with the cyclic search-and-localization method. If the matrix of the attention map s_a tends to zero, i.e. the fog layer in the picture has been removed completely, s′ is output as the defogged image; otherwise the input s is updated to s_f and fed into the attention network again, repeating the localization judgment, element-wise multiplication and attention-map check, cycling until the matrix of s_a is as close to zero as possible.
The loss function of the cyclic search-and-localization method is

L(s_a) = min ‖s_a − 0‖,

where the minimization makes the weight of each pixel in the attention map s_a as small as possible; thanks to the introduced attention mechanism, the Cycle-Derain model can better recover the effective information in the image.
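The loop described above (recompute the attention map, stop once its norm is near zero, otherwise feed the defogged layer back in) can be sketched as follows; the decaying mock attention network and the identity generator are stand-ins, not the trained networks:

```python
import numpy as np

def cyclic_search(s, attention, generator, tol=1e-3, max_iters=50):
    """Repeat: compute the attention map; if it is ~zero, stop;
    otherwise form the new defogged layer s_f and search again."""
    for i in range(max_iters):
        s_a = attention(s)
        if np.linalg.norm(s_a) < tol:   # L(s_a) = ||s_a - 0|| ~ 0
            return s, i
        s = s_a * generator(s)          # s_f becomes the next input
    return s, max_iters

# Mock networks: attention shrinks as the residual fog shrinks.
attention = lambda s: 0.5 * np.clip(s, 0.0, 1.0)
generator = lambda s: s                 # identity stand-in
s0 = np.full((4, 4), 1.0)
result, iters = cyclic_search(s0, attention, generator)
```

The loop terminates after a handful of iterations here because each pass shrinks both the residual and its attention map, mirroring the intended behavior of avoiding redundant processing of already fog-free regions.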
In addition, to judge whether the finally generated defogged image s′ is real, a discriminator D_T is introduced, giving the adversarial loss function

L_GAN(G, D_T, S, T) = E_{t∼P_T(t)}[log D_T(t)] + E_{s∼P_S(s)}[log(1 − D_T(s′))].
The above attention mechanism and the working flow of the circular search positioning algorithm are shown in fig. 2.
It should be added that in S121, Inception v3 is used as the feature extractor, comprising two stride-1 convolutional networks and a ReLU activation function. The input preliminary de-rained image is first processed by one convolutional layer, passed through the ReLU activation function, and then processed by another convolutional layer to obtain the corresponding pixel-weight map. The feature extractor structure is shown in fig. 5.
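The conv → ReLU → conv → ReLU pipeline of the extractor can be illustrated with a plain stride-1 "valid" convolution; the 2 × 2 kernel and input are arbitrary examples (the real extractor is Inception v3, not this toy):

```python
import numpy as np

def conv2d(img, kernel):
    """Plain 'valid' 2-D convolution with stride 1, enough to
    illustrate the extractor's two stride-1 convolution layers."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

relu = lambda x: np.maximum(x, 0.0)

img = np.arange(36, dtype=float).reshape(6, 6)   # toy input plane
k = np.eye(2)                                    # illustrative kernel
feat = relu(conv2d(relu(conv2d(img, k)), k))     # conv-ReLU-conv-ReLU
```

Each valid convolution shrinks the plane by kernel_size − 1, so the 6 × 6 input becomes 5 × 5 and then 4 × 4.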
Meanwhile, for the operations in S121 and S125, when the defogging generator and discriminator are trained together, a KID (Kernel Inception Distance) metric with an unbiased estimator is used to constrain the loss function and enhance reliability. KID quantifies the features of the real preliminary de-rained image and the generated rain- and fog-free image, expressing the difference between the generated image and the real image as a squared maximum mean discrepancy; the lower the value, the more visual similarity the real and generated images share.
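A sketch of an unbiased KID-style estimate over feature vectors; the cubic polynomial kernel is the choice commonly used for KID and is an assumption here, since the text names only "a KID algorithm with an unbiased estimator":

```python
import numpy as np

def kid(x, y, degree=3):
    """Unbiased squared-MMD estimate with the polynomial kernel
    k(a, b) = (a.b/d + 1)^degree. x, y: (n, d) feature matrices."""
    d = x.shape[1]
    kxx = (x @ x.T / d + 1) ** degree
    kyy = (y @ y.T / d + 1) ** degree
    kxy = (x @ y.T / d + 1) ** degree
    n, m = len(x), len(y)
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))  # drop diagonal
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))  # for unbiasedness
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
real = rng.normal(size=(64, 16))                      # "real" features
close = real + rng.normal(scale=0.1, size=(64, 16))   # similar distribution
far = real + 5.0                                      # mismatched distribution
# lower KID = more shared visual similarity
```

Comparing the two candidate sets against the real features, the nearby distribution scores far lower than the shifted one, which is the property exploited during training.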
In summary, the main advantages of the present invention are shown in the following aspects:
firstly, the method mainly uses the unsupervised de-raining network CycleGAN to migrate a single rainy image from the source domain to the target domain and back to the source domain, achieving overall correspondence of the images by constraining the cycle-consistency loss;
secondly, the method introduces an unsupervised attention mechanism that assigns weights to the pixels of the generated weight map and multiplies it element-wise with the prediction map on each RGB channel, thereby accurately locating the fog layer in a single rainy image;
in addition, the method introduces a cyclic search-and-localization algorithm so that the fog-free background map and the foreground map are processed separately; during operation it iteratively judges whether the fog layer has been completely removed and, if not, feeds the foreground map back into the cycle, which reduces the complexity of image processing and effectively avoids redundant processing of fog-free image regions.
Tests show that the method effectively removes rain streaks and rain-fog details from a single rainy image, with high de-raining efficiency and good de-raining performance. The technical idea of the invention can serve as a basis for those skilled in the art to apply similar methods to the construction of other related image processing models, so the overall application prospects of the scheme are very broad.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Finally, it should be understood that although the present description refers to embodiments, not every embodiment contains only a single technical solution, and such description is for clarity only, and those skilled in the art should integrate the description, and the technical solutions in the embodiments can be appropriately combined to form other embodiments understood by those skilled in the art.

Claims (6)

1. An image rain and fog removing method based on an unsupervised attention mechanism is characterized by comprising the following steps:
s1, respectively constructing each part of the model by taking the unsupervised rain-removing network cycleGAN as a model basic framework to construct a complete Cycle-Derain model;
s2, inputting the single rain image to be processed into a Cycle-Derain model, and completing the restoration and reconstruction of the single rain image to obtain a clear single image;
the respective portions of the build model described in S1 include a rain removal portion and a fog removal portion of the build model.
2. The image rain and fog removing method based on an unsupervised attention mechanism according to claim 1, wherein S1 comprises,
s11, constructing a rain removing part in the Cycle-Derain model, and specifically comprising the following steps:
s111, selecting an unsupervised rain removal network cycleGAN as a basic framework of the model, and enabling the constructed model to meet the cycle consistency loss by utilizing a plurality of rain removal generators and a plurality of rain removal discriminators;
s112, training the rain removing generator by using a public data set in the Internet, wherein the training process comprises a process from a rain image to a no-rain image and a process from the no-rain image to the rain image;
s113, training the rain removing discriminator, wherein the training mode is that a rain-free image generated by a rain image and a rain image generated by the rain-free image are respectively input into the rain removing discriminator to judge whether the images are real images;
and S114, circularly iterating S112 to S113 until the rain removing generator and the rain removing discriminator reach a Nash equilibrium state.
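The alternating training of S112 to S114 can be sketched as a simple loop. This is a minimal illustration, assuming hypothetical `gen_step`/`disc_step` callables that each perform one update and return the current loss; the claim names a Nash equilibrium but gives no stopping test, so loss stabilization is used here as a practical proxy:

```python
def train_to_equilibrium(gen_step, disc_step, max_iters=1000, tol=1e-6):
    """Alternate generator updates (S112) and discriminator updates (S113),
    iterating (S114) until neither loss changes by more than `tol` --
    a practical stand-in for the Nash equilibrium named in the claim."""
    prev_g, prev_d = float("inf"), float("inf")
    for i in range(1, max_iters + 1):
        g_loss = gen_step()   # one generator training step
        d_loss = disc_step()  # one discriminator training step
        if abs(prev_g - g_loss) < tol and abs(prev_d - d_loss) < tol:
            return i          # losses have stabilized
        prev_g, prev_d = g_loss, d_loss
    return max_iters
```

In a real CycleGAN training run the two steps would backpropagate through the generator and discriminator networks; here they are opaque callables so the loop structure stands on its own.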
3. The image rain and fog removing method based on an unsupervised attention mechanism according to claim 2, characterized in that: the plurality of rain removal generators include G_Gr, G_Fr, G_Fn and G_Gn, which jointly implement the forward mapping of the model

r → G_Gr(r) = n_r → G_Fr(n_r) = r̂

and the backward mapping

n → G_Fn(n) = r_n → G_Gn(r_n) = n̂,

wherein G_Gr is used for generating a rain-free image n_r from a real rainy image r, G_Fr is used for reconstructing a rainy image r̂ from the rain-free image n_r, G_Fn is used for generating a rainy image r_n from a real rain-free image n, and G_Gn is used for reconstructing a rain-free image n̂ from the rainy image r_n.
4. The image rain and fog removing method based on an unsupervised attention mechanism according to claim 3, characterized in that: the plurality of rain removal discriminators include D_Gr, D_Fr, D_Fn and D_Gn, which respectively judge whether the images generated by the rain removal generators G_Gr and G_Fn are real and whether the forward and backward mapping processes of the model satisfy cycle consistency;
wherein D_Gr is used for judging whether the generated rain-free image n_r is real, D_Fr is used for judging whether the forward mapping of the model satisfies cycle consistency, D_Fn is used for judging whether the generated rainy image r_n is real, and D_Gn is used for judging whether the backward mapping of the model satisfies cycle consistency.
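The forward and backward mappings of claims 3 and 4 amount to two function compositions. A minimal sketch, assuming the four generators are plain callables (the real ones are convolutional networks):

```python
def forward_cycle(r, G_Gr, G_Fr):
    """Forward mapping: rainy r -> rain-free n_r -> reconstructed rainy r_hat."""
    n_r = G_Gr(r)      # remove rain
    r_hat = G_Fr(n_r)  # put rain back (checked later for cycle consistency)
    return n_r, r_hat

def backward_cycle(n, G_Fn, G_Gn):
    """Backward mapping: rain-free n -> rainy r_n -> reconstructed rain-free n_hat."""
    r_n = G_Fn(n)      # add rain
    n_hat = G_Gn(r_n)  # remove it again
    return r_n, n_hat
```

The discriminators D_Fr and D_Gn then compare r̂ with r and n̂ with n, which is what the cycle consistency loss of claim 5 measures.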
5. The image rain and fog removing method based on an unsupervised attention mechanism according to claim 4, characterized in that, in S11,
the loss function for judging whether an image is real is

L_GAN(G, D, X, Y) = E_{y∼p_data(y)}[log D(y)] + E_{x∼p_data(x)}[log(1 − D(G(x)))],

wherein x denotes a real image, G(x) is the image generated by the rain removal generator, and x and y obey the probability distributions x ∼ p_data(x) and y ∼ p_data(y);

the loss function for judging the cycle consistency of an image during the mapping process is

L_cyc(G, F) = E_{x∼p_data(x)}[‖F(G(x)) − x‖₁] + E_{y∼p_data(y)}[‖G(F(y)) − y‖₁],

wherein x denotes the real image input to the forward mapping, y denotes the real image input to the backward mapping, G is the rain removal generator that generates a rain-free image from a rainy image, F is the rain removal generator that generates a rainy image from a rain-free image, and x and y obey x ∼ p_data(x) and y ∼ p_data(y).
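In code, the two losses of claim 5 reduce to a few numpy lines. This is an illustrative sketch over arrays of discriminator outputs and images, not the patent's implementation:

```python
import numpy as np

def adversarial_loss(d_real, d_fake):
    """L_GAN = E[log D(y)] + E[log(1 - D(G(x)))];
    d_real / d_fake are discriminator outputs in (0, 1)."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

def cycle_consistency_loss(x, y, G, F):
    """L_cyc = E[|F(G(x)) - x|] + E[|G(F(y)) - y|]: the L1 reconstruction
    error of the forward and backward cycles."""
    forward = np.mean(np.abs(F(G(x)) - x))
    backward = np.mean(np.abs(G(F(y)) - y))
    return forward + backward
```

When G and F invert each other exactly, the cycle loss is zero; the generators are trained to minimize it while the discriminators D_Fr and D_Gn drive the reconstructions toward the real inputs.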
6. The image rain and fog removing method based on an unsupervised attention mechanism according to claim 4, characterized in that S1 further comprises:
S12, constructing the fog removal part in the Cycle-Derain model, which specifically comprises the following steps:
S121, using the defogging generator G_S-T to generate a fog-free image from the rain-free image output by the rain removal part of the Cycle-Derain model, and multiplying the generated fog-free image bitwise by the pixel weight map produced by the attention mechanism network to obtain a preliminary defogging layer s_f; bitwise multiplying the input rain-removed image by the computed weight map to obtain a fog-free background map;
S122, superimposing the preliminary defogging layer s_f on the fog-free background map to generate a preliminary defogged image s';
S123, judging whether the preliminary defogged image s' is completely defogged, i.e. whether the pixel weight map tends to zero; if so, outputting the preliminary defogged image s'; otherwise, feeding the preliminary defogging layer s_f as a new input image into the defogging generator G_S-T, superimposing the newly obtained fog-free background map on the original fog-free background map to form the complete background map, and repeating the preceding operations of S123 until the pixel weight map tends to zero;
S124, training the defogging generator G_S-T with a public Internet data set, the training process including learning and processing the details of the fog layer;
S125, training the defogging discriminator D_S-T by inputting the generated fog-free image into D_S-T, which judges whether it is a real image;
S126, iterating S124 to S125 until the defogging generator G_S-T and the defogging discriminator D_S-T reach a Nash equilibrium.
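The loop of S121 to S123 can be sketched as iterative "fog peeling". The sketch below assumes hypothetical `generator` and `attention` callables returning a defogged image and a [0, 1] pixel weight map respectively; the exact compositing in the patent may differ:

```python
import numpy as np

def iterative_defog(s, generator, attention, eps=1e-3, max_rounds=10):
    """S121-S123 as a loop: each round, the attention weights mark remaining
    fog; the weighted defogged output becomes the preliminary layer s_f, the
    unweighted part is accumulated into the fog-free background, and the
    loop stops once the weight map tends to zero."""
    background = np.zeros_like(s)
    current = s
    for _ in range(max_rounds):
        w = attention(current)             # pixel weight map (fog confidence)
        s_f = generator(current) * w       # preliminary defogging layer
        background += current * (1.0 - w)  # fog-free background contribution
        if w.mean() < eps:                 # weight map tends to zero (S123)
            break
        current = s_f                      # feed s_f back into the generator
    return background
```

On an already fog-free input the attention map is (near) zero everywhere, so the loop terminates after a single round and returns the input unchanged.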
CN202011398742.6A 2020-12-02 2020-12-02 Image rain and fog removing method based on unsupervised attention mechanism Active CN112508083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011398742.6A CN112508083B (en) 2020-12-02 2020-12-02 Image rain and fog removing method based on unsupervised attention mechanism


Publications (2)

Publication Number Publication Date
CN112508083A true CN112508083A (en) 2021-03-16
CN112508083B CN112508083B (en) 2022-09-20

Family

ID=74968109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011398742.6A Active CN112508083B (en) 2020-12-02 2020-12-02 Image rain and fog removing method based on unsupervised attention mechanism

Country Status (1)

Country Link
CN (1) CN112508083B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850850A (en) * 2015-04-05 2015-08-19 中国传媒大学 Binocular stereoscopic vision image feature extraction method combining shape and color
CN110992275A (en) * 2019-11-18 2020-04-10 天津大学 Refined single image rain removing method based on generation countermeasure network
CN111179187A (en) * 2019-12-09 2020-05-19 南京理工大学 Single image rain removing method based on cyclic generation countermeasure network
CN111652812A (en) * 2020-04-30 2020-09-11 南京理工大学 Image defogging and rain removing algorithm based on selective attention mechanism


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033687A (en) * 2021-04-02 2021-06-25 西北工业大学 Target detection and identification method under rain and snow weather condition
CN113191969A (en) * 2021-04-17 2021-07-30 南京航空航天大学 Unsupervised image rain removing method based on attention confrontation generation network
CN113393385A (en) * 2021-05-12 2021-09-14 广州工程技术职业学院 Unsupervised rain removal method, system, device and medium based on multi-scale fusion
CN113393385B (en) * 2021-05-12 2024-01-02 广州工程技术职业学院 Multi-scale fusion-based unsupervised rain removing method, system, device and medium
CN113139922A (en) * 2021-05-31 2021-07-20 中国科学院长春光学精密机械与物理研究所 Image defogging method and defogging device
CN113139922B (en) * 2021-05-31 2022-08-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and defogging device
CN113256538A (en) * 2021-06-23 2021-08-13 浙江师范大学 Unsupervised rain removal method based on deep learning
CN113537057A (en) * 2021-07-14 2021-10-22 山西中医药大学 Facial acupuncture point automatic positioning detection system and method based on improved cycleGAN
CN113554568A (en) * 2021-08-03 2021-10-26 东南大学 Unsupervised circulating rain removal network method based on self-supervision constraint and unpaired data
CN114332460A (en) * 2021-12-07 2022-04-12 合肥工业大学 Semi-supervised single image rain removal processing method
CN114332460B (en) * 2021-12-07 2024-04-05 合肥工业大学 Semi-supervised single image rain removing processing method
CN116958468A (en) * 2023-07-05 2023-10-27 中国科学院地理科学与资源研究所 Mountain snow environment simulation method and system based on SCycleGAN

Also Published As

Publication number Publication date
CN112508083B (en) 2022-09-20

Similar Documents

Publication Publication Date Title
CN112508083B (en) Image rain and fog removing method based on unsupervised attention mechanism
CN113658051B (en) Image defogging method and system based on cyclic generation countermeasure network
CN110992275B (en) Refined single image rain removing method based on generation of countermeasure network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN111861901A (en) Edge generation image restoration method based on GAN network
CN113450288B (en) Single image rain removing method and system based on deep convolutional neural network and storage medium
CN111861925A (en) Image rain removing method based on attention mechanism and gate control circulation unit
CN113076957A (en) RGB-D image saliency target detection method based on cross-modal feature fusion
CN113298734B (en) Image restoration method and system based on mixed hole convolution
CN111815526B (en) Rain image rainstrip removing method and system based on image filtering and CNN
CN112949553A (en) Face image restoration method based on self-attention cascade generation countermeasure network
CN112149526B (en) Lane line detection method and system based on long-distance information fusion
CN113256538B (en) Unsupervised rain removal method based on deep learning
CN112686822B (en) Image completion method based on stack generation countermeasure network
CN112686817B (en) Image completion method based on uncertainty estimation
CN114155165A (en) Image defogging method based on semi-supervision
CN113947538A (en) Multi-scale efficient convolution self-attention single image rain removing method
CN116051407A (en) Image restoration method
CN113962332B (en) Salient target identification method based on self-optimizing fusion feedback
CN113298232B (en) Infrared spectrum blind self-deconvolution method based on deep learning neural network
CN116958317A (en) Image restoration method and system combining edge information and appearance stream operation
CN114943655A (en) Image restoration system for generating confrontation network structure based on cyclic depth convolution
Mandal et al. Neural architecture search for image dehazing
Zhu et al. HDRD-Net: High-resolution detail-recovering image deraining network
Wu et al. Semantic image inpainting based on generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant