CN115619677A - Image defogging method based on improved cycleGAN - Google Patents

Image defogging method based on improved cycleGAN

Info

Publication number
CN115619677A
Authority
CN
China
Prior art keywords: image, defogging, cyclegan, loss, attention module
Prior art date
Legal status
Pending
Application number
CN202211336414.2A
Other languages
Chinese (zh)
Inventor
于天河
高赫
Current Assignee
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN202211336414.2A priority Critical patent/CN115619677A/en
Publication of CN115619677A publication Critical patent/CN115619677A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning


Abstract

An image defogging method based on an improved CycleGAN relates to the technical field of image processing and addresses the problems of incomplete defogging and color distortion in the prior art. The method comprises obtaining a foggy image to be processed and defogging it with a defogging network to obtain a defogged image. The defogging network is obtained through the following steps. Step one: acquire foggy images and clear images. Step two: construct a cycle-consistent generative adversarial network model. Step three: optimize the losses between the foggy and clear images, and iteratively train the model until convergence to obtain the defogging network. The method not only effectively removes fog of different concentrations, but also improves the quality of color and detail recovery and generates clear, natural fog-free images; that is, it ensures thorough defogging without color distortion.

Description

Image defogging method based on improved cycleGAN
Technical Field
The invention relates to the technical field of image processing, in particular to a single image defogging method based on improved cycleGAN.
Background
In hazy weather, the atmosphere contains large numbers of smoke and dust particles that absorb and scatter light during transmission, so that the images reaching an imaging device suffer from low contrast and dull colors. With the advances in mobile phones and other photographic equipment, integrating a defogging algorithm into such devices so that people can see clear images is of great significance and brings convenience to many aspects of life and work.
Currently popular defogging algorithms fall mainly into three categories: defogging algorithms based on image enhancement, defogging algorithms based on physical models, and defogging algorithms based on deep learning. Image-enhancement-based algorithms mainly include histogram equalization, homomorphic filtering, wavelet transformation, the Retinex algorithm, and the like; however, these algorithms do not start from the principle of fog formation and have no explicit defogging objective, so problems such as local over-enhancement and color distortion may occur. Physical-model-based defogging algorithms usually require certain prior knowledge. With the development of deep learning, applying it to the field of image defogging has achieved good results, and representative networks include DehazeNet, AOD-Net, GCANet, and the like; however, these require paired datasets and are prone to incomplete defogging and color distortion.
Disclosure of Invention
The purpose of the invention is: aiming at the problems of incomplete image defogging and color distortion in the prior art, to provide a single image defogging method based on an improved CycleGAN.
The technical scheme adopted by the invention for solving the technical problems is as follows:
an image defogging method based on an improved CycleGAN comprises obtaining a foggy image to be processed and defogging it with a defogging network to obtain a defogged image;
the defogging network is obtained through the following steps:
step one: acquire foggy images and clear images;
step two: construct a cycle-consistent generative adversarial network model;
step three: optimize the losses between the foggy and clear images, and iteratively train the cycle-consistent generative adversarial network model until convergence to obtain the defogging network.
Further, obtaining the defogging network specifically comprises:
using two generators and two multi-scale discriminators to form two forward passes that constitute a cyclic structure;
the two generators are a generator G and a generator F;
the generator G converts a foggy image into a fog-free image;
the generator F converts a fog-free image into a foggy image;
the two multi-scale discriminators are the multi-scale discriminator D_X and the multi-scale discriminator D_Y;
the multi-scale discriminator D_X judges whether an image is a real foggy image;
the multi-scale discriminator D_Y judges whether an image is a real fog-free image;
finally, the generators and discriminators undergo repeated adversarial training with two adversarial losses, a cycle-consistency loss and a structural loss to obtain the defogging network.
Further, the generator includes an encoder, a decoder, and residual blocks containing a feature attention module;
the encoder comprises two sets of convolutional layers, an average pooling function and a Relu activation function;
the decoder comprises two deconvolution blocks and one convolution block;
the feature attention module comprises a channel attention module and a pixel attention module;
the channel attention module first converts the input feature map F_c into a channel descriptor g_c by the global average pooling H_c, then passes g_c through a convolutional layer, a Relu activation function, a convolutional layer and a Sigmoid activation function to obtain the channel weight A_c, and multiplies it element-wise with the input feature map to obtain the channel attention output F_c*;
the pixel attention module passes the input feature map F_c* through a convolutional layer, a Relu activation function, a convolutional layer and a Sigmoid activation function to obtain the pixel weight A_p, and multiplies it element-wise with the input feature map to obtain the pixel attention output F̃.
Further, g_c, F_c* and A_c in the channel attention module are respectively expressed as:

g_c = H_c(F_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)

F_c* = A_c ⊗ F_c

A_c = Sigmoid(Conv(Relu(Conv(g_c))))

wherein X_c(i, j) represents the value of the c-th channel of the input at coordinate (i, j), ⊗ denotes element-wise multiplication, and the global average pooling function H_c converts a feature map of input size C × H × W into size C × 1 × 1, encoding the global information of the input feature map.
Further, A_p and F̃ in the pixel attention module are expressed as:

A_p = Sigmoid(Conv(Relu(Conv(F_c*))))

F̃ = A_p ⊗ F_c*
Further, the output of the feature attention module is expressed as:

F̃ = A_p ⊗ (A_c ⊗ F_c)
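The attention equations above translate directly into a short PyTorch sketch, given here as an illustration; the 1 × 1 kernel sizes and the channel reduction ratio are assumptions not specified by the patent.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # g_c = GAP(F_c); A_c = Sigmoid(Conv(Relu(Conv(g_c)))); F* = A_c * F_c
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)  # H_c: C x H x W -> C x 1 x 1
        self.weight = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())
    def forward(self, f):
        return self.weight(self.gap(f)) * f  # element-wise product

class PixelAttention(nn.Module):
    # A_p = Sigmoid(Conv(Relu(Conv(F*)))); output = A_p * F*
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.weight = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, 1),  # one weight per pixel
            nn.Sigmoid())
    def forward(self, f_star):
        return self.weight(f_star) * f_star

class FeatureAttention(nn.Module):
    # Channel attention followed by pixel attention: A_p (x) (A_c (x) F)
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)
    def forward(self, f):
        return self.pa(self.ca(f))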
further, the loss of the defogging network is expressed as:
L=L LsGAN (G,D Y ,X,Y)+L LSGAN (F,D x ,Y,X)+λ 1 L cyc (G,F)+λ 2 L s
wherein L is LSGAN Denotes the resistance to loss, L cyc Denotes loss of cyclic consistency, L s Denotes the structural loss, λ 1 And λ 2 Representing the loss weight.
Further, the adversarial losses are expressed as:

L_LSGAN(G, D_Y, X, Y) = E_{y ~ p_data(y)}[(D_Y(y) − 1)²] + E_{x ~ p_data(x)}[D_Y(G(x))²] + E_{y_s ~ p_data(y_s)}[(D_Y2(y_s) − 1)²] + E_{x ~ p_data(x)}[D_Y2(G_s(x))²]

L_LSGAN(F, D_X, Y, X) = E_{x ~ p_data(x)}[(D_X(x) − 1)²] + E_{y ~ p_data(y)}[D_X(F(y))²] + E_{x_s ~ p_data(x_s)}[(D_X2(x_s) − 1)²] + E_{y ~ p_data(y)}[D_X2(F_s(y))²]

wherein p_data(x), p_data(y), p_data(x_s) and p_data(y_s) respectively represent the true distributions of the foggy images, the fog-free images, the downsampled foggy images and the downsampled fog-free images; D_X, D_X2 and D_Y, D_Y2 respectively denote the discriminators at the two scales of the multi-scale discriminators D_X and D_Y; and G(x), F(y) and G_s(x), F_s(y) respectively denote the images generated by the generators G and F and those images downsampled by a factor of 2.
Further, the cycle-consistency loss is expressed as:

L_cyc(G, F) = E_{x ~ p_data(x)}[‖F(G(x)) − x‖_1] + E_{y ~ p_data(y)}[‖G(F(y)) − y‖_1]

wherein F(G(x)) denotes the cyclic reconstruction of the original image x, G(F(y)) denotes the cyclic reconstruction of the original image y, and ‖·‖_1 denotes the L_1 norm.
Further, the structural loss is expressed as:

L_s = 1 − SSIM(x, y)

SSIM(x, y) = [(2 μ_x μ_M + C_1)(2 σ_Mx + C_2)] / [(μ_x² + μ_M² + C_1)(σ_x² + σ_M² + C_2)] · [(2 μ_y μ_N + C_1)(2 σ_Ny + C_2)] / [(μ_y² + μ_N² + C_1)(σ_y² + σ_N² + C_2)]

wherein M = F(G(x)), N = G(F(y)), and SSIM is the structural similarity; μ_x and σ_x² respectively denote the mean and variance of x, and σ_Mx denotes the covariance of x and M; μ_M and σ_M² respectively denote the mean and variance of M; μ_y and σ_y² respectively denote the mean and variance of y, and σ_Ny denotes the covariance of y and N; μ_N and σ_N² respectively denote the mean and variance of N; and C_1 and C_2 are constant terms.
The invention has the beneficial effects that:
the method not only effectively removes fog of different concentrations, but also improves the quality of color and detail recovery and generates clear, natural fog-free images; that is, it ensures thorough defogging without color distortion.
Drawings
FIG. 1 is a schematic flow diagram of the present application;
FIG. 2 is a schematic diagram of a generator network of the improved cycleGAN of the present application;
FIG. 3 is a schematic view of an attention mechanism module as used herein;
FIG. 4 is a schematic diagram of a residual block composed of attention modules as used in the present application;
FIG. 5 is a schematic diagram of a multi-scale discriminator used in the present application;
FIG. 6 is a graph comparing the defogging results of the present application.
Detailed Description
It should be noted that, in the present invention, the embodiments disclosed in the present application may be combined with each other without conflict.
The first specific embodiment is described with reference to FIG. 1. The single image defogging method based on the improved CycleGAN in this embodiment includes the following steps:
collecting images with fog and without fog, and constructing a defogged image data set;
constructing a generator network and a multi-scale discriminator based on an attention mechanism, and optimizing a loss function;
training a generator and a discriminator using a defogged image dataset; and inputting the foggy image into a generator to obtain a generated image, calculating an error through a loss function, feeding the error back to the network through a back propagation algorithm, and updating network parameters by the generator and the discriminator.
And inputting the image needing defogging into a trained generator to obtain the defogged image.
In this embodiment, the defogged image dataset is obtained by randomly extracting 2000 foggy images and fog-free images from the RESIDE dataset as training samples, cropping and scaling them, and uniformly resizing both groups of images to 256 × 256 pixels.
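A minimal sketch of this preprocessing, assuming a torchvision pipeline; the directory names reside/hazy and reside/clear are hypothetical, and the normalization to [-1, 1] is a common GAN convention assumed here rather than stated by the patent.

from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

# Crop/scale, then resize both image groups uniformly to 256 x 256.
preprocess = transforms.Compose([
    transforms.Resize(286),        # assumed intermediate size before cropping
    transforms.RandomCrop(256),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

hazy_set = ImageFolder("reside/hazy", transform=preprocess)    # hypothetical path
clear_set = ImageFolder("reside/clear", transform=preprocess)  # hypothetical path
hazy_loader = DataLoader(hazy_set, batch_size=8, shuffle=True)
clear_loader = DataLoader(clear_set, batch_size=8, shuffle=True)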
In this embodiment, the generator network comprises an encoder, a decoder, and residual blocks containing the feature attention module, as shown in FIG. 2 (see the code sketch after the residual block description below). The encoder consists of two groups of convolutional layers with average pooling and Relu activation functions; the decoder consists of two deconvolution blocks and one convolution block; the feature attention module comprises a channel attention module and a pixel attention module.
In this embodiment, as shown in FIG. 3, the feature attention module consists of a channel attention module and a pixel attention module. The channel attention first converts the input feature map into a channel descriptor g_c through global average pooling, passes g_c through a convolutional layer, a Relu activation function, a convolutional layer and a Sigmoid activation function, and multiplies the result element-wise with the input feature map to obtain the channel attention output F_c*. The formulas are as follows:

g_c = H_c(F_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)

A_c = Sigmoid(Conv(Relu(Conv(g_c))))

F_c* = A_c ⊗ F_c

The pixel attention passes the input feature map F_c* through a convolutional layer, a Relu activation function, a convolutional layer and a Sigmoid activation function, and multiplies the result element-wise with the input feature map to obtain the pixel attention output F̃. The formulas are as follows:

A_p = Sigmoid(Conv(Relu(Conv(F_c*))))

F̃ = A_p ⊗ F_c*

Thus, the output of the feature attention module is:

F̃ = A_p ⊗ (A_c ⊗ F_c)
In this embodiment, as shown in FIG. 4, the residual block is composed of local residual learning and the feature attention module. Local residual learning lets less important information, such as low-frequency regions, bypass the block, so that the main network focuses on more important features.
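Combining the encoder/decoder layout described above with these residual blocks, a generator can be sketched as follows; the channel widths, kernel sizes and number of residual blocks are illustrative assumptions (FIG. 2 is not reproduced here), and FeatureAttention refers to the sketch given earlier.

import torch
import torch.nn as nn

class FAResidualBlock(nn.Module):
    # Residual block: local residual learning around a feature attention module.
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            FeatureAttention(channels))  # from the earlier sketch
    def forward(self, x):
        return x + self.body(x)  # local residual learning

class Generator(nn.Module):
    def __init__(self, base=64, n_blocks=6):
        super().__init__()
        # Encoder: two groups of conv + Relu + average pooling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, base, 3, padding=1), nn.ReLU(inplace=True), nn.AvgPool2d(2),
            nn.Conv2d(base, 2 * base, 3, padding=1), nn.ReLU(inplace=True), nn.AvgPool2d(2))
        self.blocks = nn.Sequential(
            *[FAResidualBlock(2 * base) for _ in range(n_blocks)])
        # Decoder: two deconvolution blocks and one convolution block.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.decoder(self.blocks(self.encoder(x)))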
In this embodiment, as shown in FIG. 5, the multi-scale discriminator is composed of two identical discriminators D_1 and D_2. Each discriminator is a simple classification convolutional network comprising 6 convolutional layers and a fully connected layer of width 1024; the input image and its 2× downsampled version are fed to the discriminators to be judged real or fake.
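A sketch of such a multi-scale discriminator follows; the channel widths, strides and the pooling before the fully connected layer are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleDiscriminator(nn.Module):
    # One discriminator: 6 convolutional layers, then a 1024-wide FC layer.
    def __init__(self):
        super().__init__()
        chans = [3, 64, 128, 256, 512, 512, 512]
        layers = []
        for cin, cout in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.conv = nn.Sequential(*layers)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 1024), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, 1))  # real/fake score
    def forward(self, x):
        return self.head(self.conv(x))

class MultiScaleDiscriminator(nn.Module):
    # Judges the input image and its 2x downsampled version.
    def __init__(self):
        super().__init__()
        self.d_full = ScaleDiscriminator()  # e.g. D_X
        self.d_half = ScaleDiscriminator()  # e.g. D_X2
    def forward(self, x):
        return self.d_full(x), self.d_half(F.avg_pool2d(x, 2))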
Further, the optimization loss function is specifically: in addition to combating losses L LSGAN Adding a cyclic consistency loss L to the loss function cyc And the structural loss function L s The specific loss function and the total loss function are as follows:
L=L LsGAN (G,D Y ,X,Y)+L LSGAN (F,D x ,Y,X)+λ 1 L cyc (G,F)+λ 2 L s
in the formula, λ 1 And λ 2 Is the loss of weight.
Figure BDA0003914758020000058
Figure BDA0003914758020000061
Against loss L LSGAN Using a squared error loss, where D X 、D X2 And D Y 、D Y2 Respectively representing multi-scale discriminators D x And D Y Discriminators of different scales of G (x), F (y) and G s (x)、F s (y) represents an image generated by the generator G or F and an image obtained by down-sampling the generated image by 2 times, respectively.
Figure BDA0003914758020000062
The cycle consistency loss is used to constrain the interconversion of the foggy and fogless image data, F (G (x)) is the cycle image of the original image x, which converts the result of the generator G (x) into the original foggy image; g (F (y)) is a loop image of the original image y, which converts the result of the generator F (y) into the original fog-free image.
Figure BDA0003914758020000063
L s =1-SSIM(x,y)
The structural loss is used to further enhance the stability of the generated image, where M = F (G (x)), N = G (F (y)), μ x And
Figure BDA0003914758020000064
respectively representing the mean and variance, σ, of x Mx Represents the covariance of x and M, μ y And
Figure BDA0003914758020000065
respectively representing the mean and variance, σ, of y Ny Represents the covariance of y and N, C 1 And C 2 Is a constant term.
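The losses above can be sketched in PyTorch as follows, assuming each multi-scale discriminator returns a (full-scale, half-scale) pair as in the discriminator sketch; SSIM is simplified to global image statistics here for brevity, which is an assumption rather than the patent's exact formulation.

import torch.nn.functional as F

def lsgan_loss_g(d_fake_pair):
    # Generator side of the squared-error (LSGAN) loss, summed over both scales.
    return sum(((d - 1) ** 2).mean() for d in d_fake_pair)

def lsgan_loss_d(d_real_pair, d_fake_pair):
    # Discriminator side: push real scores toward 1 and fake scores toward 0.
    return (sum(((d - 1) ** 2).mean() for d in d_real_pair)
            + sum((d ** 2).mean() for d in d_fake_pair))

def cycle_loss(x, rec_x, y, rec_y):
    # L_cyc: L1 distance between images and their cyclic reconstructions.
    return F.l1_loss(rec_x, x) + F.l1_loss(rec_y, y)

def ssim_global(a, b, c1=0.01, c2=0.03):
    # Simplified SSIM over whole-image statistics (no sliding window).
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def structure_loss(x, rec_x, y, rec_y):
    # L_s = 1 - SSIM, with SSIM taken over both cyclic reconstructions.
    return 1 - ssim_global(x, rec_x) * ssim_global(y, rec_y)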
In this embodiment, the batch size is set to 8, the Adam optimizer is used for training with a learning rate of 0.0001, and the hyper-parameters β_1 and β_2 are set to 0.85 and 0.98, respectively. The network parameters are updated iteratively until the network converges; the image to be defogged is then input into the trained generator to obtain a fog-free image.
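Putting the pieces together, one training iteration might look like the following sketch, reusing the modules and loss helpers sketched above; the loss weights λ_1 = 10 and λ_2 = 1 are illustrative values the patent does not specify.

import itertools
import torch

G, F_gen = Generator(), Generator()       # foggy -> clear, clear -> foggy
D_X, D_Y = MultiScaleDiscriminator(), MultiScaleDiscriminator()

opt_g = torch.optim.Adam(itertools.chain(G.parameters(), F_gen.parameters()),
                         lr=1e-4, betas=(0.85, 0.98))
opt_d = torch.optim.Adam(itertools.chain(D_X.parameters(), D_Y.parameters()),
                         lr=1e-4, betas=(0.85, 0.98))
lam1, lam2 = 10.0, 1.0                     # illustrative loss weights

for (x, _), (y, _) in zip(hazy_loader, clear_loader):
    # Generator update: adversarial + cycle-consistency + structural losses.
    fake_y, fake_x = G(x), F_gen(y)
    rec_x, rec_y = F_gen(fake_y), G(fake_x)
    loss_g = (lsgan_loss_g(D_Y(fake_y)) + lsgan_loss_g(D_X(fake_x))
              + lam1 * cycle_loss(x, rec_x, y, rec_y)
              + lam2 * structure_loss(x, rec_x, y, rec_y))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Discriminator update: detach generated images from the generator graph.
    loss_d = (lsgan_loss_d(D_Y(y), D_Y(fake_y.detach()))
              + lsgan_loss_d(D_X(x), D_X(fake_x.detach())))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()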
In this experiment, in order to objectively evaluate the defogging effect, two indexes, Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR), are used to evaluate the quality of the defogging algorithm. The calculation formulas are as follows:

PSNR = 10 log_10(MAX_I² / MSE)

SSIM(x, y) = [(2 μ_x μ_y + c_1)(2 σ_xy + c_2)] / [(μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2)]

wherein MAX_I is the maximum pixel value and MSE is the mean square error between the foggy image and the fog-free image; μ_x and μ_y denote the means of x and y, σ_x² and σ_y² denote the variances of x and y, σ_xy denotes the covariance of x and y, and c_1 and c_2 are two constants set to 0.01 and 0.03, respectively.
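PSNR can be sketched as below, assuming pixel values normalized to [0, 1] so that MAX_I = 1; SSIM can be computed with the ssim_global helper sketched earlier, using c_1 = 0.01 and c_2 = 0.03.

import torch

def psnr(x, y, max_i=1.0):
    # PSNR = 10 * log10(MAX_I^2 / MSE)
    mse = torch.mean((x - y) ** 2)
    return 10 * torch.log10(max_i ** 2 / mse)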
The method of the present application and other methods were tested under the same conditions.

[Table: quantitative PSNR/SSIM comparison between the proposed method and other defogging methods; the table image in the original document is not recoverable here.]
As the defogging comparison in FIG. 6 shows, the invention not only effectively removes fog of different concentrations, but also improves the quality of color and detail recovery, generating clearer and more natural fog-free images.
It should be noted that the detailed description is only intended to explain the technical solution of the invention and does not limit the scope of protection of the claims; all modifications and variations that fall within the spirit and scope of the invention are intended to be covered by the claims and the description.

Claims (10)

1. An image defogging method based on an improved CycleGAN, characterized in that: the method comprises obtaining a foggy image to be processed and defogging it with a defogging network to obtain a defogged image;
the defogging network is obtained through the following steps:
step one: acquire foggy images and clear images;
step two: construct a cycle-consistent generative adversarial network model;
step three: optimize the losses between the foggy and clear images, and iteratively train the cycle-consistent generative adversarial network model until convergence to obtain the defogging network.
2. The improved CycleGAN-based image defogging method according to claim 1, wherein obtaining the defogging network comprises the following steps:
using two generators and two multi-scale discriminators to form two forward passes that constitute a cyclic structure;
the two generators are a generator G and a generator F;
the generator G converts a foggy image into a fog-free image;
the generator F converts a fog-free image into a foggy image;
the two multi-scale discriminators are the multi-scale discriminator D_X and the multi-scale discriminator D_Y;
the multi-scale discriminator D_X judges whether an image is a real foggy image;
the multi-scale discriminator D_Y judges whether an image is a real fog-free image;
finally, the generators and discriminators undergo repeated adversarial training with two adversarial losses, a cycle-consistency loss and a structural loss to obtain the defogging network.
3. The improved CycleGAN-based image defogging method according to claim 2, wherein said generator comprises an encoder, a decoder and residual blocks containing a feature attention module;
the encoder comprises two sets of convolutional layers, an average pooling function and a Relu activation function;
the decoder comprises two deconvolution blocks and one convolution block;
the feature attention module comprises a channel attention module and a pixel attention module;
the channel attention module first converts the input feature map F_c into a channel descriptor g_c by the global average pooling H_c, then passes g_c through a convolutional layer, a Relu activation function, a convolutional layer and a Sigmoid activation function to obtain the channel weight A_c, and multiplies it element-wise with the input feature map to obtain the channel attention output F_c*;
the pixel attention module passes the input feature map F_c* through a convolutional layer, a Relu activation function, a convolutional layer and a Sigmoid activation function to obtain the pixel weight A_p, and multiplies it element-wise with the input feature map to obtain the pixel attention output F̃.
4. The improved CycleGAN-based image defogging method according to claim 3, wherein g_c, F_c* and A_c in said channel attention module are respectively expressed as:

g_c = H_c(F_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} X_c(i, j)

F_c* = A_c ⊗ F_c

A_c = Sigmoid(Conv(Relu(Conv(g_c))))

wherein X_c(i, j) represents the value of the c-th channel of the input at coordinate (i, j), ⊗ denotes element-wise multiplication, and the global average pooling function H_c converts a feature map of input size C × H × W into size C × 1 × 1, encoding the global information of the input feature map.
5. The improved CycleGAN-based image defogging method according to claim 4, wherein A_p and F̃ in said pixel attention module are expressed as:

A_p = Sigmoid(Conv(Relu(Conv(F_c*))))

F̃ = A_p ⊗ F_c*
6. The improved CycleGAN-based image defogging method according to claim 5, wherein the output of said feature attention module is expressed as:

F̃ = A_p ⊗ (A_c ⊗ F_c)
7. The improved CycleGAN-based image defogging method according to claim 2, wherein the loss of said defogging network is expressed as:

L = L_LSGAN(G, D_Y, X, Y) + L_LSGAN(F, D_X, Y, X) + λ_1 L_cyc(G, F) + λ_2 L_s

wherein L_LSGAN denotes the adversarial loss, L_cyc denotes the cycle-consistency loss, L_s denotes the structural loss, and λ_1 and λ_2 denote the loss weights.
8. The improved CycleGAN-based image defogging method according to claim 7, wherein said adversarial losses are expressed as:

L_LSGAN(G, D_Y, X, Y) = E_{y ~ p_data(y)}[(D_Y(y) − 1)²] + E_{x ~ p_data(x)}[D_Y(G(x))²] + E_{y_s ~ p_data(y_s)}[(D_Y2(y_s) − 1)²] + E_{x ~ p_data(x)}[D_Y2(G_s(x))²]

L_LSGAN(F, D_X, Y, X) = E_{x ~ p_data(x)}[(D_X(x) − 1)²] + E_{y ~ p_data(y)}[D_X(F(y))²] + E_{x_s ~ p_data(x_s)}[(D_X2(x_s) − 1)²] + E_{y ~ p_data(y)}[D_X2(F_s(y))²]

wherein p_data(x), p_data(y), p_data(x_s) and p_data(y_s) respectively represent the true distributions of the foggy images, the fog-free images, the downsampled foggy images and the downsampled fog-free images; D_X, D_X2 and D_Y, D_Y2 respectively denote the discriminators at the two scales of the multi-scale discriminators D_X and D_Y; and G(x), F(y) and G_s(x), F_s(y) respectively denote the images generated by the generators G and F and those images downsampled by a factor of 2.
9. The improved CycleGAN-based image defogging method according to claim 8, wherein said cycle-consistency loss is expressed as:

L_cyc(G, F) = E_{x ~ p_data(x)}[‖F(G(x)) − x‖_1] + E_{y ~ p_data(y)}[‖G(F(y)) − y‖_1]

wherein F(G(x)) denotes the cyclic reconstruction of the original image x, G(F(y)) denotes the cyclic reconstruction of the original image y, and ‖·‖_1 denotes the L_1 norm.
10. The improved CycleGAN-based image defogging method according to claim 9, wherein said structural loss is expressed as:

L_s = 1 − SSIM(x, y)

SSIM(x, y) = [(2 μ_x μ_M + C_1)(2 σ_Mx + C_2)] / [(μ_x² + μ_M² + C_1)(σ_x² + σ_M² + C_2)] · [(2 μ_y μ_N + C_1)(2 σ_Ny + C_2)] / [(μ_y² + μ_N² + C_1)(σ_y² + σ_N² + C_2)]

wherein M = F(G(x)), N = G(F(y)), and SSIM is the structural similarity; μ_x and σ_x² respectively denote the mean and variance of x, and σ_Mx denotes the covariance of x and M; μ_M and σ_M² respectively denote the mean and variance of M; μ_y and σ_y² respectively denote the mean and variance of y, and σ_Ny denotes the covariance of y and N; μ_N and σ_N² respectively denote the mean and variance of N; and C_1 and C_2 are constant terms.
CN202211336414.2A 2022-10-28 2022-10-28 Image defogging method based on improved cycleGAN Pending CN115619677A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211336414.2A | 2022-10-28 | 2022-10-28 | CN115619677A (en) Image defogging method based on improved cycleGAN

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211336414.2A | 2022-10-28 | 2022-10-28 | CN115619677A (en) Image defogging method based on improved cycleGAN

Publications (1)

Publication Number | Publication Date
CN115619677A | 2023-01-17

Family

ID=84876247

Family Applications (1)

Application Number | Status | Priority Date | Filing Date | Title
CN202211336414.2A | Pending | 2022-10-28 | 2022-10-28 | CN115619677A (en) Image defogging method based on improved cycleGAN

Country Status (1)

Country Link
CN (1) CN115619677A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20230089280A1 * | 2021-09-17 | 2023-03-23 | Nanjing University Of Posts And Telecommunications | Image haze removal method and apparatus, and device
US11663705B2 * | 2021-09-17 | 2023-05-30 | Nanjing University Of Posts And Telecommunications | Image haze removal method and apparatus, and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination