CN116309178A - Visible light image denoising method based on self-adaptive attention mechanism network


Info

Publication number
CN116309178A
Authority
CN
China
Prior art keywords
network
image
denoising
layer
attention mechanism
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310324928.4A
Other languages
Chinese (zh)
Inventor
顾国华
沈昊博
万敏杰
陈钱
王佳节
徐秀钰
许运凯
龚晟
钱惟贤
韶阿俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Ligong Chengao Optoelectronics Technology Co ltd
Nanjing University of Science and Technology
Original Assignee
Nanjing Ligong Chengao Optoelectronics Technology Co ltd
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Ligong Chengao Optoelectronics Technology Co ltd and Nanjing University of Science and Technology
Priority to CN202310324928.4A
Publication of CN116309178A
Pending legal-status Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visible light image denoising method based on an adaptive attention mechanism network, which relates to the field of image processing and comprises the following steps: noisy images and their truth images are used as the training set and input into a U-shaped fully convolutional network model for training; an adaptive channel attention mechanism is designed and added to the model to further improve denoising performance; the network parameters are adjusted along the gradient of the loss function until the maximum number of iterations is reached, and the network model is output; noisy test-set images are then input into the trained network model, which outputs clean images. The improved network model markedly improves the peak signal-to-noise ratio, structural similarity and processing speed of image denoising.

Description

Visible light image denoising method based on self-adaptive attention mechanism network
Technical Field
The invention belongs to the technical field of image processing, and relates to a visible light image denoising method based on a self-adaptive attention mechanism network.
Background
As one of the most commonly used information carriers, images contain a large amount of information and are an important way for people to obtain it. During acquisition, transmission and other processing, images are often corrupted by noise of varying degrees. Noise degrades image quality, and severe noise can submerge the useful information in an image, hindering observation and use and reducing the accuracy of subsequent processing such as image segmentation and target detection. It is therefore necessary to remove noise from images; the difficulty lies in removing the noise while preserving as much of the image's useful information as possible.
Image denoising methods fall into three main categories: filtering-based methods, optimization-model-based methods, and learning-based methods.
Filter-based methods use manually designed low-pass filters to remove image noise. A single image contains many similar image patches, so noise can be removed by aggregating non-local similar patches, as in the NLM algorithm (Buades A, Coll B, Morel J M. A non-local algorithm for image denoising. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). IEEE, 2005, 2: 60-65.) and the CBM3D algorithm (Dabov K, Foi A, Katkovnik V, et al. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 2007, 16(8): 2080-2095.). However, these methods suffer from blurred output caused by the block-wise operation, cumbersome hyper-parameter settings, and long processing times.
Model-based methods generally cast the denoising task as a maximum-a-posteriori optimization problem whose performance depends primarily on the image prior. For example, Gu et al. proposed a weighted nuclear norm minimization method based on low-rank matrix approximation (Gu S, Zhang L, Zuo W, et al. Weighted nuclear norm minimization with application to image denoising. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 2862-2869.). Model-based methods rest on solid mathematical derivations, but their performance drops markedly at high noise levels, and they carry a certain complexity and long processing times.
Disclosure of Invention
To overcome the deficiencies of the prior art, the invention provides an image denoising method based on deep learning that denoises noisy images quickly and accurately.
The technical solution for realizing the purpose of the invention is as follows: a visible light image denoising method based on an adaptive attention mechanism network comprises the following steps:
step 1, constructing an adaptive channel attention mechanism module D;
step 2, combining the self-adaptive channel attention mechanism module D in the step 1 with a full convolution neural network to construct a denoising network DUnet;
step 3, constructing a loss function of the denoising network DUnet;
step 4, initializing the denoising network DUnet parameters: the learning rate lr, the crop size patch-size of each batch of pictures fed into the network, and the maximum number of iterations;
step 5, adding Gaussian noise to the truth images to obtain training images, inputting them into the denoising network DUnet, and training until the loss function converges to obtain a trained DUnet network model;
and step 6, inputting the image to be denoised into the DUnet network trained in step 5 for image denoising.
Preferably, constructing the adaptive attention mechanism module D in step 1 includes the steps of:
step 1.1, compressing the spatial features of the input feature map: performing global average pooling on the input feature map and averaging all pixel values of each channel map, so that the feature map changes from an [H, W, C] matrix into a [1, C] vector, where H, W and C denote the height, width and number of channels of the feature map;
step 1.2, computing the adaptive convolution kernel size k for the compressed feature map;
step 1.3, obtaining the weight of each channel of the feature map through a one-dimensional convolution with kernel size k;
and step 1.4, multiplying the normalized weights channel by channel with the original input feature map, and outputting a feature map with channel attention.
Preferably, the calculation formula of the one-dimensional convolution kernel size k is:
k = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}

where k is the convolution kernel size, C is the number of channels, | \cdot |_{odd} denotes taking the nearest odd number, and γ and b are parameters that set the mapping between the number of channels C and the convolution kernel size k.
Preferably, the adaptive channel attention mechanism module D of the step 1 is combined with a full convolution neural network, and the specific steps of constructing the denoising network DUnet are as follows:
step 2.1, constructing a five-layer feature extraction network, wherein each layer consists of two 3×3 convolution layers and one 2×2 max-pooling layer; each layer first performs two 3×3 convolutions, then passes the data to the next layer after 2×2 max pooling;
step 2.2, constructing a five-layer feature fusion network, wherein each layer consists of a 2×2 up-sampling convolution layer, a feature concatenation layer and two 3×3 convolution layers; each layer first performs a 2×2 up-sampling convolution, fuses the result with the feature map convolved by the corresponding feature extraction layer, then performs two 3×3 convolutions and passes the feature map to the next layer;
and step 2.3, adding the adaptive channel attention module to capture channel information after the first convolution of the first and second feature extraction layers respectively, generating weighted feature maps.
Preferably, the loss function of the denoising network DUnet constructed in the step 3 is specifically:
[Loss function formula rendered as an image in the original publication.]

where n is the total number of samples, x_i denotes the output image after each round of network training, and y_i denotes the truth image corresponding to the output image.
Preferably, in step 5, Gaussian white noise is added to the truth images and the network model is trained, comprising the following steps:
step 5.1, adding Gaussian white noise at four noise levels to the truth images, and using the processed pictures as the training set;
step 5.2, optimizing the network parameters with the loss function according to the Adam optimization algorithm; cropping and rotating the training set according to patch-size, feeding it into the network, and training until the maximum number of iterations is reached to obtain the trained network model.
Preferably, the Gaussian white noise added to the truth image at each of the four noise levels follows the probability density

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)

where σ is the noise level, μ is the global average, and x is the input pixel.
Compared with the prior art, the invention has the following notable advantages: (1) during feature extraction, some feature channels contribute strongly to the result while others contribute little; the channel attention mechanism module D therefore assigns channel weights adaptively on top of the extracted features, so that the feature maps with the greater effect on the result carry greater weight, making the final feature extraction more effective than that of a plain convolution layer; (2) the trained model maintains denoising performance while denoising visible light images quickly, greatly improving efficiency.
The invention is further described below with reference to the drawings.
Drawings
Fig. 1 is a flowchart of a visible light image denoising method based on an adaptive attention mechanism network according to the present invention.
Fig. 2 is a schematic diagram of a channel attention mechanism.
Fig. 3 is a diagram of a DUnet network architecture.
Fig. 4 is a graph comparing denoising effects of different algorithms when the noise level σ=20.
Fig. 5 is a graph comparing denoising effects of different algorithms when the noise level σ=30.
Fig. 6 is a graph comparing denoising effects of different algorithms when the noise level σ=50.
Fig. 7 is a graph comparing denoising effects of different algorithms when the noise level σ=70.
Detailed description of the preferred embodiments
The invention designs an adaptive channel attention mechanism, and experiments on the public dataset PolyU verify the effectiveness of the method.
Referring to fig. 1, a visible light image denoising method based on an adaptive attention mechanism network includes the following steps:
Step 1, with reference to fig. 2, construct the adaptive channel attention mechanism module D, which adaptively assigns channel weights on top of the extracted feature-map features so that the feature maps with the largest effect on the result carry the greatest weight. The module processes an input feature map through the following steps (a code sketch follows step 1.4):
and 1.1, performing spatial feature compression on an input feature map, carrying out global average pooling on the input feature map, and averaging all pixel values of each channel map, wherein the feature map is changed into a vector of [1, C ] from a matrix of [ H, W, C ].
Step 1.2, perform channel feature learning on the compressed feature map, computing the adaptive one-dimensional convolution kernel size k as

k = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}

where k is the convolution kernel size, C is the number of channels, | \cdot |_{odd} means that only an odd number can be taken, and γ and b adjust the ratio between the number of channels C and the convolution kernel size k. For example, with C = 64 and the commonly used values γ = 2 and b = 1 (assumed here; the patent does not state them), k = |6/2 + 1/2|_{odd} = 3. The convolution kernel size also represents the coverage of local cross-channel interaction, i.e., how many neighboring channels participate in the attention prediction of a given channel.
Step 1.3, apply a one-dimensional convolution with kernel size k to obtain the weight of each channel of the feature map.
Step 1.4, apply the Sigmoid activation function

\sigma(x) = \frac{1}{1 + e^{-x}}

to normalize the output of each neuron, multiply the normalized weights channel by channel with the original input feature map, and output the feature map with channel attention.
Step 2, combine the adaptive channel attention mechanism module D of step 1 with a fully convolutional neural network to construct the denoising network DUnet, as follows (a sketch of the assembled network follows step 2.3):
Step 2.1, construct a five-layer feature extraction network in which each layer consists of two 3×3 convolution layers and one 2×2 max-pooling layer. Each layer first performs two 3×3 convolutions and then passes the result to the next layer after 2×2 max pooling.
Step 2.2, construct a five-layer feature fusion network in which each layer consists of a 2×2 up-sampling convolution layer, a feature concatenation layer and two 3×3 convolution layers. Each layer first performs a 2×2 up-sampling convolution, fuses the result with the feature map convolved by the corresponding feature extraction layer, and then performs two 3×3 convolutions before passing the feature map to the next layer.
Step 2.3, add the adaptive channel attention module to capture channel information after the first convolution of the first and second feature extraction layers respectively, generating weighted feature maps.
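The following functional Keras sketch assembles a DUnet-style network from steps 2.1 to 2.3, reusing the AdaptiveChannelAttention layer sketched above. The filter widths (64 doubling per level), ReLU activations, transposed convolutions for the 2×2 up-sampling, and the final 1×1 output convolution are assumptions; only the five-layer encoder/decoder layout and the attention placement follow the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
# AdaptiveChannelAttention is the layer sketched after step 1.4 above.


def conv_block(x, filters, attention=False):
    """Two 3x3 convolutions; channel attention after the first if requested."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    if attention:
        x = AdaptiveChannelAttention(filters)(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)


def build_dunet(input_shape=(128, 128, 3), base_filters=64):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    # Five feature-extraction layers; attention after the first convolution
    # of layers 1 and 2 (i < 2). Filter widths double per level (assumed).
    for i in range(5):
        x = conv_block(x, base_filters * 2 ** i, attention=(i < 2))
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    # Five feature-fusion layers: 2x2 up-convolution, concatenation with the
    # corresponding extraction-layer feature map, then two 3x3 convolutions.
    for i in reversed(range(5)):
        x = layers.Conv2DTranspose(base_filters * 2 ** i, 2, strides=2,
                                   padding="same")(x)
        x = layers.Concatenate()([x, skips[i]])
        x = conv_block(x, base_filters * 2 ** i)
    # Final 1x1 convolution maps features back to image channels (assumed).
    return Model(inputs, layers.Conv2D(input_shape[-1], 1)(x))
```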
Step 3, constructing a Loss function Loss, wherein the specific formula is as follows:
[Loss function formula rendered as an image in the original publication.]

where n is the total number of samples, x_i denotes the output image after each round of network training, and y_i denotes the truth image corresponding to the output image (an illustrative sketch follows).
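Because the exact loss formula is only available as an image in the original, the sketch below substitutes a loss of the generic per-sample form (1/n)·Σ_i f(x_i, y_i); the choice of f as the absolute error |x_i − y_i| is purely an assumption for illustration.

```python
import tensorflow as tf


def reconstruction_loss(y_true, y_pred):
    # Hypothetical stand-in for the patent's loss: mean absolute error,
    # (1/n) * sum_i |x_i - y_i|, averaged over all pixels and samples.
    return tf.reduce_mean(tf.abs(y_pred - y_true))
```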
Step 4, initialize the network parameters: the learning rate lr, the crop size patch-size of each batch of pictures fed into the network, and the maximum number of iterations, specifically as follows:
Set the input picture patch-size to 128×128 and the initial learning rate to 1×10^{-4}; the learning rate lr decays with the training epoch number according to a schedule [formula rendered as an image in the original publication].
Step 5, add Gaussian noise to the truth images to obtain training images, input them into the denoising network DUnet, and train until the loss function converges to obtain the trained DUnet network model. The specific steps are as follows:
Step 5.1, add Gaussian white noise at the four levels σ = 20, 30, 50 and 70 to the truth images and use the processed pictures as the training set. The noise follows the probability density

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)

where σ is the noise level, μ is the overall mean, and x is the input pixel (a noise-addition sketch follows).
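A minimal NumPy sketch of this noise-addition step; the [0, 255] value range and the clipping of the noisy result are assumptions:

```python
import numpy as np


def add_gaussian_noise(image, sigma, rng=None):
    """Add zero-mean white Gaussian noise of standard deviation sigma to a
    float image assumed to lie in [0, 255]."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image + rng.normal(loc=0.0, scale=sigma, size=image.shape)
    return np.clip(noisy, 0.0, 255.0)


if __name__ == "__main__":
    clean = np.full((128, 128, 3), 128.0)  # stand-in for a truth image
    # The four training noise levels used in the patent.
    noisy = {s: add_gaussian_noise(clean, s) for s in (20, 30, 50, 70)}
```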
Step 5.2, optimize the network parameters with the loss function according to the Adam optimization algorithm. Crop and rotate the training set according to patch-size, feed it into the network, and train until the maximum number of iterations is reached, obtaining the trained network model (a training sketch follows).
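A sketch of the crop-and-rotate augmentation and the Adam training setup of step 5.2. The 90-degree rotations, batch size and epoch count are assumptions, and build_dunet and reconstruction_loss refer to the sketches above.

```python
import numpy as np
import tensorflow as tf


def random_patch(image, patch_size=128, rng=None):
    """Random patch_size crop plus a random 90-degree rotation (assumed to
    be the cropping/rotation augmentation described in step 5.2)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    top = int(rng.integers(0, h - patch_size + 1))
    left = int(rng.integers(0, w - patch_size + 1))
    patch = image[top:top + patch_size, left:left + patch_size]
    return np.rot90(patch, k=int(rng.integers(0, 4)))


# Assumed training setup (batch size and epochs are illustrative only):
# model = build_dunet()
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
#               loss=reconstruction_loss)
# model.fit(noisy_patches, clean_patches, batch_size=16, epochs=100)
```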
Step 6, input the image to be denoised into the DUnet network trained in step 5 for denoising, output the denoised image, and evaluate it against the truth image.
Step 6.1, input test sets at the different noise levels σ = 20, 30, 50 and 70 into the trained model and output the denoised images.
Step 6.2, compare the output denoised images with the truth images to evaluate denoising performance, obtaining the PSNR and SSIM indices.
The PSNR is used to measure the difference between two images, and the formula is as follows:
PSNR = 10 \log_{10}\left( \frac{MaxValue^2}{MSE} \right)

where MSE is the mean squared error between the two images and MaxValue is the maximum value an image pixel can take.
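The PSNR formula translates directly into a few lines of NumPy:

```python
import numpy as np


def psnr(x, y, max_value=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MaxValue^2 / MSE).
    Assumes x and y differ somewhere (MSE > 0)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)
```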
SSIM is based on the assumption that the human eye extracts structural information from an image, and therefore accords with human visual perception better than traditional measures. The formula is:

SSIM(x, y) = [l(x, y)]^{\alpha} \times [c(x, y)]^{\beta} \times [s(x, y)]^{\gamma}

l(x, y) = \frac{2 \mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}

c(x, y) = \frac{2 \sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}

s(x, y) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}

\mu_x = \frac{1}{N} \sum_{i=1}^{N} x_i

\sigma_x = \left( \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \mu_x)^2 \right)^{1/2}

\sigma_{xy} = \frac{1}{N - 1} \sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y)

where N is the total number of picture samples, x_i denotes the output image after network training, y_i denotes the truth image corresponding to the output image, and \mu_y and \sigma_y are defined analogously to \mu_x and \sigma_x; C_1 = (K_1 L)^2, C_2 = (K_2 L)^2, L is the maximum value an image pixel can take, K_1 = 0.01 and K_2 = 0.03. Setting \alpha = \beta = \gamma = 1 and C_3 = C_2 / 2 simplifies the formula to:

SSIM(x, y) = \frac{(2 \mu_x \mu_y + C_1)(2 \sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}

The larger the SSIM, the more similar the two images.
Example 1
In this example, the invention is compared with the conventional methods NLM and CBM3D and the deep-learning-based CBD Net, PD Net, ECND Net and FFD Net on the public dataset PolyU. The experimental environment is TensorFlow-GPU 2.5.0 with an NVIDIA GeForce RTX 3080 Ti GPU, an 11th Gen Intel(R) Core(TM) i7-11700 CPU @ 2.50 GHz, and 64 GB of RAM.
Gaussian noise at the levels σ = 20, 30, 50 and 70 is added to the truth images to obtain 1900 training images, which are input into the denoising network DUnet and trained until the loss function converges, yielding the trained DUnet network.
Table 1: PSNR of each method on the PolyU dataset (optimal values shown in bold)
[Table rendered as an image in the original publication.]
Table 2: SSIM of each method on the PolyU dataset (optimal values shown in bold)
[Table rendered as an image in the original publication.]
Table 3: Processing time of each method on the PolyU dataset (optimal values shown in bold)
[Table rendered as an image in the original publication.]
In Tables 1 and 2, σ is the level (20, 30, 50 or 70) of the Gaussian white noise added to the corresponding test set to simulate images to be denoised; in Table 3, Time is the model's average processing time in seconds. PSNR and SSIM measure each algorithm's denoising effect on the noisy images: the higher the PSNR and the closer the SSIM is to 1, the better the algorithm's denoising performance. Where a test set contains multiple images, the tabulated values are the averages of PSNR and SSIM over all test images in the set.
As can be seen from Tables 1 and 2, the invention performs more strongly on the PolyU test set than the other methods. Across the noise levels σ = 20, 30, 50 and 70, its average PSNR is 4.103 higher and its average SSIM 0.062 higher than those of the conventional CBM3D method; compared with PD Net, the best of the four deep-learning-based methods, its average PSNR is 1.187 higher and its average SSIM 0.01 higher. The invention obtains the highest averages of PSNR and SSIM, meaning that DUnet objectively achieves a better average denoising effect than the six existing denoising algorithms, and Table 3 shows that it greatly reduces denoising time relative to the traditional algorithms. Together with the denoising comparison diagrams of figs. 4, 5, 6 and 7, this shows that the method achieves fast and efficient denoising of visible light images. In fig. 4, (a) is the truth image; (b) is the image with noise level σ=20; (c) is the CBM3D denoised image; (d) is the NLM denoised image; (e) is the CBD Net denoised image; (f) is the PD Net denoised image; (g) is the ECND Net denoised image; (h) is the FFD Net denoised image; (i) is the image denoised by the method of the invention.
In fig. 5, (a) is the truth image; (b) is the image with noise level σ=30; (c) is the CBM3D denoised image; (d) is the NLM denoised image; (e) is the CBD Net denoised image; (f) is the PD Net denoised image; (g) is the ECND Net denoised image; (h) is the FFD Net denoised image; (i) is the image denoised by the method of the invention.
In fig. 6, (a) is the truth image; (b) is the image with noise level σ=50; (c) is the CBM3D denoised image; (d) is the NLM denoised image; (e) is the CBD Net denoised image; (f) is the PD Net denoised image; (g) is the ECND Net denoised image; (h) is the FFD Net denoised image; (i) is the image denoised by the method of the invention.
In fig. 7, (a) is the truth image; (b) is the image with noise level σ=70; (c) is the CBM3D denoised image; (d) is the NLM denoised image; (e) is the CBD Net denoised image; (f) is the PD Net denoised image; (g) is the ECND Net denoised image; (h) is the FFD Net denoised image; (i) is the image denoised by the method of the invention.

Claims (7)

1. A visible light image denoising method based on an adaptive attention mechanism network, characterized by comprising the following steps:
step 1, constructing an adaptive channel attention mechanism module D;
step 2, combining the self-adaptive channel attention mechanism module D in the step 1 with a full convolution neural network to construct a denoising network DUnet;
step 3, constructing a loss function of the denoising network DUnet;
step 4, initializing the denoising network DUnet parameters: the learning rate lr, the crop size patch-size of each batch of pictures fed into the network, and the maximum number of iterations;
step 5, adding Gaussian noise to the truth images to obtain training images, inputting them into the denoising network DUnet, and training until the loss function converges to obtain a trained DUnet network model;
and step 6, inputting the image to be denoised into the DUnet network trained in step 5 for image denoising.
2. The method for denoising visible light images based on an adaptive attention mechanism network according to claim 1, wherein constructing the adaptive attention mechanism module D in step 1 comprises the steps of:
step 1.1, compressing the spatial features of the input feature map: performing global average pooling on the input feature map and averaging all pixel values of each channel map, so that the feature map changes from an [H, W, C] matrix into a [1, C] vector, where H, W and C denote the height, width and number of channels of the feature map;
step 1.2, computing the adaptive convolution kernel size k for the compressed feature map;
step 1.3, obtaining the weight of each channel of the feature map through a one-dimensional convolution with kernel size k;
and step 1.4, multiplying the normalized weights channel by channel with the original input feature map, and outputting a feature map with channel attention.
3. The method for denoising visible light images based on the adaptive attention mechanism network according to claim 2, wherein the calculation formula of the one-dimensional convolution kernel size k is:
k = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{odd}

where k is the convolution kernel size, C is the number of channels, | \cdot |_{odd} denotes taking the nearest odd number, and γ and b are parameters that set the mapping between the number of channels C and the convolution kernel size k.
4. The visible light image denoising method based on the adaptive attention mechanism network according to claim 1, wherein the specific steps of combining the adaptive channel attention mechanism module D of step 1 with the full convolution neural network to construct a denoising network DUnet are as follows:
step 2.1, constructing a five-layer feature extraction network, wherein each layer consists of two 3×3 convolution layers and one 2×2 max-pooling layer; each layer first performs two 3×3 convolutions, then passes the data to the next layer after 2×2 max pooling;
step 2.2, constructing a five-layer feature fusion network, wherein each layer consists of a 2×2 up-sampling convolution layer, a feature concatenation layer and two 3×3 convolution layers; each layer first performs a 2×2 up-sampling convolution, fuses the result with the feature map convolved by the corresponding feature extraction layer, then performs two 3×3 convolutions and passes the feature map to the next layer;
and step 2.3, adding the adaptive channel attention module to capture channel information after the first convolution of the first and second feature extraction layers respectively, generating weighted feature maps.
5. The visible light image denoising method based on the adaptive attention mechanism network according to claim 1, wherein the denoising network DUnet constructed in the step 3 has a loss function of:
[Loss function formula rendered as an image in the original publication.]

where n is the total number of samples, x_i denotes the output image after each round of network training, and y_i denotes the truth image corresponding to the output image.
6. The method for denoising visible light images based on an adaptive attention mechanism network according to claim 1, wherein step 5 of adding Gaussian white noise to the truth image and training the network model comprises the following steps:
step 5.1, adding Gaussian white noise at four noise levels to the truth images, and using the processed pictures as the training set;
step 5.2, optimizing the network parameters with the loss function according to the Adam optimization algorithm; cropping and rotating the training set according to patch-size, feeding it into the network, and training until the maximum number of iterations is reached to obtain the trained network model.
7. The method for denoising visible light images based on an adaptive attention mechanism network according to claim 6, wherein the Gaussian white noise added to the truth image at each of the four noise levels follows the probability density

f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)

where σ is the noise level, μ is the global average, and x is the input pixel.
CN202310324928.4A 2023-03-28 2023-03-28 Visible light image denoising method based on self-adaptive attention mechanism network Pending CN116309178A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310324928.4A CN116309178A (en) 2023-03-28 2023-03-28 Visible light image denoising method based on self-adaptive attention mechanism network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310324928.4A CN116309178A (en) 2023-03-28 2023-03-28 Visible light image denoising method based on self-adaptive attention mechanism network

Publications (1)

Publication Number Publication Date
CN116309178A (en) 2023-06-23

Family

ID=86814932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310324928.4A Pending CN116309178A (en) 2023-03-28 2023-03-28 Visible light image denoising method based on self-adaptive attention mechanism network

Country Status (1)

Country Link
CN (1) CN116309178A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173037A (en) * 2023-08-03 2023-12-05 江南大学 Neural network structure automatic search method for image noise reduction
CN117291835A (en) * 2023-09-13 2023-12-26 广东海洋大学 Denoising network model based on image content perception priori and attention drive
CN117173037B (en) * 2023-08-03 2024-07-09 江南大学 Neural network structure automatic search method for image noise reduction


Similar Documents

Publication Publication Date Title
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
Shahdoosti et al. Edge-preserving image denoising using a deep convolutional neural network
CN109360156B (en) Single image rain removing method based on image block generation countermeasure network
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
CN110766632A (en) Image denoising method based on channel attention mechanism and characteristic pyramid
CN111861894B (en) Image motion blur removing method based on generation type countermeasure network
Starovoytov et al. Comparative analysis of the SSIM index and the pearson coefficient as a criterion for image similarity
CN112541877B (en) Defuzzification method, system, equipment and medium for generating countermeasure network based on condition
CN112634163A (en) Method for removing image motion blur based on improved cycle generation countermeasure network
Min et al. Blind deblurring via a novel recursive deep CNN improved by wavelet transform
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN108830829B (en) Non-reference quality evaluation algorithm combining multiple edge detection operators
CN116309178A (en) Visible light image denoising method based on self-adaptive attention mechanism network
CN112927137A (en) Method, device and storage medium for acquiring blind super-resolution image
Zhao et al. A simple and robust deep convolutional approach to blind image denoising
CN115578262A (en) Polarization image super-resolution reconstruction method based on AFAN model
CN115082336A (en) SAR image speckle suppression method based on machine learning
Wu et al. Dcanet: Dual convolutional neural network with attention for image blind denoising
Hussain et al. Image denoising to enhance character recognition using deep learning
CN111353982B (en) Depth camera image sequence screening method and device
CN117392036A (en) Low-light image enhancement method based on illumination amplitude
CN116703750A (en) Image defogging method and system based on edge attention and multi-order differential loss
CN114862699B (en) Face repairing method, device and storage medium based on generation countermeasure network
CN115619677A (en) Image defogging method based on improved cycleGAN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination