CN113850736A - Poisson-Gaussian mixed noise removing method - Google Patents

Poisson-Gaussian mixed noise removing method

Info

Publication number
CN113850736A
CN113850736A (application number CN202111083142.5A)
Authority
CN
China
Prior art keywords
noise
image
poisson
gaussian
layer
Prior art date
Legal status
Pending
Application number
CN202111083142.5A
Other languages
Chinese (zh)
Inventor
黄梦醒
殷家汇
冯思玲
毋媛媛
冯文龙
张雨
吴迪
Current Assignee
Hainan University
Original Assignee
Hainan University
Priority date
Filing date
Publication date
Application filed by Hainan University
Priority to CN202111083142.5A
Publication of CN113850736A
Legal status: Pending

Classifications

    • G06T 5/70 — Image enhancement or restoration; Denoising; Smoothing
    • G06F 18/2321 — Pattern recognition; clustering; non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06N 3/045 — Neural networks; combinations of networks
    • G06N 3/08 — Neural networks; learning methods
    • G06T 2207/20076 — Indexing scheme for image analysis or image enhancement; probabilistic image processing
    • G06T 2207/20081 — Indexing scheme for image analysis or image enhancement; training; learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method for removing Poisson-Gaussian mixed noise, which comprises the following steps: constructing a data set containing Poisson-Gaussian mixed-noise images and dividing it into a training set and a test set; establishing a non-blind Poisson-Gaussian mixed denoising model comprising a GAT layer, a CNN layer, a residual layer and an inverse GAT layer, and inputting the training-set data into the model for training to obtain a trained non-blind Poisson-Gaussian mixed denoising model; and inputting the test-set data into the trained model to obtain the image denoising result.

Description

Poisson-Gaussian mixed noise removing method
Technical Field
The invention relates to the technical field of image denoising, in particular to a method for removing Poisson-Gaussian mixed noise.
Background
Image denoising is a technique for recovering a clear image by removing noise using the contextual information of an image sequence, and it is one of the important research topics in computer vision. The main approaches to image denoising are traditional denoising methods based on hand-crafted features and denoising methods based on deep learning. Traditional hand-crafted methods modify transform coefficients using the discrete cosine transform, wavelet transform and the like, and compute local similarity from averaged neighborhood values. The NLM and BM3D methods exploit self-similar patches and achieve strong results in image fidelity and visual quality. However, because traditional denoising methods encode image features under assumptions about the original image, the encoded features match real images poorly, which reduces their performance and flexibility in practical applications; moreover, their feature-extraction process is complicated, time-consuming and computationally expensive, making them unsuitable for real noise with a complicated distribution.
With the development of machine learning, deep learning has been widely applied to image denoising and has become an effective solution. Compared with traditional image denoising methods, deep-learning-based denoising has strong learning capability: it can fit complex noise distributions and also saves computation time. Early deep-learning denoising methods used reinforcement learning and Q-learning to train recurrent neural networks, but reinforcement-learning-based methods require heavy computation and search inefficiently. Later deep-learning denoisers combine skip connections, attention mechanisms and multi-scale feature fusion to improve the network's feature expression, but these networks are deep and are prone to gradient explosion or gradient vanishing during training. In recent years, some denoising methods based on transfer learning and model compression transfer trained parameters to a new lightweight model, which accelerates and optimizes learning and effectively avoids the gradient problems.
The convolutional neural network (CNN) is a basic network for deep learning, and feature expression is improved by continuously optimizing the network. DnCNN is a typical supervised denoising network that has achieved great success in removing additive white Gaussian noise (AWGN), but its performance on real noisy photographs is limited, mainly because the learned model easily overfits the simplified AWGN model, which deviates significantly from the real noise model. Although a single denoising model can achieve a good denoising effect on a specified category of noise, it lacks generalization ability and removes non-target noise unsatisfactorily; for mixed noise (such as Poisson-Gaussian mixed noise) it is difficult to obtain a good denoising effect. Real noisy images contain mixed noise, which results both from the internal circuitry of the capturing device and from the illumination of the shooting environment. A Poisson-Gaussian noise model is typically used to describe the real, signal-dependent noise in the raw image; it has a non-uniform noise variance and two parameters (λ, σ). λ represents the intensity of the Poisson noise, which generally arises when the illumination is very weak or the electronic amplification is high; σ represents the standard deviation of the Gaussian noise, which is generated when the field of view of the image sensor is not bright or uniform enough, or when the noise of the circuit components interferes with each other during image capture. Most existing Poisson-Gaussian noise estimation methods first obtain local estimates of the mean and variance and then fit the noise model to these local estimates by maximum likelihood estimation (MLE), which is slow and removes noise poorly.
Disclosure of Invention
The present invention is directed to a method for removing poisson-gaussian mixed noise, so as to solve the problems in the background art.
The invention is realized by the following technical scheme: a Poisson-Gaussian mixture noise removing method comprises the following steps:
constructing a data set containing a Poisson-Gaussian mixed noise image, and dividing the data set into a training set and a test set;
establishing a noise image denoising model, wherein the model comprises a GAT layer, a CNN layer, a residual layer and an inverse GAT layer, and inputting the data in the training set into the non-blind Poisson-Gaussian mixture denoising model for training so as to obtain a trained non-blind Poisson-Gaussian mixture denoising model;
and inputting the data of the test set into the non-blind Poisson-Gaussian mixed denoising model to obtain an image denoising result.
Optionally, the GAT layer is configured to fit the image containing Poisson-Gaussian mixed noise n_(p-g) to an image Y_i containing Gaussian noise n_g; the image Y_i is obtained by the following expression:
[Equation image: GAT transform expression for Y_i]
wherein Y_i is the noisy pixel, λ is the intensity of the Poisson noise, and σ is the standard deviation of the Gaussian noise.
Optionally, the CNN layer comprises a DnCNN deep learning network; the image Y_i is input into the DnCNN deep learning network for training to obtain a residual mapping function R(y) ≈ n_g. During training, the mean squared error is used as the loss function for the parameters θ of the DnCNN deep learning network; the loss function is expressed as:
[Equation image: mean-squared-error loss function L(θ)]
wherein N denotes the number of pairs of noisy images and clear images converted by the GAT module, i.e. the number of samples in each training batch; the parameters θ are then optimized with an Adam optimizer, and the weights are updated as:
[Equation images: Adam weight-update rules for the convolution kernels W]
wherein W is a convolution kernel, l is the index of the current layer, b is the iteration number, and α is the learning rate;
optionally, a PReLu function is adopted as a nonlinear activation function in the DnCNN deep learning network.
Optionally, the residual layer is used to subtract the fitted Gaussian noise n_g extracted by the CNN layer from the GAT-transformed image Y_i, i.e. X = Y_i − n_g, obtaining a preliminary clear image X.
Optionally, the inverse GAT layer is configured to apply the inverse GAT transform to each pixel X_i of the preliminary clear image X to obtain each pixel I_i of the final clean image; the transform is:
[Equation image: inverse GAT transform from X_i to I_i]
where λ is the intensity of the Poisson noise and σ is the standard deviation of the Gaussian noise.
Optionally, when constructing the data set including the poisson-gaussian mixture noise image, the method includes:
acquiring a clean image, taking the gray values of its pixels, and successively adding Poisson noise of intensity λ and Gaussian noise of standard deviation σ to them to obtain a mixed-noise image;
and preprocessing the mixed-noise image, wherein the preprocessing comprises flipping, translation or rotation, finally obtaining a data set containing Poisson-Gaussian mixed-noise images.
Compared with the prior art, the invention has the following beneficial effects:
the method for removing Poisson-Gaussian mixed noise provided by the invention fully considers the noise interference of a digital image in the actual imaging process, the result of the method can be as close to real noise as possible, and the method has wider applicability compared with a common single Gaussian noise model.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only preferred embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a flowchart of a poisson-gaussian mixed noise removing method according to the present invention;
FIG. 2 is a schematic structural diagram of a noise image denoising model provided by the present invention;
FIG. 3 is a schematic diagram of a DnCNN deep learning network structure provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, exemplary embodiments according to the present invention will be described in detail below with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of embodiments of the invention and not all embodiments of the invention, with the understanding that the invention is not limited to the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention described herein without inventive step, shall fall within the scope of protection of the invention.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.
It is to be understood that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
In order to provide a thorough understanding of the present invention, a detailed structure will be set forth in the following description in order to explain the present invention. Alternative embodiments of the invention are described in detail below, however, the invention may be practiced in other embodiments that depart from these specific details.
Referring to fig. 1-2, a poisson-gaussian mixture noise removing method includes the following steps:
s1, constructing a data set containing a Poisson-Gaussian mixed noise image, and dividing the data set into a training set and a test set;
s2, establishing a noise image denoising model which comprises a GAT layer, a CNN layer, a residual error layer and an inverse GAT layer, inputting the data in the training set into the non-blind Poisson-Gaussian mixture denoising model for training, and thus obtaining a trained non-blind Poisson-Gaussian mixture denoising model;
and S3, inputting the data of the test set into the non-blind Poisson-Gaussian mixture denoising model to obtain an image denoising result.
In step S1, the data set may be constructed by acquiring images containing Poisson-Gaussian mixed noise. Such images may be obtained by adding Poisson-Gaussian mixed noise to clean images; the process of adding Poisson-Gaussian mixed noise to a clean image comprises:
acquiring a clean image, taking the gray values of its pixels, and successively adding Poisson noise of intensity λ and Gaussian noise of standard deviation σ to them to obtain a mixed-noise image;
and preprocessing the mixed-noise image, wherein the preprocessing comprises flipping, translation or rotation, finally obtaining a data set containing Poisson-Gaussian mixed-noise images.
It should be noted that Poisson noise generally appears in circuits where the illumination is very weak or the electronic amplification is high; its mathematical expression is:
[Equation image: mathematical expression of the Poisson noise]
where λ is related to the illumination intensity and σ represents the standard deviation of the Gaussian noise; the probability density function is as follows:
[Equation image: probability density function]
as an example, the python code in this embodiment adds poisson-gaussian mixed noise specifying λ, σ to the image, which is specifically as follows:
Figure BDA0003264720310000063
Figure BDA0003264720310000071
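Since the listing above survives only as an image, a minimal sketch of such a noise-synthesis routine is given below. It assumes one common Poisson-Gaussian model, in which the Poisson component is sampled with gain 1/λ from the clean intensities and zero-mean Gaussian noise of standard deviation σ is added on top; the function name add_poisson_gaussian_noise and the exact parameterization are illustrative and are not taken from the patent.

import numpy as np

def add_poisson_gaussian_noise(clean, lam=30.0, sigma=0.1, seed=None):
    # clean: floating-point image scaled to [0, 1]; lam: Poisson intensity; sigma: Gaussian std.
    rng = np.random.default_rng(seed)
    clean = np.clip(clean.astype(np.float64), 0.0, 1.0)
    # Poisson component: photon counts with expectation lam * clean, rescaled back to [0, 1]
    poisson_part = rng.poisson(lam * clean) / lam
    # Additive Gaussian component with standard deviation sigma
    gaussian_part = rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(poisson_part + gaussian_part, 0.0, 1.0)

# Example: corrupt a synthetic gradient image with lambda = 30, sigma = 0.1
img = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
noisy = add_poisson_gaussian_noise(img, lam=30.0, sigma=0.1, seed=0)

The same routine, applied to each clean image together with the flipping, translation and rotation preprocessing described above, yields the mixed-noise data set.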
as an example, when the noise level is known, the addition can be performed by the above steps, and for blind denoising, only the noise level estimator needs to be added in the process to obtain the parameters σ and λ of the noise, such as BP-aid. Poisson-Gaussian noise parameter estimation can also be realized through a neural network, such as PGE-Net, and parameters of the input mixed noise image are estimated through a preset network to obtain sigma and lambda.
Optionally, in step S2, the GAT layer is configured to fit the image containing Poisson-Gaussian mixed noise n_(p-g) to an image Y_i containing Gaussian noise n_g; the image Y_i is obtained by the following expression:
[Equation image: GAT transform expression for Y_i]
wherein Y_i is the noisy pixel, λ is the intensity of the Poisson noise, and σ is the standard deviation of the Gaussian noise.
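The expression itself is available only as an equation image. As a hedged illustration, the sketch below uses the standard generalized Anscombe transform for Poisson-Gaussian data with gain 1/λ and Gaussian standard deviation σ; the exact form used in the patent may differ.

import numpy as np

def gat_forward(z, lam, sigma):
    # Variance-stabilizing transform: after it, the remaining noise is approximately
    # Gaussian with roughly unit variance, which is what the CNN layer is trained to remove.
    alpha = 1.0 / lam
    arg = alpha * z + (3.0 / 8.0) * alpha ** 2 + sigma ** 2
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))

Applying gat_forward to the mixed-noise image produced above gives the image Y_i that is fed to the CNN layer.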
Referring to fig. 3, optionally, in step S2, the CNN layer comprises a DnCNN deep learning network: the image Y_i is input into the DnCNN deep learning network for training so as to obtain the fitted Gaussian noise n_g. The network combines batch normalization and residual learning, so that a single trained model can perform Gaussian denoising, and the network outputs a residual image of the noise. Let x denote the clear image and n_g the Gaussian noise after the GAT-layer transform of the previous step; for the transformed noisy image y = x + n_g, the residual network is trained to learn a residual mapping function R(y) ≈ n_g.
During training, the mean squared error is used as the loss function to train the parameters θ of the DnCNN deep learning network; the loss function is expressed as:
[Equation image: mean-squared-error loss function L(θ)]
wherein N denotes the number of pairs of noisy images and clear images converted by the GAT module, i.e. the number of samples in each training batch; the parameters θ are then optimized with an Adam optimizer, and the weights are updated as:
[Equation images: Adam weight-update rules for the convolution kernels W]
wherein W is a convolution kernel, l is the index of the current layer, b is the iteration number, and α is the learning rate;
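The loss and update formulas above are given only as equation images. A minimal PyTorch sketch of one training step is shown below, assuming the usual DnCNN residual objective L(θ) = (1/2N) Σ_i ‖R(y_i; θ) − (y_i − x_i)‖² and a stock Adam optimizer; it is an interpretation of the description, not the patent's exact procedure.

import torch

def train_step(model, optimizer, noisy_batch, clean_batch):
    # The network predicts the residual R(y; theta); its target is y - x = n_g.
    optimizer.zero_grad()
    predicted_residual = model(noisy_batch)
    target_residual = noisy_batch - clean_batch
    n = noisy_batch.size(0)  # N: number of image pairs in the batch
    loss = ((predicted_residual - target_residual) ** 2).sum() / (2 * n)
    loss.backward()
    optimizer.step()  # Adam update of the convolution kernels W
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # alpha is the learning rate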
as one embodiment of the application, a PReLu function is adopted to replace ReLu activation as a nonlinear activation function in the DnCNN deep learning network.
The ReLu activation function filters the negative half shaft part, so that partial information is lost, the output data distribution is not centered at 0 any more, and the data distribution is changed. The PReLu function solves the influence brought by the 0 interval of ReLu, and the expression is as follows:
Figure BDA0003264720310000084
wherein a is a constant, such as a value of 0.02.
Further, the DnCNN deep learning network also includes batch normalization: within each mini-batch, the corresponding input x is normalized so that the output signal is distributed with mean 0 and variance 1 in each dimension. This standard normal distribution is the main reason why the network shows strong removal capability for Gaussian noise. The batch normalization operation is as follows:
[Equation image: batch normalization operation]
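As a concrete illustration of the CNN layer, a minimal PyTorch sketch of a DnCNN-style network with batch normalization and PReLU activations is given below. The depth (17 layers) and width (64 feature maps) follow the original DnCNN paper and are assumptions; the patent's exact architecture is shown only in Fig. 3.

import torch.nn as nn

class DnCNN(nn.Module):
    def __init__(self, channels=1, features=64, depth=17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, kernel_size=3, padding=1),
                  nn.PReLU(init=0.02)]            # PReLU with a small negative slope instead of ReLU
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, kernel_size=3, padding=1, bias=False),
                       nn.BatchNorm2d(features),  # mini-batch normalization towards mean 0, variance 1
                       nn.PReLU(init=0.02)]
        layers.append(nn.Conv2d(features, channels, kernel_size=3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, y):
        # Residual learning: the output is an estimate of the Gaussian noise n_g, not the clear image.
        return self.body(y)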
Optionally, in step S2, the residual layer is used to subtract the fitted Gaussian noise n_g extracted by the CNN layer from the GAT-transformed image Y_i, i.e. X = Y_i − n_g, obtaining a preliminary clear image X.
Optionally, in step S2, the inverse GAT layer is used to apply the inverse GAT transform to each pixel X_i of the preliminary clear image X to obtain each pixel I_i of the final clean image; the transform is:
[Equation image: inverse GAT transform from X_i to I_i]
where λ is the intensity of the Poisson noise and σ is the standard deviation of the Gaussian noise.
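Continuing the sketch above, and with the same caveat that the patent's inverse expression is available only as an equation image, the algebraic inverse of the gat_forward sketch, together with the whole GAT → CNN → residual → inverse-GAT pipeline, can be written as follows.

import numpy as np

def gat_inverse(d, lam, sigma):
    # Algebraic inverse of gat_forward: maps a denoised, variance-stabilized pixel
    # back to the intensity domain (exact unbiased inverses also exist).
    alpha = 1.0 / lam
    return (alpha / 4.0) * d ** 2 - (3.0 / 8.0) * alpha - sigma ** 2 / alpha

def denoise_pipeline(noisy, lam, sigma, gaussian_denoiser):
    # GAT layer -> CNN layer -> residual layer -> inverse GAT layer
    y = gat_forward(noisy, lam, sigma)   # mixed noise becomes approximately Gaussian
    n_g = gaussian_denoiser(y)           # CNN layer: fitted Gaussian noise n_g
    x = y - n_g                          # residual layer: preliminary clear image X
    return gat_inverse(x, lam, sigma)    # final clean image I

Here gaussian_denoiser stands for the trained DnCNN wrapped so that it accepts and returns arrays; the tensor conversion is omitted for brevity.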
In summary, the non-blind Poisson-Gaussian mixed denoising model provided by the invention uses the GAT transform to convert Poisson-Gaussian mixed noise into approximately Gaussian noise, fits and removes the Gaussian noise, and finally recovers the clear image through the inverse GAT transform. Noise interference in the actual imaging process of digital images is fully considered; the Poisson-Gaussian noise model can approximate real noise closely, so the method has wider applicability than a common single-Gaussian noise model.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A Poisson-Gaussian mixture noise removing method is characterized by comprising the following steps:
constructing a data set containing a Poisson-Gaussian mixed noise image, and dividing the data set into a training set and a test set;
establishing a noise image denoising model, wherein the model comprises a GAT layer, a CNN layer, a residual layer and an inverse GAT layer, and inputting the data in the training set into the non-blind Poisson-Gaussian mixture denoising model for training so as to obtain a trained non-blind Poisson-Gaussian mixture denoising model;
and inputting the data of the test set into the non-blind Poisson-Gaussian mixed denoising model to obtain an image denoising result.
2. The method according to claim 1, wherein the GAT layer is configured to fit the image containing Poisson-Gaussian mixed noise n_(p-g) to an image Y_i containing Gaussian noise n_g; the image Y_i is obtained by the following expression:
[Equation image: GAT transform expression for Y_i]
wherein Y_i is the noisy pixel, λ is the intensity of the Poisson noise, and σ is the standard deviation of the Gaussian noise.
3. The method of claim 2, wherein the CNN layer comprises a DnCNN deep learning network; the image Y_i is input into the DnCNN deep learning network for training to obtain a residual mapping function R(y) ≈ n_g; during training, the mean squared error is used as the loss function for the parameters θ of the DnCNN deep learning network, the loss function being expressed as:
[Equation image: mean-squared-error loss function L(θ)]
wherein N denotes the number of pairs of noisy images and clear images converted by the GAT module, i.e. the number of samples in each training batch; the parameters θ are then optimized with an Adam optimizer, and the weights are updated as:
[Equation images: Adam weight-update rules for the convolution kernels W]
where W is the convolution kernel, l is the index of the current layer, b is the number of iterations, and α is the learning rate.
4. The method of claim 3, wherein a PReLU function is used as the nonlinear activation function in the DnCNN deep learning network.
5. The method according to claim 3, wherein the residual layer is used to subtract the fitted Gaussian noise n_g extracted by the CNN layer from the GAT-transformed image Y_i, i.e. X = Y_i − n_g, obtaining a preliminary clear image X.
6. The Poisson-Gaussian mixed noise removing method according to claim 5, wherein the inverse GAT layer is used to apply the inverse GAT transform to each pixel X_i of the preliminary clear image X to obtain each pixel I_i of the final clean image; the transform is:
[Equation image: inverse GAT transform from X_i to I_i]
where λ is the intensity of the Poisson noise and σ is the standard deviation of the Gaussian noise.
7. A Poisson-Gaussian mixture noise removal method according to any one of claims 1-6, wherein in constructing a data set containing a Poisson-Gaussian mixture noise image, the method comprises:
acquiring a clean image, taking the gray values of its pixels, and successively adding Poisson noise of intensity λ and Gaussian noise of standard deviation σ to them to obtain a mixed-noise image;
and preprocessing the mixed-noise image, wherein the preprocessing comprises flipping, translation or rotation, finally obtaining a data set containing Poisson-Gaussian mixed-noise images.
CN202111083142.5A 2021-09-15 2021-09-15 Poisson-Gaussian mixed noise removing method Pending CN113850736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111083142.5A CN113850736A (en) 2021-09-15 2021-09-15 Poisson-Gaussian mixed noise removing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111083142.5A CN113850736A (en) 2021-09-15 2021-09-15 Poisson-Gaussian mixed noise removing method

Publications (1)

Publication Number Publication Date
CN113850736A true CN113850736A (en) 2021-12-28

Family

ID=78974208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111083142.5A Pending CN113850736A (en) 2021-09-15 2021-09-15 Poisson-Gaussian mixed noise removing method

Country Status (1)

Country Link
CN (1) CN113850736A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160253787A1 (en) * 2014-03-25 2016-09-01 Spreadtrum Communications (Shanghai) Co., Ltd. Methods and systems for denoising images
CN107169932A (en) * 2017-03-21 2017-09-15 南昌大学 A kind of image recovery method based on Gauss Poisson mixed noise model suitable for neutron imaging system diagram picture
CN110599409A (en) * 2019-08-01 2019-12-20 西安理工大学 Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Jaeseok Byun et al.: "Learning Blind Pixelwise Affine Image Denoiser With Single Noisy Images", IEEE Signal Processing Letters, vol. 27, 31 December 2020 (2020-12-31), pages 1105 - 1107 *
Kai Zhang et al.: "Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising", IEEE Transactions on Image Processing, vol. 26, no. 7, 31 July 2017 (2017-07-31), pages 3142 - 3145, XP011649039, DOI: 10.1109/TIP.2017.2662206 *
刘婷云: "Research on Mixed-Noise Reduction Algorithms Based on Deep Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology, no. 1, 15 January 2021 (2021-01-15), pages 138 - 2022 *

Similar Documents

Publication Publication Date Title
Guo et al. Underwater image enhancement using a multiscale dense generative adversarial network
CN110599409B (en) Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN109859147B (en) Real image denoising method based on generation of antagonistic network noise modeling
CN111062880A (en) Underwater image real-time enhancement method based on condition generation countermeasure network
CN108416753B (en) Image denoising algorithm based on non-parametric alternating direction multiplier method
CN110223245B (en) Method and system for processing blurred picture in sharpening mode based on deep neural network
Zhao et al. Skip-connected deep convolutional autoencoder for restoration of document images
CN113808180A (en) Method, system and device for registering different-source images
CN117893409A (en) Face super-resolution reconstruction method and system based on illumination condition constraint diffusion model
CN114820389B (en) Face image deblurring method based on unsupervised decoupling representation
CN114821368B (en) Electric power defect detection method based on reinforcement learning and transducer
CN108492264B (en) Single-frame image fast super-resolution method based on sigmoid transformation
CN113850736A (en) Poisson-Gaussian mixed noise removing method
CN115619677A (en) Image defogging method based on improved cycleGAN
Xi et al. Research on image deblurring processing technology based on genetic algorithm
CN110415190B (en) Method, device and processor for removing image compression noise based on deep learning
Wang et al. CNN-based Single Image Dehazing via Attention Module
Parihar et al. UndarkGAN: Low-light Image Enhancement with Cycle-consistent Adversarial Networks
CN113222953B (en) Natural image enhancement method based on depth gamma transformation
Chen et al. GADO-Net: an improved AOD-Net single image dehazing algorithm
Yao et al. A Deep Image Denoising Method at Transmit Electricity Surveillance Environment
CN112116580B (en) Detection method, system and equipment for camera support
CN117745593B (en) Diffusion model-based old photo scratch repairing method and system
CN114596219B (en) Image motion blur removing method based on condition generation countermeasure network
CN115482162B (en) Implicit image blind denoising method based on random rearrangement and label-free model

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination