CN112801906A - Cyclic iterative image denoising method based on cyclic neural network - Google Patents

Cyclic iterative image denoising method based on cyclic neural network

Info

Publication number
CN112801906A
CN112801906A
Authority
CN
China
Prior art keywords
image
iteration
network
denoising
loop
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110146982.5A
Other languages
Chinese (zh)
Other versions
CN112801906B (en)
Inventor
牛玉贞
郑路伟
陈钧荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202110146982.5A priority Critical patent/CN112801906B/en
Publication of CN112801906A publication Critical patent/CN112801906A/en
Application granted
Publication of CN112801906B publication Critical patent/CN112801906B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a cyclic iterative image denoising method based on a recurrent neural network, which comprises the following steps: step S1, obtaining paired original noisy and noise-free images and preprocessing them to obtain paired image blocks of noisy and noise-free images for training; step S2, constructing a cyclic iterative image denoising network based on a recurrent neural network and training it with the paired image blocks of noisy and noise-free images; and step S3, inputting the original noisy image to be processed into the trained denoising network to obtain a denoised image. Through cyclic iteration the invention removes noise more cleanly while retaining more image details, thereby effectively reconstructing the denoised image.

Description

Cyclic iterative image denoising method based on cyclic neural network
Technical Field
The invention relates to the technical field of image and video processing, in particular to a cyclic iteration image denoising method based on a cyclic neural network.
Background
With the rapid development of the Internet and multimedia technology, images have become an indispensable part of human information exchange and information transfer. Images have research value in fields such as communication, social media and medicine, and are of practical significance to the development of information storage and information interaction technologies in modern society. However, degradation of image content inevitably occurs, for example image degradation caused by camera parameter settings, by ambient brightness, or by image compression and decompression. A degraded image seriously affects the visual quality of the image and may even prevent image information from being extracted effectively. Once object outlines become unclear, the foreground and background of the image cannot be segmented effectively, and in severe cases the image content cannot be recognized at all. Therefore, degraded images need to be processed. Image denoising is one of the indispensable techniques for reconstructing degraded images so that the result is closer to the noise-free image content. As a low-level vision task, its result directly influences high-level computer vision tasks such as image segmentation, image classification and object recognition.
The goal of image denoising is to reconstruct the image content hidden in a noisy image so that the denoised image retains more image detail. Image denoising has a long research history, and the methods proposed so far can be roughly divided into traditional methods and deep-learning-based methods. Traditional methods process noisy images with filters such as median and Gaussian filters, but their processing efficiency is low because of limited computing resources and the need to extract image priors manually; they require further processing and optimization before they can be applied in practice. Deep-learning-based methods exploit the automatic feature extraction capability of convolutional neural networks and can use prior information extracted by traditional methods to help the convolutional neural network extract image features. Deep-learning-based methods have therefore been studied extensively in recent years.
In recent years, with the growth of computing power, deep-learning-based methods have developed rapidly; deep-learning-based image denoising methods are continuously being proposed, and their denoising performance surpasses that of traditional methods. However, many current image denoising methods still have problems, for example over-smoothed denoising results and severe loss of texture. If the noisy image is denoised in a single pass, the one-shot result may be over-smoothed and lose image detail, and this cannot be undone. If instead the noisy image is denoised iteratively, part of the noise can be removed in each pass, so that the iterated result removes more image noise and reconstructs more image texture. Furthermore, if the noise distribution of the noisy image is estimated reasonably before each denoising operation and the estimated noise distribution information is fed into the denoising network together with the image, the noise amplitude and noise locations can be estimated and localized more accurately, so that the noisy image can be denoised better and a better-performing denoising result can be obtained.
Disclosure of Invention
In view of this, the present invention provides a recurrent iterative image denoising method based on a recurrent neural network, which removes noise more cleanly and retains more image details in a recurrent iterative manner, thereby effectively reconstructing a denoised image.
In order to achieve the purpose, the invention adopts the following technical scheme:
a cyclic iteration image denoising method based on a cyclic neural network comprises the following steps:
step S1, acquiring and preprocessing paired original noise images and noiseless images to obtain paired image blocks of the noise images and the noiseless images for training;
step S2, constructing a cyclic iterative image denoising network based on a cyclic neural network, and training by using paired image blocks of a noise image and a noiseless image;
and step S3, inputting the original noise image to be detected into the trained denoising network to obtain a denoising image.
The step S1 specifically includes:
step S11: the original noise image and the noiseless image which are paired are cut into blocks at the same position to obtain a plurality of groups of image blocks of the noise image and the noiseless image which are paired;
step S12: applying the same random flipping and rotation to each group of paired image blocks for data augmentation, so as to obtain the paired image blocks of noisy and noise-free images used for training.
Further, the recurrent iterative image denoising network based on the recurrent neural network comprises a noise estimation sub-network and a denoising sub-network.
Furthermore, the noise estimation sub-network is composed of an input convolutional layer, m series-connected ResNet residual blocks, a GRU module and an output convolutional layer. The input convolutional layer uses a 3 × 3 convolution kernel with a stride of 1, and the output convolutional layer uses a 1 × 1 convolution kernel with a stride of 1; the input of the GRU module is the feature z obtained by channel concatenation of the output of the GRU module from the previous iteration and the output of the ResNet residual blocks at the current iteration number i.
Further, the specific calculation formula of the GRU module is as follows:
a=conv(z),
b=conv(z),
c=conv_(z),
d=conv(z),
[the two GRU state-update equations that combine a, b, c and d are reproduced only as images in the original patent]
wherein conv(·) comprises a convolution layer with a kernel size of 3 and a stride of 1 followed by a sigmoid activation function, and conv_(·) comprises a convolution layer with a kernel size of 3 and a stride of 1 followed by a tanh activation function; the output of the first update equation is the input of the output convolutional layer, and the output of the second update equation forms part of the GRU module input for the (i + 1)-th iteration.
Further, the denoising sub-network comprises two loop operations, each loop operation is composed of the same encoder, residual module and decoder:
the encoder consists of an input convolutional layer and two down-sampling layers; the input convolutional layer comprises a convolutional layer with a convolutional kernel of 5 multiplied by 5 and a step size of 1 and a GRU module, and the downsampling layer comprises a convolutional layer with a convolutional kernel of 5 multiplied by 5 and a step size of 2, an activation function and a GRU module; the input of the GRU module is the characteristics obtained by channel splicing of the output of the corresponding characteristic dimension of the previous layer and the output of the GRU module of the corresponding characteristic dimension of the previous iteration, and the calculation mode is the same as that of the GRU module of the noise estimation subnetwork; the encoder part divides the characteristics of the network into 3 different scales, which are respectively F from large scale to small scale1、F2And F3
The residual module is composed of n series ResNet residual blocks, and the input of the residual module is the characteristic F obtained by the encoder part3The output is characterized by Fc
The decoder consists of two upsampling layers and an output convolutional layer, wherein the operation of each upsampling layer comprises one nearest neighbor interpolation operation, a convolutional layer with the convolutional kernel size of 3 multiplied by 3 and the step length of 1 and a ReLU activation function, and the output convolutional layer is a convolutional layer with the convolutional kernel size of 1 multiplied by 1 and the step length of 1; the input to the first up-sampling layer is F3And FcThe output characteristic is f3(ii) a The input of the second up-sampling layer is F2And f3The output characteristic is f2(ii) a The input of the output convolutional layer is F1And f2And outputting the characteristics obtained by the channel splicing operation as a de-noised image.
Further, the denoising network comprises t loop iterations. In the first loop iteration, part of the input of each GRU module is a feature whose values in the corresponding feature dimension are 0; from the second loop iteration onward, part of the input of each GRU module is the GRU feature of the previous loop iteration in the corresponding feature dimension. The input of the noise estimation sub-network in the first loop iteration is the noisy image Nori, and its output is the noise estimation image E1 of the first loop iteration. The input of the denoising sub-network in the first loop iteration is the two-channel image obtained by channel concatenation of the noise estimation image E1 of the first loop iteration and the noisy image Nori, and its output is the denoised image De1 of the first loop iteration. From the i-th (i > 1) loop iteration, the input of the noise estimation sub-network in the i-th loop iteration is the denoised image Dei-1 of the (i-1)-th loop iteration, and its output is the noise estimation image Ei of the i-th loop iteration. From the second loop iteration onward, the input of the denoising sub-network in the i-th loop iteration is the two-channel image obtained by channel concatenation of the noise estimation image Ei of the i-th loop iteration and the denoised image Dei-1 of the (i-1)-th loop iteration, and its output is the denoised image Dei of the i-th stage. The final denoised image is the output Det of the t-th loop iteration.
Further, the training of the recurrent iterative image denoising network model based on the recurrent neural network specifically comprises:
(1): randomly dividing image blocks of a pair of a noise image and a noiseless image into a plurality of batches, wherein each batch comprises N image blocks;
(2): inputting the noise images of the corresponding N image blocks in the batch into the denoising network in the step S2 by taking the batch as a unit to obtain corresponding N denoising images;
(3): calculating the gradient of each parameter in the network by using a back propagation method according to a target loss function of a cyclic iterative image denoising network based on a cyclic neural network, and updating the parameters of the network by using a random gradient descent method;
(4): and (3) repeating the steps (2) to (3) by taking batches as units until the target loss function value of the recurrent iterative image denoising network based on the recurrent neural network tends to be stable, storing the network parameters and finishing the training process of the network.
Further, the target loss function of the recurrent iterative image denoising network based on the recurrent neural network is calculated as follows:
Loss = λ1Lstg1 + λ2Lstg2 + … + λkLstgk
In the target loss function, λ1, λ2, …, λk are the loss weights of the respective loop iterations, and Lstg1 is the loss function of the first loop iteration, which is calculated as follows:
[the formula for Lstg1 is reproduced only as an image in the original patent; it compares the output of the noise estimation sub-network with the reference Ngt1 and the output of the denoising sub-network with the reference Igt using the L1 and L2 distances defined below]
wherein Ngt1 represents the reference image of the noise estimation sub-network in the first loop iteration, i.e. the difference image between the original noisy image and the noise-free image; f(·) represents the noise estimation sub-network; Nori represents the input image of the first loop iteration; wf represents the model parameters of the noise estimation sub-network; Igt represents the reference image of the noisy image, i.e. the noise-free image; g(·) represents the denoising sub-network; concat(·) represents the channel concatenation operation; E1 represents the output image of the noise estimation sub-network in the first loop iteration; wg represents the model parameters of the denoising sub-network; ‖·‖1 represents the L1 distance and ‖·‖2 represents the L2 distance;
In the target loss function, Lstg2, …, Lstgk are the loss functions of the second to k-th loop iterations. From the i-th (i > 1) loop iteration, the loss function Lstgi of the i-th loop iteration is calculated as follows:
[the formula for Lstgi is reproduced only as an image in the original patent; it compares the output of the noise estimation sub-network with the reference Ngti and the output of the denoising sub-network with the reference Igt using the L1 and L2 distances]
wherein Ngti represents the reference image of the noise estimation sub-network in the i-th loop iteration, i.e. the difference image between the denoised image output by the (i-1)-th loop iteration and the noise-free image; f(·) represents the noise estimation sub-network; Dei-1 represents the denoised image output by the (i-1)-th loop iteration, which is also the input image of the i-th loop iteration; wf represents the model parameters of the noise estimation sub-network; Igt represents the reference image of the noisy image, i.e. the noise-free image; g(·) represents the denoising sub-network; concat(·) represents the channel concatenation operation; Ei represents the output image of the noise estimation sub-network in the i-th loop iteration; wg represents the model parameters of the denoising sub-network; ‖·‖1 represents the L1 distance and ‖·‖2 represents the L2 distance.
Compared with the prior art, the invention has the following beneficial effects:
the invention uses the multi-scale encoder and the residual error module, can effectively extract the characteristics of the noise image, and reconstructs the noise image through the decoder. And removing noise more cleanly by a loop iteration mode, and simultaneously reserving more image details, thereby effectively reconstructing a denoised image.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
fig. 2 is a schematic diagram illustrating the overall structure of the network training in step S2 according to an embodiment of the present invention (k = 2);
FIG. 3 is a schematic diagram of a network structure of a noise estimation sub-network in a first iteration according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a network structure of a denoising subnetwork in the first iteration according to the embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides a method for denoising a cyclic iteration image based on a cyclic neural network, comprising the following steps:
step S1, acquiring and preprocessing paired original noise images and noiseless images to obtain paired image blocks of the noise images and the noiseless images for training;
step S2, constructing a cyclic iterative image denoising network based on a cyclic neural network, and training by using paired image blocks of a noise image and a noiseless image;
and step S3, inputting the original noise image to be detected into the trained denoising network to obtain a denoising image.
In this embodiment, step S1 specifically includes:
step S11: the original noise image and the noiseless image which are paired are cut into blocks at the same position to obtain a plurality of groups of image blocks of the noise image and the noiseless image which are paired;
step S12: applying the same random flipping and rotation to each group of paired image blocks for data augmentation, so as to obtain the paired image blocks of noisy and noise-free images used for training.
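As an illustration of steps S11 to S12, a minimal preprocessing sketch in Python/NumPy is given below; the patch size, the number of patches per image and the NumPy array representation are assumptions chosen for illustration and are not fixed by this embodiment.

import numpy as np

def make_training_pairs(noisy, clean, patch=128, n_patches=8, rng=None):
    """Cut paired patches at identical positions (S11), then apply the same
    random flip and rotation to both patches for data augmentation (S12)."""
    rng = rng or np.random.default_rng()
    h, w = noisy.shape[:2]
    pairs = []
    for _ in range(n_patches):
        top = int(rng.integers(0, h - patch + 1))
        left = int(rng.integers(0, w - patch + 1))
        n_blk = noisy[top:top + patch, left:left + patch]
        c_blk = clean[top:top + patch, left:left + patch]
        if rng.random() < 0.5:                  # identical random horizontal flip
            n_blk, c_blk = n_blk[:, ::-1], c_blk[:, ::-1]
        k = int(rng.integers(0, 4))             # identical random 90-degree rotation
        n_blk, c_blk = np.rot90(n_blk, k), np.rot90(c_blk, k)
        pairs.append((np.ascontiguousarray(n_blk), np.ascontiguousarray(c_blk)))
    return pairs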
In the embodiment, the recurrent iterative image denoising network based on the recurrent neural network comprises a noise estimation sub-network and a denoising sub-network.
Preferably, the noise estimation sub-network consists of one input convolutional layer, m series-connected ResNet residual blocks, one GRU module and one output convolutional layer. The input convolutional layer uses a 3 × 3 convolution kernel with a stride of 1, and the output convolutional layer uses a 1 × 1 convolution kernel with a stride of 1; the input of the GRU module is the feature z obtained by channel concatenation of the output of the GRU module from the previous iteration and the output of the ResNet residual blocks at the current iteration number i.
The concrete calculation formula of the GRU module is as follows:
a=conv(z),
b=conv(z),
c=conv_(z),
d=conv(z),
[the two GRU state-update equations that combine a, b, c and d are reproduced only as images in the original patent]
wherein conv(·) comprises a convolution layer with a kernel size of 3 and a stride of 1 followed by a sigmoid activation function, and conv_(·) comprises a convolution layer with a kernel size of 3 and a stride of 1 followed by a tanh activation function; the output of the first update equation is the input of the output convolutional layer, and the output of the second update equation forms part of the GRU module input for the (i + 1)-th iteration.
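For illustration, a PyTorch sketch of such a GRU module follows. The four convolution branches and their activation functions follow the text above, but because the two state-update equations appear only as images, the way a, b, c and d are combined below is a conventional GRU-style assumption rather than the exact formula of this embodiment; the channel width is also assumed.

import torch
import torch.nn as nn

class ConvGRU(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # z is the channel concatenation of the previous GRU output and the
        # current features, hence 2 * channels input channels.
        self.conv_a = nn.Conv2d(2 * channels, channels, 3, stride=1, padding=1)
        self.conv_b = nn.Conv2d(2 * channels, channels, 3, stride=1, padding=1)
        self.conv_c = nn.Conv2d(2 * channels, channels, 3, stride=1, padding=1)
        self.conv_d = nn.Conv2d(2 * channels, channels, 3, stride=1, padding=1)

    def forward(self, x, h_prev):
        z = torch.cat([h_prev, x], dim=1)
        a = torch.sigmoid(self.conv_a(z))   # conv(z) with sigmoid
        b = torch.sigmoid(self.conv_b(z))   # conv(z) with sigmoid
        c = torch.tanh(self.conv_c(z))      # conv_(z) with tanh
        d = torch.sigmoid(self.conv_d(z))   # conv(z) with sigmoid
        # Assumed combination (the patent's two update equations are images):
        h_new = (1.0 - a) * h_prev + a * c  # state carried to iteration i + 1
        h_hat = b * h_new + d * x           # feature fed to the output conv layer
        return h_hat, h_new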
Preferably, the denoising sub-network comprises two loop operations, each loop operation consisting of the same encoder, residual module and decoder:
the encoder consists of an input convolutional layer and two down-sampling layers; the input convolutional layer comprises a convolutional layer with a convolutional kernel of 5 multiplied by 5 and a step size of 1 and a GRU module, and the downsampling layer comprises a convolutional layer with a convolutional kernel of 5 multiplied by 5 and a step size of 2, an activation function and a GRU module; the input of the GRU module is the characteristics obtained by channel splicing of the output of the corresponding characteristic dimension of the previous layer and the output of the GRU module of the corresponding characteristic dimension of the previous iteration, and the calculation mode is the same as that of the GRU module of the noise estimation subnetwork; the encoder part divides the characteristics of the network into 3 different scales, which are respectively F from large scale to small scale1、F2And F3
The residual module is composed of n series ResNet residual blocks, and the input of the residual module is the characteristic F obtained by the encoder part3The output is characterized by Fc
The decoder consists of two upsampling layers and an output convolutional layer, wherein the operation of each upsampling layer comprises one nearest neighbor interpolation operation, a convolutional layer with the convolutional kernel size of 3 multiplied by 3 and the step length of 1 and a ReLU activation function, and the output convolutional layer is a convolutional layer with the convolutional kernel size of 1 multiplied by 1 and the step length of 1; the input to the first up-sampling layer is F3And FcThe output characteristic is f3(ii) a The input of the second up-sampling layer is F2And f3The output characteristic is f2(ii) a The input of the output convolutional layer is F1And f2And outputting the characteristics obtained by the channel splicing operation as a de-noised image.
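A simplified PyTorch sketch of one loop operation of this encoder-residual-decoder structure is given below for orientation. The channel widths, the number of residual blocks, the assumption of single-channel images (two input channels after concatenation with the noise estimate) and the omission of the per-layer GRU modules are simplifications; the skip connections F1, F2, F3 and the nearest-neighbour up-sampling follow the description above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class DenoiseSubnet(nn.Module):
    def __init__(self, in_ch=2, ch=32, n_res=4):
        super().__init__()
        self.inc = nn.Conv2d(in_ch, ch, 5, stride=1, padding=2)         # -> F1
        self.down1 = nn.Conv2d(ch, 2 * ch, 5, stride=2, padding=2)      # -> F2
        self.down2 = nn.Conv2d(2 * ch, 4 * ch, 5, stride=2, padding=2)  # -> F3
        self.res = nn.Sequential(*[ResBlock(4 * ch) for _ in range(n_res)])
        self.up1 = nn.Conv2d(8 * ch, 2 * ch, 3, padding=1)   # takes cat(F3, Fc)
        self.up2 = nn.Conv2d(4 * ch, ch, 3, padding=1)       # takes cat(F2, f3)
        self.outc = nn.Conv2d(2 * ch, 1, 1)                  # takes cat(F1, f2)

    def forward(self, x):
        f1 = self.inc(x)                        # F1, full scale
        f2 = F.relu(self.down1(f1))             # F2, 1/2 scale
        f3 = F.relu(self.down2(f2))             # F3, 1/4 scale
        fc = self.res(f3)                       # Fc
        u = F.interpolate(torch.cat([f3, fc], 1), scale_factor=2, mode="nearest")
        u = F.relu(self.up1(u))                 # decoder feature f3
        u = F.interpolate(torch.cat([f2, u], 1), scale_factor=2, mode="nearest")
        u = F.relu(self.up2(u))                 # decoder feature f2
        return self.outc(torch.cat([f1, u], 1)) # denoised image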
The denoising network comprises t loop iterations. In the first loop iteration, part of the input of each GRU module is a feature whose values in the corresponding feature dimension are 0; from the second loop iteration onward, part of the input of each GRU module is the GRU feature of the previous loop iteration in the corresponding feature dimension. The input of the noise estimation sub-network in the first loop iteration is the noisy image Nori, and its output is the noise estimation image E1 of the first loop iteration. The input of the denoising sub-network in the first loop iteration is the two-channel image obtained by channel concatenation of the noise estimation image E1 of the first loop iteration and the noisy image Nori, and its output is the denoised image De1 of the first loop iteration. From the i-th (i > 1) loop iteration, the input of the noise estimation sub-network in the i-th loop iteration is the denoised image Dei-1 of the (i-1)-th loop iteration, and its output is the noise estimation image Ei of the i-th loop iteration. From the second loop iteration onward, the input of the denoising sub-network in the i-th loop iteration is the two-channel image obtained by channel concatenation of the noise estimation image Ei of the i-th loop iteration and the denoised image Dei-1 of the (i-1)-th loop iteration, and its output is the denoised image Dei of the i-th stage. The final denoised image is the output Det of the t-th loop iteration.
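The iteration scheme just described can be summarized by the following sketch, in which noise_net and denoise_net are hypothetical callables standing for the two sub-networks; the carrying of GRU hidden states across iterations is omitted for brevity.

import torch

def cyclic_denoise(noise_net, denoise_net, n_ori, t=2):
    """Run t loop iterations; returns the list [De_1, ..., De_t]."""
    outputs = []
    prev = n_ori                               # iteration 1 sees the noisy image N_ori
    for _ in range(t):
        e_i = noise_net(prev)                  # noise estimation image E_i
        de_i = denoise_net(torch.cat([e_i, prev], dim=1))  # denoised image De_i
        outputs.append(de_i)
        prev = de_i                            # De_i is the input of iteration i + 1
    return outputs                             # outputs[-1] is the final De_t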
Preferably, the training of the loop iteration image denoising network model based on the loop neural network specifically comprises:
(1): randomly dividing image blocks of a pair of a noise image and a noiseless image into a plurality of batches, wherein each batch comprises N image blocks;
(2): inputting the noise images of the corresponding N image blocks in the batch into the denoising network in the step S2 by taking the batch as a unit to obtain corresponding N denoising images;
(3): calculating the gradient of each parameter in the network by using a back propagation method according to a target loss function of a cyclic iterative image denoising network based on a cyclic neural network, and updating the parameters of the network by using a random gradient descent method;
(4): and (3) repeating the steps (2) to (3) by taking batches as units until the target loss function value of the recurrent iterative image denoising network based on the recurrent neural network tends to be stable, storing the network parameters and finishing the training process of the network.
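A minimal training loop corresponding to steps (1) to (4) might look as follows; the batch size, learning rate, number of epochs, the assumption that the model returns the per-iteration noise estimates and denoised images, and the total_loss helper (sketched after the loss description below) are illustrative choices, not requirements of this embodiment.

import torch
from torch.utils.data import DataLoader

def train(model, dataset, total_loss, epochs=100, batch_size=16, lr=1e-4):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)  # step (1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)    # stochastic gradient descent
    for _ in range(epochs):
        for noisy, clean in loader:             # one batch of N image-block pairs
            opt.zero_grad()
            estimates, denoised = model(noisy)  # per-iteration outputs, step (2)
            loss = total_loss(estimates, denoised, noisy, clean)  # objective, step (3)
            loss.backward()                     # back-propagation of gradients
            opt.step()                          # parameter update
    return model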
Preferably, in this embodiment, the target loss function of the recurrent iterative image denoising network based on the recurrent neural network is calculated as follows:
Loss = λ1Lstg1 + λ2Lstg2 + … + λkLstgk
In the target loss function, λ1, λ2, …, λk are the loss weights of the respective loop iterations, and Lstg1 is the loss function of the first loop iteration, which is calculated as follows:
[the formula for Lstg1 is reproduced only as an image in the original patent; it compares the output of the noise estimation sub-network with the reference Ngt1 and the output of the denoising sub-network with the reference Igt using the L1 and L2 distances defined below]
wherein Ngt1 represents the reference image of the noise estimation sub-network in the first loop iteration, i.e. the difference image between the original noisy image and the noise-free image; f(·) represents the noise estimation sub-network; Nori represents the input image of the first loop iteration; wf represents the model parameters of the noise estimation sub-network; Igt represents the reference image of the noisy image, i.e. the noise-free image; g(·) represents the denoising sub-network; concat(·) represents the channel concatenation operation; E1 represents the output image of the noise estimation sub-network in the first loop iteration; wg represents the model parameters of the denoising sub-network; ‖·‖1 represents the L1 distance and ‖·‖2 represents the L2 distance;
In the target loss function, Lstg2, …, Lstgk are the loss functions of the second to k-th loop iterations. From the i-th (i > 1) loop iteration, the loss function Lstgi of the i-th loop iteration is calculated as follows:
[the formula for Lstgi is reproduced only as an image in the original patent; it compares the output of the noise estimation sub-network with the reference Ngti and the output of the denoising sub-network with the reference Igt using the L1 and L2 distances]
wherein Ngti represents the reference image of the noise estimation sub-network in the i-th loop iteration, i.e. the difference image between the denoised image output by the (i-1)-th loop iteration and the noise-free image; f(·) represents the noise estimation sub-network; Dei-1 represents the denoised image output by the (i-1)-th loop iteration, which is also the input image of the i-th loop iteration; wf represents the model parameters of the noise estimation sub-network; Igt represents the reference image of the noisy image, i.e. the noise-free image; g(·) represents the denoising sub-network; concat(·) represents the channel concatenation operation; Ei represents the output image of the noise estimation sub-network in the i-th loop iteration; wg represents the model parameters of the denoising sub-network; ‖·‖1 represents the L1 distance and ‖·‖2 represents the L2 distance.
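Purely as an illustration, the weighted multi-iteration objective can be sketched as follows. Since the per-iteration terms Lstgi appear only as images in the patent text, splitting each term into an L2 loss on the noise estimate against Ngti and an L1 loss on the denoised output against Igt is an assumption that merely matches the L1 and L2 distances named above.

import torch.nn.functional as F

def total_loss(noise_estimates, denoised, noisy, clean, lambdas=(1.0, 1.0)):
    """noise_estimates[i] = E_{i+1}, denoised[i] = De_{i+1}; lambdas = (λ_1, ..., λ_k)."""
    loss = 0.0
    prev = noisy                                   # input image of the current iteration
    for lam, e_i, de_i in zip(lambdas, noise_estimates, denoised):
        n_gt = prev - clean                        # reference noise image N_gt_i
        stage = F.mse_loss(e_i, n_gt) + F.l1_loss(de_i, clean)  # assumed L2 + L1 split
        loss = loss + lam * stage
        prev = de_i                                # De_i becomes the next input
    return loss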
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (9)

1. A cyclic iteration image denoising method based on a cyclic neural network is characterized by comprising the following steps:
step S1: acquiring and preprocessing a pair of original noise images and noiseless images to obtain a pair of image blocks of the noise images and the noiseless images for training;
step S2: constructing a cyclic iterative image denoising network based on a cyclic neural network, and training by using paired image blocks of a noise image and a noiseless image;
step S3: and inputting the original noise image to be detected into the trained denoising network to obtain a denoising image.
2. The method for denoising cyclic iterative images based on the recurrent neural network as claimed in claim 1, wherein the step S1 specifically comprises:
step S11: the original noise image and the noiseless image which are paired are cut into blocks at the same position to obtain a plurality of groups of image blocks of the noise image and the noiseless image which are paired;
step S12: applying the same random flipping and rotation to each group of paired image blocks for data augmentation, so as to obtain the paired image blocks of noisy and noise-free images used for training.
3. The method of claim 1, wherein the recurrent neural network-based iterative image denoising network comprises a noise estimation sub-network and a denoising sub-network.
4. The method of claim 3, wherein the noise estimation sub-network comprises an input convolutional layer, m series-connected ResNet residual blocks, a GRU module and an output convolutional layer; the input convolutional layer uses a 3 × 3 convolution kernel with a stride of 1; the output convolutional layer uses a 1 × 1 convolution kernel with a stride of 1; and the input of the GRU module is the feature z obtained by channel concatenation of the output of the GRU module from the previous iteration and the output of the ResNet residual blocks at the current iteration number i.
5. The method of denoising cyclic iterative images based on cyclic neural network of claim 4, wherein the specific calculation formula of the GRU module is as follows:
a=conv(z),
b=conv(z),
c=conv_(z),
d=conv(z),
[the two GRU state-update equations that combine a, b, c and d are reproduced only as images in the original patent]
wherein conv(·) comprises a convolution layer with a kernel size of 3 and a stride of 1 followed by a sigmoid activation function, and conv_(·) comprises a convolution layer with a kernel size of 3 and a stride of 1 followed by a tanh activation function; the output of the first update equation is the input of the output convolutional layer, and the output of the second update equation forms part of the GRU module input for the (i + 1)-th iteration.
6. The recurrent iterative image denoising method of claim 3, wherein the denoising sub-network comprises two recurrent operations, each recurrent operation consisting of the same encoder, residual module and decoder:
the encoder consists of an input convolutional layer and two down-sampling layers; the input convolutional layer comprises a convolutional layer with a convolutional kernel of 5 multiplied by 5 and a step size of 1 and a GRU module, and the downsampling layer comprises a convolutional layer with a convolutional kernel of 5 multiplied by 5 and a step size of 2, an activation function and a GRU module; the input of the GRU module is the characteristics obtained by channel splicing of the output of the corresponding characteristic dimension of the previous layer and the output of the GRU module of the corresponding characteristic dimension of the previous iteration, and the calculation mode is the same as that of the GRU module of the noise estimation subnetwork; the encoder part divides the characteristics of the network into 3 different scales, which are respectively F from large scale to small scale1、F2And F3
The residual module is composed of n series ResNet residual blocks, and the input of the residual module is the characteristic F obtained by the encoder part3The output is characterized by Fc
The decoder consists of two up-sampling layers and one output convolution layer, the operation of each up-sampling layer includes one nearest neighbor interpolation operation and one convolution kernelA convolution layer with a small size of 3 multiplied by 3 and a step length of 1 and a ReLU activation function, wherein the output convolution layer is a convolution layer with a convolution kernel size of 1 multiplied by 1 and a step length of 1; the input to the first up-sampling layer is F3And FcThe output characteristic is f3(ii) a The input of the second up-sampling layer is F2And f3The output characteristic is f2(ii) a The input of the output convolutional layer is F1And f2And outputting the characteristics obtained by the channel splicing operation as a de-noised image.
7. The recurrent neural network-based iterative image denoising method of claim 6, wherein the denoising network comprises t loop iterations; in the first loop iteration, part of the input of each GRU module is a feature whose values in the corresponding feature dimension are 0, and from the second loop iteration onward, part of the input of each GRU module is the GRU feature of the previous loop iteration in the corresponding feature dimension; the input of the noise estimation sub-network in the first loop iteration is the noisy image Nori, and its output is the noise estimation image E1 of the first loop iteration; the input of the denoising sub-network in the first loop iteration is the two-channel image obtained by channel concatenation of the noise estimation image E1 of the first loop iteration and the noisy image Nori, and its output is the denoised image De1 of the first loop iteration; from the i-th (i > 1) loop iteration, the input of the noise estimation sub-network in the i-th loop iteration is the denoised image Dei-1 of the (i-1)-th loop iteration, and its output is the noise estimation image Ei of the i-th loop iteration; from the second loop iteration onward, the input of the denoising sub-network in the i-th loop iteration is the two-channel image obtained by channel concatenation of the noise estimation image Ei of the i-th loop iteration and the denoised image Dei-1 of the (i-1)-th loop iteration, and its output is the denoised image Dei of the i-th stage; and the final denoised image is the output Det of the t-th loop iteration.
8. The method for denoising cyclic iterative images based on the cyclic neural network as claimed in claim 1, wherein the training of the cyclic iterative image denoising network model based on the cyclic neural network is specifically:
(1): randomly dividing image blocks of a pair of a noise image and a noiseless image into a plurality of batches, wherein each batch comprises N image blocks;
(2): inputting the noise images of the corresponding N image blocks in the batch into the denoising network in the step S2 by taking the batch as a unit to obtain corresponding N denoising images;
(3): calculating the gradient of each parameter in the network by using a back propagation method according to a target loss function of a cyclic iterative image denoising network based on a cyclic neural network, and updating the parameters of the network by using a random gradient descent method;
(4): and (3) repeating the steps (2) to (3) by taking batches as units until the target loss function value of the recurrent iterative image denoising network based on the recurrent neural network tends to be stable, storing the network parameters and finishing the training process of the network.
9. The method of claim 8, wherein the objective loss function of the recurrent neural network-based image denoising network is calculated as follows:
Loss = λ1Lstg1 + λ2Lstg2 + … + λkLstgk
In the target loss function, λ1, λ2, …, λk are the loss weights of the respective loop iterations, and Lstg1 is the loss function of the first loop iteration, which is calculated as follows:
[the formula for Lstg1 is reproduced only as an image in the original patent; it compares the output of the noise estimation sub-network with the reference Ngt1 and the output of the denoising sub-network with the reference Igt using the L1 and L2 distances defined below]
wherein Ngt1 represents the reference image of the noise estimation sub-network in the first loop iteration, i.e. the difference image between the original noisy image and the noise-free image; f(·) represents the noise estimation sub-network; Nori represents the input image of the first loop iteration; wf represents the model parameters of the noise estimation sub-network; Igt represents the reference image of the noisy image, i.e. the noise-free image; g(·) represents the denoising sub-network; concat(·) represents the channel concatenation operation; E1 represents the output image of the noise estimation sub-network in the first loop iteration; wg represents the model parameters of the denoising sub-network; ‖·‖1 represents the L1 distance and ‖·‖2 represents the L2 distance;
In the target loss function, Lstg2, …, Lstgk are the loss functions of the second to k-th loop iterations. From the i-th (i > 1) loop iteration, the loss function Lstgi of the i-th loop iteration is calculated as follows:
[the formula for Lstgi is reproduced only as an image in the original patent; it compares the output of the noise estimation sub-network with the reference Ngti and the output of the denoising sub-network with the reference Igt using the L1 and L2 distances]
wherein Ngti represents the reference image of the noise estimation sub-network in the i-th loop iteration, i.e. the difference image between the denoised image output by the (i-1)-th loop iteration and the noise-free image; f(·) represents the noise estimation sub-network; Dei-1 represents the denoised image output by the (i-1)-th loop iteration, which is also the input image of the i-th loop iteration; wf represents the model parameters of the noise estimation sub-network; Igt represents the reference image of the noisy image, i.e. the noise-free image; g(·) represents the denoising sub-network; concat(·) represents the channel concatenation operation; Ei represents the output image of the noise estimation sub-network in the i-th loop iteration; wg represents the model parameters of the denoising sub-network; ‖·‖1 represents the L1 distance and ‖·‖2 represents the L2 distance.
CN202110146982.5A 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network Active CN112801906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110146982.5A CN112801906B (en) 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110146982.5A CN112801906B (en) 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network

Publications (2)

Publication Number Publication Date
CN112801906A true CN112801906A (en) 2021-05-14
CN112801906B CN112801906B (en) 2023-02-21

Family

ID=75813924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110146982.5A Active CN112801906B (en) 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network

Country Status (1)

Country Link
CN (1) CN112801906B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658118A (en) * 2021-08-02 2021-11-16 维沃移动通信有限公司 Image noise degree estimation method and device, electronic equipment and storage medium
CN114119428A (en) * 2022-01-29 2022-03-01 深圳比特微电子科技有限公司 Image deblurring method and device
CN114972981A (en) * 2022-04-19 2022-08-30 国网江苏省电力有限公司电力科学研究院 Power grid power transmission environment observation image denoising method, terminal and storage medium
CN115393227A (en) * 2022-09-23 2022-11-25 南京大学 Self-adaptive enhancing method and system for low-light-level full-color video image based on deep learning
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160094843A1 (en) * 2014-09-25 2016-03-31 Google Inc. Frequency-domain denoising
US20180205965A1 (en) * 2015-09-29 2018-07-19 Huawei Technologies Co., Ltd, Image prediction method and apparatus
CN111145123A (en) * 2019-12-27 2020-05-12 福州大学 Image denoising method based on U-Net fusion detail retention
CN111192211A (en) * 2019-12-24 2020-05-22 浙江大学 Multi-noise type blind denoising method based on single deep neural network
CN111754438A (en) * 2020-06-24 2020-10-09 安徽理工大学 Underwater image restoration model based on multi-branch gating fusion and restoration method thereof
CN111861925A (en) * 2020-07-24 2020-10-30 南京信息工程大学滨江学院 Image rain removing method based on attention mechanism and gate control circulation unit

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160094843A1 (en) * 2014-09-25 2016-03-31 Google Inc. Frequency-domain denoising
US20180205965A1 (en) * 2015-09-29 2018-07-19 Huawei Technologies Co., Ltd, Image prediction method and apparatus
CN111192211A (en) * 2019-12-24 2020-05-22 浙江大学 Multi-noise type blind denoising method based on single deep neural network
CN111145123A (en) * 2019-12-27 2020-05-12 福州大学 Image denoising method based on U-Net fusion detail retention
CN111754438A (en) * 2020-06-24 2020-10-09 安徽理工大学 Underwater image restoration model based on multi-branch gating fusion and restoration method thereof
CN111861925A (en) * 2020-07-24 2020-10-30 南京信息工程大学滨江学院 Image rain removing method based on attention mechanism and gate control circulation unit

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
S. Guo et al.: "Toward Convolutional Blind Denoising of Real Photographs", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
X. Tao et al.: "Scale-Recurrent Network for Deep Image Deblurring", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
周泽南: "Research on SAR Image Processing Technology Based on Recurrent Neural Networks", China Master's Theses Full-text Database (Information Science and Technology, Series I) *
张文乐: "Research on Traffic Sign Image Recognition Based on Deep Learning", China Master's Theses Full-text Database (Engineering Science and Technology, Series II) *
贾瑞明 et al.: "Multi-scale encoder-decoder deep convolutional network for blind deblurring", Journal of Computer Applications *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN113658118A (en) * 2021-08-02 2021-11-16 维沃移动通信有限公司 Image noise degree estimation method and device, electronic equipment and storage medium
CN114119428A (en) * 2022-01-29 2022-03-01 深圳比特微电子科技有限公司 Image deblurring method and device
CN114119428B (en) * 2022-01-29 2022-09-23 深圳比特微电子科技有限公司 Image deblurring method and device
CN114972981A (en) * 2022-04-19 2022-08-30 国网江苏省电力有限公司电力科学研究院 Power grid power transmission environment observation image denoising method, terminal and storage medium
CN114972981B (en) * 2022-04-19 2024-07-05 国网江苏省电力有限公司电力科学研究院 Power grid power transmission environment observation image denoising method, terminal and storage medium
CN115393227A (en) * 2022-09-23 2022-11-25 南京大学 Self-adaptive enhancing method and system for low-light-level full-color video image based on deep learning

Also Published As

Publication number Publication date
CN112801906B (en) 2023-02-21

Similar Documents

Publication Publication Date Title
CN112801906B (en) Cyclic iterative image denoising method based on cyclic neural network
CN108596841B (en) Method for realizing image super-resolution and deblurring in parallel
JP2022548712A (en) Image Haze Removal Method by Adversarial Generation Network Fusing Feature Pyramids
CN113658051A (en) Image defogging method and system based on cyclic generation countermeasure network
CN113450288B (en) Single image rain removing method and system based on deep convolutional neural network and storage medium
CN109543548A (en) A kind of face identification method, device and storage medium
CN110148088B (en) Image processing method, image rain removing method, device, terminal and medium
CN110189260B (en) Image noise reduction method based on multi-scale parallel gated neural network
CN111583115A (en) Single image super-resolution reconstruction method and system based on depth attention network
CN114820341A (en) Image blind denoising method and system based on enhanced transform
CN113657532B (en) Motor magnetic shoe defect classification method
CN114723630A (en) Image deblurring method and system based on cavity double-residual multi-scale depth network
CN110674824A (en) Finger vein segmentation method and device based on R2U-Net and storage medium
CN113837959B (en) Image denoising model training method, image denoising method and system
CN117174105A (en) Speech noise reduction and dereverberation method based on improved deep convolutional network
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN113781343A (en) Super-resolution image quality improvement method
CN117315336A (en) Pollen particle identification method, device, electronic equipment and storage medium
CN115205148A (en) Image deblurring method based on double-path residual error network
Zhang et al. A new image filtering method: Nonlocal image guided averaging
CN112801909B (en) Image fusion denoising method and system based on U-Net and pyramid module
CN113034475B (en) Finger OCT (optical coherence tomography) volume data denoising method based on lightweight three-dimensional convolutional neural network
CN111986114B (en) Double-scale image blind denoising method and system based on self-supervision constraint
CN113888405A (en) Denoising and demosaicing method based on clustering self-adaptive expansion convolutional neural network
Tian et al. A modeling method for face image deblurring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant