CN112801906B - Cyclic iterative image denoising method based on cyclic neural network - Google Patents


Info

Publication number
CN112801906B
CN112801906B (application CN202110146982.5A)
Authority
CN
China
Prior art keywords
image
iteration
network
loop
denoising
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110146982.5A
Other languages
Chinese (zh)
Other versions
CN112801906A (en)
Inventor
牛玉贞 (Niu Yuzhen)
郑路伟 (Zheng Luwei)
陈钧荣 (Chen Junrong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN202110146982.5A
Publication of CN112801906A
Application granted
Publication of CN112801906B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a loop-iterative image denoising method based on a recurrent neural network, comprising the following steps: S1, obtaining paired original noisy and noise-free images and preprocessing them to obtain paired noisy and noise-free image blocks for training; S2, constructing a loop-iterative image denoising network based on a recurrent neural network and training it with the paired image blocks; S3, inputting the original noisy image to be denoised into the trained denoising network to obtain the denoised image. Through loop iteration the invention removes noise more cleanly while retaining more image detail, thereby effectively reconstructing the denoised image.

Description

Cyclic iterative image denoising method based on cyclic neural network
Technical Field
The invention relates to the technical field of image and video processing, in particular to a cyclic iteration image denoising method based on a cyclic neural network.
Background
With the rapid development of the internet and multimedia technology, images have become an indispensable part of human information exchange and transmission. Images carry research value in fields such as communication, social media and medicine, and matter practically for the development of information storage and information interaction technology in modern society. However, degradation of image content inevitably occurs, such as degradation caused by camera parameter settings, by ambient brightness, or by image compression and decompression. A degraded image seriously harms visual quality and may even prevent the image information from being extracted effectively: once object outlines are unclear, foreground and background cannot be segmented effectively, and in the worst case the image content itself cannot be recognized. Therefore, degraded images need to be processed. Image denoising is one of the indispensable techniques for reconstructing degraded images so that their content comes closer to that of a noise-free image. As a low-level vision task, its results directly influence high-level computer vision tasks such as image segmentation, image classification and object recognition.
The goal of image denoising is to reconstruct the image content of a noisy image so that the denoised image retains more image detail. Image denoising has a long research history, and the methods proposed so far can be roughly divided into traditional methods and deep-learning-based methods. Traditional methods process the noisy image with filters such as median filtering and Gaussian filtering, but their efficiency is low because of limited computing resources and because they rely on hand-crafted image priors; they require further processing and optimization before being applied in real life. Deep-learning-based methods, in contrast, exploit the automatic feature extraction capability of convolutional neural networks, and can use prior information extracted by traditional methods to help the network extract image features. For this reason, deep-learning-based methods have been studied extensively in recent years.
In recent years, with the growth of computing power, deep-learning-based methods have developed rapidly: image denoising methods based on deep learning are continually being proposed, and their denoising performance surpasses that of traditional methods. Nevertheless, many current image denoising methods still have problems, for example over-smoothed denoising results and severe texture loss. If the noisy image passes through only a single denoising operation, an over-smoothed result loses image detail irrecoverably. If instead the noisy image is denoised through multiple iterations, part of the noise can be removed in each pass, until the iterated result removes more image noise and reconstructs more image texture. Furthermore, if the noise distribution in the noisy image is reasonably estimated before each denoising operation and the estimated distribution information is fed into the denoising network together with the image, the noise amplitude and noise positions can be estimated and localized more accurately, so the denoising operation performs better and yields a better result.
Disclosure of Invention
In view of this, the present invention provides a recurrent iterative image denoising method based on a recurrent neural network, which removes noise more cleanly and retains more image details in a recurrent iterative manner, thereby effectively reconstructing a denoised image.
To achieve the above purpose, the invention adopts the following technical scheme:
a cyclic iteration image denoising method based on a cyclic neural network comprises the following steps:
S1, acquiring paired original noisy and noise-free images and preprocessing them to obtain the paired noisy and noise-free image blocks used for training;
S2, constructing a loop-iterative image denoising network based on a recurrent neural network, and training it with the paired noisy and noise-free image blocks;
S3, inputting the original noisy image to be denoised into the trained denoising network to obtain the denoised image.
The step S1 specifically comprises the following steps:
step S11: cutting the paired original noisy image and noise-free image into blocks at the same positions to obtain multiple groups of paired noisy and noise-free image blocks;
step S12: applying the same random flipping and rotation to each group of paired image blocks for data enhancement, obtaining the paired noisy and noise-free image blocks used for training.
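As a minimal sketch of steps S11 and S12, paired cropping followed by a shared random flip and rotation might look like the following (the function and parameter names are assumptions for illustration, not from the patent):

```python
import numpy as np

def paired_training_patches(noisy, clean, patch=64, n=8, seed=0):
    """Cut paired patches at identical positions (step S11), then apply the
    same random flip and 90-degree rotation to both (step S12).
    Illustrative sketch; names and defaults are assumptions."""
    rng = np.random.default_rng(seed)
    h, w = noisy.shape[:2]
    pairs = []
    for _ in range(n):
        y = int(rng.integers(0, h - patch + 1))
        x = int(rng.integers(0, w - patch + 1))
        pn = noisy[y:y + patch, x:x + patch]
        pc = clean[y:y + patch, x:x + patch]
        if rng.random() < 0.5:                    # identical horizontal flip
            pn, pc = pn[:, ::-1], pc[:, ::-1]
        k = int(rng.integers(0, 4))               # identical rotation
        pairs.append((np.rot90(pn, k).copy(), np.rot90(pc, k).copy()))
    return pairs
```

Because the crop positions and the augmentations are shared, the pixel-wise correspondence between each noisy block and its noise-free reference is preserved.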
Further, the recurrent iterative image denoising network based on the recurrent neural network comprises a noise estimation sub-network and a denoising sub-network.
Furthermore, the noise estimation sub-network consists of an input convolutional layer, m series-connected ResNet residual blocks, a GRU module and an output convolutional layer. The input convolutional layer uses 3×3 kernels with stride 1, and the output convolutional layer uses 1×1 kernels with stride 1; the input of the GRU module is the feature z obtained by channel-wise concatenation of the GRU module output from the previous iteration with the ResNet residual block output of the current iteration i.
Further, the specific calculation formula of the GRU module is as follows:
a = conv(z),
b = conv(z),
c = conv_(z),
d = conv(z),
[the two combination formulas appear only as images in the original document; they combine the sigmoid gates a, b, d with the tanh candidate c to produce a feature ĥ_i and a hidden state h_i]
wherein conv(·) comprises a convolutional layer with kernel size 3 and stride 1 followed by a sigmoid activation function, and conv_(·) comprises a convolutional layer with kernel size 3 and stride 1 followed by a tanh activation function; ĥ_i serves as the input to the output convolutional layer, and h_i forms part of the GRU module input at the (i+1)-th iteration.
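Since the patent's exact gate-combination formulas are given only as images, the following sketch uses the standard convolutional-GRU update as an illustrative stand-in, with 1×1 channel mixes in place of the 3×3 convolutions for brevity (all names are assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1x1(w, x):
    """1x1 convolution as a channel mix, a stand-in for the 3x3 convs."""
    return np.einsum('oc,chw->ohw', w, x)

def gru_step(feat, h_prev, Wu, Wr, Wc):
    """One GRU step on (C, H, W) feature maps. The patent combines sigmoid
    gates a, b, d with a tanh candidate c; this uses the standard GRU form
    (update gate, reset gate, candidate) as an assumed approximation."""
    z = np.concatenate([h_prev, feat], axis=0)   # channel concat, as in the text
    u = sigmoid(conv1x1(Wu, z))                  # update gate  (cf. a = conv(z))
    r = sigmoid(conv1x1(Wr, z))                  # reset gate   (cf. b = conv(z))
    zr = np.concatenate([r * h_prev, feat], axis=0)
    c = np.tanh(conv1x1(Wc, zr))                 # candidate    (cf. c = conv_(z))
    h = (1.0 - u) * h_prev + u * c               # new hidden state h_i
    return h
```

The hidden state h would then be carried over as part of the GRU input at the next loop iteration, matching the recurrence described above.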
Further, the denoising sub-network comprises two loop operations, each composed of the same encoder, residual module and decoder:
The encoder consists of an input convolutional layer and two downsampling layers. The input convolutional layer comprises a convolutional layer with a 5×5 kernel and stride 1 plus a GRU module; each downsampling layer comprises a convolutional layer with a 5×5 kernel and stride 2, an activation function and a GRU module. The input of each GRU module is the feature obtained by channel-wise concatenation of the output at the corresponding feature scale from the previous layer with the GRU module output at the corresponding feature scale from the previous iteration; its computation is the same as that of the GRU module in the noise estimation sub-network. The encoder divides the network features into 3 scales, from large to small F_1, F_2 and F_3.
The residual module consists of n series-connected ResNet residual blocks; its input is the encoder feature F_3 and its output is the feature F_c.
The decoder consists of two upsampling layers and an output convolutional layer. Each upsampling layer comprises one nearest-neighbor interpolation, a convolutional layer with a 3×3 kernel and stride 1, and a ReLU activation function; the output convolutional layer has a 1×1 kernel and stride 1. The input of the first upsampling layer is F_3 and F_c after channel concatenation, with output feature f_3; the input of the second upsampling layer is F_2 and f_3 after channel concatenation, with output feature f_2; the input of the output convolutional layer is F_1 and f_2 after channel concatenation, and its output is the denoised image.
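The encoder's three scales and the decoder's nearest-neighbor upsampling can be sketched as follows (helper names are assumptions; the real layers also contain convolutions and GRU modules omitted here):

```python
import numpy as np

def nearest_upsample(x, factor=2):
    """Nearest-neighbor interpolation on (C, H, W) feature maps, as used at
    the start of each decoder upsampling layer."""
    return np.repeat(np.repeat(x, factor, axis=1), factor, axis=2)

def feature_scales(h, w):
    """The encoder's three scales: F1 at full size (stride-1 input conv),
    F2 and F3 after the two stride-2 downsampling layers."""
    return {'F1': (h, w), 'F2': (h // 2, w // 2), 'F3': (h // 4, w // 4)}
```

Two upsampling steps by a factor of 2 bring an F_3-sized feature back to the F_1 size, which is why the decoder mirrors the encoder's two stride-2 downsampling layers.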
Furthermore, the denoising network performs t loop iterations. In the first loop iteration, part of the input of every GRU module is a feature of the corresponding size whose values are all 0; from the second loop iteration on, that part of the input is the GRU feature of the corresponding size from the previous loop iteration. In the first loop iteration, the input of the noise estimation sub-network is the noisy image N_ori and its output is the noise estimation image E_1 of the first loop iteration; the input of the denoising sub-network is the two-channel image obtained by channel concatenation of E_1 and N_ori, and its output is the denoised image De_1 of the first loop iteration. From the i-th loop iteration (i > 1) on, the input of the noise estimation sub-network is the denoised image De_{i-1} of the (i-1)-th loop iteration and its output is the noise estimation image E_i of the i-th loop iteration; the input of the denoising sub-network is the two-channel image obtained by channel concatenation of E_i and De_{i-1}, and its output is the denoised image De_i of the i-th stage. The final denoised image is the output De_t of the t-th loop iteration.
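The t-fold loop described above can be sketched as a driver function that treats the two sub-networks as callables (the toy stand-ins below are assumptions; the real sub-networks are the convolutional models defined earlier):

```python
import numpy as np

def iterative_denoise(noise_est, denoise, n_ori, t=2):
    """Sketch of the loop: iteration 1 feeds the noisy image N_ori to the
    noise estimation sub-network and concatenates its estimate E_1 with
    N_ori for the denoising sub-network; iteration i > 1 replaces N_ori
    with the previous denoised result De_{i-1}."""
    de = None
    for i in range(1, t + 1):
        inp = n_ori if i == 1 else de           # N_ori, then De_{i-1}
        e = noise_est(inp)                      # noise estimation image E_i
        two_ch = np.stack([e, inp], axis=0)     # channel concatenation
        de = denoise(two_ch)                    # denoised image De_i
    return de                                   # De_t
```

Each pass removes part of the remaining noise, so the returned De_t is the result of t progressively refined denoising stages.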
Further, the training of the recurrent iterative image denoising network model based on the recurrent neural network specifically comprises:
(1): randomly dividing the paired noisy and noise-free image blocks into multiple batches, each batch containing N pairs of image blocks;
(2): inputting, batch by batch, the N noisy image blocks of a batch into the denoising network of step S2 to obtain the N corresponding denoised images;
(3): calculating the gradient of each parameter in the network by using a back propagation method according to a target loss function of a recurrent iterative image denoising network based on a recurrent neural network, and updating the parameters of the network by using a random gradient descent method;
(4): repeating steps (2) to (3) batch by batch until the target loss function value of the recurrent-network-based loop-iterative image denoising network stabilizes, then saving the network parameters to complete the training process.
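Steps (1) through (4) can be sketched with a deliberately tiny stand-in "network" so the batching, backpropagation and stochastic-gradient-descent loop is visible (a one-parameter gain model with an MSE loss, both assumptions; the real network and its staged loss are far richer):

```python
import numpy as np

def train_sgd(noisy_blocks, clean_blocks, lr=0.1, batch_size=4, steps=50, seed=0):
    """Toy training loop: model de = w * noisy, loss = MSE(de, clean).
    Illustrates steps (1)-(4) only; entirely illustrative."""
    rng = np.random.default_rng(seed)
    w = 0.0                                               # network parameter
    n = len(noisy_blocks)
    for _ in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)   # one batch
        x = np.stack([noisy_blocks[i] for i in idx])
        y = np.stack([clean_blocks[i] for i in idx])
        pred = w * x                                      # forward pass
        grad = 2.0 * np.mean((pred - y) * x)              # d(MSE)/dw, backprop
        w -= lr * grad                                    # stochastic gradient step
    return w
```

Training stops in practice when the loss value stabilizes, at which point the parameters are saved, mirroring step (4).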
Further, the target loss function of the recurrent iterative image denoising network based on the recurrent neural network is calculated as follows:
Loss = λ_1 L_stg1 + λ_2 L_stg2 + … + λ_k L_stgk
In the objective loss function, λ_1, λ_2, …, λ_k are the weights of the losses of the individual loop iterations, and L_stg1, the loss function of the first loop iteration, is calculated as follows:
L_stg1 = ||f(N_ori; w_f) - N_gt1||_2 + ||g(concat(E_1, N_ori); w_g) - I_gt||_1
[reconstructed from the symbol definitions below; the original formula appears only as an image, and the pairing of the L_2 and L_1 distances with the two terms is an assumption]
wherein N_gt1 denotes the reference image of the noise estimation sub-network in the first loop iteration, i.e. the difference image of the original noisy image and the clean image; f(·) denotes the noise estimation sub-network; N_ori denotes the input image of the first loop; w_f denotes the model parameters of the noise estimation sub-network; I_gt denotes the reference image of the noisy image; g(·) denotes the denoising sub-network; concat(·) denotes the channel concatenation operation; E_1 denotes the output image of the noise estimation sub-network in the first loop iteration; w_g denotes the model parameters of the denoising sub-network; ||·||_1 denotes the L_1 distance and ||·||_2 denotes the L_2 distance;
L_stg2, …, L_stgk in the objective loss function are the loss functions of the second through k-th loop iterations; starting from the i-th loop iteration (i > 1), the loss function L_stgi of the i-th loop iteration is calculated as follows:
L_stgi = ||f(De_{i-1}; w_f) - N_gti||_2 + ||g(concat(E_i, De_{i-1}); w_g) - I_gt||_1
[reconstructed from the symbol definitions below; the original formula appears only as an image, and the pairing of the L_2 and L_1 distances with the two terms is an assumption]
wherein N_gti denotes the reference image of the noise estimation sub-network in the i-th loop iteration, i.e. the difference image of the denoised image De_{i-1} and the noise-free image; f(·) denotes the noise estimation sub-network; De_{i-1} denotes the denoised image output by the (i-1)-th loop iteration, which is also the input image of the i-th loop iteration; w_f denotes the model parameters of the noise estimation sub-network; I_gt denotes the reference image of the noisy image; g(·) denotes the denoising sub-network; concat(·) denotes the channel concatenation operation; E_i denotes the output image of the noise estimation sub-network in the i-th loop iteration; w_g denotes the model parameters of the denoising sub-network; ||·||_1 denotes the L_1 distance and ||·||_2 denotes the L_2 distance.
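The staged loss and its weighted sum can be sketched numerically as follows (the pairing of the L_2 norm with the noise-estimation term and the L_1 norm with the denoising term is an assumption, since the original formulas appear only as images):

```python
import numpy as np

def stage_loss(e_i, n_gt_i, de_i, i_gt):
    """Loss of one loop iteration: an L2 term on the noise estimate E_i
    against its reference N_gti plus an L1 term on the denoised image De_i
    against the clean reference I_gt (assumed norm pairing)."""
    l2 = np.sqrt(np.sum((e_i - n_gt_i) ** 2))   # ||E_i - N_gti||_2
    l1 = np.sum(np.abs(de_i - i_gt))            # ||De_i - I_gt||_1
    return l2 + l1

def total_loss(stage_losses, lambdas):
    """Loss = lambda_1 L_stg1 + ... + lambda_k L_stgk."""
    return sum(lam * l for lam, l in zip(lambdas, stage_losses))
```

Weighting the per-iteration losses lets later iterations, whose outputs matter most, be emphasized by choosing larger λ values for them.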
Compared with the prior art, the invention has the following beneficial effects:
the invention uses the multi-scale encoder and the residual error module, can effectively extract the characteristics of the noise image, and reconstructs the noise image through the decoder. And removing noise more cleanly by a loop iteration mode, and simultaneously reserving more image details, thereby effectively reconstructing a denoised image.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
fig. 2 is a schematic diagram of the overall structure of the network training in step S2 according to the embodiment of the present invention (k = 2);
FIG. 3 is a schematic diagram of a network structure of a noise estimation subnetwork in a first iteration according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a network structure of a denoising subnetwork in the first iteration according to the embodiment of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
Referring to fig. 1, the present invention provides a recurrent iterative image denoising method based on a recurrent neural network, including the following steps:
S1, acquiring paired original noisy and noise-free images and preprocessing them to obtain the paired noisy and noise-free image blocks used for training;
S2, constructing a loop-iterative image denoising network based on a recurrent neural network, and training it with the paired noisy and noise-free image blocks;
S3, inputting the original noisy image to be denoised into the trained denoising network to obtain the denoised image.
In this embodiment, step S1 specifically includes:
step S11: cutting the paired original noisy image and noise-free image into blocks at the same positions to obtain multiple groups of paired noisy and noise-free image blocks;
step S12: applying the same random flipping and rotation to each group of paired image blocks for data enhancement, obtaining the paired noisy and noise-free image blocks used for training.
In the embodiment, the recurrent iterative image denoising network based on the recurrent neural network comprises a noise estimation sub-network and a denoising sub-network.
Preferably, the noise estimation sub-network consists of an input convolutional layer, m series-connected ResNet residual blocks, a GRU module and an output convolutional layer. The input convolutional layer uses 3×3 kernels with stride 1, and the output convolutional layer uses 1×1 kernels with stride 1; the input of the GRU module is the feature z obtained by channel-wise concatenation of the GRU module output from the previous iteration with the ResNet residual block output of the current iteration i.
The concrete calculation formula of the GRU module is as follows:
a = conv(z),
b = conv(z),
c = conv_(z),
d = conv(z),
[the two combination formulas appear only as images in the original document; they combine the sigmoid gates a, b, d with the tanh candidate c to produce a feature ĥ_i and a hidden state h_i]
wherein conv(·) comprises a convolutional layer with kernel size 3 and stride 1 followed by a sigmoid activation function, and conv_(·) comprises a convolutional layer with kernel size 3 and stride 1 followed by a tanh activation function; ĥ_i serves as the input to the output convolutional layer, and h_i forms part of the GRU module input at the (i+1)-th iteration.
Preferably, the denoising sub-network comprises two loop operations, each composed of the same encoder, residual module and decoder:
The encoder consists of an input convolutional layer and two downsampling layers. The input convolutional layer comprises a convolutional layer with a 5×5 kernel and stride 1 plus a GRU module; each downsampling layer comprises a convolutional layer with a 5×5 kernel and stride 2, an activation function and a GRU module. The input of each GRU module is the feature obtained by channel-wise concatenation of the output at the corresponding feature scale from the previous layer with the GRU module output at the corresponding feature scale from the previous iteration; its computation is the same as that of the GRU module in the noise estimation sub-network. The encoder divides the network features into 3 scales, from large to small F_1, F_2 and F_3.
The residual module consists of n series-connected ResNet residual blocks; its input is the encoder feature F_3 and its output is the feature F_c.
The decoder consists of two upsampling layers and an output convolutional layer. Each upsampling layer comprises one nearest-neighbor interpolation, a convolutional layer with a 3×3 kernel and stride 1, and a ReLU activation function; the output convolutional layer has a 1×1 kernel and stride 1. The input of the first upsampling layer is F_3 and F_c after channel concatenation, with output feature f_3; the input of the second upsampling layer is F_2 and f_3 after channel concatenation, with output feature f_2; the input of the output convolutional layer is F_1 and f_2 after channel concatenation, and its output is the denoised image.
The denoising network performs t loop iterations. In the first loop iteration, part of the input of every GRU module is a feature of the corresponding size whose values are all 0; from the second loop iteration on, that part of the input is the GRU feature of the corresponding size from the previous loop iteration. In the first loop iteration, the input of the noise estimation sub-network is the noisy image N_ori and its output is the noise estimation image E_1 of the first loop iteration; the input of the denoising sub-network is the two-channel image obtained by channel concatenation of E_1 and N_ori, and its output is the denoised image De_1 of the first loop iteration. From the i-th loop iteration (i > 1) on, the input of the noise estimation sub-network is the denoised image De_{i-1} of the (i-1)-th loop iteration and its output is the noise estimation image E_i of the i-th loop iteration; the input of the denoising sub-network is the two-channel image obtained by channel concatenation of E_i and De_{i-1}, and its output is the denoised image De_i of the i-th stage. The final denoised image is the output De_t of the t-th loop iteration.
Preferably, the training of the loop iteration image denoising network model based on the loop neural network specifically comprises:
(1): randomly dividing the paired noisy and noise-free image blocks into multiple batches, each batch containing N pairs of image blocks;
(2): inputting, batch by batch, the N noisy image blocks of a batch into the denoising network of step S2 to obtain the N corresponding denoised images;
(3): calculating the gradient of each parameter in the network by using a back propagation method according to a target loss function of a recurrent iterative image denoising network based on a recurrent neural network, and updating the parameters of the network by using a random gradient descent method;
(4): repeating steps (2) to (3) batch by batch until the target loss function value of the recurrent-network-based loop-iterative image denoising network stabilizes, then saving the network parameters to complete the training process.
Preferably, in this embodiment, the target loss function of the recurrent iterative image denoising network based on the recurrent neural network is calculated as follows:
Loss = λ_1 L_stg1 + λ_2 L_stg2 + … + λ_k L_stgk
In the objective loss function, λ_1, λ_2, …, λ_k are the weights of the losses of the individual loop iterations, and L_stg1, the loss function of the first loop iteration, is calculated as follows:
L_stg1 = ||f(N_ori; w_f) - N_gt1||_2 + ||g(concat(E_1, N_ori); w_g) - I_gt||_1
[reconstructed from the symbol definitions below; the original formula appears only as an image, and the pairing of the L_2 and L_1 distances with the two terms is an assumption]
wherein N_gt1 denotes the reference image of the noise estimation sub-network in the first loop iteration, i.e. the difference image of the original noisy image and the clean image; f(·) denotes the noise estimation sub-network; N_ori denotes the input image of the first loop; w_f denotes the model parameters of the noise estimation sub-network; I_gt denotes the reference image of the noisy image; g(·) denotes the denoising sub-network; concat(·) denotes the channel concatenation operation; E_1 denotes the output image of the noise estimation sub-network in the first loop iteration; w_g denotes the model parameters of the denoising sub-network; ||·||_1 denotes the L_1 distance and ||·||_2 denotes the L_2 distance;
L_stg2, …, L_stgk in the objective loss function are the loss functions of the second through k-th loop iterations; starting from the i-th loop iteration (i > 1), the loss function L_stgi of the i-th loop iteration is calculated as follows:
L_stgi = ||f(De_{i-1}; w_f) - N_gti||_2 + ||g(concat(E_i, De_{i-1}); w_g) - I_gt||_1
[reconstructed from the symbol definitions below; the original formula appears only as an image, and the pairing of the L_2 and L_1 distances with the two terms is an assumption]
wherein N_gti denotes the reference image of the noise estimation sub-network in the i-th loop iteration, i.e. the difference image of the denoised image De_{i-1} and the noise-free image; f(·) denotes the noise estimation sub-network; De_{i-1} denotes the denoised image output by the (i-1)-th loop iteration, which is also the input image of the i-th loop iteration; w_f denotes the model parameters of the noise estimation sub-network; I_gt denotes the reference image of the noisy image; g(·) denotes the denoising sub-network; concat(·) denotes the channel concatenation operation; E_i denotes the output image of the noise estimation sub-network in the i-th loop iteration; w_g denotes the model parameters of the denoising sub-network; ||·||_1 denotes the L_1 distance and ||·||_2 denotes the L_2 distance.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and all equivalent changes and modifications made in accordance with the claims of the present invention should be covered by the present invention.

Claims (6)

1. A cyclic iteration image denoising method based on a cyclic neural network is characterized by comprising the following steps:
s1, acquiring and preprocessing paired original noise images and noiseless images to obtain paired image blocks of the noise images and the noiseless images for training;
s2, constructing a cyclic iterative image denoising network based on a cyclic neural network, and training by using paired image blocks of a noise image and a noiseless image;
s3, inputting the original noise image to be detected into the trained denoising network to obtain a denoising image;
the recurrent iterative image denoising network based on the recurrent neural network comprises a noise estimation sub-network and a denoising sub-network;
the denoising sub-network comprises t loop iterations, and each loop iteration uses the same encoder, residual module and decoder;
the noise estimation sub-network consists of an input convolutional layer, m ResNet residual blocks connected in series, a GRU module and an output convolutional layer; the input convolutional layer uses 3×3 convolution kernels with a stride of 1; the output convolutional layer uses 1×1 convolution kernels with a stride of 1; the input of the GRU module is the feature z obtained by channel-splicing the output of the GRU module from the previous loop iteration with the output of the ResNet residual blocks at the current loop iteration i;
the encoder consists of an input convolutional layer and two downsampling layers; the input convolutional layer comprises a convolutional layer with a 5×5 kernel and a stride of 1 and a GRU module, and each downsampling layer comprises a convolutional layer with a 5×5 kernel and a stride of 2, an activation function and a GRU module; the input of each GRU module is the feature obtained by channel-splicing the output of the previous layer at the corresponding feature size with the output of the GRU module at the corresponding feature size from the previous loop iteration, computed in the same way as the GRU module of the noise estimation sub-network; the encoder divides the features of the network into 3 different scales, denoted, from the largest scale to the smallest, F1, F2 and F3;
the residual module consists of n ResNet residual blocks connected in series; its input is the feature F3 produced by the encoder, and its output is the feature Fc;
the decoder consists of two upsampling layers and an output convolutional layer; each upsampling layer comprises one nearest-neighbour interpolation operation, a convolutional layer with a 3×3 kernel and a stride of 1, and a ReLU activation function; the output convolutional layer is a convolutional layer with a 1×1 kernel and a stride of 1; the input of the first upsampling layer is the feature obtained by channel-splicing F3 and Fc, and its output feature is f3; the input of the second upsampling layer is the feature obtained by channel-splicing F2 and f3, and its output feature is f2; the input of the output convolutional layer is the feature obtained by channel-splicing F1 and f2, and its output is the denoised image.
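The encoder–residual–decoder structure of claim 1 can be sketched in PyTorch as follows. This is a minimal sketch under stated assumptions: the channel widths (`ch`), the residual-block count, and the 2-channel input (noise estimate spliced with the image) are assumptions, and the GRU modules inside the encoder are omitted for brevity — it illustrates the three-scale skip-splicing layout, not the patented implementation.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain ResNet residual block: two 3x3 convs with an identity skip."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, 1, 1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, 1, 1))
    def forward(self, x):
        return x + self.body(x)

class DenoiseUNetSketch(nn.Module):
    """One loop of the denoising sub-network of claim 1 (GRU modules omitted):
    encoder (input conv + two stride-2 downsamplings), n residual blocks,
    decoder (two nearest-neighbour upsamplings + 1x1 output conv)."""
    def __init__(self, ch=32, n_res=2):
        super().__init__()
        self.inp = nn.Conv2d(2, ch, 5, 1, 2)                 # 5x5, stride 1
        self.down1 = nn.Sequential(nn.Conv2d(ch, ch * 2, 5, 2, 2), nn.ReLU())
        self.down2 = nn.Sequential(nn.Conv2d(ch * 2, ch * 4, 5, 2, 2), nn.ReLU())
        self.res = nn.Sequential(*[ResBlock(ch * 4) for _ in range(n_res)])
        self.up1 = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                 nn.Conv2d(ch * 8, ch * 2, 3, 1, 1), nn.ReLU())
        self.up2 = nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                 nn.Conv2d(ch * 4, ch, 3, 1, 1), nn.ReLU())
        self.out = nn.Conv2d(ch * 2, 1, 1, 1, 0)             # 1x1, stride 1

    def forward(self, x):
        F1 = self.inp(x)                         # largest scale
        F2 = self.down1(F1)
        F3 = self.down2(F2)                      # smallest scale
        Fc = self.res(F3)                        # residual module output
        f3 = self.up1(torch.cat([F3, Fc], 1))    # first upsampling: F3 + Fc
        f2 = self.up2(torch.cat([F2, f3], 1))    # second upsampling: F2 + f3
        return self.out(torch.cat([F1, f2], 1))  # output conv: F1 + f2
```

A forward pass on a 2-channel input of any size divisible by 4 returns a single-channel image of the same spatial size.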
2. The method for denoising cyclic iterative images based on the recurrent neural network as claimed in claim 1, wherein the step S1 specifically comprises:
step S11: the original noise image and the noiseless image which are paired are cut into blocks at the same position, and a plurality of groups of image blocks of the noise image and the noiseless image which are paired are obtained;
step S12: carrying out the same random flipping and rotation on each group of paired image blocks for data enhancement, so as to obtain the paired noisy and noiseless image blocks used for training.
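Steps S11–S12 can be sketched in NumPy. The patch size, stride and augmentation choices here (90° rotations, horizontal flip) are illustrative assumptions; the key point is that the noisy and noiseless blocks are cut at the same positions and transformed identically.

```python
import numpy as np

def crop_paired_patches(noisy, clean, patch=64, stride=64):
    """Step S11: cut the paired noisy/clean images into blocks at the
    same positions, returning a list of (noisy_block, clean_block) pairs."""
    assert noisy.shape == clean.shape
    h, w = noisy.shape[:2]
    pairs = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            pairs.append((noisy[y:y + patch, x:x + patch],
                          clean[y:y + patch, x:x + patch]))
    return pairs

def augment_pair(noisy_blk, clean_blk, rng):
    """Step S12: apply the SAME random rotation and flip to both blocks."""
    k = int(rng.integers(0, 4))             # number of 90-degree rotations
    noisy_blk, clean_blk = np.rot90(noisy_blk, k), np.rot90(clean_blk, k)
    if rng.random() < 0.5:                  # random horizontal flip
        noisy_blk, clean_blk = np.fliplr(noisy_blk), np.fliplr(clean_blk)
    return noisy_blk, clean_blk
```

Because one random draw drives both blocks, a pixel in the noisy block always stays aligned with the same pixel in its clean reference.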
3. The method of claim 1, wherein the GRU module is calculated as follows:
a = conv(z),
b = conv(z),
c = conv_(z),
d = conv(z),
followed by two combination equations (shown only as images in the original publication) that produce the GRU output and the hidden state, wherein conv(·) comprises a convolutional layer with a 3×3 kernel and a stride of 1 followed by a sigmoid activation function, conv_(·) comprises a convolutional layer with a 3×3 kernel and a stride of 1 followed by a tanh activation function, the GRU output serves as the input to the output convolutional layer, and the hidden state forms part of the GRU module input at the (i+1)-th loop iteration.
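Claim 3 names four convolutional gates but the two combination equations survive only as images in the source. The PyTorch cell below is therefore a hedged sketch: the gate layout (three sigmoid branches, one tanh branch, input formed by channel-splicing) follows the claim, while the blending lines marked "assumed" are a conventional GRU-style update, not the patented formulas.

```python
import torch
import torch.nn as nn

class ConvGRUSketch(nn.Module):
    """Convolutional GRU cell per claim 3's gate layout; the combination
    equations are ASSUMED (the originals are image-only in the source)."""
    def __init__(self, channels):
        super().__init__()
        def gate():  # 3x3, stride-1 conv on the channel-spliced input z
            return nn.Conv2d(2 * channels, channels, 3, 1, 1)
        self.conv_a, self.conv_b, self.conv_d = gate(), gate(), gate()
        self.conv_c = gate()  # the conv_ branch, followed by tanh

    def forward(self, x, h_prev):
        z = torch.cat([x, h_prev], dim=1)     # channel splicing
        a = torch.sigmoid(self.conv_a(z))
        b = torch.sigmoid(self.conv_b(z))
        c = torch.tanh(self.conv_c(z))        # conv_(z)
        d = torch.sigmoid(self.conv_d(z))
        h_new = (1 - a) * h_prev + a * c      # assumed hidden-state update
        out = b * h_new + d * x               # assumed output combination
        return out, h_new                     # out -> output conv layer,
                                              # h_new -> next loop iteration
```

Per claim 4, `h_prev` would be an all-zero feature at the first loop iteration and the stored hidden state thereafter.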
4. The recurrent neural network-based recurrent iterative image denoising method of claim 1, wherein the denoising network comprises t loop iterations; at the first loop iteration, part of the input of each GRU module is an all-zero feature of the corresponding feature size, and from the second loop iteration onward, part of the input of each GRU module is the GRU feature of the corresponding feature size from the previous loop iteration; the input of the noise estimation sub-network at the first loop iteration is the noise image N_ori, and its output is the noise estimation image E_1 of the first loop iteration; the input of the denoising sub-network at the first loop iteration is the two-channel image obtained by channel-splicing the noise estimation image E_1 of the first loop iteration and the noise image N_ori, and its output is the denoised image De_1 of the first loop iteration; from the i-th loop iteration (i > 1), the input of the noise estimation sub-network is the denoised image De_(i-1) of the (i-1)-th loop iteration, and its output is the noise estimation image E_i of the i-th loop iteration; from the second loop iteration onward, the input of the denoising sub-network at the i-th loop iteration is the two-channel image obtained by channel-splicing the noise estimation image E_i of the i-th loop iteration and the denoised image De_(i-1) of the (i-1)-th loop iteration, and its output is the denoised image De_i of the i-th stage; the final denoised image is the output De_t of the t-th loop iteration.
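The iteration flow of claim 4 can be sketched as a plain loop with the two sub-networks passed in as callables (stand-ins for the trained models; the GRU hidden-state plumbing is left inside them):

```python
import numpy as np

def run_loop(n_ori, noise_net, denoise_net, t=3):
    """Claim-4 flow: iteration 1 consumes the noisy image N_ori; every
    later iteration consumes the previous denoised output De_(i-1)."""
    de_prev = None
    for i in range(1, t + 1):
        src = n_ori if i == 1 else de_prev       # N_ori or De_(i-1)
        e_i = noise_net(src)                     # noise estimation image E_i
        spliced = np.stack([e_i, src], axis=0)   # two-channel splicing
        de_prev = denoise_net(spliced)           # denoised image De_i
    return de_prev                               # final output De_t
```

With a zero noise estimator and a denoiser that returns its second channel, the loop is an identity on the input, which makes the wiring easy to check.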
5. The method for denoising cyclic iterative images based on the cyclic neural network as claimed in claim 1, wherein the training of the cyclic iterative image denoising network model based on the cyclic neural network specifically comprises:
(1) randomly dividing the paired image blocks of noisy and noiseless images into a plurality of batches, each batch comprising N image blocks;
(2) inputting, batch by batch, the noisy image blocks of the N image blocks of a batch into the denoising network of step S2 to obtain the corresponding N denoised images;
(3) calculating the gradient of each parameter in the network by back propagation according to the objective loss function of the recurrent neural network-based recurrent iterative image denoising network, and updating the parameters of the network by stochastic gradient descent;
(4) repeating steps (2) to (3) batch by batch until the objective loss function value of the recurrent neural network-based recurrent iterative image denoising network tends to be stable, then saving the network parameters to finish the training process of the network.
6. The method of claim 5, wherein the objective loss function of the recurrent neural network-based recurrent iterative image denoising network is calculated as follows:
Loss = λ_1 L_stg1 + λ_2 L_stg2 + ... + λ_k L_stgk
in the objective loss function, λ_1, λ_2, ..., λ_k are the weights of the losses of the respective loop iterations, and L_stg1 is the loss function of the first loop iteration, specifically calculated as:
L_stg1 = ||f(N_ori, w_f) − N_gt1||_1 + ||g(concat(E_1, N_ori), w_g) − I_gt||_2
wherein N_gt1 denotes the reference image of the noise estimation sub-network at the first loop iteration, i.e. the difference image between the original noisy image and the noiseless image; f(·) denotes the noise estimation sub-network; N_ori denotes the input image of the first loop iteration; w_f denotes the model parameters of the noise estimation sub-network; I_gt denotes the reference image of the noisy image; g(·) denotes the denoising sub-network; concat(·) denotes the channel splicing operation; E_1 denotes the output image of the noise estimation sub-network at the first loop iteration; w_g denotes the model parameters of the denoising sub-network; ||·||_1 denotes the L_1 distance and ||·||_2 denotes the L_2 distance;
L_stg2, ..., L_stgk in the objective loss function are the loss functions of the second to k-th loop iterations; from the i-th loop iteration (i > 1), the loss function L_stgi of the i-th loop iteration is specifically calculated as:
L_stgi = ||f(De_(i-1), w_f) − N_gti||_1 + ||g(concat(E_i, De_(i-1)), w_g) − I_gt||_2
wherein N_gti denotes the reference image of the noise estimation sub-network at the i-th loop iteration, i.e. the difference image between the output denoised image of the (i-1)-th loop iteration and the noiseless image; De_(i-1) denotes the output denoised image of the (i-1)-th loop iteration, which is also the input image of the i-th loop iteration; E_i denotes the output image of the noise estimation sub-network at the i-th loop iteration; the remaining symbols are as defined above.
CN202110146982.5A 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network Active CN112801906B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110146982.5A CN112801906B (en) 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network


Publications (2)

Publication Number Publication Date
CN112801906A CN112801906A (en) 2021-05-14
CN112801906B true CN112801906B (en) 2023-02-21

Family

ID=75813924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110146982.5A Active CN112801906B (en) 2021-02-03 2021-02-03 Cyclic iterative image denoising method based on cyclic neural network

Country Status (1)

Country Link
CN (1) CN112801906B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11540798B2 (en) 2019-08-30 2023-01-03 The Research Foundation For The State University Of New York Dilated convolutional neural network system and method for positron emission tomography (PET) image denoising
CN113658118A (en) * 2021-08-02 2021-11-16 维沃移动通信有限公司 Image noise degree estimation method and device, electronic equipment and storage medium
CN114119428B (en) * 2022-01-29 2022-09-23 深圳比特微电子科技有限公司 Image deblurring method and device
CN114972981B (en) * 2022-04-19 2024-07-05 国网江苏省电力有限公司电力科学研究院 Power grid power transmission environment observation image denoising method, terminal and storage medium
CN115393227B (en) * 2022-09-23 2023-06-06 南京大学 Low-light full-color video image self-adaptive enhancement method and system based on deep learning

Citations (3)

Publication number Priority date Publication date Assignee Title
CN111145123A (en) * 2019-12-27 2020-05-12 福州大学 Image denoising method based on U-Net fusion detail retention
CN111192211A (en) * 2019-12-24 2020-05-22 浙江大学 Multi-noise type blind denoising method based on single deep neural network
CN111861925A (en) * 2020-07-24 2020-10-30 南京信息工程大学滨江学院 Image rain removing method based on attention mechanism and gate control circulation unit

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US10102613B2 (en) * 2014-09-25 2018-10-16 Google Llc Frequency-domain denoising
CN109274974B (en) * 2015-09-29 2022-02-11 华为技术有限公司 Image prediction method and device
CN111754438B (en) * 2020-06-24 2021-04-27 安徽理工大学 Underwater image restoration model based on multi-branch gating fusion and restoration method thereof


Non-Patent Citations (4)

Title
Scale-Recurrent Network for Deep Image Deblurring; X. Tao et al.; 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2018-06-23; full text *
Toward Convolutional Blind Denoising of Real Photographs; S. Guo et al.; 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019-06-20; full text *
Research on Traffic Sign Image Recognition Based on Deep Learning; Zhang Wenle; China Masters' Theses Full-text Database (Engineering Science & Technology II); 2020-12-15; full text *
Multi-scale Encoder-Decoder Deep Convolutional Network for Blind Deblurring; Jia Ruiming et al.; Journal of Computer Applications; 2019-09-10; Vol. 39, No. 9; full text *


Similar Documents

Publication Publication Date Title
CN112801906B (en) Cyclic iterative image denoising method based on cyclic neural network
CN108596841B (en) Method for realizing image super-resolution and deblurring in parallel
JP2022548712A (en) Image Haze Removal Method by Adversarial Generation Network Fusing Feature Pyramids
CN113450288B (en) Single image rain removing method and system based on deep convolutional neural network and storage medium
CN113658051A (en) Image defogging method and system based on cyclic generation countermeasure network
CN110148088B (en) Image processing method, image rain removing method, device, terminal and medium
CN109543548A (en) A kind of face identification method, device and storage medium
CN111007566B (en) Curvature-driven diffusion full-convolution network seismic data bad channel reconstruction and denoising method
US20220036167A1 (en) Sorting method, operation method and operation apparatus for convolutional neural network
CN110189260B (en) Image noise reduction method based on multi-scale parallel gated neural network
CN111583115A (en) Single image super-resolution reconstruction method and system based on depth attention network
CN114820341A (en) Image blind denoising method and system based on enhanced transform
CN113837959B (en) Image denoising model training method, image denoising method and system
CN114723630A (en) Image deblurring method and system based on cavity double-residual multi-scale depth network
CN113657532B (en) Motor magnetic shoe defect classification method
CN110674824A (en) Finger vein segmentation method and device based on R2U-Net and storage medium
CN117174105A (en) Speech noise reduction and dereverberation method based on improved deep convolutional network
CN112991199A (en) Image high-low frequency decomposition noise removing method based on residual error dense network
CN113689383B (en) Image processing method, device, equipment and storage medium
CN113781343A (en) Super-resolution image quality improvement method
CN117315336A (en) Pollen particle identification method, device, electronic equipment and storage medium
CN115205148A (en) Image deblurring method based on double-path residual error network
CN111047537A (en) System for recovering details in image denoising
CN112801909B (en) Image fusion denoising method and system based on U-Net and pyramid module
CN111986114B (en) Double-scale image blind denoising method and system based on self-supervision constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant