CN114897725A - Image noise reduction method, device, equipment and storage medium - Google Patents

Image noise reduction method, device, equipment and storage medium


Publication number
CN114897725A
Authority
CN
China
Prior art keywords
image
noise reduction
model
sample
processed
Prior art date
Legal status
Pending
Application number
CN202210497347.6A
Other languages
Chinese (zh)
Inventor
陈圣
曾定衡
蒋宁
王洪斌
周迅溢
吴海英
Current Assignee
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Consumer Finance Co Ltd
Priority date
Filing date
Publication date
Application filed by Mashang Consumer Finance Co Ltd
Priority to CN202210497347.6A
Publication of CN114897725A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the application provides an image noise reduction method, device, equipment and storage medium. An image to be processed is input into an image noise reduction model for noise reduction processing to obtain an initial noise-reduced image, output by the image noise reduction model, corresponding to the image to be processed; the image to be processed is input into an annotation model for pixel annotation to obtain pixel information, output by the annotation model, corresponding to the image to be processed; and a target noise-reduced image corresponding to the image to be processed is obtained based on the pixel information and the initial noise-reduced image. Because image denoising is achieved by combining the image denoising model with a pixel-level annotation model, the problem of local blurring can be effectively addressed while the overall denoising effect of the image is preserved, making it possible to denoise image edge information and recover image detail.

Description

Image noise reduction method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of image processing, in particular to an image denoising method, device, equipment and storage medium.
Background
With the popularization of digital instruments and digital products, images, as the most common information carrier in human activities, have become the main way for people to acquire external information. However, image quality is often degraded by various kinds of noise during acquisition, transmission and storage. It is therefore necessary to perform noise reduction processing on an image so as to remove useless information from the signal and improve image quality.
With the development of deep learning, image processing methods such as neural networks have been proposed in the related art to reduce noise in images; however, these methods have a poor noise reduction effect on locally blurred images.
Disclosure of Invention
The embodiment of the application provides an image noise reduction method, device, equipment and storage medium, so as to improve the image noise reduction effect.
In a first aspect, an embodiment of the present application provides an image denoising method, including: inputting an image to be processed into an image noise reduction model for noise reduction processing to obtain an initial noise-reduced image corresponding to the image to be processed and output by the image noise reduction model; inputting the image to be processed into an annotation model for pixel annotation to obtain pixel information corresponding to the image to be processed and output by the annotation model, wherein the pixel information is used for indicating the degree of blur of each pixel in the image to be processed; and obtaining a target noise-reduced image corresponding to the image to be processed based on the pixel information and the initial noise-reduced image.
It can be seen that, in the embodiment of the application, image denoising is realized by combining an image denoising model with a pixel-level annotation model, so that the problem of local blurring of an image can be effectively solved while the overall denoising effect of the image is ensured, making it possible to denoise image edge information and recover image detail.
In a second aspect, an embodiment of the present application provides a method for training an image noise reduction model, including: acquiring a blurred sample image; inputting the blurred sample image into an annotation model for pixel annotation to obtain pixel information of the blurred sample image output by the annotation model, wherein the pixel information is used for indicating the blurring degree of each pixel in the blurred sample image; inputting the blurred sample image into an image noise reduction model for noise reduction processing to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model; and adjusting the model parameters of the image noise reduction model according to the pixel information and the initial sample noise reduction image.
It can be seen that, in the embodiment of the present application, in the process of training the image denoising model, the pixel-level quality evaluation network (i.e. the annotation model) is used to train the image denoising model, so that the precision of the image denoising model can be improved, and the processing effect of the image denoising model on the local blur can be improved while the overall image denoising effect is ensured in the denoising process of the image denoising model.
In a third aspect, an embodiment of the present application provides an image noise reduction apparatus, including: a noise reduction module, configured to input an image to be processed into an image noise reduction model for noise reduction processing to obtain an initial noise-reduced image corresponding to the image to be processed and output by the image noise reduction model; an annotation module, configured to input the image to be processed into an annotation model for pixel annotation to obtain pixel information corresponding to the image to be processed and output by the annotation model, the pixel information being used for indicating the degree of blur of each pixel in the image to be processed; and an acquisition module, configured to obtain a target noise-reduced image corresponding to the image to be processed based on the pixel information and the initial noise-reduced image.
In a fourth aspect, an embodiment of the present application provides a training apparatus for an image denoising model, including: the acquisition module is used for acquiring a blurred sample image; the labeling module is used for inputting the blurred sample image into a labeling model for pixel labeling to obtain pixel information of the blurred sample image output by the labeling model, and the pixel information is used for indicating the blurring degree of each pixel in the blurred sample image; the noise reduction module is used for inputting the blurred sample image into the image noise reduction model for noise reduction processing to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model; and the training module is used for adjusting the model parameters of the image noise reduction model according to the pixel information and the noise reduction image of the initial sample.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a memory for storing program instructions and at least one processor for invoking the program instructions in the memory for performing the image noise reduction method according to the first aspect and/or the training method of the image noise reduction model according to the second aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored; the computer program, when executed, implements an image noise reduction method as in the first aspect, and/or a training method for an image noise reduction model as in the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer program product, including: a computer program implementing the image noise reduction method according to the first aspect, and/or the training method of the image noise reduction model according to the second aspect, when executed by a processor.
The embodiment of the application provides an image noise reduction method, device, equipment and storage medium. An image to be processed is input into an image noise reduction model for noise reduction processing to obtain an initial noise-reduced image, output by the image noise reduction model, corresponding to the image to be processed; the image to be processed is input into an annotation model for pixel annotation to obtain pixel information, output by the annotation model, corresponding to the image to be processed, wherein the pixel information is used for indicating the degree of blur of each pixel in the image to be processed; and a target noise-reduced image corresponding to the image to be processed is obtained based on the pixel information and the initial noise-reduced image. Because image denoising is achieved by combining the image denoising model with a pixel-level annotation model, the problem of local blurring can be effectively addressed while the overall denoising effect of the image is preserved, making it possible to denoise image edge information and recover image detail.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a scene schematic diagram of an image denoising method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image denoising method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating an image denoising method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a noise reduction network provided in an embodiment of the present application;
fig. 5(a) is a first flowchart illustrating a training method of an image denoising model according to an embodiment of the present application;
FIG. 5(b) is a schematic diagram illustrating a first principle of a training method of an image noise reduction model according to an embodiment of the present application;
fig. 6(a) is a second flowchart illustrating a training method of an image denoising model according to an embodiment of the present application;
fig. 6(b) is a schematic diagram illustrating a principle of a training method of an image noise reduction model according to an embodiment of the present application;
fig. 7(a) is a third schematic flowchart of a training method of an image denoising model provided in an embodiment of the present application;
fig. 7(b) is a schematic diagram illustrating a third principle of a training method of an image noise reduction model according to an embodiment of the present application;
fig. 8(a) is a fourth schematic flowchart of a training method of an image denoising model provided in an embodiment of the present application;
fig. 8(b) is a schematic diagram illustrating a principle of a training method of an image noise reduction model according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image noise reduction device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a training apparatus for an image denoising model according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first" and "second" and the like in the description and in the claims, and in the accompanying drawings of the embodiments of the application, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" indicates an "or" relationship.
In order to improve image quality, noise reduction processing often needs to be performed on an image, and with the development of deep learning, image processing methods such as neural networks have been proposed in the related art to denoise images. However, among the existing noise reduction algorithms, some achieve a good noise reduction effect but lose part of the image edge information, while others focus on detecting image edge information and retaining image detail; taken as a whole, the noise reduction effect of these techniques cannot be guaranteed.
For example, the DnCNN network can effectively remove uniform Gaussian noise, and suppress noise within a certain noise-level range, by using batch normalization and residual learning. However, real noise is not uniform Gaussian noise: it is signal-dependent, correlated across color channels, and non-uniform, and may vary with spatial position, so the noise reduction effect is difficult to guarantee.
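The residual-learning formulation attributed to DnCNN above can be illustrated with a minimal sketch: the network predicts the noise map and subtracts it from the noisy input. This toy network is for illustration only and is not the patent's model; the layer widths and depth are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class TinyDnCNN(nn.Module):
    """Minimal DnCNN-style residual denoiser: Conv-ReLU blocks with
    batch normalization estimate the noise, which is subtracted from
    the noisy input (residual learning)."""
    def __init__(self, channels=1, features=16, depth=4):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        noise = self.body(noisy)   # the network estimates the noise map
        return noisy - noise       # residual learning: clean = noisy - noise

model = TinyDnCNN()
x = torch.randn(1, 1, 32, 32)
out = model(x)
print(out.shape)  # spatial size is preserved: torch.Size([1, 1, 32, 32])
```

As the text notes, such a model handles uniform Gaussian noise well but struggles with signal-dependent, spatially varying real noise.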
Therefore, how to find a better balance point in noise reduction and detail preservation becomes a focus of research in recent years. In view of this, embodiments of the present application provide an image denoising method, apparatus, device, and storage medium, in an image denoising process, an image denoising model and a dual-channel network of a pixel-level quality evaluation network (i.e., an annotation model) are combined to implement image denoising, so that an overall image denoising effect is ensured, a problem of local blurring of an image is effectively solved, and possibilities are provided for denoising and image detail restoration of image edge information.
Next, the image noise reduction method will be described in detail with reference to specific embodiments. Fig. 1 is a scene schematic diagram of an image denoising method according to an embodiment of the present application. As shown in fig. 1, the scenario includes a terminal device. The terminal device may also be referred to as a user equipment (UE), a mobile station (MS), a mobile terminal, a terminal, or the like. In practical applications, the terminal device is, for example: a desktop computer, a notebook, a personal digital assistant (PDA), a smart phone, a tablet computer, a vehicle-mounted device, a wearable device (e.g., a smart watch or smart band), a smart home device (e.g., a smart display device), and the like.
For example, after the terminal device obtains the image to be processed (for example, the image to be processed is obtained by shooting the terminal device, or the image to be processed is uploaded or sent to the terminal device in another way), the terminal device may perform noise reduction processing on the image to be processed by using the image noise reduction method provided in the embodiment of the present application, and further obtain the noise-reduced image of the image to be processed.
In some optional embodiments, a server may also be included in the scenario. Where a server is a service point that provides data processing, database, etc., the server may be a unitary server or a distributed server across multiple computers or computer data centers, and the server may include hardware, software, or embedded logic components or a combination of two or more such components for performing the appropriate functions supported or implemented by the server. The server is, for example, a blade server, a cloud server, or the like, or may be a server group composed of a plurality of servers.
The terminal device and the server may communicate with each other through a wired network or a wireless network, and in this embodiment, the server may perform some functions of the terminal device. Illustratively, an image to be processed may be uploaded to a server through a terminal device, and the server performs noise reduction processing on the image to be processed through the image noise reduction method provided in the embodiment of the present application, so as to obtain a noise-reduced image of the image to be processed, and then the terminal device outputs the noise-reduced image.
It should be understood that fig. 1 is only a schematic diagram of an application scenario provided in this embodiment of the present application, and the embodiment of the present application does not limit the types of devices and the number of devices included in fig. 1, for example, in the application scenario illustrated in fig. 1, a data storage device may also be included for storing service data, and the data storage device may be an external memory or an internal memory integrated in a terminal device or a server.
The following describes technical solutions of embodiments of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a schematic flowchart of an image denoising method according to an embodiment of the present application. It should be understood that the execution subject of the embodiment of the present application is the terminal device or the server described above, and as shown in fig. 2, the image denoising method includes the following steps:
s201, inputting the image to be processed into the image denoising model for denoising, and obtaining an initial denoising image corresponding to the image to be processed and output by the image denoising model.
It should be noted that the embodiment of the present application does not particularly limit the specific type of the image noise reduction model. For example, the image noise reduction model may include, but is not limited to, one or more of the following: a denoising convolutional neural network (DnCNN), an FFDNet network model, a CBDNet model, or a MobileNet model (for example, MobileNetV2); alternatively, the image noise reduction model may be a combination or modification of these network models.
It should be understood that, for the specific scheme of denoising the image to be processed by using the above several image denoising models, please refer to the related scheme in the prior art, and some examples are shown in the following embodiments.
S202, inputting the image to be processed into the annotation model for pixel annotation to obtain pixel information corresponding to the image to be processed output by the annotation model.
The annotation model is obtained by training based on the blurred sample image and the clear sample image corresponding to the blurred sample image, and the type of the annotation model is not particularly limited in the embodiments of the present application. Illustratively, the annotation model can be an image segmentation model, such as U2-Net, or the like.
In some embodiments, the pixel information is used to indicate the degree of blur of each pixel in the image to be processed. Illustratively, taking the annotation model as U2-Net, the pixel information includes a mask of the image to be processed. Specifically, blurred pixels in a blurred sample image may be labeled as first pixel information, and sharp pixels in the blurred sample image may be labeled as second pixel information. In each round of training, the blurred sample image is used to train the annotation model to obtain the annotation result output in the current round; a loss function for the current round is then computed from the model annotation result and the actual labels of the blurred sample image (i.e., the first pixel information or the second pixel information), and the parameters of the annotation model are adjusted through the loss function, thereby obtaining the trained annotation model.
It should be noted that the embodiment of the present application does not particularly limit the specific values of the first pixel information and the second pixel information. For example, taking the annotation model as U2-Net, the mask value of a blurred pixel may be labeled as 1 (i.e., the first pixel information), and the mask value of a sharp pixel may be labeled as 0.2 (i.e., the second pixel information).
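The mask labeling above can be sketched as follows, using the example values from the text (mask 1 for blurred pixels, 0.2 for sharp pixels). The `blur_map` input and the threshold are assumptions introduced for illustration; the text does not specify how per-pixel blur is measured.

```python
import numpy as np

def make_mask_labels(blur_map, blurred_value=1.0, sharp_value=0.2, threshold=0.5):
    """Build per-pixel training targets for the annotation model.

    blur_map: array in [0, 1] indicating how blurred each pixel is
    (how such a map is obtained is outside this sketch).  Pixels judged
    blurred receive the first pixel information (mask = 1.0); sharp
    pixels receive the second (mask = 0.2), per the U2-Net example.
    """
    blur_map = np.asarray(blur_map, dtype=np.float32)
    return np.where(blur_map >= threshold, blurred_value, sharp_value)

blur_map = np.array([[0.9, 0.1],
                     [0.6, 0.3]])
mask = make_mask_labels(blur_map)
print(mask)  # blurred pixels -> 1.0, sharp pixels -> 0.2
```

The resulting mask would serve as the "actual labels" against which the annotation model's per-round output is compared when computing the loss.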
It should be understood that the execution order of the above steps S201 and S202 is not particularly limited.
And S203, obtaining a target noise reduction image corresponding to the image to be processed based on the pixel information and the initial noise reduction image.
Still taking the annotation model as U2-Net and the pixel information as the mask of the image to be processed as an example, step S203 specifically includes: multiplying the initial noise-reduced image by the mask of the image to be processed to obtain the target noise-reduced image corresponding to the image to be processed.
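The fusion in step S203 can be sketched as follows (numpy sketch; the image and mask values are purely illustrative):

```python
import numpy as np

def fuse(initial_denoised, mask):
    """Step S203 as described for the U2-Net mask: the target
    noise-reduced image is the element-wise product of the initial
    noise-reduced image and the mask of the image to be processed."""
    return initial_denoised * mask

denoised = np.full((2, 2), 100.0)          # initial noise-reduced image
mask = np.array([[1.0, 0.2],               # 1.0 = blurred pixel
                 [0.2, 1.0]])              # 0.2 = sharp pixel
target = fuse(denoised, mask)
print(target)
```

Because blurred pixels carry the larger mask value, the initial denoised output is retained most strongly exactly where local blur was detected.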
In the embodiment of the application, image denoising is realized by combining the image denoising model with the pixel-level annotation model, so that the problem of local blurring of the image can be effectively solved while the overall denoising effect is ensured, making it possible to denoise image edge information and recover image detail.
In some optional embodiments, fig. 3 is a schematic diagram illustrating an image denoising method according to an embodiment of the present disclosure. As shown in fig. 3, the image denoising model includes a denoising network, and in this embodiment, the step S201 specifically includes: and carrying out noise reduction processing on the image to be processed through a noise reduction network to obtain an initial noise reduction image.
Next, taking the image noise reduction model as MobileNetV2 as an example, the specific structure of the noise reduction network will be described with reference to fig. 4. It should be noted that when the image noise reduction model is MobileNetV2, the noise reduction network may be a bottleneck-based network structure.
Fig. 4 is a schematic structural diagram of a noise reduction network provided in the embodiment of the present application. As shown in fig. 4, the noise reduction network includes: an expansion layer, a depthwise separable convolution layer, and a mapping layer (projection layer). Specifically, acquiring a target noise-reduced image through the noise reduction network includes the following steps:
(1) performing dimension expansion on an image to be processed through an expansion layer to obtain a first characteristic image;
(2) performing feature extraction on the first feature image through the depth separable convolution layer to obtain a second feature image;
(3) and performing dimensionality compression on the second characteristic image through the mapping layer to obtain an initial noise reduction image.
In some embodiments, the mapping layer may be a 1 × 1 network structure that maps the second feature image from high-dimensional features to a low-dimensional space to obtain the initial noise-reduced image. It should be noted that this design, which uses a 1 × 1 network structure to map a high-dimensional space to a low-dimensional space, may also be called a bottleneck layer. Correspondingly, the expansion layer may be a 1 × 1 network structure that maps the image to be processed from a low-dimensional space to a high-dimensional space to obtain the first feature image.
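Steps (1)–(3) above, i.e. the 1 × 1 expansion, depthwise feature extraction, and 1 × 1 projection, can be sketched as a PyTorch module. This is a minimal illustration: the channel counts, the expansion factor of 6 (one of the example values mentioned below), and the activation choices are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Sketch of the noise reduction network's three layers: a 1x1
    expansion layer (low- to high-dimensional), a 3x3 depthwise
    convolution for feature extraction, and a 1x1 projection layer
    compressing back to a low-dimensional space (the bottleneck)."""
    def __init__(self, in_ch, out_ch, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.expansion = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=1), nn.ReLU6(inplace=True))
        self.depthwise = nn.Sequential(
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden),
            nn.ReLU6(inplace=True))
        self.projection = nn.Conv2d(hidden, out_ch, kernel_size=1)  # linear bottleneck

    def forward(self, x):
        h = self.expansion(x)       # (1) first feature image, dim-expanded
        h = self.depthwise(h)       # (2) second feature image
        return self.projection(h)   # (3) dimensionality-compressed output

block = Bottleneck(3, 3)
x = torch.randn(1, 3, 16, 16)
y = block(x)
print(y.shape)  # torch.Size([1, 3, 16, 16])
```

With `groups=hidden`, each channel is convolved independently, which is what makes the middle convolution depthwise rather than a standard dense convolution.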
The expansion factor of the expansion layer is not particularly limited in the embodiments of the present application; for example, it may be set to 3, 6, or any other value. In the related art, the network structure of MobileNetV2 is shown in table 1 below:
table 1:
[Table 1: the MobileNetV2 network structure; available only as an image (Figure BDA0003634018640000061) in the original document.]
In the embodiment of the present application, the network structure of MobileNetV2 is modified in at least one of the following ways:
(1) removing the 7 × 7 average pooling layer (Avgpool);
(2) adjusting the stride s of each processing layer to a first value;
(3) replacing the last 1 × 1 convolutional layer (Conv2d) with a 1 × 3 convolutional layer.
It should be noted that the embodiment of the present application does not limit the size of the first value; for example, the first value may be set to 1. The network structure of the noise reduction network is shown by way of example in table 2 below:
table 2:
[Table 2: the modified noise reduction network structure; available only as an image (Figure BDA0003634018640000071) in the original document.]
Compared with the existing MobileNetV2 network structure, in the embodiment of the application, adjusting the stride s of each layer to the first value means the input and output sizes of the layers are no longer changed, while removing the pooling layer reduces the complexity of the image noise reduction model.
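The three modifications can be sketched with a small convolutional stack, assuming the first value is 1 as in the example. The layer widths here are illustrative and are not the contents of table 2, which is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

# Sketch of the three modifications to MobileNetV2 described above:
#   (1) the 7x7 average pooling layer is removed entirely;
#   (2) every layer uses stride s = 1 (the assumed first value), so
#       feature maps keep their spatial size through the network;
#   (3) the final 1x1 Conv2d is replaced by a 1x3 convolution.
net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),            # stride 1
    nn.ReLU6(inplace=True),
    nn.Conv2d(16, 16, kernel_size=3, stride=1, padding=1, groups=16),  # depthwise, stride 1
    nn.ReLU6(inplace=True),
    nn.Conv2d(16, 3, kernel_size=(1, 3), padding=(0, 1)),            # last layer: 1x3 conv
    # no nn.AvgPool2d(7) at the end: the pooling layer is removed
)

x = torch.randn(1, 3, 32, 32)
y = net(x)
print(y.shape)  # input and output spatial sizes are unchanged
```

Running the stack confirms the claim in the text: with stride 1 throughout and no pooling, a 32 × 32 input yields a 32 × 32 output.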
Next, a method for training an image noise reduction model will be described in detail with reference to specific embodiments.
Fig. 5(a) is a first flowchart illustrating a training method of an image denoising model according to an embodiment of the present application. It should be understood that the main body of the embodiment of the present application may be the image noise reduction device described above, and may also be other devices, and the embodiment of the present application is not particularly limited.
As shown in fig. 5(a), the training method of the image noise reduction model includes the following steps:
and S501, acquiring a blurred sample image.
And S502, inputting the blurred sample image into an annotation model for pixel annotation to obtain pixel information of the blurred sample image output by the annotation model.
The annotation model is a model trained in advance based on blurred sample images, and the pixel information is used to indicate the degree of blur of each pixel in the blurred sample image. It should be understood that the blurred sample images used to train the annotation model and those used to train the image noise reduction model may come from the same data set or from different data sets.
For the specific scheme of training the annotation model with blurred sample images, please refer to the embodiment shown in fig. 2, which is not repeated here.
S503, inputting the blurred sample image into the image denoising model for denoising, and obtaining an initial sample denoising image corresponding to the blurred sample image output by the image denoising model.
In some embodiments, the image denoising model comprises a noise reduction network, and the noise reduction network comprises an extension layer, a depth separable convolution layer, and a mapping layer; for the specific structure of the noise reduction network and the implementation of each layer, please refer to the embodiment shown in fig. 4, which is not described again here.
In some alternative embodiments, fig. 5(b) is a first schematic principle diagram of a training method of an image noise reduction model provided by an embodiment of the present application. As shown in fig. 5(b), the image noise reduction model in the embodiment of the present application further includes an image augmentation network, which is used for performing augmentation processing on the blurred sample image to obtain an augmented image corresponding to the blurred sample image.
Further, noise reduction processing is performed on the augmented image and the blurred sample image through the noise reduction network, so as to obtain an initial sample noise reduction image corresponding to the augmented image and an initial sample noise reduction image corresponding to the blurred sample image.
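The augmentation step can be pictured with a minimal sketch. The random horizontal flip and brightness jitter below are illustrative choices only, not the specific transformations used by the augmentation network in this application:

```python
import numpy as np

def augment(image, rng):
    """Produce a randomly perturbed copy of a (H, W, C) float image in [0, 1]."""
    out = image
    if rng.random() < 0.5:            # random horizontal flip
        out = out[:, ::-1, :]
    scale = rng.uniform(0.8, 1.2)     # random brightness jitter
    return np.clip(out * scale, 0.0, 1.0)

# Both the augmented copy and the original blurred sample would then be
# fed to the noise reduction network during training.
```

Such random changes generate similar but different training samples from one blurred sample image, which is the data-set enlargement effect described above.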
Specifically, when the noise reduction network obtains the initial sample noise reduction image, the following steps are included:
(1) performing dimensionality extension on the augmented image and the blurred sample image through the extension layer to obtain a third feature image output by the extension layer;
(2) performing feature extraction on the third feature image through the depth separable convolution layer to obtain a fourth feature image output by the depth separable convolution layer;
(3) performing dimensionality compression on the fourth feature image through the mapping layer to obtain an initial sample noise reduction image output by the mapping layer.
It should be understood that the specific structure of the noise reduction network and the implementation of each layer are please refer to the embodiment shown in fig. 4, which is not described herein again.
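As a rough illustration of the extension/depthwise/mapping structure (not the exact architecture of this application; channel counts, kernel size, and the ReLU activations are assumptions), a stride-1 block can be sketched in NumPy:

```python
import numpy as np

def pointwise_conv(x, w):
    # 1x1 convolution: mixes channels at each spatial position
    # x: (H, W, Cin), w: (Cin, Cout)
    return np.einsum('hwc,cd->hwd', x, w)

def depthwise_conv3x3(x, k):
    # depthwise 3x3 convolution, stride 1, zero padding (spatial size preserved)
    # x: (H, W, C), k: (3, 3, C) -- one 3x3 filter per channel
    H, W, C = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W, :] * k[i, j, :]
    return out

def noise_reduction_block(x, w_expand, k_dw, w_project):
    # extension layer: dimensionality extension (more channels)
    h = np.maximum(pointwise_conv(x, w_expand), 0.0)
    # depth separable convolution layer: per-channel feature extraction
    h = np.maximum(depthwise_conv3x3(h, k_dw), 0.0)
    # mapping layer: dimensionality compression back to image channels
    return pointwise_conv(h, w_project)
```

Because every stage uses stride 1 and zero padding, the spatial size of the output equals that of the input, matching the stride adjustment described for the noise reduction network.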
S504, adjusting model parameters of the image noise reduction model according to the pixel information and the initial sample noise reduction image.
It should be noted that the manner of training the image noise reduction model by using the pixel information and the initial sample noise reduction image is described in detail in the following embodiments.
In the embodiment of the present application, the image denoising model is trained by using the pixel-level quality evaluation network (namely, the labeling model), so that the precision of the image denoising model can be improved; the overall denoising effect of the image is guaranteed during denoising, and the processing effect of the image denoising model on local blurring is improved. In addition, the blurred sample image is augmented through the augmentation network: a series of random changes is applied to generate similar but different training samples, which enlarges the scale of the training data set. Meanwhile, these random changes reduce the dependence of the model on certain attributes, thereby improving the generalization capability of the image noise reduction model.
In a first example of the present application, fig. 6(a) is a schematic flowchart of a training method of an image noise reduction model provided in an embodiment of the present application, and fig. 6(b) is a schematic diagram of a principle of the training method of the image noise reduction model provided in the embodiment of the present application. As shown in fig. 6(a) and 6(b), the method for training an image denoising model provided in the embodiment of the present application specifically includes:
S601, acquiring a blurred sample image.
S602, inputting the blurred sample image into an annotation model for pixel annotation to obtain pixel information of the blurred sample image output by the annotation model.
The annotation model is a model trained in advance based on the blurred sample image, and the pixel information is used for indicating the blurring degree of each pixel in the blurred sample image.
S603, inputting the blurred sample image into the image denoising model for denoising to obtain an initial sample denoising image corresponding to the blurred sample image output by the image denoising model.
It should be noted that the principles and schemes of steps S601 to S603 are similar to those of steps S501 to S503 in the embodiment shown in fig. 5(a); reference may be made to the above embodiment, and details are not repeated here.
S604, obtaining a first loss value through the first loss function, the pixel information and the initial sample noise reduction image.
Wherein, the calculation formula of the first loss function is as follows:
Loss1 = ‖HR − Mobile(LR)‖₁
wherein Loss1 is the first loss value, Mobile(LR) is the image feature of each pixel in the initial sample noise reduction image output by the noise reduction network for the blurred sample image LR, and HR is the pixel information of each pixel in the blurred sample image. Illustratively, taking the pixel information as a mask: if a pixel is a sharp pixel, the HR value of the pixel is the second pixel information; if a pixel is a blurred pixel, the HR value of the pixel is the first pixel information.
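A minimal NumPy sketch of this first loss; the mean reduction is an assumption (the ‖·‖₁ in the formula may instead denote a plain sum over pixels):

```python
import numpy as np

def first_loss(hr_mask, denoised):
    # L1 distance between the annotation-model pixel information HR
    # and the initial sample noise reduction image, averaged over pixels
    return np.abs(hr_mask - denoised).mean()
```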
S605, adjusting the model parameters of the image noise reduction model based on the first loss value.
In the embodiment of the application, the pixel-level quality evaluation network (namely, the labeling model) is used for training the image denoising model, so that the precision of the image denoising model can be improved, the overall denoising effect of the image is guaranteed in the denoising process of the image denoising model, and the processing effect of the image denoising model on local blurring is improved.
In a second example of the present application, fig. 7(a) is a third schematic flowchart of a training method of an image noise reduction model provided in an embodiment of the present application, and fig. 7(b) is a third schematic principle diagram of the training method of the image noise reduction model provided in the embodiment of the present application. As shown in fig. 7(a) and 7(b), the training method of the image denoising model provided in the embodiment of the present application specifically includes the following steps:
S701, acquiring a blurred sample image.
S702, inputting the blurred sample image into an annotation model for pixel annotation to obtain pixel information of the blurred sample image output by the annotation model.
The annotation model is a model trained in advance based on the blurred sample image, and the pixel information is used for indicating the blurring degree of each pixel in the blurred sample image.
S703, inputting the blurred sample image into the image denoising model for denoising to obtain an initial sample denoising image corresponding to the blurred sample image output by the image denoising model.
S704, obtaining a target sample image through the pixel information and the initial sample noise reduction image.
It should be noted that steps S701 to S703 are similar in principle to steps S501 to S503 in the embodiment shown in fig. 5(a); the scheme for obtaining the target sample image in step S704 is similar to the scheme for obtaining the target noise reduction image in step S203 in the embodiment shown in fig. 2. Reference may be made to the above embodiments, and details are not repeated here.
S705, obtaining a second loss value through the second loss function, the pixel information and the target sample image.
Wherein the formula of the second loss function is as follows:
Loss2 = ‖HR − DR‖₁
wherein Loss2 is the second loss value, DR is the image feature of each pixel in the target sample image, and HR is the pixel information of each pixel in the blurred sample image. Illustratively, taking the pixel information as a mask: if a pixel is a sharp pixel, the HR value of the pixel is the second pixel information; if a pixel is a blurred pixel, the HR value of the pixel is the first pixel information.
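Assuming, as in the mask example above, that the target sample image is the initial noise reduction image multiplied element-wise by the mask (the mask-multiplication scheme described for obtaining the target image), a sketch of the second loss is:

```python
import numpy as np

def second_loss(hr_mask, initial_denoised):
    dr = initial_denoised * hr_mask        # target sample image DR
    return np.abs(hr_mask - dr).mean()     # L1 distance, mean reduction assumed
```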
And S706, adjusting the model parameters of the image noise reduction model through the second loss value.
In the first implementation, the model parameters of the image noise reduction model may be adjusted based only on the second loss value to implement training of the image noise reduction model.
In the second implementation, the image noise reduction model may also be trained by combining the first loss value and the second loss value. Specifically, step S706 includes:
and adjusting the model parameters of the image noise reduction model through the first loss value and the second loss value.
It should be noted that, please refer to the embodiment shown in fig. 6(a) to 6(b) for the manner of obtaining the first loss value, which is not described herein again.
In the embodiment of the application, the pixel-level quality evaluation network (namely, the labeling model) is used for training the image denoising model, so that the precision of the image denoising model can be improved, the overall denoising effect of the image is guaranteed in the denoising process of the image denoising model, and the processing effect of the image denoising model on local blurring is improved.
In addition, the image noise reduction model can be trained through any one or the combination of the first loss value and the second loss value, so that the accuracy of the image noise reduction model can be improved, and the image noise reduction effect of the image noise reduction model can be further improved.
In a third example of the present application, fig. 8(a) is a fourth schematic flowchart of a training method of an image noise reduction model provided in an embodiment of the present application, and fig. 8(b) is a fourth schematic principle diagram of the training method of the image noise reduction model provided in the embodiment of the present application. As shown in fig. 8(a) and 8(b), the training method of the image denoising model provided in the embodiment of the present application specifically includes the following steps:
S801, acquiring a blurred sample image.
S802, inputting the blurred sample image into an annotation model for pixel annotation to obtain pixel information of the blurred sample image output by the annotation model.
The annotation model is a model trained in advance based on the blurred sample image, and the pixel information is used for indicating the blurring degree of each pixel in the blurred sample image.
S803, inputting the blurred sample image into the image noise reduction model for noise reduction processing to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model.
S804, obtaining a target sample image through the pixel information and the initial sample noise reduction image.
It should be noted that steps S801 to S803 are similar in principle to steps S501 to S503 in the embodiment shown in fig. 5(a); the scheme for obtaining the target sample image in step S804 is similar to the scheme for obtaining the target noise reduction image in step S203 in the embodiment shown in fig. 2. Reference may be made to the above embodiments, and details are not repeated here.
S805, acquiring a clear sample image corresponding to the blurred sample image.
S806, obtaining a third loss value through the perception model based on the third loss function, the target sample image and the clear sample image.
S807, adjusting the model parameters of the image noise reduction model through the third loss value.
It should be noted that the type of the perception model is not particularly limited in the embodiment of the present application. For example, the perception model may be a VGG16 network or the like; accordingly, the third loss value may be a content loss indicating the similarity between the target sample image and the clear sample image.
Alternatively, taking the perception model being a VGG16 network as an example, the third loss value may be the Euclidean distance between corresponding features of the target sample image and the clear sample image.
For example, as shown in fig. 8(b), the clear sample image corresponding to the blurred sample image and the target sample image are respectively input into the pre-trained perception model, the feature maps output by the k-th convolutional layer of VGG16 are obtained for each image, and the Euclidean distance between these feature maps is computed to obtain the third loss value.
specifically, the formula of the third loss function is as follows:
L_VGG = (1/N) × Σ_{i=1}^{N} ‖G_i(x) − X_i‖²
wherein L_VGG is the third loss value, G_i(x) is the i-th feature map of the target sample image output by the k-th convolutional layer, X_i is the corresponding feature map of the clear sample image, and N represents the total number of feature maps output by the k-th convolutional layer.
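A sketch of this feature-space comparison, using plain NumPy arrays in place of real VGG16 activations (the squared-Euclidean form and the 1/N normalization follow the formula above; the feature extractor itself is mocked, not an actual VGG16):

```python
import numpy as np

def perceptual_loss(target_feats, sharp_feats):
    """target_feats, sharp_feats: lists of N feature maps G_i(x) and X_i
    taken from the k-th convolutional layer of a pretrained network."""
    n = len(target_feats)
    return sum(np.sum((g - x) ** 2) for g, x in zip(target_feats, sharp_feats)) / n
```

In practice the two lists would come from running the target sample image and the clear sample image through the same frozen perception model.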
In the embodiment of the present application, by constraining the training process of the image noise reduction model through the perception model, the target noise reduction image obtained by the image noise reduction model can be made more realistic, and the noise reduction effect is improved.
In a first example, the step S807 specifically includes:
(1) obtaining a first loss value through a first loss function, pixel information and an initial sample noise reduction image;
it should be noted that the scheme for obtaining the first loss value is similar to the scheme in the embodiments shown in fig. 6(a) and fig. 6(b), and reference may be specifically made to the above embodiments, which are not repeated herein.
(2) adjusting the model parameters of the image noise reduction model based on the first loss value and the third loss value.
Specifically, a target loss value may be determined according to the target loss function, and then the model parameter of the image denoising model may be adjusted by using the target loss value.
Illustratively, the target Loss value Loss may be obtained by the following formula:
Loss = a1 × Loss1 + a3 × L_VGG
wherein a1 is the weight value of the first loss function and a3 is the weight value of the third loss function. The specific values of a1 and a3 are not particularly limited in the embodiment of the present application; for example, a1 and a3 may each be any value such as 0.5 or 1, and a1 and a3 may take the same or different values.
In a second example, the step S807 specifically includes:
(1) obtaining a second loss value through a second loss function, the pixel information and the target sample image;
it should be noted that the scheme for obtaining the second loss value is similar to the scheme in the embodiments shown in fig. 7(a) and fig. 7(b), and reference may be specifically made to the above embodiments, which are not repeated herein.
(2) adjusting the model parameters of the image noise reduction model through the second loss value and the third loss value.
Specifically, the target loss value may be determined according to the target loss function, and then the model parameter of the image denoising model may be adjusted by using the target loss value.
Illustratively, the target Loss value Loss may be obtained by the following formula:
Loss = a2 × Loss2 + a3 × L_VGG
wherein a2 is the weight value of the second loss function and a3 is the weight value of the third loss function. The specific values of a2 and a3 are likewise not particularly limited in the embodiment of the present application, and a2 and a3 may take the same or different values.
In a third example, the step S807 specifically includes:
(1) obtaining a first loss value through a first loss function, pixel information and an initial sample noise reduction image;
(2) obtaining a second loss value through a second loss function, the pixel information and the target sample image;
it should be noted that, the scheme for obtaining the first loss value is similar to the scheme in the embodiment shown in fig. 6(a) and fig. 6(b), and the scheme for obtaining the second loss value is similar to the scheme in the embodiment shown in fig. 7(a) and fig. 7(b), which may specifically refer to the above embodiments, and is not repeated here.
(3) adjusting the model parameters of the image noise reduction model through the first loss value, the second loss value and the third loss value. Specifically, step (3) includes:
I. obtaining a target loss value through the first loss value, the second loss value and the third loss value based on the target loss function.
Wherein, the target Loss value Loss can be obtained by the following formula:
Loss = a1 × Loss1 + a2 × Loss2 + a3 × L_VGG
In one possible example, taking a1 = 0.5, a2 = 2, and a3 = 0.5 as an example:
Loss = 0.5 × Loss1 + 2 × Loss2 + 0.5 × L_VGG
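The weighted combination is direct to implement; the defaults below mirror the example values a1 = 0.5, a2 = 2, a3 = 0.5 given above:

```python
def target_loss(loss1, loss2, loss_vgg, a1=0.5, a2=2.0, a3=0.5):
    # Loss = a1*Loss1 + a2*Loss2 + a3*L_VGG
    return a1 * loss1 + a2 * loss2 + a3 * loss_vgg
```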
II. adjusting the model parameters of the image noise reduction model through the target loss value.
In the embodiment of the application, the image denoising model is trained through the perception model, so that the target denoising image obtained by the image denoising model is more real, and the denoising effect of the image is improved.
In addition, in the training process, the image noise reduction model is trained through any combination of the first loss value, the second loss value and the third loss value, so that the accuracy of the image noise reduction model can be improved, and the image noise reduction model can meet the noise reduction requirements of various scenes.
Fig. 9 is a schematic structural diagram of an image noise reduction device according to an embodiment of the present application. The image noise reduction device can be realized by software and/or hardware. In practical application, the image noise reduction device can be integrated in the terminal equipment or the server.
As shown in fig. 9, the image noise reduction apparatus 900 includes: the denoising module 901 is configured to input the image to be processed into the image denoising model for denoising, so as to obtain an initial denoising image corresponding to the image to be processed output by the image denoising model;
the labeling module 902 is configured to input the image to be processed into a labeling model for pixel labeling, so as to obtain pixel information corresponding to the image to be processed output by the labeling model, where the pixel information is used to indicate a blurring degree of each pixel in the image to be processed;
an obtaining module 903, configured to obtain a target noise-reduced image corresponding to the image to be processed based on the pixel information and the initial noise-reduced image.
In some embodiments, the image denoising model comprises: a noise reduction network, the noise reduction network comprising: the device comprises an extension layer, a depth separable convolution layer and a mapping layer, wherein the step length of each processing layer in the mapping layer is a first step length;
the noise reduction module 901 is specifically configured to: performing dimensionality extension on an image to be processed through an extension layer to obtain a first characteristic image output by the extension layer; performing feature extraction on the first feature image through the depth-separable convolutional layer to obtain a second feature image output by the depth-separable convolutional layer; and performing dimensionality compression on the second characteristic image through the mapping layer to obtain an initial noise reduction image output by the mapping layer.
In some embodiments, the pixel information comprises a mask of the image to be processed; the obtaining module 903 is specifically configured to: and multiplying the initial noise reduction image by the mask of the image to be processed to obtain a target noise reduction image corresponding to the image to be processed.
It should be understood that the image denoising device 900 provided in the embodiment of the present application can be applied to the technical solutions in the embodiments of the image denoising method, and the implementation principles and technical effects thereof are similar and will not be described herein again.
Fig. 10 is a schematic structural diagram of a training apparatus for an image noise reduction model according to an embodiment of the present application. It is to be understood that the training apparatus may be implemented in software and/or hardware. In practical application, the training apparatus can be integrated in a terminal device or a server.
As shown in fig. 10, the training apparatus 1000 includes: an obtaining module 1001 configured to obtain a blurred sample image;
the labeling module 1002 is configured to input the blurred sample image into a labeling model for pixel labeling, so as to obtain pixel information of the blurred sample image output by the labeling model, where the pixel information is used to indicate the blurring degree of each pixel in the blurred sample image;
the noise reduction module 1003 is configured to input the blurred sample image into an image noise reduction model to perform noise reduction processing, so as to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model; and the training module 1004 is configured to adjust model parameters of the image noise reduction model according to the pixel information and the noise reduction image of the initial sample.
In some embodiments, the image denoising model comprises: an image augmentation network and a noise reduction network; the denoising module 1003 is specifically configured to: perform augmentation processing on the blurred sample image through the image augmentation network to obtain an augmented image corresponding to the blurred sample image output by the augmentation network; and perform noise reduction processing on the augmented image and the blurred sample image through the noise reduction network to obtain an initial sample noise reduction image output by the noise reduction network.
In some embodiments, the noise reduction network comprises: the device comprises an extension layer, a depth separable convolution layer and a mapping layer, wherein the step length of each processing layer in the mapping layer is a first step length; the denoising module 1003 is specifically configured to: performing dimensionality extension on the augmented image and the blurred sample image through the extension layer to obtain a third feature image output by the extension layer; performing feature extraction on the third feature image through the depth-separable convolutional layer to obtain a fourth feature image output by the depth-separable convolutional layer; and performing dimensionality compression on the fourth characteristic image through the mapping layer to obtain an initial sample noise reduction image output by the mapping layer.
In some embodiments, training module 1004 is specifically configured to: obtaining a first loss value through a first loss function, pixel information and an initial sample noise reduction image; and adjusting the model parameters of the image noise reduction model based on the first loss value.
In some embodiments, training module 1004 is specifically configured to: obtaining a target sample image through pixel information and an initial sample noise reduction image; obtaining a second loss value through a second loss function, the pixel information and the target sample image; and adjusting the model parameters of the image noise reduction model through the first loss value and the second loss value.
In some embodiments, training module 1004 is specifically configured to: acquiring a clear sample image corresponding to the blurred sample image; obtaining a third loss value based on the third loss function, the target sample image and the clear sample image through the perception model, wherein the third loss value is used for indicating the similarity between the target sample image and the clear sample image; and adjusting the model parameters of the image noise reduction model through the first loss value, the second loss value and the third loss value.
In some embodiments, training module 1004 is specifically configured to: obtaining a target loss value through the first loss value, the second loss value and the third loss value based on the target loss function; and adjusting the model parameters of the image noise reduction model through the target loss value.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 11, the electronic device 1100 includes: a processor 1101, a memory 1102, a communication interface 1103, and a system bus 1104.
The memory 1102 and the communication interface 1103 are connected to the processor 1101 through the system bus 1104 and communicate with each other, the memory 1102 is used for storing program instructions, the communication interface 1103 is used for communicating with other devices, and the processor 1101 is used for calling the program instructions in the memory to execute the scheme of the image noise reduction method according to the embodiment of the method and/or execute the scheme of the training method of the image noise reduction model according to the embodiment of the method.
In particular, the processor 1101 may include one or more processing units. For example, the processor 1101 may be a Central Processing Unit (CPU), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in the processor.
The memory 1102 may be used to store program instructions. The memory 1102 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function) required by at least one function, and the like. The storage data area may store data (e.g., audio data, etc.) created during use of the electronic device 1100, and the like. In addition, the memory 1102 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a Universal Flash Storage (UFS), and the like. The processor 1101 executes various functional applications of the electronic device 1100 and data processing by executing program instructions stored in the memory 1102.
The communication interface 1103 may provide a solution for wireless communication including 2G/3G/4G/5G, etc. applied to the electronic device 1100. The communication interface 1103 may receive electromagnetic waves from an antenna, filter, amplify, etc. the received electromagnetic waves, and transmit the processed electromagnetic waves to a modem processor for demodulation. The communication interface 1103 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves via the antenna for radiation.
In some embodiments, at least some of the functional modules of the communication interface 1103 may be disposed in the processor 1101.
In some embodiments, at least some of the functional blocks of the communication interface 1103 may be disposed in the same device as at least some of the blocks of the processor 1101.
The system bus 1104 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus 1104 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
It should be noted that the number of the memory 1102 and the processor 1101 is not limited in the embodiment of the present application, and may be one or more, and fig. 11 illustrates one example; the memory 1102 and the processor 1101 may be connected by various means such as a bus, either wired or wirelessly.
In practice, the electronic device 1100 may be various forms of computers or mobile terminals. Wherein the computer is, for example, a laptop computer, a desktop computer, a workbench, a server, a blade server, a mainframe computer, etc.; mobile terminals are, for example, personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices.
The electronic device of this embodiment may be configured to implement the technical solutions in the method embodiments, and the implementation principle and the technical effects are similar, which are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium, on which program instructions are stored, and when the program instructions are executed, the method for image noise reduction and/or the method for training an image noise reduction model according to any of the above embodiments are implemented.
An embodiment of the present application further provides a computer program product, including: a computer program which, when being executed by a processor, implements the image denoising method, and/or the training method of the image denoising model, as in any one of the above method embodiments.
In the above embodiments, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed.
In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form. In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The unit formed by the modules can be realized in a hardware form, and can also be realized in a form of hardware and a software functional unit.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks, and so forth. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
Those of ordinary skill in the art will understand that all or some of the steps of the above method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes any medium that can store program code, such as a ROM, a RAM, or a magnetic or optical disk.

Claims (15)

1. An image noise reduction method, comprising:
inputting an image to be processed into an image noise reduction model for noise reduction processing to obtain an initial noise reduction image corresponding to the image to be processed and output by the image noise reduction model;
inputting the image to be processed into a labeling model for pixel labeling to obtain pixel information of the image to be processed output by the labeling model, wherein the pixel information is used for indicating the degree of blur of each pixel in the image to be processed;
and obtaining a target noise reduction image corresponding to the image to be processed based on the pixel information and the initial noise reduction image.
2. The method of claim 1, wherein the image noise reduction model comprises a noise reduction network, and the noise reduction network comprises an extension layer, a depth separable convolution layer, and a mapping layer, wherein the step size of each processing layer in the mapping layer is a first step size;
the method for inputting the image to be processed into the image denoising model to perform denoising processing to obtain the initial denoising image corresponding to the image to be processed and output by the image denoising model comprises the following steps:
performing dimension extension on the image to be processed through the extension layer to obtain a first feature image output by the extension layer;
performing feature extraction on the first feature image through the depth separable convolution layer to obtain a second feature image output by the depth separable convolution layer;
and performing dimension compression on the second feature image through the mapping layer to obtain the initial noise reduction image output by the mapping layer.
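The three-stage noise reduction network of claim 2 (an extension layer raising the channel dimension, a depth separable convolution extracting features, and a mapping layer compressing back to image channels) can be sketched in plain NumPy. The layer widths, the 3x3 kernel size, and the random weights below are illustrative assumptions not disclosed in the patent, and the stride is taken to be 1 as one plausible reading of the "first step size":

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in); a 1x1 convolution is a per-pixel
    # matrix multiply over the channel dimension.
    return np.tensordot(w, x, axes=([1], [0]))

def depthwise3x3(x, k):
    # x: (C, H, W), k: (C, 3, 3); each channel is filtered by its own kernel
    # (depth separable), zero padding 1, stride 1 (assumed "first step size").
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * k[c])
    return out

def noise_reduction_net(img, extend_w, dw_k, map_w):
    f1 = conv1x1(img, extend_w)   # extension layer: dimension extension
    f2 = depthwise3x3(f1, dw_k)   # depth separable convolution layer
    return conv1x1(f2, map_w)     # mapping layer: dimension compression

rng = np.random.default_rng(0)
img = rng.standard_normal((3, 8, 8))
out = noise_reduction_net(
    img,
    rng.standard_normal((12, 3)) * 0.1,     # 3 -> 12 channels (assumed width)
    rng.standard_normal((12, 3, 3)) * 0.1,  # one 3x3 kernel per channel
    rng.standard_normal((3, 12)) * 0.1,     # 12 -> 3 channels
)
```

With stride 1 throughout, the output keeps the spatial size of the input, which is what an image-to-image noise reduction network requires.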
3. The method according to claim 1 or 2, characterized in that the pixel information comprises a mask of the image to be processed; the obtaining a target noise reduction image corresponding to the image to be processed based on the pixel information and the initial noise reduction image includes:
and multiplying the initial noise reduction image by the mask of the image to be processed to obtain a target noise reduction image corresponding to the image to be processed.
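Taken together, claims 1 and 3 describe a two-branch inference pipeline: one branch denoises, the other estimates a per-pixel blur mask, and the two results are combined by element-wise multiplication. A minimal sketch, assuming the two trained models are plain callables and the mask values lie in [0, 1]:

```python
import numpy as np

def denoise_image(image, noise_reduction_model, labeling_model):
    # Both models are hypothetical stand-ins for the trained networks
    # described in the patent.
    initial = noise_reduction_model(image)   # initial noise reduction image
    pixel_info = labeling_model(image)       # per-pixel blur degree (claim 1)
    # Claim 3: the target noise reduction image is the element-wise product
    # of the initial noise reduction image and the mask.
    return initial * pixel_info

# Toy stand-ins: an identity "denoiser" and a constant half-blur mask.
img = np.ones((4, 4), dtype=np.float32)
target = denoise_image(img, lambda x: x, lambda x: np.full_like(x, 0.5))
```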
4. A method for training an image noise reduction model, comprising:
acquiring a blurred sample image;
inputting the blurred sample image into a labeling model for pixel labeling to obtain pixel information of the blurred sample image output by the labeling model, wherein the pixel information is used for indicating the degree of blur of each pixel in the blurred sample image;
inputting the blurred sample image into an image noise reduction model for noise reduction processing to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model;
and adjusting the model parameters of the image noise reduction model according to the pixel information and the initial sample noise reduction image.
5. The method of claim 4, wherein the image denoising model comprises: an image augmentation network and a noise reduction network;
the inputting the blurred sample image into an image noise reduction model for noise reduction processing to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model includes:
performing augmentation processing on the blurred sample image through the image augmentation network to obtain an augmented image corresponding to the blurred sample image output by the image augmentation network;
and performing noise reduction processing on the augmented image and the blurred sample image through the noise reduction network to obtain the initial sample noise reduction image output by the noise reduction network.
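Claim 5 feeds both the augmented image and the original blurred sample into the noise reduction network. The patent does not fix the augmentation operation or how the two inputs are passed jointly; the sketch below assumes a horizontal flip as the augmentation and channel-wise concatenation as the joint input, both of which are illustrative choices:

```python
import numpy as np

def augment(image):
    # Hypothetical augmentation: a horizontal flip over the width axis.
    return image[..., ::-1]

def joint_noise_reduction(image, noise_reduction_net):
    # Claim 5 sketch: pass the augmented image together with the blurred
    # sample image; stacking along the channel axis is one assumed option.
    augmented = augment(image)
    stacked = np.concatenate([image, augmented], axis=0)  # (2C, H, W)
    return noise_reduction_net(stacked)

img = np.arange(2 * 3 * 3, dtype=np.float32).reshape(2, 3, 3)
# Stand-in network: average the two stacked copies back to C channels.
out = joint_noise_reduction(img, lambda x: 0.5 * (x[:2] + x[2:]))
```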
6. The method of claim 5, wherein the noise reduction network comprises an extension layer, a depth separable convolutional layer, and a mapping layer, wherein the step size of each processing layer in the mapping layer is a first step size;
the denoising processing is performed on the augmented image and the blurred sample image through the denoising network to obtain the initial sample denoising image output by the denoising network, and the denoising processing includes:
performing dimensionality extension on the augmented image and the blurred sample image through the extension layer to obtain a third feature image output by the extension layer;
performing feature extraction on the third feature image through the depth-separable convolutional layer to obtain a fourth feature image output by the depth-separable convolutional layer;
and performing dimensionality compression on the fourth feature image through the mapping layer to obtain the initial sample noise reduction image output by the mapping layer.
7. The method according to any one of claims 4 to 6, wherein the adjusting the model parameters of the image noise reduction model according to the pixel information and the initial sample noise reduction image comprises:
obtaining a first loss value through a first loss function, the pixel information and the initial sample noise reduction image;
and adjusting the model parameters of the image noise reduction model based on the first loss value.
8. The method of claim 7, wherein adjusting model parameters of the image noise reduction model based on the first loss value comprises:
obtaining a target sample image through the pixel information and the initial sample noise reduction image;
obtaining a second loss value through a second loss function, the pixel information and the target sample image;
and adjusting the model parameters of the image noise reduction model according to the first loss value and the second loss value.
9. The method of claim 8, wherein the adjusting the model parameters of the image noise reduction model according to the first loss value and the second loss value comprises:
acquiring a clear sample image corresponding to the blurred sample image;
obtaining a third loss value based on a third loss function, the target sample image and the clear sample image through a perception model, wherein the third loss value is used for indicating the similarity between the target sample image and the clear sample image;
and adjusting the model parameters of the image noise reduction model according to the first loss value, the second loss value and the third loss value.
10. The method of claim 9, wherein the adjusting the model parameters of the image noise reduction model by the first loss value, the second loss value, and the third loss value comprises:
obtaining a target loss value through the first loss value, the second loss value and the third loss value based on a target loss function;
and adjusting the model parameters of the image noise reduction model through the target loss value.
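Claims 7 to 10 accumulate three loss values into a single target loss used to adjust the model parameters. The patent does not disclose the exact form of the loss functions or their combination; the sketch below assumes a mask-weighted L1 term as an example per-pixel loss and a weighted sum as the target loss function, with illustrative weights:

```python
import numpy as np

def masked_l1(pred, ref, mask):
    # Hypothetical per-pixel loss: L1 error weighted by the blur mask.
    return float(np.mean(mask * np.abs(pred - ref)))

def target_loss(first, second, third, weights=(1.0, 1.0, 1.0)):
    # Claim 10 sketch: combine the first, second, and third loss values into
    # a single target loss; a weighted sum is one common (assumed) choice.
    a, b, c = weights
    return a * first + b * second + c * third

# Example: a uniform half-blur mask against an all-ones prediction.
first = masked_l1(np.ones((2, 2)), np.zeros((2, 2)), np.full((2, 2), 0.5))
loss = target_loss(first, 0.1, 0.3)
```

The target loss value would then drive a standard gradient-based parameter update of the noise reduction model.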
11. An image noise reduction apparatus, comprising:
the noise reduction module is used for inputting an image to be processed into an image noise reduction model for noise reduction processing to obtain an initial noise reduction image corresponding to the image to be processed and output by the image noise reduction model;
the labeling module is used for inputting the image to be processed into a labeling model for pixel labeling to obtain pixel information corresponding to the image to be processed and output by the labeling model, wherein the pixel information is used for indicating the degree of blur of each pixel in the image to be processed;
and the acquisition module is used for acquiring a target noise reduction image corresponding to the image to be processed based on the pixel information and the initial noise reduction image.
12. An apparatus for training an image noise reduction model, comprising:
the acquisition module is used for acquiring a blurred sample image;
the labeling module is used for inputting the blurred sample image into a labeling model for pixel labeling to obtain pixel information of the blurred sample image output by the labeling model, wherein the pixel information is used for indicating the degree of blur of each pixel in the blurred sample image;
the noise reduction module is used for inputting the blurred sample image into an image noise reduction model for noise reduction processing to obtain an initial sample noise reduction image corresponding to the blurred sample image output by the image noise reduction model;
and the training module is used for adjusting the model parameters of the image noise reduction model according to the pixel information and the initial sample noise reduction image.
13. An electronic device, comprising: a memory for storing program instructions; and at least one processor for invoking the program instructions in the memory to perform the image noise reduction method of any one of claims 1 to 3 and/or the training method of the image noise reduction model of any one of claims 4 to 10.
14. A computer-readable storage medium, wherein the storage medium has a computer program stored thereon; the computer program, when executed, implements the image noise reduction method of any one of claims 1 to 3 and/or the training method of the image noise reduction model of any one of claims 4 to 10.
15. A computer program product, comprising a computer program which, when executed by a processor, implements the image noise reduction method of any one of claims 1 to 3 and/or the training method of the image noise reduction model of any one of claims 4 to 10.
CN202210497347.6A 2022-05-09 2022-05-09 Image noise reduction method, device, equipment and storage medium Pending CN114897725A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210497347.6A CN114897725A (en) 2022-05-09 2022-05-09 Image noise reduction method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN114897725A 2022-08-12

Family

ID=82720881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210497347.6A Pending CN114897725A (en) 2022-05-09 2022-05-09 Image noise reduction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114897725A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410127A (en) * 2018-09-17 2019-03-01 西安电子科技大学 A kind of image de-noising method based on deep learning and multi-scale image enhancing
CN112419184A (en) * 2020-11-19 2021-02-26 重庆邮电大学 Spatial attention map image denoising method integrating local information and global information
WO2021055585A1 (en) * 2019-09-17 2021-03-25 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
CN112991212A (en) * 2021-03-16 2021-06-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113409203A (en) * 2021-06-10 2021-09-17 Oppo广东移动通信有限公司 Image blurring degree determining method, data set constructing method and deblurring method
CN113627314A (en) * 2021-08-05 2021-11-09 Oppo广东移动通信有限公司 Face image blur detection method and device, storage medium and electronic equipment
CN113643189A (en) * 2020-04-27 2021-11-12 深圳市中兴微电子技术有限公司 Image denoising method, device and storage medium
CN113658128A (en) * 2021-08-13 2021-11-16 Oppo广东移动通信有限公司 Image blurring degree determining method, data set constructing method and deblurring method
CN113744160A (en) * 2021-09-15 2021-12-03 马上消费金融股份有限公司 Image processing model training method, image processing device and electronic equipment
CN114399440A (en) * 2022-01-13 2022-04-26 马上消费金融股份有限公司 Image processing method, image processing network training method and device and electronic equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XUEBIN QIN et al.: "U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection", PATTERN RECOGNITION, vol. 106, 31 October 2020 (2020-10-31), pages 1 - 8 *
ZHANG Nana et al.: "A Survey of Classical Image Denoising Methods", 化工自动化及仪表 (Chemical Industry Automation and Instrumentation), vol. 48, no. 5, 16 September 2021 (2021-09-16), pages 409 - 412 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination