CN112308785A - Image denoising method, storage medium and terminal device - Google Patents

Image denoising method, storage medium and terminal device

Info

Publication number
CN112308785A
Authority
CN
China
Prior art keywords
image
denoised
denoising
training
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910708364.8A
Other languages
Chinese (zh)
Other versions
CN112308785B (en)
Inventor
郑加章
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan TCL Group Industrial Research Institute Co Ltd
Original Assignee
Wuhan TCL Group Industrial Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan TCL Group Industrial Research Institute Co Ltd
Priority to CN201910708364.8A
Publication of CN112308785A
Application granted
Publication of CN112308785B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image denoising method, a storage medium and a terminal device. The method inputs an image to be denoised into a trained image denoising model and denoises the image through the model to obtain a corresponding denoised image. The image denoising model is obtained by deep learning of the denoising process on a training image set comprising a plurality of training image groups, wherein each training image group comprises a first image and a second image with the same image content, and the signal-to-noise ratio of the second image is greater than that of the first image. Therefore, because the method denoises with a trained image denoising model obtained by deep learning on the training image set, the operation performance of the image denoising model can be improved, the time consumed by image denoising is reduced, and the image denoising efficiency is improved.

Description

Image denoising method, storage medium and terminal device
Technical Field
The invention relates to the field of computer vision and digital image processing, in particular to an image denoising method, a storage medium and a terminal device.
Background
In recent years, with the continuous development of image acquisition technology, people have ever higher requirements on image quality, and the signal-to-noise ratio is one of the important indexes of image quality. The image acquisition process is affected by hardware, environment and human factors, so that noise of various kinds is introduced into the image, which affects image details to a great extent and ultimately degrades image quality. Accordingly, various denoising methods have emerged one after another, such as denoising based on a non-local self-similarity (NSS) model, denoising based on a sparse model, denoising based on a gradient model, and denoising based on a Markov random field (MRF) model. Although these image denoising methods achieve high denoising quality, they require a large amount of calculation in the denoising process, so that denoising takes a long time and image processing efficiency is affected.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide an image denoising method, a storage medium and a terminal device, aiming at the above defects of the prior art, so as to solve the problem that existing image denoising methods are time-consuming.
The technical scheme adopted by the invention is as follows:
an image denoising method, comprising:
acquiring an image to be denoised, and inputting the image to be denoised into a trained image denoising model, wherein the image denoising model is obtained by training based on a training image set, the training image set comprises a plurality of groups of training image groups, each group of training image group comprises a first image and a second image with the same image content, and the signal-to-noise ratio of the second image is greater than that of the first image;
and denoising the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised.
The image denoising method, wherein the training process of the image denoising model comprises the following steps:
acquiring the training image set;
inputting a first image in the training image set into a preset neural network model, and acquiring a generated image corresponding to the first image output by the preset neural network model;
and correcting the model parameters of the preset neural network model according to the second image corresponding to the first image and the generated image corresponding to the first image until the training condition of the preset neural network model meets a preset condition, so as to obtain a trained image denoising model.
The image denoising method, wherein the first image is an image with a first exposure duration, the second image is an image with a second exposure duration, the first image and the second image are both original image data, and the second exposure duration is greater than the first exposure duration.
Before the inputting a first image in the training image set into a preset neural network model and acquiring a generated image corresponding to the first image output by the preset neural network model, the image denoising method further includes:
carrying out color channel separation on a first image in the training image set to obtain a first image block corresponding to the first image;
and adjusting the exposure duration of the image block corresponding to the first image to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image.
The image denoising method, wherein the adjusting the exposure duration of the image block corresponding to the first image to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image specifically includes:
acquiring a first exposure duration of the first image and a second exposure duration of a second image corresponding to the first image, and calculating an exposure adjustment coefficient according to the second exposure duration and the first exposure duration;
and adjusting the exposure duration of the first image block corresponding to the first image according to the exposure adjustment coefficient to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image.
The image denoising method, wherein the preset neural network model comprises a down-sampling module, a processing module and an up-sampling module; the sequentially inputting each first image in the training image set into a preset neural network model, and acquiring a generated image corresponding to the first image output by the preset neural network model specifically includes:
inputting a first image in the training image set into the downsampling module to obtain a first feature image corresponding to the first image;
inputting the first characteristic image into the processing module, and processing the first characteristic image through the processing module to obtain a second characteristic image, wherein the signal-to-noise ratio of the second characteristic image is higher than that of the first characteristic image;
and inputting the second characteristic image into the up-sampling module, and adjusting the resolution of the second characteristic image through the up-sampling module to obtain a generated image corresponding to the first image, wherein the resolution of the generated image is the same as that of the first image.
The image denoising method, wherein the modifying the model parameters of the preset neural network model according to the second image corresponding to the first image and the generated image corresponding to the first image until the training condition of the preset neural network model meets a preset condition to obtain the trained image denoising model specifically includes:
calculating a multi-scale structure similarity loss function value and a cosine similarity loss function value corresponding to the preset neural network model according to a second image corresponding to the first image and a generated image corresponding to the first image;
obtaining a loss function value of the preset neural network model according to the multi-scale structure similarity loss function value and the cosine similarity loss function value;
and iteratively training the preset neural network model based on the total loss function value until the training condition of the preset neural network model meets a preset condition so as to obtain a trained image denoising model.
The image denoising method, wherein the trained image denoising model comprises a down-sampling module, a processing module and an up-sampling module; the denoising of the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised specifically includes:
inputting the image to be denoised into the down-sampling module to obtain a first characteristic image corresponding to the image to be denoised;
inputting the first characteristic image into the processing module, and processing the first characteristic image through the processing module to obtain a second characteristic image, wherein the signal-to-noise ratio of the second characteristic image is higher than that of the first characteristic image;
inputting the second characteristic image into the up-sampling module, and adjusting the resolution of the second characteristic image through the up-sampling module to obtain a denoised image corresponding to the image to be denoised, wherein the resolution of the denoised image is the same as the resolution of the image to be denoised.
The image denoising method, wherein the acquiring an image to be denoised and inputting the image to be denoised into a trained image denoising model specifically includes:
acquiring an image to be denoised, and judging the image type of the image to be denoised, wherein the image type comprises an original image data type acquired by a camera device or an RGB image type;
when the image type is an original image data type acquired by a camera device, performing color channel separation on the image to be denoised to obtain a second image block corresponding to the image to be denoised, taking the second image block as the image to be denoised, and inputting the image to be denoised into a trained image denoising model;
and when the image type is an RGB image type, inputting the image to be denoised into a trained image denoising model.
The image denoising method, wherein when the image type is an original image data type acquired by a camera device, denoising the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised specifically includes:
denoising the image to be denoised through the image denoising model to obtain an output image;
converting the pixel value of each pixel point contained in the output image to a preset pixel value interval to obtain a converted output image;
stretching the converted output image by a preset multiple to obtain a stretched output image;
and carrying out white balance and demosaicing processing on the stretched output image so as to convert the stretched output image into an RGB image, and taking the RGB image as a de-noised image.
The image denoising method, wherein when the image type is an RGB image type, denoising the image to be denoised by the image denoising model to obtain a denoised image corresponding to the image to be denoised comprises:
denoising the image to be denoised through the image denoising model to obtain an output image;
converting the pixel value of each pixel point contained in the output image to a preset pixel value interval to obtain a converted output image;
and stretching the converted output image by a preset multiple to obtain a de-noised image.
A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps in the image denoising method as in any of the above.
A terminal device, comprising: a processor and a memory; the memory has stored thereon a computer readable program executable by the processor; the processor, when executing the computer readable program, implements the steps in the image denoising method as described in any one of the above.
Advantageous effects: compared with the prior art, in the image denoising method, storage medium and terminal device provided by the invention, the image to be denoised is input into the trained image denoising model, and denoising processing is carried out on the image to be denoised through the trained image denoising model to obtain the denoised image. The image denoising model is obtained by deep learning of the denoising process on a training image set comprising a plurality of training image groups, wherein each training image group comprises a first image and a second image with the same image content, and the signal-to-noise ratio of the second image is greater than that of the first image. Therefore, because the method denoises with a trained image denoising model obtained by deep learning on the training image set, the operation performance of the image denoising model can be improved, the time consumed by image denoising is reduced, and the image denoising efficiency is improved.
Drawings
Fig. 1 is a flowchart of an image denoising method provided by the present invention.
Fig. 2 is a schematic diagram of a first image after color channel separation in the image denoising method provided by the present invention.
Fig. 3 is a schematic diagram of a training process of an image denoising model in the image denoising method provided by the present invention.
Fig. 4 is a schematic diagram of a preset network model in a training process of an image denoising model in the image denoising method provided by the present invention.
Fig. 5 is a flowchart of step S10 in the image denoising method provided by the present invention.
Fig. 6 is a flowchart of step S20 in the image denoising method provided by the present invention.
Fig. 7 is a data diagram of processing time of raw image data of 4032 × 3024 × 1 in the image denoising method provided in the present invention.
FIG. 8 is a diagram illustrating an image to be denoised.
Fig. 9 is a schematic diagram of the image to be denoised in fig. 8 after being processed by the image denoising method provided by the present invention.
FIG. 10 is a diagram illustrating another image to be denoised.
Fig. 11 is a schematic diagram of the image to be denoised in fig. 10 after being processed by the image denoising method provided by the present invention.
Fig. 12 is a schematic structural diagram of a terminal device provided in the present invention.
Detailed Description
The present invention provides an image denoising method, a storage medium and a terminal device, and in order to make the objects, technical solutions and effects of the present invention clearer and clearer, the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention will be further explained by the description of the embodiments with reference to the drawings.
The present embodiment provides an image denoising method, as shown in fig. 1, the method includes:
s100, obtaining an image to be denoised, and inputting the image to be denoised into a trained image denoising model, wherein the image denoising model is obtained by training based on a training image set, the training image set comprises a plurality of groups of training image groups, each group of training image group comprises a first image and a second image with the same image content, and the signal-to-noise ratio of the second image is greater than that of the first image.
Specifically, the trained image denoising model is a neural network model trained based on a training image set, for example, a convolutional neural network (CNN) model. The plurality of training image groups included in the training image set may be captured by an imaging device (e.g., a camera) or obtained from a network (e.g., Baidu). Each training image group comprises two training images, respectively denoted as a first image and a second image. That the first image and the second image have the same image content means that the object content carried by the first image is the same as the object content carried by the second image, so that when the first image and the second image are overlapped, the objects carried by the first image cover the corresponding objects in the second image. Meanwhile, in this embodiment, the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the first image, where the signal-to-noise ratio refers to the ratio of normal image information to noise information in an image and is generally expressed in dB; a higher signal-to-noise ratio indicates less noise in the image.
In an implementation manner of this embodiment, as shown in fig. 3, the image denoising model is a neural network model obtained by training based on a training image set, and a training process of the image denoising model may include the following steps:
and M10, acquiring the training image set.
Specifically, the training image set includes a plurality of training image groups, each training image group includes a first image and a second image having the same image content, and the signal-to-noise ratio of the second image is greater than the signal-to-noise ratio of the first image. In this embodiment, the first image and the second image are raw image data collected by the camera device, and the camera device shoots the same scene based on the same configuration parameters. The raw image data may be the raw data obtained when a CMOS (Complementary Metal-Oxide-Semiconductor) or CCD (Charge Coupled Device) image sensor converts the captured light signal into a digital signal; raw image data is image data that has not been processed or compressed. In addition, the exposure duration of the second image is greater than the exposure duration of the first image. The exposure duration refers to the time interval from the opening of the shutter to its closing, during which light passing through the lens aperture exposes the sensor (or film) and forms an image; when the exposure duration is longer, more light enters through the aperture and the noise carried by the image is reduced, which is why the signal-to-noise ratio of the second image is higher than that of the first image.
Further, the first image in each training image group is an input item of the preset neural network model corresponding to the image denoising model, and the second image is a reference item of the preset neural network model: the second image is compared with the generated image that the preset neural network model outputs for the input first image, a loss function value of the preset neural network model is determined, and the preset neural network model is corrected according to the loss function value. In addition, since the first image is raw image data, the first image needs to be preprocessed before being input to the preset neural network model, so that the first image meets the input requirement of the preset neural network model.
In an implementation manner of this embodiment, the process of preprocessing the first image may include the following steps:
carrying out color channel separation on a first image in the training image set to obtain a first image block corresponding to the first image;
and adjusting the exposure duration of the image block corresponding to the first image to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image.
Specifically, the first image is original image data, and performing color channel separation on the first image means separating the image data of the first image into channels according to the color order of the first image. For example, as shown in fig. 2, the first image is original image data of H × W × 1 with a color order of RGBG, where H denotes the height of the first image, W denotes the width of the first image, and 1 denotes the number of color channels of the first image. Color channel separation of the first image generates a first image block of H/2 × W/2 × 4, where H/2 denotes the height of the first image block, W/2 denotes the width of the first image block, and 4 denotes the number of color channels of the first image block. The 4 color channels are referred to herein as a first color channel 1 storing R data, a second color channel 2 storing G data, a third color channel 3 storing B data, and a fourth color channel 4 storing G data, respectively.
Further, the exposure adjustment coefficient is obtained according to a first exposure duration of the first image and a second exposure duration of the second image. Correspondingly, the adjusting the exposure duration of the image block corresponding to the first image to obtain the first image block after the exposure duration is adjusted, and taking the first image block after the exposure duration is adjusted as the first image specifically includes:
acquiring a first exposure duration of the first image and a second exposure duration of a second image corresponding to the first image, and calculating an exposure adjustment coefficient according to the second exposure duration and the first exposure duration;
and adjusting the exposure duration of the first image block corresponding to the first image according to the exposure adjustment coefficient to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image.
Specifically, the exposure adjustment coefficient is the ratio of the second exposure duration to the first exposure duration; for example, if the first exposure duration is 0.1 s and the second exposure duration is 10 s, the exposure adjustment coefficient is 10/0.1 = 100. In addition, after the exposure adjustment coefficient is obtained, adjusting the exposure duration of the first image block according to the exposure adjustment coefficient specifically includes: multiplying the pixel data of the first image block by the exposure adjustment coefficient to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image. In this way the equivalent exposure duration of the preprocessed first image is equal to the exposure duration of the second image, and the brightness of the first image is similar to that of the second image. Therefore, when the first image and the second image are used to train the preset neural network, the influence of image brightness on the training can be reduced, the training speed of the preset neural network is increased, and the training speed of the image denoising model is thereby improved.
In addition, in an implementation manner of this embodiment, the preprocessing may further include black level removal, normalization processing, and clamping processing. The black level removal and the normalization processing are performed after the color channel separation and before the exposure duration adjustment; that is, after the color channel separation is performed on the first image in the training image set to obtain the first image block corresponding to the first image, and before the exposure duration of the first image block is adjusted, the first image block is subjected to black level removal and normalization in sequence. The clamping processing is performed after the exposure duration adjustment; that is, after the exposure duration of the first image block is adjusted and the first image block with the adjusted exposure duration is taken as the first image, the clamping processing is performed.
In this embodiment, the black level removal is used to correct the data offset: a black level value, which may be 7.5 or the like, is subtracted from the image data of each color channel. The normalization processing divides the image data of each color channel of the first image block with the black level removed by a normalization coefficient, so as to normalize the image data of each color channel to [0,1], where the normalization coefficient may be determined according to the number of storage bits of the first image block; for example, when the first image block is stored with 14 bits and its maximum value is 16383, the normalization coefficient is 16383, and when the first image block is stored with 8 bits and its maximum value is 255, the normalization coefficient is 255. The clamping processing clamps the pixel values of all pixel points in the first image block after the exposure duration adjustment to a preset pixel value interval: the pixel values of all pixel points greater than the upper limit of the preset pixel value interval are replaced by the upper limit, which prevents the first image block from being over-exposed. The preset pixel value interval is preferably [0,1]: whether each pixel value in the exposure-adjusted first image block is within [0,1] is judged, the pixel values within [0,1] are kept unchanged, and the pixel values outside [0,1] are adjusted to 1.
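The preprocessing pipeline described above can be summarized with a short sketch. The following Python/NumPy code is illustrative only: the RGBG packing layout, the black level of 7.5 and the 14-bit normalization coefficient are the example values quoted in the text, while the function name and the exact assignment of Bayer positions to channels are assumptions rather than part of the patent.

```python
import numpy as np

def preprocess_raw(raw, t_first, t_second, black_level=7.5, norm_coeff=16383.0):
    """raw: H x W Bayer mosaic (RGBG order assumed); returns an H/2 x W/2 x 4 block."""
    # 1. Color channel separation: pack the 2x2 Bayer pattern into 4 channels.
    c1 = raw[0::2, 0::2]   # R
    c2 = raw[0::2, 1::2]   # G
    c3 = raw[1::2, 0::2]   # B
    c4 = raw[1::2, 1::2]   # G
    block = np.stack([c1, c2, c3, c4], axis=-1).astype(np.float32)

    # 2. Black level removal, then normalization to [0, 1] by the coefficient
    #    determined by the storage bit depth (16383 for 14-bit data).
    block = (block - black_level) / norm_coeff

    # 3. Exposure duration adjustment: multiply by the exposure adjustment
    #    coefficient, i.e. the ratio of the exposure durations (e.g. 10 / 0.1 = 100).
    block = block * (t_second / t_first)

    # 4. Clamping to the preset pixel value interval [0, 1].
    return np.clip(block, 0.0, 1.0)
```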
Further, in an implementation manner of this embodiment, as shown in fig. 4, the preset neural network model includes a down-sampling module 10, a processing module 20, and an up-sampling module 30, where the down-sampling module 10, the processing module 20, and the up-sampling module 30 are sequentially arranged, and an output of a previous module is an input of a next module. Correspondingly, the sequentially inputting each first image in the training image set into a preset neural network model, and acquiring a generated image corresponding to the first image output by the preset neural network model specifically includes:
inputting a first image in the training image set into the downsampling module to obtain a first feature image corresponding to the first image;
inputting the first characteristic image into the processing module, and processing the first characteristic image through the processing module to obtain a second characteristic image, wherein the signal-to-noise ratio of the second characteristic image is higher than that of the first characteristic image;
and inputting the second characteristic image into the up-sampling module, and adjusting the resolution of the second characteristic image through the up-sampling module to obtain a generated image corresponding to the first image, wherein the resolution of the generated image is the same as that of the first image.
Specifically, the input item of the down-sampling module 10 is the preprocessed first image, and the down-sampling module 10 is configured to extract the image features of the first image to obtain the first feature image corresponding to the input first image. In this embodiment, the down-sampling module 10 includes 5 down-sampling layers. The first down-sampling layer 11 includes a 5 × 5 convolutional layer with a step size of 2 and a 1 × 1 convolutional layer with a step size of 1, and the second to fifth down-sampling layers 12 to 15 each include a 3 × 3 convolutional layer with a step size of 2 and an inverted residual block with an expansion coefficient of 4. In the first down-sampling layer 11, the number of channels of the 5 × 5 convolutional layer is 32 and the number of channels of the 1 × 1 convolutional layer is 16. The number of channels of the 3 × 3 convolutional layer and the inverted residual block is 32 in the second down-sampling layer 12, 32 in the third down-sampling layer 13, 64 in the fourth down-sampling layer 14, and 128 in the fifth down-sampling layer 15. The first down-sampling layer 11 to the fifth down-sampling layer 15 all extract image features of the first image, and each down-sampling layer further refines the features extracted by the previous layer, so that the features extracted by a later layer are more abstract than those of an earlier layer, which can improve the accuracy of feature extraction.
Further, the intermediate processing module 20 adopts 4 inverted residual blocks (Inverted Residuals) with an expansion coefficient of 4, each with 128 channels. The 4 inverted residual blocks perform a nonlinear operation on the first feature image extracted by the down-sampling module to obtain the second feature image, so that the signal-to-noise ratio of the second feature image is higher than that of the first feature image. In this embodiment, using inverted residual blocks for the nonlinear operation can enhance the learning capability of the model and increase the training speed of the preset neural network.
Further, the up-sampling module 30 includes 5 up-sampling layers. Each of the first up-sampling layer 31 to the fourth up-sampling layer 34 includes a bilinear interpolation layer, a 1 × 1 convolutional layer with a step size of 1, a short connection layer, and an inverted residual block (Inverted Residuals) with an expansion coefficient of 4; the fifth up-sampling layer 35 uses a 2 × 2 deconvolution layer with a step size of 2 and 4 channels, and the image output by the fifth up-sampling layer 35 is the output image of the preset neural network. The number of channels of the bilinear interpolation layer, the 1 × 1 convolutional layer, the short connection layer and the inverted residual block is 64 in the first up-sampling layer 31, 32 in the second up-sampling layer 32 and the third up-sampling layer 33, and 16 in the fourth up-sampling layer 34. In addition, the 1 × 1 convolutional layers in the first up-sampling layer 31, the second up-sampling layer 32 and the fourth up-sampling layer 34 perform channel compression on the bilinearly up-sampled result to halve the number of channels, and the short connection operation in the first to fourth up-sampling layers adds, point by point, the output of the 1 × 1 convolutional layer with a step size of 1 and the output of the down-sampling stage that has the same number of channels, so as to fuse low-order and high-order features.
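For readers who want a concrete picture of the three modules, the following PyTorch sketch mirrors the layer and channel counts quoted above (4-channel packed input, 5 down-sampling layers, 4 inverted residual blocks at 128 channels, 4 bilinear up-sampling layers with skip additions, and a final 2 × 2 deconvolution). It is a minimal reading aid, not the patented implementation: padding sizes, activation functions and the exact form of the inverted residual block are assumptions, and the spatial size of the packed input is assumed to be a multiple of 32 so that the skip additions align.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InvertedResidual(nn.Module):
    """MobileNetV2-style inverted residual block with expansion factor 4 (assumed form)."""
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x):
        return x + self.block(x)            # residual connection

class DenoiseNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Down-sampling module: first layer 5x5/s2 + 1x1, then 3x3/s2 + inverted residual.
        self.down1 = nn.Sequential(nn.Conv2d(4, 32, 5, stride=2, padding=2),
                                   nn.Conv2d(32, 16, 1))
        self.down2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1),
                                   InvertedResidual(32))
        self.down3 = nn.Sequential(nn.Conv2d(32, 32, 3, stride=2, padding=1),
                                   InvertedResidual(32))
        self.down4 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1),
                                   InvertedResidual(64))
        self.down5 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1),
                                   InvertedResidual(128))
        # Processing module: 4 inverted residual blocks at 128 channels.
        self.process = nn.Sequential(*[InvertedResidual(128) for _ in range(4)])
        # Up-sampling module: bilinear upsample + 1x1 compression + skip addition + IR block.
        self.up1, self.ir1 = nn.Conv2d(128, 64, 1), InvertedResidual(64)
        self.up2, self.ir2 = nn.Conv2d(64, 32, 1), InvertedResidual(32)
        self.up3, self.ir3 = nn.Conv2d(32, 32, 1), InvertedResidual(32)
        self.up4, self.ir4 = nn.Conv2d(32, 16, 1), InvertedResidual(16)
        self.up5 = nn.ConvTranspose2d(16, 4, 2, stride=2)   # final 2x2 deconvolution

    def _up(self, x, conv, ir, skip):
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
        return ir(conv(x) + skip)           # short connection fuses low- and high-order features

    def forward(self, x):
        d1 = self.down1(x); d2 = self.down2(d1); d3 = self.down3(d2)
        d4 = self.down4(d3); d5 = self.down5(d4)
        p = self.process(d5)
        u = self._up(p, self.up1, self.ir1, d4)
        u = self._up(u, self.up2, self.ir2, d3)
        u = self._up(u, self.up3, self.ir3, d2)
        u = self._up(u, self.up4, self.ir4, d1)
        return self.up5(u)                  # same resolution and channel count as the packed input
```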
M30, correcting the model parameters of the preset neural network model according to the second image corresponding to the first image and the generated image corresponding to the first image until the training condition of the preset neural network model meets a preset condition, so as to obtain a trained image denoising model.
Specifically, the preset condition includes that the loss function value meets a preset requirement or that the number of training times reaches a preset number. The preset requirement may be determined according to the accuracy of the image denoising model, which is not described in detail herein, and the preset number may be a maximum number of training iterations of the preset neural network, for example, 4000. Therefore, the preset neural network outputs a generated image, and the loss function value of the preset neural network is calculated according to the generated image and the second image; after the loss function value is calculated, whether the loss function value meets the preset requirement is judged. If the loss function value meets the preset requirement, the training ends. If the loss function value does not meet the preset requirement, whether the number of training times of the preset neural network has reached the preset number is judged: if not, the network parameters of the preset neural network are corrected according to the loss function value; if the preset number has been reached, the training ends. In this way, whether the training of the preset neural network is finished is judged by both the loss function value and the number of training times, which avoids the training of the preset neural network entering an endless loop because the loss function value cannot meet the preset requirement.
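As a rough illustration of this stopping logic (and nothing more), the sketch below checks the loss against a preset requirement first and the iteration count against a maximum (4000 in the text's example) second. The loss threshold, optimizer and data pipeline are placeholders assumed for the sketch.

```python
def train_denoiser(model, data_loader, loss_fn, optimizer,
                   loss_requirement=0.01, max_iterations=4000):
    """Train until the loss meets the preset requirement or max_iterations is reached."""
    for iteration, (first_image, second_image) in enumerate(data_loader, start=1):
        generated = model(first_image)              # generated image for the first image
        loss = loss_fn(generated, second_image)     # compare with the reference second image
        if loss.item() <= loss_requirement:         # loss meets the preset requirement
            break
        if iteration >= max_iterations:             # preset maximum number of training times
            break
        optimizer.zero_grad()
        loss.backward()                             # correct the model parameters
        optimizer.step()
    return model
```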
Furthermore, in this embodiment, a post-processing operation may also be performed on the generated image before calculating the loss function value from the generated image and the second image. The post-processing operation may specifically include:
converting the pixel value of each pixel point contained in the generated image into a preset pixel value interval to obtain a converted output image;
and stretching the converted output image by a preset multiple to obtain a stretched generated image, and taking the stretched generated image as the generated image.
Specifically, converting the pixel values of the pixel points included in the generated image into the preset pixel value interval may be implemented by comparing all the pixel values in the generated image with the upper limit of the preset pixel value interval and replacing the pixel values greater than the upper limit with the upper limit, so as to prevent the generated image from being over-exposed. For example, when the preset pixel value interval is [0,1], whether each pixel value in the generated image is within [0,1] is judged, the pixel values within [0,1] are kept unchanged, and the pixel values outside [0,1] are adjusted to 1.
Further, the preset multiple is preferably 255, the stretching the converted generated image by the preset multiple is to multiply a pixel value of each pixel included in the converted output image by 255 to obtain a stretched generated image, and the stretched generated image is used as the generated image.
Further, in an implementation manner of this embodiment, the loss function value is calculated according to a multi-scale structure similarity loss function and a cosine similarity loss function. Correspondingly, the modifying the model parameters of the preset neural network model according to the second image corresponding to the first image and the generated image corresponding to the first image until the training condition of the preset neural network model meets a preset condition to obtain the trained image denoising model specifically includes:
calculating a multi-scale structure similarity loss function value and a cosine similarity loss function value corresponding to the preset neural network model according to a second image corresponding to the first image and a generated image corresponding to the first image;
obtaining a loss function value of the preset neural network model according to the multi-scale structure similarity loss function value and the cosine similarity loss function value;
and iteratively training the preset neural network model based on the total loss function value until the training condition of the preset neural network model meets a preset condition so as to obtain a trained image denoising model.
Specifically, the preset neural network uses a combination of a multi-scale structure similarity loss function and a cosine similarity loss function as its loss function. When calculating the loss function value of the preset neural network, the multi-scale structure similarity loss function value and the cosine similarity loss function value may be calculated respectively, and the total loss function value is then obtained from the two. In this embodiment, the loss function value of the preset neural network model is a × multi-scale structure similarity loss function value + b × cosine similarity loss function value, where a and b are weight coefficients. For example, if the weight coefficients a and b are both 1, the loss function value of the preset neural network model is the multi-scale structure similarity loss function value + the cosine similarity loss function value. In one implementation of this embodiment, the multi-scale structure similarity loss function is preferably a 5-scale structure similarity loss function, where β1 = γ1 = 0.0448 for the first scale, β2 = γ2 = 0.2856 for the second scale, β3 = γ3 = 0.3001 for the third scale, β4 = γ4 = 0.2363 for the fourth scale, and α5 = β5 = γ5 = 0.1333 for the fifth scale.
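A compact sketch of such a combined loss is given below. It assumes the third-party pytorch_msssim package (whose default 5-scale weights match the values quoted above) and writes each term as 1 minus the corresponding similarity so that a lower loss is better; the weights a and b, the data range and the per-pixel cosine term over the channel dimension are illustrative choices, not mandated by the text.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim   # assumed dependency providing multi-scale SSIM

def combined_loss(generated, reference, a=1.0, b=1.0):
    """Weighted sum of an MS-SSIM term and a cosine-similarity term."""
    # Multi-scale structure similarity term (1 - MS-SSIM, so 0 means a perfect match).
    msssim_term = 1.0 - ms_ssim(generated, reference, data_range=1.0)
    # Cosine similarity term, averaged over all spatial positions.
    cosine_term = 1.0 - F.cosine_similarity(generated, reference, dim=1).mean()
    return a * msssim_term + b * cosine_term
```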
Further, in an implementation manner of the embodiment, since the image denoising model is trained by using an original data image, the image to be denoised may be an RGB image or original image data. Therefore, after the image to be denoised is obtained, the image type of the image to be denoised can be judged, and corresponding processing is carried out according to the image type of the image to be denoised. Correspondingly, as shown in fig. 5, the acquiring an image to be denoised and inputting the image to be denoised into a trained image denoising model specifically includes:
S11, acquiring an image to be denoised, and judging the image type of the image to be denoised, wherein the image type comprises an original image data type acquired by a camera device or an RGB image type;
S12, when the image type is the original image data type acquired by a camera device, performing color channel separation on the image to be denoised to obtain a second image block corresponding to the image to be denoised, taking the second image block as the image to be denoised, and inputting the image to be denoised into the trained image denoising model;
S13, when the image type is the RGB image type, inputting the image to be denoised into the trained image denoising model.
Specifically, the image to be denoised may be raw image data or RGB data, and when the image to be denoised is the raw image data, the image to be denoised needs to be preprocessed before the image to be denoised is input into the image denoising model. The preprocessing mode comprises color channel separation, and in addition, the preprocessing mode can also comprise black level removal, normalization processing, exposure duration amplification processing and clamping processing. The processing procedures of the color channel separation, black level removal, normalization processing, exposure duration adjustment processing and clamping processing are the same as the processing procedures of the image denoising model, and are not repeated here. Of course, it should be noted that when the exposure duration of the image to be denoised is adjusted, the second exposure duration in the exposure duration adjustment process is the expected exposure duration, for example, 10 s. In addition, when the image to be denoised is an RGB image, the RGB image is directly input into the image denoising model.
S200, denoising the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised.
Specifically, the denoising of the image to be denoised through the image denoising model refers to inputting the image to be denoised into the image denoising model as an input item of the image denoising model, and removing noise of the image to be denoised through the denoising image model to obtain a denoised image, wherein a signal-to-noise ratio of the denoised image is higher than a signal-to-noise ratio of the image to be denoised.
Further, as can be known from the training process of the image denoising model, the image denoising model includes a down-sampling module, a processing module, and an up-sampling module. Correspondingly, as shown in fig. 6, the denoising the image to be denoised by the image denoising model to obtain a denoised image corresponding to the image to be denoised specifically includes:
S21, inputting the image to be denoised into the down-sampling module to obtain a first characteristic image corresponding to the image to be denoised;
S22, inputting the first characteristic image into the processing module, and processing the first characteristic image through the processing module to obtain a second characteristic image, wherein the signal-to-noise ratio of the second characteristic image is higher than that of the first characteristic image;
S23, inputting the second characteristic image into the up-sampling module, and adjusting the resolution of the second characteristic image through the up-sampling module to obtain a denoised image corresponding to the image to be denoised, wherein the resolution of the denoised image is the same as the resolution of the image to be denoised.
Specifically, the down-sampling module, the processing module, and the up-sampling module have been described in detail in the training process of the image denoising model, and are not described herein again.
Further, in an implementation manner of this embodiment, since the image to be denoised may be an original image or an RGB image, after the image denoising model outputs the denoised image, the output needs to be post-processed according to the image type of the image to be denoised. Correspondingly, when the image type is the original image data type acquired by a camera device, denoising the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised specifically includes:
denoising the image to be denoised through the image denoising model to obtain an output image;
converting the pixel value of each pixel point contained in the output image to a preset pixel value interval to obtain a converted output image;
stretching the converted output image by a preset multiple to obtain a stretched output image;
and carrying out white balance and demosaicing processing on the stretched output image so as to convert the stretched output image into an RGB image, and taking the RGB image as a de-noised image.
Specifically, converting the pixel values of the pixel points included in the output image into the preset pixel value interval includes comparing all the pixel values in the output image with the upper limit of the preset pixel value interval and replacing the pixel values greater than the upper limit with the upper limit, so as to prevent the output image from being over-exposed. For example, when the preset pixel value interval is [0,1], whether each pixel value in the output image is within [0,1] is judged, the pixel values within [0,1] are kept unchanged, and the pixel values outside [0,1] are adjusted to 1.
Further, the preset multiple is preferably 255, and the stretching the converted output image by the preset multiple is to multiply a pixel value of each pixel included in the converted output image by 255 to obtain a stretched output image. Further, since the stretched output image is original image data, white balancing and demosaicing of the stretched output image are required to convert the stretched output image into an RGB image.
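The RAW-branch post-processing can be pictured with the sketch below: clamp, stretch by 255, then demosaic and white-balance back to RGB. The OpenCV Bayer conversion code and the simple gray-world white balance are stand-in choices assumed for illustration; the patent does not prescribe particular white-balance or demosaicing algorithms, and the model's 4-channel output is assumed to have been re-packed into a single-channel Bayer mosaic first.

```python
import numpy as np
import cv2

def postprocess_raw(output_mosaic):
    """output_mosaic: H x W Bayer mosaic in [0, 1], re-packed from the model output."""
    stretched = np.clip(output_mosaic, 0.0, 1.0) * 255.0          # clamp, then stretch by 255
    rgb = cv2.cvtColor(stretched.astype(np.uint8), cv2.COLOR_BayerRG2RGB)  # demosaic to RGB
    # Gray-world white balance: scale each channel toward the global mean.
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    balanced = np.clip(rgb.astype(np.float32) * gains, 0, 255)
    return balanced.astype(np.uint8)
```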
Further, in an implementation manner of this embodiment, when the image type is an RGB image type, denoising the image to be denoised by the image denoising model to obtain a denoised image corresponding to the image to be denoised includes:
denoising the image to be denoised through the image denoising model to obtain an output image;
converting the pixel value of each pixel point contained in the output image to a preset pixel value interval to obtain a converted output image;
and stretching the converted output image by a preset multiple to obtain a de-noised image.
Specifically, the conversion operation and the stretching operation performed on the output image are the same as those performed when the image type is the original image data type acquired by the camera device, and are not described herein again.
In this embodiment, an image denoising model trained on a training image set (containing a plurality of training image groups, each including a first image and a second image with the same image content, the signal-to-noise ratio of the second image being greater than that of the first image) is used to denoise the image to be denoised, and the image to be denoised is preprocessed before denoising. On the one hand, this improves the denoising effect: for the image to be denoised shown in fig. 8, the denoised image obtained by the image denoising method of this embodiment is shown in fig. 9, and for the image to be denoised shown in fig. 10, the denoised image is shown in fig. 11. On the other hand, the time consumed by denoising is reduced: for example, as shown in fig. 7, the processing time for raw image data of 4032 × 3024 × 1 is 1.8 s.
Based on the image denoising method, the present invention further provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps in the image denoising method according to the above embodiment.
Based on the image denoising method, the present invention further provides a terminal device, as shown in fig. 12, including at least one processor (processor) 20; a display screen 21; and a memory (memory)22, and may further include a communication Interface (Communications Interface)23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.
Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 22, as a computer-readable storage medium, may be configured to store software programs and computer-executable programs, such as the program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional applications and data processing, i.e. implements the methods in the above-described embodiments, by running the software programs, instructions or modules stored in the memory 22.
The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. Further, the memory 22 may include a high speed random access memory and may also include a non-volatile memory. For example, a variety of media that can store program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, may also be transient storage media.
In addition, the specific processes loaded and executed by the storage medium and the instruction processors in the terminal device are described in detail in the method, and are not stated herein.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (13)

1. An image denoising method, comprising:
acquiring an image to be denoised, and inputting the image to be denoised into a trained image denoising model, wherein the image denoising model is obtained by training based on a training image set, the training image set comprises a plurality of groups of training image groups, each group of training image group comprises a first image and a second image with the same image content, and the signal-to-noise ratio of the second image is greater than that of the first image;
and denoising the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised.
2. The image denoising method of claim 1, wherein the training process of the image denoising model comprises:
acquiring the training image set;
inputting a first image in the training image set into a preset neural network model, and acquiring a generated image corresponding to the first image output by the preset neural network model;
and correcting the model parameters of the preset neural network model according to the second image corresponding to the first image and the generated image corresponding to the first image until the training condition of the preset neural network model meets a preset condition, so as to obtain a trained image denoising model.
3. The image denoising method of claim 2, wherein the first image is an image with a first exposure duration, the second image is an image with a second exposure duration, the first image and the second image are both raw image data, and the second exposure duration is greater than the first exposure duration.
4. The image denoising method of claim 2, wherein before inputting a first image in the training image set into a preset neural network model and obtaining a generated image corresponding to the first image output by the preset neural network model, the method further comprises:
carrying out color channel separation on a first image in the training image set to obtain a first image block corresponding to the first image;
and adjusting the exposure duration of the first image block corresponding to the first image to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image.
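A common way to realize the color channel separation of claim 4 is to pack the single-plane Bayer raw data into a half-resolution, four-channel block; the sketch below assumes an RGGB layout, which the claim itself does not fix.

```python
import numpy as np

def separate_color_channels(raw: np.ndarray) -> np.ndarray:
    """Pack a Bayer raw image of shape (H, W) into a four-channel block of
    shape (H/2, W/2, 4). An RGGB color filter layout is assumed here."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.stack([r, g1, g2, b], axis=-1)
```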
5. The image denoising method of claim 4, wherein the adjusting the exposure duration of the first image block corresponding to the first image to obtain the first image block with the adjusted exposure duration, and the taking the first image block with the adjusted exposure duration as the first image specifically comprises:
acquiring a first exposure duration of the first image and a second exposure duration of a second image corresponding to the first image, and calculating an exposure adjustment coefficient according to the second exposure duration and the first exposure duration;
and adjusting the exposure duration of the first image block corresponding to the first image according to the exposure adjustment coefficient to obtain the first image block with the adjusted exposure duration, and taking the first image block with the adjusted exposure duration as the first image.
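Claim 5 computes an exposure adjustment coefficient from the two exposure durations; a natural reading, used in this sketch, is the ratio of the second duration to the first, applied as a brightness gain to the short-exposure block. The exact formula and the white-level clipping are assumptions.

```python
import numpy as np

def adjust_exposure(first_block: np.ndarray, first_exposure: float,
                    second_exposure: float, white_level: float = 1.0) -> np.ndarray:
    """Scale the short-exposure image block so its brightness roughly matches
    the long-exposure reference (claim 5)."""
    coefficient = second_exposure / first_exposure   # e.g. 10 s / 0.1 s = 100
    return np.clip(first_block * coefficient, 0.0, white_level)
```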
6. The image denoising method according to any one of claims 2 to 5, wherein the preset neural network model comprises a down-sampling module, a processing module, and an up-sampling module; and the inputting a first image in the training image set into a preset neural network model and acquiring a generated image corresponding to the first image output by the preset neural network model specifically comprises:
inputting a first image in the training image set into the downsampling module to obtain a first feature image corresponding to the first image;
inputting the first characteristic image into the processing module, and processing the first characteristic image through the processing module to obtain a second characteristic image, wherein the signal-to-noise ratio of the second characteristic image is higher than that of the first characteristic image;
and inputting the second characteristic image into the up-sampling module, and adjusting the resolution of the second characteristic image through the up-sampling module to obtain a generated image corresponding to the first image, wherein the resolution of the generated image is the same as that of the first image.
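The three-module layout of claim 6 can be sketched in PyTorch as follows; the channel widths, layer counts, and the use of a strided convolution and PixelShuffle are illustrative choices, not taken from the patent.

```python
import torch.nn as nn

class DenoisingNet(nn.Module):
    """Down-sampling / processing / up-sampling layout of claim 6."""
    def __init__(self, in_channels: int = 4, features: int = 64):
        super().__init__()
        # Down-sampling module: reduce resolution and extract the first feature image.
        self.down = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Processing module: raise the signal-to-noise ratio of the feature image.
        self.process = nn.Sequential(
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Up-sampling module: restore the resolution of the input image.
        self.up = nn.Sequential(
            nn.Conv2d(features, in_channels * 4, 3, padding=1),
            nn.PixelShuffle(2),
        )

    def forward(self, x):
        first_feature = self.down(x)                  # first feature image
        second_feature = self.process(first_feature)  # higher-SNR second feature image
        return self.up(second_feature)                # generated image, same resolution as x
```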
7. The image denoising method according to any one of claims 2 to 5, wherein the modifying the model parameters of the preset neural network model according to the second image corresponding to the first image and the generated image corresponding to the first image until the training condition of the preset neural network model meets a preset condition to obtain the trained image denoising model specifically comprises:
calculating a multi-scale structure similarity loss function value and a cosine similarity loss function value corresponding to the preset neural network model according to a second image corresponding to the first image and a generated image corresponding to the first image;
obtaining a total loss function value of the preset neural network model according to the multi-scale structure similarity loss function value and the cosine similarity loss function value;
and iteratively training the preset neural network model based on the total loss function value until the training condition of the preset neural network model meets a preset condition so as to obtain a trained image denoising model.
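A sketch of the combined loss of claim 7, assuming PyTorch, the third-party pytorch_msssim package for the multi-scale structural similarity term, and equal weighting of the two terms (the claim does not specify the weights).

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party package: pip install pytorch-msssim

def total_loss(generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Total loss of claim 7: a multi-scale structural similarity term plus a
    cosine similarity term. Inputs are (N, C, H, W) tensors in [0, 1]."""
    # MS-SSIM is a similarity in [0, 1]; turn it into a loss.
    msssim_loss = 1.0 - ms_ssim(generated, target, data_range=1.0, size_average=True)
    # Cosine similarity between the flattened images, also turned into a loss.
    cos = F.cosine_similarity(generated.flatten(1), target.flatten(1), dim=1).mean()
    cosine_loss = 1.0 - cos
    return msssim_loss + cosine_loss
```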
8. The image denoising method of claim 1, wherein the trained image denoising model comprises a down-sampling module, a processing module, and an up-sampling module; the denoising of the image to be denoised through the image denoising model to obtain a denoised image corresponding to the image to be denoised specifically includes:
inputting the image to be denoised into the down-sampling module to obtain a first characteristic image corresponding to the image to be denoised;
inputting the first characteristic image into the processing module, and processing the first characteristic image through the processing module to obtain a second characteristic image, wherein the signal-to-noise ratio of the second characteristic image is higher than that of the first characteristic image;
inputting the second characteristic image into the up-sampling module, and adjusting the resolution of the second characteristic image through the up-sampling module to obtain a denoised image corresponding to the image to be denoised, wherein the resolution of the denoised image is the same as the resolution of the image to be denoised.
9. The image denoising method according to claim 1 or 8, wherein the obtaining the image to be denoised and inputting the image to be denoised into the trained image denoising model specifically comprises:
acquiring an image to be denoised, and judging the image type of the image to be denoised, wherein the image type comprises an original image data type acquired by a camera device or an RGB image type;
when the image type is an original image data type acquired by a camera device, performing color channel separation on the image to be denoised to obtain a second image block corresponding to the image to be denoised, taking the second image block as the image to be denoised, and inputting the image to be denoised into a trained image denoising model;
and when the image type is an RGB image type, inputting the image to be denoised into a trained image denoising model.
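A minimal sketch of the branching in claim 9, reusing the separate_color_channels helper sketched under claim 4; detecting the image type from the array shape is an assumption made only for this illustration.

```python
import numpy as np

def prepare_input(image: np.ndarray) -> np.ndarray:
    """Claim 9: raw sensor data is channel-separated before entering the model,
    while an RGB image is fed in directly."""
    if image.ndim == 2:                        # single-plane Bayer raw from the camera
        return separate_color_channels(image)  # the second image block becomes the model input
    return image                               # RGB image type: use as-is
```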
10. The image denoising method according to claim 9, wherein when the image type is an original image data type acquired by a camera device, denoising the image to be denoised by the image denoising model to obtain a denoised image corresponding to the image to be denoised specifically comprises:
denoising the image to be denoised through the image denoising model to obtain an output image;
converting the pixel value of each pixel point contained in the output image to a preset pixel value interval to obtain a converted output image;
stretching the converted output image by a preset multiple to obtain a stretched output image;
and carrying out white balance and demosaicing processing on the stretched output image to convert the stretched output image into an RGB image, and taking the RGB image as the denoised image.
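The post-processing chain of claim 10 might look as follows; the target interval [0, 1], the stretch multiple, the white-balance gains, the RGGB layout, and the use of OpenCV for demosaicing are all illustrative assumptions. The RGB branch of claim 11 reuses only the first two steps (interval conversion and stretching), without white balance or demosaicing.

```python
import numpy as np
import cv2  # OpenCV, used here only for the demosaicing step

def postprocess_raw_output(output: np.ndarray,
                           stretch: float = 255.0,
                           wb_gains=(1.9, 1.0, 1.0, 1.6)) -> np.ndarray:
    """Claim 10 post-processing for raw-type inputs; all presets are examples."""
    out = np.clip(output, 0.0, 1.0)                    # convert pixel values to a preset interval
    out = out * stretch                                # stretch by a preset multiple
    out = np.clip(out * np.asarray(wb_gains), 0, 255)  # simple per-channel white balance
    # Re-interleave the packed (H/2, W/2, 4) block into a Bayer mosaic (RGGB assumed)
    # and demosaic it into the final RGB image.
    h, w, _ = out.shape
    bayer = np.zeros((h * 2, w * 2), dtype=np.uint8)
    bayer[0::2, 0::2] = out[..., 0]   # R
    bayer[0::2, 1::2] = out[..., 1]   # G1
    bayer[1::2, 0::2] = out[..., 2]   # G2
    bayer[1::2, 1::2] = out[..., 3]   # B
    return cv2.cvtColor(bayer, cv2.COLOR_BayerRG2RGB)  # the denoised RGB image
```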
11. The image denoising method of claim 9, wherein when the image type is an RGB image type, denoising the image to be denoised by the image denoising model to obtain a denoised image corresponding to the image to be denoised comprises:
denoising the image to be denoised through the image denoising model to obtain an output image;
converting the pixel value of each pixel point contained in the output image to a preset pixel value interval to obtain a converted output image;
and stretching the converted output image by a preset multiple to obtain the denoised image.
12. A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of the image denoising method according to any one of claims 1 to 11.
13. A terminal device, comprising: a processor and a memory; the memory has stored thereon a computer readable program executable by the processor; the processor, when executing the computer readable program, implements the steps of the image denoising method according to any one of claims 1 to 11.
CN201910708364.8A 2019-08-01 2019-08-01 Image denoising method, storage medium and terminal equipment Active CN112308785B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910708364.8A CN112308785B (en) 2019-08-01 2019-08-01 Image denoising method, storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910708364.8A CN112308785B (en) 2019-08-01 2019-08-01 Image denoising method, storage medium and terminal equipment

Publications (2)

Publication Number Publication Date
CN112308785A true CN112308785A (en) 2021-02-02
CN112308785B CN112308785B (en) 2024-05-28

Family

ID=74486423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910708364.8A Active CN112308785B (en) 2019-08-01 2019-08-01 Image denoising method, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN112308785B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113112424A (en) * 2021-04-08 2021-07-13 深圳思谋信息科技有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113610725A (en) * 2021-08-05 2021-11-05 深圳市慧鲤科技有限公司 Picture processing method and device, electronic equipment and storage medium
WO2023216057A1 (en) * 2022-05-09 2023-11-16 Shanghai United Imaging Healthcare Co., Ltd. System and method for medical imaging

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408522A (en) * 2016-06-27 2017-02-15 深圳市未来媒体技术研究院 Image de-noising method based on convolution pair neural network
CN106600568A (en) * 2017-01-19 2017-04-26 沈阳东软医疗***有限公司 Low-dose CT image denoising method and device
WO2018018470A1 (en) * 2016-07-27 2018-02-01 华为技术有限公司 Method, apparatus and device for eliminating image noise and convolutional neural network
CN108280811A (en) * 2018-01-23 2018-07-13 哈尔滨工业大学深圳研究生院 A kind of image de-noising method and system based on neural network
US10032256B1 (en) * 2016-11-18 2018-07-24 The Florida State University Research Foundation, Inc. System and method for image processing using automatically estimated tuning parameters
US20180293710A1 (en) * 2017-04-06 2018-10-11 Pixar De-noising images using machine learning
CN108876735A (en) * 2018-06-01 2018-11-23 武汉大学 A kind of blind denoising method of true picture based on depth residual error network
CN109345485A (en) * 2018-10-22 2019-02-15 北京达佳互联信息技术有限公司 A kind of image enchancing method, device, electronic equipment and storage medium
CN109872288A (en) * 2019-01-31 2019-06-11 深圳大学 For the network training method of image denoising, device, terminal and storage medium

Also Published As

Publication number Publication date
CN112308785B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
US20230080693A1 (en) Image processing method, electronic device and readable storage medium
US20210073957A1 (en) Image processor and method
CN115442515A (en) Image processing method and apparatus
CN112308785B (en) Image denoising method, storage medium and terminal equipment
US11526962B2 (en) Image processing apparatus, image processing method, and storage medium
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN110555805B (en) Image processing method, device, equipment and storage medium
CN114862698B (en) Channel-guided real overexposure image correction method and device
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
CN117011194A (en) Low-light image enhancement method based on multi-scale dual-channel attention network
CN114581355A (en) Method, terminal and electronic device for reconstructing HDR image
CN111953888B (en) Dim light imaging method and device, computer readable storage medium and terminal equipment
CN111147924B (en) Video enhancement processing method and system
CN116389913A (en) Image exposure correction method based on depth curve estimation
Li et al. Rendering nighttime image via cascaded color and brightness compensation
CN111754412A (en) Method and device for constructing data pairs and terminal equipment
CN114240794A (en) Image processing method, system, device and storage medium
CN111383171B (en) Picture processing method, system and terminal equipment
CN115311149A (en) Image denoising method, model, computer-readable storage medium and terminal device
CN113962844A (en) Image fusion method, storage medium and terminal device
CN113256501B (en) Image processing method, storage medium and terminal equipment
CN114511462B (en) Visual image enhancement method
CN115984137B (en) Dim light image recovery method, system, equipment and storage medium
CN117058062B (en) Image quality improvement method based on layer-by-layer training pyramid network
JP7455234B2 (en) Methods, devices, equipment and storage medium for facial pigment detection model training

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant