CN117575943A - 4K dim light image enhancement method combining contrast enhancement and noise reduction - Google Patents


Info

Publication number: CN117575943A (application CN202311712419.5A; granted as CN117575943B)
Authority: CN (China); original language: Chinese (zh)
Prior art keywords: image, enhancement, sub, light image, dim light
Legal status: Granted; active
Inventors: 姚平, 宋小民, 刘征, 王曼, 李子清, 郑慧明, 孙忠武, 李智勇, 李怡
Applicant and current assignee: Sichuan Xinshi Chuangwei Ultra High Definition Technology Co., Ltd.

Classifications

    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/048: Activation functions
    • G06V 10/82: Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/10024: Color image (image acquisition modality)
    • G06T 2207/20081: Training; learning (special algorithmic details)
    • Y02T 10/40: Engine management systems (climate change mitigation technologies related to transportation)


Abstract

The invention discloses a 4K dim light image enhancement method combining contrast enhancement and noise reduction, which comprises constructing and training an image enhancement model. The image enhancement model comprises a dim light processing module, a condition encoder, a reversible network and a second ACE enhancement sub-module. The dim light processing module generates a contrast-enhanced image, a color map and a noise map from the input dim light image; the condition encoder generates a color-recovery image from the dim light image, the contrast-enhanced image, the color map and the noise map; the reversible network fits an illumination image from the color map and the color-recovery image; and the second ACE enhancement sub-module enhances the illumination image to obtain the final enhanced image. The beneficial effect achieved by the invention is that a denoising network added inside the condition encoder effectively reduces the influence of noise on image restoration, so that the condition encoder has better local texture restoration capability, retains details and has better noise resistance.

Description

4K dim light image enhancement method combining contrast enhancement and noise reduction
Technical Field
The invention relates to the technical field of image processing, in particular to a 4K dim light image enhancement method combining contrast enhancement and noise reduction.
Background
A dim light image is an image shot in a low-light environment; owing to insufficient light, the captured image often suffers from low contrast, heavy noise and poor detail. As a basic application, dim light enhancement technology can not only support image target detection and tracking, but also enable secondary development by improving the restoration quality of dim light images.
Although the prior art can improve image restoration and gain to some extent, the restoration effect on dim light images is still not ideal. In addition, these methods tend to focus on restoration and enhancement while neglecting noise suppression and the recovery of texture information, so the processed image still suffers from noise interference and may even be over-smoothed, which degrades the overall quality of the image.
Aiming at the defects of the prior art, the invention provides a 4K dim light image enhancement method combining contrast enhancement and noise reduction.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a 4K dim light image enhancement method combining contrast enhancement and noise reduction, which recovers texture information more faithfully and reduces noise more effectively, thereby solving the problems of automatically restoring, enhancing and denoising a 4K dim light image and improving the visual effect and quality of the dim light image.
The aim of the invention is achieved by the following technical scheme: a 4K dim light image enhancement method combining contrast enhancement and noise reduction, comprising the following steps:
constructing and training an image enhancement model;
inputting the dim light image to be enhanced into an image enhancement model to obtain an enhanced image;
the image enhancement model comprises a dim light processing module, a condition encoder, a reversible network and a second ACE enhancement sub-module;
the dim light processing module is used for generating a contrast-enhanced image, a color map and a noise map from the input dim light image;
the condition encoder is used for generating a color-recovery image from the dim light image, the contrast-enhanced image, the color map and the noise map;
the reversible network is used for generating an illumination image by fitting from the color map and the color-recovery image;
and the second ACE enhancement sub-module enhances the illumination image to obtain the final enhanced image.
Further, the dim light processing module includes:
a first ACE enhancement sub-module, used for carrying out ACE adaptive contrast enhancement on the dim light image to obtain a contrast-enhanced image;
a color map sub-module, used for carrying out chromaticity calculation and brightness normalization on the dim light image to generate an illumination-invariant color map;
and a noise sub-module, used for generating a noise map from the color map obtained by the color map sub-module.
Further, the method for generating the contrast-enhanced image includes:
carrying out local averaging on the dim light image to obtain its low-frequency part;
smoothing the low-frequency part with a nonlinear filter;
subtracting the smoothed low-frequency part from the dim light image to obtain the high-frequency part;
enhancing the high-frequency part by histogram equalization;
and combining the enhanced high-frequency part with the unsmoothed low-frequency part to obtain the contrast-enhanced image.
Further, the condition encoder includes:
a feature-extraction convolution layer, used for extracting features from the dim light image, the contrast-enhanced image, the color map and the noise map to obtain a feature image;
a denoising network, used for denoising the feature image to obtain a denoised image;
a max-pooling convolution layer, used for extracting edge information from the denoised image, feeding it back to the denoising network, and reducing the dimensionality of the denoising network;
and a linear layer, used for performing color recovery on the denoised image to obtain a color-recovery image.
Further, the denoising network includes:
a first convolution layer with a 3×3 kernel, used for extracting local details of the feature image to obtain a first image;
a second convolution layer with a 3×3 kernel, used for combining the feature image and the first image to obtain a second image;
a third (residual) layer, used for computing the difference between the second image and the feature image to obtain first residual information;
a fourth (residual) layer, configured to add the first residual information to the input feature image to obtain a denoised image;
and a fourth convolution layer with a 1×1 kernel, used for adjusting the channel number of the denoised image from 256 back to the original 64 input channels and outputting the denoised image.
Further, in the denoising network, a nonlinear activation function LeakyReLU is arranged between the first convolution layer and the second convolution layer.
Further, the reversible network is trained with a loss function that minimizes the negative log-likelihood.
Constructing and training an image enhancement model comprises:
constructing the image enhancement model;
S001: downscaling the normal-light image and the dim light image by a factor of 2 in equal proportion, the normal-light and dim light images having been captured indoors with a 4K camera;
S002: randomly cropping a 600×600 first sub-image from the 2×-downscaled normal-light image;
S003: randomly cropping, from the 2×-downscaled dim light image, a second sub-image of the same size at the position corresponding to the first sub-image;
S004: training the image enhancement model with the first and second sub-images;
S005: repeating steps S001 to S004 until a preset number of iterations is reached, obtaining a pre-trained model;
S006: downscaling the normal-light image and the dim light image by a factor of 4 in equal proportion;
S007: randomly cropping a 960×540 third sub-image from the 4×-downscaled normal-light image;
S008: randomly cropping, from the 4×-downscaled dim light image, a fourth sub-image of the same size at the position corresponding to the third sub-image;
S009: training the pre-trained model with the third and fourth sub-images;
S010: repeating steps S006 to S009 until a preset number of iterations is reached, obtaining the final image enhancement model.
Further, constructing and training the image enhancement model further comprises:
before the first and second sub-images are input into the image enhancement model,
flipping the first and second sub-images and adding Gaussian noise to them.
Further, constructing and training the image enhancement model further comprises:
before the third and fourth sub-images are input into the pre-trained model for training,
flipping the third and fourth sub-images and adding Gaussian noise to them.
The invention has the following advantages:
(1) The invention uses a denoising network built on the idea of residual learning to reduce the noise of the dim light image;
(2) The invention comprehensively considers the characteristics of the dim light image and adopts an image input scheme of 2× and 4× downsampling, thereby recovering the texture and detail information of the original image more accurately;
(3) An image enhanced by a neural network sometimes loses detail and texture information; applying the ACE adaptive contrast enhancement algorithm to the illumination image fitted by the reversible network improves the contrast of the image and solves the problem of texture information being lost to over-smoothing in parts of the image.
Drawings
FIG. 1 is a process step diagram of the present invention;
FIG. 2 is a schematic flow diagram of the image enhancement model structure framework.
Detailed Description
The present invention will be further described with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
It should be noted that orientation or positional terms such as "left" and "right" are based on the orientations shown in the drawings, on the orientation in which the inventive product is conventionally used, or on the orientation conventionally understood by those skilled in the art. Such terms are used merely for convenience of describing the present invention and simplifying the description; they do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
It should be noted that, under the condition of no conflict, the embodiments of the present invention and the features and technical solutions in the embodiments may be combined with each other.
As shown in figs. 1 to 2, the present invention provides a 4K dim light image enhancement method combining contrast enhancement and noise reduction, comprising S100 to S200.
S100, constructing and training an image enhancement model;
the image enhancement model comprises a dim light processing module, a condition encoder, a reversible network and a second ACE enhancer module.
The dark light processing module is used for generating a contrast enhancement image, a color image and a noise image according to the input dark light image.
Further, the dim light processing module includes:
the first ACE enhancer module is used for carrying out ACE self-adaptive contrast enhancement on the dim light image to obtain a contrast enhanced image;
the color map sub-module is used for carrying out chromaticity calculation and brightness normalization processing on the dim light image and producing a color map with unchanged illumination;
color map sub-model uses each pixel of a darkness image in calculating chromaticity according to Retinex theory
C (x): a chromaticity value of a point;
x: a pixel value for a point;
mean c (x) The method comprises the following steps An RGB channel mean value of the image;
and carrying out brightness normalization processing on the pixel values of the dark light image to obtain brightness values, wherein the color map is equal to chromaticity plus brightness, and obtaining the color map with unchanged illumination through adding the chromaticity values and the brightness values.
The noise sub-module is used for generating a noise map from the color map obtained by the color map sub-module.
The noise calculation formula is as follows (the original formula image is not preserved; reconstructed from the variable definitions):

N(x) = abs(∇C(x))

N(x): the noise value;
abs: the absolute value;
∇: differentiation of the color map.
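Under these Retinex-style assumptions, the color map and noise map steps can be sketched as below. Since the exact formulas are not preserved in this text, the per-pixel chromaticity `x / mean_c(x)`, the brightness normalization by the image maximum, and the absolute finite difference used for the noise map are all assumptions:

```python
import numpy as np

def color_map(img, eps=1e-6):
    """Illumination-invariant color map of an RGB image (H, W, 3).

    Hypothetical reading of the patent's formula: chromaticity
    C(x) = x / mean_c(x), plus a normalized brightness term."""
    img = img.astype(np.float64)
    mean_c = img.mean(axis=2, keepdims=True)   # per-pixel RGB channel mean
    chroma = img / (mean_c + eps)              # illumination scale cancels out
    luma = mean_c / (img.max() + eps)          # brightness-normalized values
    return chroma + luma                       # color map = chromaticity + brightness

def noise_map(cmap):
    """Noise map N(x) as the absolute finite difference (a discrete
    'abs of the derivative') of the color map, one plausible reading."""
    gy = np.abs(np.diff(cmap, axis=0, prepend=cmap[:1]))
    gx = np.abs(np.diff(cmap, axis=1, prepend=cmap[:, :1]))
    return gx + gy
```

Flat regions of the color map yield near-zero noise values, while noisy pixels produce large local differences, which is consistent with the noise map's role as extra guidance for the denoising network.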
The invention inputs the dim light image, together with the contrast-enhanced image, color map and noise map obtained by processing it, into the condition encoder. This multi-image input comprehensively considers the characteristics of the dim light image, so the texture and detail information of the original image can be restored more accurately.
Using the contrast-enhanced image produced by the ACE enhancement sub-module as part of the input effectively improves the contrast and visual effect of the dim light image. Meanwhile, the color map and noise map provide additional information that helps the denoising network recover the textures and details of the image.
Further, the method for generating the contrast-enhanced image includes:
carrying out local averaging on the dim light image to obtain its low-frequency part;
smoothing the low-frequency part with a nonlinear filter;
subtracting the smoothed low-frequency part from the dim light image to obtain the high-frequency part;
enhancing the high-frequency part by histogram equalization;
and combining the enhanced high-frequency part with the unsmoothed low-frequency part to obtain the contrast-enhanced image.
The invention applies the ACE adaptive contrast enhancement algorithm in generating the contrast-enhanced image. The method improves the contrast of the image and dynamically adjusts the restoration-enhancement parameters according to the statistics of each local area, keeping more detail and texture information and preventing texture from being lost to over-smoothing in parts of the image, thereby avoiding the loss of whole-image information; ACE adaptive contrast enhancement also has a certain noise resistance.
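A minimal single-channel sketch of the contrast-enhancement steps above. The box-filter window size and the choice of a 3×3 median filter as the "nonlinear filter" are assumptions, since the patent does not specify them:

```python
import numpy as np

def box_filter(a, k):
    """Local average (box) filter via padded 2-D cumulative sums; k is odd."""
    pad = k // 2
    ap = np.pad(a, pad, mode="edge")
    c = np.cumsum(np.cumsum(ap, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    h, w = a.shape
    return (c[k:k+h, k:k+w] - c[:h, k:k+w] - c[k:k+h, :w] + c[:h, :w]) / (k * k)

def median3(a):
    """3x3 median filter, used here as the assumed nonlinear smoother."""
    ap = np.pad(a, 1, mode="edge")
    h, w = a.shape
    stack = np.stack([ap[i:i+h, j:j+w] for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def ace_contrast_enhance(gray, win=15):
    """Contrast enhancement following the steps above, one channel in [0, 1]."""
    low = box_filter(gray, win)          # local averaging -> low-frequency part
    low_smooth = median3(low)            # nonlinear smoothing of the low frequencies
    high = gray - low_smooth             # high-frequency part
    shifted = high - high.min()          # shift into [0, +) for equalization
    rng = shifted.max() + 1e-6
    hist, bins = np.histogram(shifted / rng, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()     # histogram-equalization mapping
    high_eq = np.interp(shifted / rng, bins[:-1], cdf) * rng + high.min()
    return np.clip(low + high_eq, 0.0, 1.0)  # recombine with the unsmoothed low part
```

Note that the recombination uses `low` (the unsmoothed low-frequency part), matching the final step of the described method.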
The condition encoder is configured to generate a color-recovery image from the dim light image, the contrast-enhanced image, the color map and the noise map.
Further, the condition encoder includes:
a feature-extraction convolution layer, used for extracting features from the dim light image, the contrast-enhanced image, the color map and the noise map to obtain a feature image;
a denoising network, used for denoising the feature image to obtain a denoised image;
a max-pooling convolution layer, used for extracting edge information from the denoised image, feeding it back to the denoising network, and reducing the dimensionality of the denoising network;
and a linear layer, used for performing color recovery on the denoised image to obtain a color-recovery image.
The condition encoder adopts an RRDB network structure to extract features from the input images and feeds the extracted latent features into each intermediate layer of the model; this structure effectively extracts the features in the image and provides strong support for the subsequent restoration process.
Specifically, a feature image is extracted by the feature-extraction convolution layer and sent into the denoising network. The denoising network adopts the idea of residual learning: through residual connections it learns the high-frequency information of the image more effectively while outputting the denoised image. The max-pooling convolution layer then extracts edge information from the denoised image while using max pooling to reduce the dimensionality of the denoising network, converting the high-dimensional feature representation in the denoised image into a low-dimensional one; this reduces the dimensionality of the denoised image, lowers the computational cost and improves the generalization ability of the condition encoder. Finally, the linear layer fine-tunes the color recovery using an nn.Sequential container with a Sigmoid activation function, which maps the result into a probability distribution over the [0, 1] interval, yielding a more accurate and natural color-recovery result.
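The linear-layer color-recovery head described above might look like the following PyTorch sketch. The channel sizes (64 feature channels to 3 RGB channels) and the channels-last tensor layout are illustrative assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn

# Hypothetical color-recovery head: a linear layer followed by Sigmoid,
# wrapped in nn.Sequential as the description suggests, mapping denoised
# features to per-pixel color values in the [0, 1] interval.
color_head = nn.Sequential(
    nn.Linear(64, 3),   # 64 feature channels -> 3 RGB channels
    nn.Sigmoid(),       # squash outputs into a [0, 1] probability-like range
)

feats = torch.randn(1, 32, 32, 64)   # (N, H, W, C) denoised feature map
rgb = color_head(feats)              # per-pixel color-recovery result
```

`nn.Linear` applies along the last dimension, so the same weights act on every pixel; the Sigmoid guarantees valid color values without explicit clipping.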
Further, the denoising network includes:
a first convolution layer with a 3×3 kernel, used for extracting local details of the feature image to obtain a first image;
a second convolution layer with a 3×3 kernel, used for combining the feature image and the first image to obtain a second image;
a third (residual) layer, used for computing the difference between the second image and the feature image to obtain first residual information;
a fourth (residual) layer, configured to add the first residual information to the input feature image to obtain a denoised image;
and a fourth convolution layer with a 1×1 kernel, used for adjusting the channel number from 256 back to the original 64 input channels and outputting the denoised image; the residual information between the denoised image and the feature image is computed while the features of the input feature image are retained.
Further, a nonlinear activation function (LeakyReLU) is placed between the first and second convolution layers of the denoising network; during denoising and feature extraction, this nonlinearity increases the expressive capacity of the model so that the complex characteristics of the image can be modeled better.
By adding the denoising network, the invention improves the restoration achieved by the neural network for dim light enhancement: it reduces the influence of noise on image restoration, restores image texture better, mitigates the effect of residual noise on the image, and also addresses the problem of lost image texture.
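A hypothetical PyTorch sketch of this denoising network. The two 3×3 convolutions, the LeakyReLU between them, the residual subtract/add against the input features, and the 1×1 convolution mapping 256 channels back to 64 follow the text; padding, the LeakyReLU slope, and the exact ordering of the residual operations relative to the 1×1 convolution are assumptions (the channel counts only match if the 1×1 layer runs before the residual steps):

```python
import torch
import torch.nn as nn

class DenoiseBlock(nn.Module):
    """Residual-learning denoising block per the layer list above."""

    def __init__(self, channels=64, hidden=256):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, hidden, 3, padding=1)  # local details
        self.act = nn.LeakyReLU(0.2)                            # nonlinearity between convs
        self.conv2 = nn.Conv2d(hidden, hidden, 3, padding=1)    # combine features
        self.conv_out = nn.Conv2d(hidden, channels, 1)          # 256 -> 64 channels

    def forward(self, feat):
        h = self.conv2(self.act(self.conv1(feat)))  # first/second conv layers
        h = self.conv_out(h)                        # back to the input width
        residual = h - feat                         # third layer: difference vs input
        return feat + residual                      # fourth layer: add residual back
        # (algebraically this returns h; the explicit subtract/add mirrors
        #  the residual layers described in the text)

x = torch.randn(1, 64, 32, 32)   # toy 64-channel feature map
y = DenoiseBlock()(x)
```

The residual connection lets the network focus on learning the high-frequency difference between the noisy and clean features rather than the whole signal.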
The reversible network is used for generating an illumination image by fitting from the color map and the color-recovery image. Further, the reversible network is trained by minimizing the negative log-likelihood as its loss function; evaluating the performance of the image enhancement model with this objective yields a brightness index, so the model can learn the brightness distribution of images under normal illumination and generate a restored normal-illumination image.
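For a reversible (normalizing-flow) network with a standard-normal prior, the negative log-likelihood objective takes the generic form below; this is a standard formulation for invertible networks, not the patent's exact loss:

```python
import torch

def flow_nll(z, log_det):
    """Generic negative log-likelihood for an invertible network under a
    standard-normal prior: -log p(x) = 0.5*||z||^2 - log|det J| (+ const)."""
    prior = 0.5 * z.pow(2).sum(dim=[1, 2, 3])  # Gaussian prior term per sample
    return (prior - log_det).mean()            # average NLL over the batch

z = torch.randn(4, 3, 16, 16)   # latent output of the flow (toy shapes)
log_det = torch.zeros(4)        # per-sample log-determinant of the Jacobian
loss = flow_nll(z, log_det)
```

Minimizing this loss pushes the flow to map normally-lit images to a simple latent distribution, so sampling the inverse mapping reconstructs plausible normal-illumination images.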
The second ACE enhancement sub-module enhances the illumination image to obtain the final enhanced image.
An image enhanced by a neural network sometimes loses detail and texture information; applying the ACE adaptive contrast enhancement algorithm to the illumination image fitted by the reversible network improves the contrast of the image and solves the problem of texture information being lost to over-smoothing in parts of the image.
An image enhancement model is constructed and trained as follows.
First, paired data (dim light image / normal-light image, image resolution 3840×2160) are acquired in a dim light laboratory with a 4K camera; the normal-light and dim light images are a group of corresponding images captured indoors under dim and normal light. The normal-light image is input into the image enhancement model as a label image, serving as the reference for the model's learning, comparison and training. A 4K camera is used to guarantee high-resolution 4K images for the subsequent per-pixel operations.
The dim light image and the normal-light image are downscaled by a factor of 2 in equal proportion; a 600×600 first sub-image is randomly cropped from the 2×-downscaled normal-light image, and a second sub-image of the same size, at the corresponding position, is randomly cropped from the 2×-downscaled dim light image.
Before training, a 320×320 region is randomly selected for training from each 600×600 image; the first and second sub-images are flipped, and Gaussian noise is added to them to increase the amount of data and enhance the generalization ability of the model. The first and second sub-images are then fed into the image enhancement model to build a pre-trained model.
The steps from the 2× downscaling to feeding the first and second sub-images into the image enhancement model are repeated until a preset number of iterations is reached, yielding the pre-trained model. During its construction the pre-trained model simultaneously learns training-information recovery together with global illumination information and feature recovery; the purpose of this pre-training is to increase the amount of data and let the network learn some texture features.
After the pre-trained model is built, the normal-light and dim light images are downscaled by a factor of 4 in equal proportion; a 960×540 third sub-image is randomly cropped from the 4×-downscaled normal-light image, and a fourth sub-image of the same size, at the position corresponding to the third sub-image, is randomly cropped from the 4×-downscaled dim light image. The third and fourth sub-images are flipped and Gaussian noise is added to them to increase the data volume and enhance generalization; they are then input into the pre-trained model for training. The steps from the 4× downscaling to inputting the third and fourth sub-images into the pre-trained model are repeated until a preset number of iterations is reached, yielding the final image enhancement model.
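The data preparation described above (equal-ratio downscaling, paired random cropping, flipping and Gaussian noise) can be sketched as follows. Block-average downscaling, the noise level, and the toy image size are assumptions; the patent does not specify the interpolation method or noise parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor):
    """Equal-ratio downscale by an integer factor via block averaging."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def paired_crop(normal, dark, size):
    """Crop the same random window from the normal-light and dim light images."""
    y = rng.integers(0, normal.shape[0] - size[0] + 1)
    x = rng.integers(0, normal.shape[1] - size[1] + 1)
    return (normal[y:y+size[0], x:x+size[1]],
            dark[y:y+size[0], x:x+size[1]])

def augment(a, b, sigma=0.01):
    """Random horizontal flip plus Gaussian noise, applied to both images."""
    if rng.random() < 0.5:
        a, b = a[:, ::-1], b[:, ::-1]
    return (a + rng.normal(0, sigma, a.shape),
            b + rng.normal(0, sigma, b.shape))

# Stage 1 of the described pipeline: 2x downscale, then 600x600 paired crops
# (stage 2 would use a 4x downscale and 960x540 crops). Toy-sized inputs here.
normal = rng.random((1280, 1280, 3))
dark = rng.random((1280, 1280, 3))
n2, d2 = downscale(normal, 2), downscale(dark, 2)
p, q = paired_crop(n2, d2, (600, 600))
p, q = augment(p, q)
```

The key design point is that crops and flips are applied identically to both images of a pair, so the label image stays pixel-aligned with the dim light input.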
Through this two-step training with different training-region sizes, the image enhancement model adapts better to illumination changes and retains detail information when processing a dim light image. The method improves the visual effect and quality of the dim light image and offers better noise resistance and local texture restoration.
S200, inputting the dim light image to be enhanced into an image enhancement model to obtain an enhanced image.
The foregoing examples represent only preferred embodiments, which are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make variations and modifications without departing from the spirit of the invention, and these fall within the scope of the invention.

Claims (10)

1. A 4K dim light image enhancement method combining contrast enhancement and noise reduction, characterized by comprising the following steps:
constructing and training an image enhancement model;
inputting the dim light image to be enhanced into the image enhancement model to obtain an enhanced image;
wherein the image enhancement model comprises a dim light processing module, a condition encoder, a reversible network and a second ACE enhancement sub-module;
the dim light processing module is used for generating a contrast-enhanced image, a color map and a noise map from the input dim light image;
the condition encoder is used for generating a color-recovery image from the dim light image, the contrast-enhanced image, the color map and the noise map;
the reversible network is used for generating an illumination image by fitting from the color map and the color-recovery image;
and the second ACE enhancement sub-module enhances the illumination image to obtain the final enhanced image.
2. The 4K dim light image enhancement method combining contrast enhancement and noise reduction according to claim 1, characterized in that the dim light processing module includes:
a first ACE enhancement sub-module, used for carrying out ACE adaptive contrast enhancement on the dim light image to obtain a contrast-enhanced image;
a color map sub-module, used for carrying out chromaticity calculation and brightness normalization on the dim light image to generate an illumination-invariant color map;
and a noise sub-module, used for generating a noise map from the color map obtained by the color map sub-module.
3. A method of 4K dim light image enhancement in combination with contrast enhancement and noise reduction according to claim 2, wherein: the method for generating the contrast enhancement image comprises the following steps:
carrying out local average processing on the dim light image to obtain a low-frequency part of the dim light image;
smoothing the low-frequency part of the dim light image by adopting nonlinear filtering;
subtracting the low-frequency part subjected to the smoothing treatment from the dim light image to obtain a high-frequency part of the dim light image;
enhancing the high-frequency part of the dim light image by adopting a histogram equalization method;
and combining the enhanced high-frequency part with the low-frequency part of the dim light image that has not been smoothed, to obtain the contrast-enhanced image.
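The five steps of claim 3 can be sketched in NumPy as follows. The window size, the choice of a median filter as the nonlinear filter, and the 8-bit value range are assumptions added for the sketch:

```python
import numpy as np

def ace_contrast_enhance(img, win=7):
    """Sketch of the claimed contrast-enhancement steps (win=7 is assumed)."""
    h, w = img.shape
    # Step 1: local averaging -> low-frequency part
    p = win // 2
    padded = np.pad(img.astype(np.float64), p, mode='edge')
    low = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            low += padded[dy:dy + h, dx:dx + w]
    low /= win * win
    # Step 2: nonlinear (3x3 median) smoothing of the low-frequency part
    lp = np.pad(low, 1, mode='edge')
    stack = np.stack([lp[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    low_smooth = np.median(stack, axis=0)
    # Step 3: high-frequency part = image - smoothed low-frequency part
    high = img - low_smooth
    # Step 4: histogram-equalize the high-frequency part (shifted to [0, 255])
    hshift = high - high.min()
    scale = 255.0 / max(hshift.max(), 1e-6)
    hq = (hshift * scale).astype(np.uint8)
    hist = np.bincount(hq.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    high_eq = cdf[hq] / scale + high.min()
    # Step 5: recombine with the UNsmoothed low-frequency part
    return np.clip(low + high_eq, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 60, (32, 32)).astype(np.uint8)  # simulated dim 8-bit image
out = ace_contrast_enhance(img)
```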
4. The method for enhancing a 4K dark-light image by combining contrast enhancement and noise reduction according to claim 1, wherein: the condition encoder includes:
the characteristic extraction convolution layer is used for extracting characteristics of the dim light image, the contrast enhancement image, the color image and the noise image to obtain a characteristic image;
the denoising network is used for denoising the characteristic image to obtain a denoised image;
the maximum pooling convolution layer is used for extracting edge information of the denoising image, returning the edge information to the denoising network and performing dimension reduction on the denoising network;
and the linear layer is used for carrying out color recovery on the denoising image to obtain a color recovery image.
5. The method for 4K dark-light image enhancement combined with contrast enhancement and noise reduction of claim 4, wherein: the denoising network includes:
a first convolution layer with a 3×3 convolution kernel, used for extracting local details of the characteristic image to obtain a first image;
a second convolution layer with a 3×3 convolution kernel, used for combining the characteristic image and the first image to obtain a second image;
a third, residual layer, used for calculating the difference between the second image and the characteristic image to obtain first residual information;
a fourth, residual layer, used for adding the first residual information to the input characteristic image to obtain a denoised image;
and a fourth convolution layer with a 1×1 convolution kernel, used for adjusting the number of channels of the denoised image from 256 back to 64 before output.
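As a sketch only, the layer roles in claim 5 (and the LeakyReLU of claim 6) can be traced in NumPy as below. Channel widths are reduced from the patent's 256/64 to 8/2 to keep the demo small, and "combining" the characteristic image with the first image is assumed to be element-wise addition:

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded convolution. x: (C_in, H, W); w: (C_out, C_in, k, k)."""
    cin, h, wd = x.shape
    cout, _, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    cols = np.empty((cin * k * k, h * wd))
    i = 0
    for c in range(cin):              # im2col: unroll each k x k neighborhood
        for dy in range(k):
            for dx in range(k):
                cols[i] = xp[c, dy:dy + h, dx:dx + wd].ravel()
                i += 1
    return (w.reshape(cout, -1) @ cols).reshape(cout, h, wd)

def denoise_forward(feat, w1, w2, w_out, slope=0.1):
    """Forward pass following the claimed layer roles."""
    x1 = conv2d(feat, w1)                  # first 3x3 conv: local details -> first image
    x1 = np.where(x1 > 0, x1, slope * x1)  # LeakyReLU between conv1 and conv2 (claim 6)
    x2 = conv2d(feat + x1, w2)             # second 3x3 conv combines feature and first image
    residual = x2 - feat                   # third (residual) layer: difference from the input
    denoised = feat + residual             # fourth (residual) layer: add residual back
    return conv2d(denoised, w_out)         # 1x1 conv: channel reduction (256 -> 64 in the patent)

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))
out = denoise_forward(feat,
                      rng.normal(size=(8, 8, 3, 3)) * 0.1,
                      rng.normal(size=(8, 8, 3, 3)) * 0.1,
                      rng.normal(size=(2, 8, 1, 1)))
```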
6. The method for enhancing a 4K dark-light image with combined contrast enhancement and noise reduction of claim 5, wherein: in the denoising network, a nonlinear activation function LeakyReLU is arranged between the first convolution layer and the second convolution layer.
7. The method for enhancing a 4K dim light image by combining contrast enhancement and noise reduction according to claim 1, wherein: the loss function of the reversible network is the negative log-likelihood, i.e. the reversible network is trained by maximum likelihood.
8. The method for enhancing a 4K dim light image by combining contrast enhancement and noise reduction according to claim 1, wherein: constructing and training the image enhancement model comprises:
S001, reducing the normal-light image and the dim-light image in equal proportion by a factor of 2, the normal-light image and the dim-light image being captured indoors with a 4K camera;
S002, randomly cropping a first sub-image of size 600×600 from the normal-light image reduced by a factor of 2;
S003, randomly cropping, from the dim-light image reduced by a factor of 2, a second sub-image that corresponds to the first sub-image and has the same size;
S004, training the image enhancement model with the first sub-image and the second sub-image;
S005, repeating steps S001 to S004 until a preset number of iterations is reached, obtaining a pre-trained model;
S006, reducing the normal-light image and the dim-light image in equal proportion by a factor of 4;
S007, randomly cropping a third sub-image of size 960×540 from the normal-light image reduced by a factor of 4;
S008, randomly cropping, from the dim-light image reduced by a factor of 4, a fourth sub-image that corresponds to the third sub-image and has the same size;
S009, training the pre-trained model with the third sub-image and the fourth sub-image;
S010, repeating steps S006 to S009 until a preset number of iterations is reached, obtaining the final image enhancement model.
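The coarse-to-fine cropping schedule of claim 8 can be sketched as follows. The box-filter downscale, the RNG choice, and the grayscale simplification are assumptions, and model training itself is elided:

```python
import numpy as np

def box_downscale(img, f):
    """Reduce a grayscale image by integer factor f with box averaging
    (a stand-in for the claimed equal-proportion reduction)."""
    h, w = img.shape
    img = img[:h // f * f, :w // f * f]
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def paired_random_crop(normal, dark, ch, cw, rng):
    """Crop the SAME window from both images so the normal/dim pair
    stays aligned, as in steps S002/S003 and S007/S008."""
    h, w = normal.shape
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return normal[y:y + ch, x:x + cw], dark[y:y + ch, x:x + cw]

rng = np.random.default_rng(0)
normal = rng.uniform(0, 255, (2160, 3840))   # simulated 4K normal-light frame
dark = normal * 0.1                          # simulated aligned dim-light frame
# Stage 1: reduce by 2, crop 600x600 pairs for pre-training
n2, d2 = box_downscale(normal, 2), box_downscale(dark, 2)
a, b = paired_random_crop(n2, d2, 600, 600, rng)
# Stage 2: reduce by 4, crop 960x540 (width x height) pairs for fine-tuning
n4, d4 = box_downscale(normal, 4), box_downscale(dark, 4)
c, d = paired_random_crop(n4, d4, 540, 960, rng)
```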
9. The method for enhancing a 4K dim light image by combining contrast enhancement and noise reduction according to claim 8, wherein: constructing and training the image enhancement model further comprises:
before the first sub-image and the second sub-image are input into the image enhancement model, flipping the first sub-image and the second sub-image, and adding Gaussian noise to the first sub-image and the second sub-image.
10. The method for enhancing a 4K dim light image by combining contrast enhancement and noise reduction according to claim 8, wherein: constructing and training the image enhancement model further comprises:
before the third sub-image and the fourth sub-image are input into the pre-trained model for training, flipping the third sub-image and the fourth sub-image, and adding Gaussian noise to the third sub-image and the fourth sub-image.
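A minimal sketch of the augmentation in claims 9 and 10, assuming random horizontal/vertical flips applied identically to both sub-images of a pair, and an arbitrary noise standard deviation:

```python
import numpy as np

def augment_pair(a, b, rng, sigma=5.0):
    """Flip both sub-images with the same random choice (so the pair stays
    aligned), then add independent Gaussian noise to each; sigma is an
    assumed noise level, not taken from the patent."""
    if rng.random() < 0.5:
        a, b = a[:, ::-1], b[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        a, b = a[::-1, :], b[::-1, :]   # vertical flip
    a = a + rng.normal(0.0, sigma, a.shape)
    b = b + rng.normal(0.0, sigma, b.shape)
    return a, b

rng = np.random.default_rng(1)
x, y = augment_pair(np.ones((64, 64)), np.zeros((64, 64)), rng)
```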
CN202311712419.5A 2023-12-13 2023-12-13 4K dim light image enhancement method combining contrast enhancement and noise reduction Active CN117575943B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311712419.5A CN117575943B (en) 2023-12-13 2023-12-13 4K dim light image enhancement method combining contrast enhancement and noise reduction


Publications (2)

Publication Number Publication Date
CN117575943A (en) 2024-02-20
CN117575943B (en) 2024-07-19

Family

ID=89884419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311712419.5A Active CN117575943B (en) 2023-12-13 2023-12-13 4K dim light image enhancement method combining contrast enhancement and noise reduction

Country Status (1)

Country Link
CN (1) CN117575943B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103530848A (en) * 2013-09-27 2014-01-22 中国人民解放军空军工程大学 Double exposure implementation method for inhomogeneous illumination image
CN111028163A (en) * 2019-11-28 2020-04-17 湖北工业大学 Convolution neural network-based combined image denoising and weak light enhancement method
CN111047537A (en) * 2019-12-18 2020-04-21 清华大学深圳国际研究生院 System for recovering details in image denoising
CN112070703A (en) * 2020-09-16 2020-12-11 山东建筑大学 Bionic robot fish underwater visual image enhancement method and system
US20230080693A1 (en) * 2020-05-15 2023-03-16 Samsung Electronics Co., Ltd. Image processing method, electronic device and readable storage medium
CN116012243A (en) * 2022-12-26 2023-04-25 合肥工业大学 Real scene-oriented dim light image enhancement denoising method, system and storage medium
CN116309191A (en) * 2023-05-18 2023-06-23 山东恒昇源智能科技有限公司 Intelligent gas inspection display method based on image enhancement
CN116503286A (en) * 2023-05-05 2023-07-28 昆明理工大学 Retinex theory-based low-illumination image enhancement method
CN116523794A (en) * 2023-05-14 2023-08-01 哈尔滨理工大学 Low-light image enhancement method based on convolutional neural network
CN116797491A (en) * 2023-07-12 2023-09-22 福州大学 Dim light blurred image enhancement method based on task decoupling
CN116797488A (en) * 2023-07-07 2023-09-22 大连民族大学 Low-illumination image enhancement method based on feature fusion and attention embedding
CN116993621A (en) * 2023-09-14 2023-11-03 中国科学技术大学 Dim light image enhancement method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JINZHEN MU: "Detection and mapping of an uncooperative spinning target under low-light illumination condition", IEEE, 14 March 2020 (2020-03-14) *
XUTONG REN et al.: "LR3M: Robust low-light enhancement via low-rank regularized Retinex model", IEEE Transactions on Image Processing, 31 December 2020 (2020-12-31) *
WU Xu: "Research on low-illumination image enhancement methods based on variational autoencoders", China Master's Theses Full-text Database, 1 June 2021 (2021-06-01) *
YAO Di: "Research on key technologies for image quality improvement of low-altitude infrared surveillance ***", China Master's Theses Full-text Database, 15 January 2022 (2022-01-15) *


Similar Documents

Publication Publication Date Title
Tian et al. Deep learning on image denoising: An overview
CN110599409B (en) Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
US20230080693A1 (en) Image processing method, electronic device and readable storage medium
CN111028163A (en) Convolution neural network-based combined image denoising and weak light enhancement method
CN111275637A (en) Non-uniform motion blurred image self-adaptive restoration method based on attention model
CN109410127A (en) A kind of image de-noising method based on deep learning and multi-scale image enhancing
CN114049283A (en) Self-adaptive gray gradient histogram equalization remote sensing image enhancement method
Wang et al. MAGAN: Unsupervised low-light image enhancement guided by mixed-attention
CN110969589A (en) Dynamic scene fuzzy image blind restoration method based on multi-stream attention countermeasure network
Shi et al. Low-light image enhancement algorithm based on retinex and generative adversarial network
CN112785637B (en) Light field depth estimation method based on dynamic fusion network
CN114066747B (en) Low-illumination image enhancement method based on illumination and reflection complementarity
CN116797488A (en) Low-illumination image enhancement method based on feature fusion and attention embedding
CN112651917A (en) Space satellite low-illumination image enhancement method based on generation countermeasure network
JP7353803B2 (en) Image processing device, image processing method, and program
CN111145102A (en) Synthetic aperture radar image denoising method based on convolutional neural network
CN114511480A (en) Underwater image enhancement method based on fractional order convolution neural network
CN115965544A (en) Image enhancement method and system for self-adaptive brightness adjustment
CN115063318A (en) Adaptive frequency-resolved low-illumination image enhancement method and related equipment
CN116452469B (en) Image defogging processing method and device based on deep learning
CN117557482A (en) Low-illumination image enhancement method based on illumination component optimization
CN117575943B (en) 4K dim light image enhancement method combining contrast enhancement and noise reduction
CN117422653A (en) Low-light image enhancement method based on weight sharing and iterative data optimization
CN116843559A (en) Underwater image enhancement method based on image processing and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant