CN115375598A - Unsupervised dim light image enhancement method and unsupervised dim light image enhancement device - Google Patents
- Publication number: CN115375598A
- Application number: CN202211004861.8A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses an unsupervised dim light image enhancement method and device, wherein the method comprises the following steps: constructing a brightness-mask-guided dim light image generation module; constraining the brightness mask with a dim light image self-reconstruction loss function, so that the brightness mask models the dim light characteristics of the dim light image; constructing a dim light image enhancement module based on a brightness-independent representation; constraining the reconstructed normal light image with a normal light image self-reconstruction loss function, so that the decoder implicitly models the brightness information of the normal light image; constructing cycle-consistency-based reconstructions on top of the bidirectional mapping between dim light and normal light images to provide effective supervision, and constructing a cycle consistency loss function to constrain the cycle reconstruction results; and constructing a multi-loss-function mechanism from the adversarial loss function, the self-reconstruction loss functions, the feature perception loss function and the cycle consistency loss function. The device comprises: a processor and a memory.
Description
Technical Field
The invention relates to the fields of deep learning and dim light image enhancement, and in particular to an unsupervised dim light image enhancement method and device.
Background
With the advent of artificial intelligence and the big data era, the number of images has grown explosively. However, when shooting in a dark environment, the camera sensor receives insufficient light, so the captured image suffers from a series of quality degradation problems, such as low visibility, low contrast and color distortion, which seriously affect the visual effect. The dim light image enhancement task aims to enhance images shot in dim light environments and recover their clear content so as to improve visual quality; it can be widely applied in fields such as all-weather automatic driving and 24-hour intelligent security. In addition, as a low-level vision task, dim light image enhancement also promotes the development of many high-level vision tasks, such as object detection, image segmentation and object tracking.
In recent years, dim light image enhancement methods based on deep learning have attracted attention. Lore et al. first applied deep learning to the dim light image enhancement task, employing a stacked sparse denoising autoencoder to perform brightness enhancement and noise removal on dim light images. Lv et al. proposed an end-to-end multi-branch enhancement network that extracts effective feature representations through a feature extraction module, an enhancement module and a fusion module. These supervised methods typically rely on large-scale paired data sets for training, where the dim light and normal light images are usually obtained by adjusting the exposure time of the camera. However, there are domain differences between paired data acquired in this way and real data, making it difficult to apply supervised methods trained on such data directly to real scenes. Considering that a large number of normal light images are easily obtained in the real world, it is natural to guide the enhancement of unpaired dim light images with normal light images as references. Therefore, studying dim light image enhancement based on unpaired dim light and normal light images has important significance and application value.
In recent years, unsupervised dim light image enhancement methods have also been developed. Jiang et al. designed a network structure using a generative adversarial network and realized unpaired dim light image enhancement. Guo et al. treated dim light image enhancement as an image-specific deep curve estimation task and designed a series of differentiable no-reference loss functions to achieve curve estimation. Zhang et al. proposed a method for separating the illumination component and the reflectance component based on the maximum entropy theory and the Retinex theoretical model. However, these methods construct implicit constraints based mainly on approximate assumptions about the characteristics of dim light and normal light images. Due to the lack of effective supervision, solving the dim light image enhancement problem with unpaired dim light and normal light data remains challenging: without paired normal light image supervision, the mapping from dim light images to normal light images is highly unconstrained. Therefore, how to construct effective supervision for dim light image enhancement based on unpaired data and realize unsupervised dim light image enhancement is worthy of further research.
Disclosure of Invention
The invention provides an unsupervised dim light image enhancement method and device. Based on unpaired dim light and normal light images, the invention fully explores the bidirectional mapping between dim light images and normal light images, designs a brightness-mask-guided dim light image generation module, proposes a dim light image enhancement module based on a brightness-independent representation, and utilizes cycle consistency reconstruction to provide effective supervision, as described in detail below:
An unsupervised dim light image enhancement method, the method comprising:
constructing a brightness-mask-guided dim light image generation module, which estimates a brightness mask from the dim light image I_L and uses the mask to darken the reference normal light image into a synthetic dim light image; the brightness mask is constrained with a dim light image self-reconstruction loss function, so that the brightness mask models the dim light characteristics of the dim light image;
constructing a dim light image enhancement module based on a brightness-independent representation, which enhances the dim light image by learning the brightness-independent representation and combining it with the brightness information of the reference normal light image; the reconstructed normal light image is constrained with a normal light image self-reconstruction loss function, so that the decoder implicitly models the brightness information of the normal light image;
inputting the synthesized normal light image I_{L→H} into the brightness-mask-guided dim light image generation module to obtain the cyclic reconstruction result I_{L→H→L} of the dim light image I_L, and feeding the synthesized dim light image I_{H→L} sequentially into the encoder E_L and the decoder G_{L→H} to obtain the cyclic reconstruction result I_{H→L→H} of the normal light image I_H; meanwhile, a cycle consistency loss function is constructed to constrain the cycle reconstruction results;
and constructing a multi-loss-function mechanism from the adversarial loss function, the self-reconstruction loss functions, the feature perception loss function and the cycle consistency loss function.
The brightness-mask-guided dim light image generation module works as follows:
A brightness mask is estimated from the dim light image I_L, and the synthetic dim light image I_{H→L} is obtained by pixel-wise multiplication of the mask with the reference normal light image I_H:

M = G_M(I_L)
I_{H→L} = M ⊙ I_H

where I_L and I_H denote the unpaired dim light image and normal light image respectively, M denotes the brightness mask of the dim light image I_L, ⊙ denotes pixel-wise multiplication, and G_M denotes the brightness mask estimation network.
The brightness mask is constrained with the dim light image self-reconstruction loss function:

L_SR^L = E_{I_L ∼ p_data(I_L)} [ ‖ M ⊙ I_{L→H} − I_L ‖_1 ]

where I_{L→H} denotes the enhancement result of the dim light image I_L, ‖·‖_1 denotes the L1 distance, p_data(I_L) denotes the pixel-value distribution of dim light images, i.e., the dim light image I_L obeys the distribution p_data(I_L), and E[·] denotes averaging over that distribution.
The dim light image enhancement module based on the brightness-independent representation works as follows:
First, two encoders are employed to extract the brightness-independent representations of the dim light image I_L and the reference normal light image I_H respectively:

Z_L = E_L(I_L)
Z_H = E_H(I_H)

where E_L and E_H denote the encoders of the dim light image I_L and the reference normal light image I_H respectively, and Z_L and Z_H denote their brightness-independent representations.
After extracting the brightness-independent representations {Z_L, Z_H}, two decoders {G_{L→H}, G_{H→H}} are employed to generate normal light images {I_{L→H}, I_{H→H}} from the brightness-independent representations. By reconstructing the normal light image I_{H→H} from the brightness-independent representation Z_H, the brightness information is implicitly modeled by the decoder G_{H→H}. For the dim light image I_L, the brightness information modeled by the decoder G_{H→H} is shared with the decoder G_{L→H}, so that the decoder G_{L→H} can generate the normal light image I_{L→H} from the brightness-independent representation Z_L. The decoders G_{L→H} and G_{H→H} generate the normal light images {I_{L→H}, I_{H→H}} from the brightness-independent representations {Z_L, Z_H} as follows:

I_{L→H} = G_{L→H}(Z_L)
I_{H→H} = G_{H→H}(Z_H)

where G_{L→H} and G_{H→H} denote structurally identical, parameter-sharing decoders, and I_{L→H} and I_{H→H} denote the enhancement result of the dim light image I_L and the reconstruction result of the normal light image I_H respectively.
The normal light image self-reconstruction loss function is expressed as follows:

L_SR^H = E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→H} − I_H ‖_1 ]

where p_data(I_H) denotes the pixel-value distribution of normal light images, i.e., the normal light image I_H obeys the distribution p_data(I_H).
The cycle consistency reconstruction is as follows:

I_{L→H→L} = M ⊙ I_{L→H}
I_{H→L→H} = G_{L→H}(E_L(I_{H→L}))

where I_{L→H→L} and I_{H→L→H} denote the cyclic reconstruction results of the dim light image I_L and the normal light image I_H respectively.
The cycle consistency loss function is expressed as follows:

L_cyc = E_{I_L ∼ p_data(I_L)} [ ‖ I_{L→H→L} − I_L ‖_1 ] + E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→L→H} − I_H ‖_1 ]
An unsupervised dim light image enhancement device, the device comprising: a processor and a memory, wherein the memory stores program instructions, and the processor calls the program instructions stored in the memory to cause the device to perform any of the method steps described above.
The technical scheme provided by the invention has the following beneficial effects:
1. The method utilizes the feature representation capability of deep learning and constructs effective supervision by exploring the bidirectional mapping between dim light and normal light images based on unpaired data, thereby promoting the learning of dim light image enhancement;
2. The invention designs a brightness-mask-guided dim light image generation module, which models dim light images by estimating the brightness mask of a dim light image and using it to darken a reference normal light image, thereby constructing a cyclic network to promote the learning of dim light image enhancement; meanwhile, a dim light image enhancement module based on a brightness-independent representation is designed, which enhances a dim light image by learning the brightness-independent representation and combining it with the brightness information of a reference normal light image;
3. Experimental verification on public data sets shows that the performance of the method is superior to that of existing unsupervised dim light image enhancement methods.
Drawings
FIG. 1 is a flow chart of the unsupervised dim light image enhancement method;
FIG. 2 shows the peak signal-to-noise ratio (PSNR) comparison results of the enhanced dim light images.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention are described in further detail below.
1. Designing the brightness-mask-guided dim light image generation module
For the input unpaired dim light image I_L and normal light image I_H, the embodiment of the invention designs a brightness-mask-guided dim light image generation module to learn the mapping from the normal light image to the dim light image, so as to construct a network with cycle consistency. Specifically, the brightness-mask-guided dim light image generation module first estimates the brightness mask M of the dim light image I_L to model the dim light image I_L, so that the reference normal light image I_H can be darkened using the brightness mask M. After obtaining the brightness mask M, the synthetic dim light image I_{H→L} is obtained by pixel-wise multiplication of the brightness mask M with the reference normal light image I_H. The whole process is expressed as follows:
M = G_M(I_L)    (1)
I_{H→L} = M ⊙ I_H    (2)

where I_L and I_H denote the unpaired dim light image and normal light image respectively, M denotes the brightness mask of the dim light image I_L, ⊙ denotes pixel-wise multiplication, and G_M denotes the brightness mask estimation network, which consists of 3 convolutional layers, 8 residual blocks, 2 transposed convolutional layers and 1 convolutional layer.
To enable the brightness mask M estimated by the brightness-mask-guided dim light image generation module to model the dim light characteristics of the dim light image I_L, so that the synthesized dim light image I_{H→L} is closer to images captured in real dim light scenes, the embodiment of the invention constrains the brightness mask M of the dim light image I_L with a dim light image self-reconstruction loss function, expressed as follows:

L_SR^L = E_{I_L ∼ p_data(I_L)} [ ‖ M ⊙ I_{L→H} − I_L ‖_1 ]    (3)

where I_{L→H} denotes the enhancement result of the dim light image I_L, ‖·‖_1 denotes the L1 distance, p_data(I_L) denotes the pixel-value distribution of dim light images, i.e., the dim light image I_L obeys the distribution p_data(I_L), and E[·] denotes averaging over that distribution.
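The dim light self-reconstruction term is an averaged L1 distance; a minimal NumPy sketch follows, assuming the loss compares the re-darkened enhancement M ⊙ I_{L→H} with the original dim light image I_L (the exact pairing of terms is an assumption, while the L1 distance and averaging follow the text).

```python
import numpy as np

def dim_self_reconstruction_loss(mask: np.ndarray,
                                 enhanced: np.ndarray,
                                 dim_img: np.ndarray) -> float:
    """Mean L1 distance between the re-darkened enhancement M * I_{L->H}
    and the original dim light image I_L; the averaging plays the role of
    the expectation over the dim-image distribution."""
    return float(np.mean(np.abs(mask * enhanced - dim_img)))

# Perfect case: mask 0.5 times enhancement 0.8 reproduces the dim value 0.4.
loss = dim_self_reconstruction_loss(
    np.full((2, 2), 0.5), np.full((2, 2), 0.8), np.full((2, 2), 0.4))
```

A mask that fails to capture how dark each region of I_L really is cannot re-darken the enhancement back to I_L, so minimizing this term pushes M toward the true darkening pattern.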
2. Designing the dim light image enhancement module based on the brightness-independent representation
For a dim light image and a normal light image of the same scene captured under different lighting conditions (the dim light image corresponds to an image taken under low-light conditions, the normal light image to an image taken under normal-light conditions), the intrinsic content characteristics of the scene are consistent and independent of brightness. A normal light image can therefore be generated by introducing brightness information on top of the content characteristics of the scene. Accordingly, the invention designs a dim light image enhancement module based on a brightness-independent representation, which extracts the brightness-independent representation of the dim light image and, on that basis, enhances the dim light image by introducing the brightness cues of the reference normal light image. Specifically, the module first extracts the brightness-independent representations of the dim light image I_L and the reference normal light image I_H with two encoders respectively:
Z_L = E_L(I_L)    (4)
Z_H = E_H(I_H)    (5)

where E_L and E_H denote the encoders of the dim light image I_L and the reference normal light image I_H respectively, and Z_L and Z_H denote their brightness-independent representations. Each encoder consists of 3 convolutional layers and 4 residual blocks.
After extracting the brightness-independent representations {Z_L, Z_H}, two decoders {G_{L→H}, G_{H→H}} are employed to generate normal light images {I_{L→H}, I_{H→H}} from the brightness-independent representations. By reconstructing the normal light image I_{H→H} from the brightness-independent representation Z_H, the brightness information is implicitly modeled by the decoder G_{H→H}. For the dim light image I_L, the brightness information modeled by the decoder G_{H→H} is shared with the decoder G_{L→H}, so that the decoder G_{L→H} can generate the normal light image I_{L→H} from the brightness-independent representation Z_L. The decoders G_{L→H} and G_{H→H} generate the normal light images {I_{L→H}, I_{H→H}} from the brightness-independent representations {Z_L, Z_H} as follows:

I_{L→H} = G_{L→H}(Z_L)    (6)
I_{H→H} = G_{H→H}(Z_H)    (7)

where G_{L→H} and G_{H→H} denote structurally identical, parameter-sharing decoders, and I_{L→H} and I_{H→H} denote the enhancement result of the dim light image I_L and the reconstruction result of the normal light image I_H respectively. Each decoder consists of 4 residual blocks, 2 transposed convolutional layers and 1 convolutional layer.
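Parameter sharing between G_{L→H} and G_{H→H} means there is effectively one set of decoder weights applied to both Z_L and Z_H. The toy sketch below illustrates only that sharing pattern; the scalar `gain` is a hypothetical stand-in for the real convolutional weights.

```python
class TinyDecoder:
    """Toy stand-in for the decoders G_{L->H} / G_{H->H}: one parameter
    object reused on both brightness-independent codes Z_L and Z_H."""
    def __init__(self, gain: float):
        self.gain = gain  # hypothetical stand-in for shared conv weights

    def __call__(self, z: float) -> float:
        return self.gain * z

decoder = TinyDecoder(gain=2.0)  # a single shared instance
G_LH = G_HH = decoder            # two names, one set of parameters
out_L, out_H = G_LH(0.5), G_HH(0.25)
```

Updating the shared weights through either path changes both G_{L→H} and G_{H→H} at once, which is how the brightness information learned from I_H becomes available when enhancing I_L.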
To ensure the consistency of the reference normal light image I_H and the generated normal light image I_{H→H}, so that the brightness information of the reference normal light image I_H can be implicitly modeled by the decoder G_{H→H}, the embodiment of the invention constrains the generated normal light image I_{H→H} with a normal light image self-reconstruction loss function, expressed as follows:

L_SR^H = E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→H} − I_H ‖_1 ]    (8)

where p_data(I_H) denotes the pixel-value distribution of normal light images, i.e., the normal light image I_H obeys the distribution p_data(I_H).
3. Building a network based on cycle consistency
Using the proposed brightness-mask-guided dim light image generation module and the dim light image enhancement module based on the brightness-independent representation, the bidirectional mapping between dim light and normal light images is effectively realized. Based on the generated dim light image I_{H→L} and normal light image I_{L→H}, the embodiment of the invention constructs the cycle consistency reconstruction of the dim light image and the normal light image to provide effective supervision and promote the learning of dim light image enhancement. The synthesized normal light image I_{L→H} is input into the brightness-mask-guided dim light image generation module to obtain the cyclic reconstruction result I_{L→H→L} of the dim light image I_L, and the synthesized dim light image I_{H→L} is sequentially input into the encoder E_L and the decoder G_{L→H} to obtain the cyclic reconstruction result I_{H→L→H} of the normal light image I_H. The whole process is expressed as follows:

I_{L→H→L} = M ⊙ I_{L→H}    (9)
I_{H→L→H} = G_{L→H}(E_L(I_{H→L}))    (10)

To effectively supervise the cyclic reconstruction result I_{L→H→L} of the dim light image I_L and the cyclic reconstruction result I_{H→L→H} of the normal light image I_H, the embodiment of the invention constrains them with the cycle consistency loss function:

L_cyc = E_{I_L ∼ p_data(I_L)} [ ‖ I_{L→H→L} − I_L ‖_1 ] + E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→L→H} − I_H ‖_1 ]    (11)
4. Constructing the multi-loss-function mechanism
In order to perform unsupervised training and obtain enhancement results of good quality, the embodiment of the invention constructs a multi-loss-function mechanism from the adversarial loss function, the self-reconstruction loss functions, the feature perception loss function and the cycle consistency loss function, so as to improve the visual quality of the dim light image enhancement results.
To constrain the generated dim light image I_{H→L} and normal light image I_{L→H}, adversarial learning is employed to make the distribution of the generated images closer to that of real images. The adversarial loss functions are expressed as follows:

L_adv^L = E_{I_L ∼ p_data(I_L)} [log D_L(I_L)] + E_{I_H ∼ p_data(I_H)} [log(1 − D_L(I_{H→L}))]    (12)
L_adv^H = E_{I_H ∼ p_data(I_H)} [log D_H(I_H)] + E_{I_L ∼ p_data(I_L)} [log(1 − D_H(I_{L→H}))]    (13)

where D_L and D_H denote the dim light image discriminator and the normal light image discriminator respectively, which have the same structure and consist of 4 stacked convolutional layers; L_adv^L denotes the adversarial loss function of the dim light image, L_adv^H denotes the adversarial loss function of the normal light image, and L_adv = L_adv^L + L_adv^H.
To enable the brightness mask M estimated by the brightness-mask-guided dim light image generation module to model the dim light characteristics of the dim light image I_L, while ensuring that the brightness information of the reference normal light image I_H can be implicitly modeled by the decoder G_{H→H}, the embodiment of the invention uses the self-reconstruction loss functions as constraints:

L_SR = L_SR^L + L_SR^H    (14)

where L_SR^L denotes the dim light image self-reconstruction loss function and L_SR^H denotes the normal light image self-reconstruction loss function.
To constrain the consistency between the generated dim light image I_{H→L} and normal light image I_{L→H} and the input images I_H and I_L, the embodiment of the invention employs a feature perception loss function to constrain the consistency of the features of the input images and the generated images:

L_SFP^L = E_{I_L ∼ p_data(I_L)} [ ‖ φ_l(I_L) − φ_l(I_{L→H}) ‖_2 ]    (15)
L_SFP^H = E_{I_H ∼ p_data(I_H)} [ ‖ φ_l(I_H) − φ_l(I_{H→L}) ‖_2 ]    (16)

where ‖·‖_2 denotes the L2 distance, φ_l(·) denotes the l-th layer features of the pre-trained VGG-19 model, and L_SFP = L_SFP^L + L_SFP^H; the embodiment of the invention uses the conv5-1 layer features of the pre-trained VGG-19 model.
The VGG-19 model is a classic model in the field of deep learning: the input image and the generated image are each fed into the VGG-19 model to obtain their l-th layer features, and the L2 distance between these features is then constrained to ensure the consistency of the contents of the input image and the generated image. conv5-1 is the first convolutional layer in the fifth convolutional block.
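The feature perception constraint has an "extract features, then take a distance" structure. The sketch below replaces the pre-trained VGG-19 conv5-1 features with a dependency-free 2x2 mean-pooling `toy_features` function, which is purely an illustrative assumption; only the overall structure matches the text.

```python
import numpy as np

def toy_features(img: np.ndarray) -> np.ndarray:
    """Stand-in for phi_l(.), the conv5-1 features of pre-trained VGG-19.
    Here: non-overlapping 2x2 mean pooling over an H x W x C array."""
    h, w, c = img.shape
    img = img[: h // 2 * 2, : w // 2 * 2]
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def feature_perception_loss(inp: np.ndarray, gen: np.ndarray) -> float:
    """Squared L2 distance between the features of the input image
    and the generated image."""
    diff = toy_features(inp) - toy_features(gen)
    return float(np.sum(diff ** 2))

identical = np.random.default_rng(0).random((4, 4, 3))
zero_loss = feature_perception_loss(identical, identical)
```

Comparing pooled (or deep) features rather than raw pixels makes the constraint tolerant to brightness changes while still penalizing content changes, which is why a feature-space distance suits an enhancement task.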
Finally, the embodiment of the invention employs the cycle consistency loss function L_cyc to constrain the cyclic reconstruction result I_{L→H→L} of the dim light image I_L and the cyclic reconstruction result I_{H→L→H} of the normal light image I_H.
The loss function used for the final training network is expressed as follows:
L_total = λ_adv L_adv + λ_SR L_SR + λ_SFP L_SFP + λ_cyc L_cyc    (17)

where λ_adv, λ_SR, λ_SFP and λ_cyc denote the weights of the adversarial loss function, the self-reconstruction loss function, the feature perception loss function and the cycle consistency loss function respectively.
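The weighted sum of Eq. (17) is a plain linear combination; a sketch follows, with the {1, 10, 1, 10} weights reported in the training section used as defaults.

```python
def total_loss(l_adv: float, l_sr: float, l_sfp: float, l_cyc: float,
               weights=(1.0, 10.0, 1.0, 10.0)) -> float:
    """L_total = lam_adv*L_adv + lam_SR*L_SR + lam_SFP*L_SFP + lam_cyc*L_cyc.
    Default weights follow the {1, 10, 1, 10} setting given for training."""
    lam_adv, lam_sr, lam_sfp, lam_cyc = weights
    return lam_adv * l_adv + lam_sr * l_sr + lam_sfp * l_sfp + lam_cyc * l_cyc

total = total_loss(0.5, 0.1, 0.2, 0.05)  # = 0.5 + 1.0 + 0.2 + 0.5
```

The larger weights on the self-reconstruction and cycle terms emphasize pixel-level fidelity over the adversarial and perceptual terms.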
5. Training the unsupervised dim light image enhancement network
During training, the unsupervised dim light image enhancement network based on dim light image generation and brightness-independent representation learning, which comprises the dim light image generator, the brightness-independent representation extraction mechanism and the cyclic network, is trained jointly with the loss function L_total, and the weights of the loss functions {λ_adv, λ_SR, λ_SFP, λ_cyc} are set to {1, 10, 1, 10} respectively.
Fig. 2 lists the PSNR comparison results of the enhanced dim light images. The compared algorithms include the SCI method and the EnlightenGAN method, both of which are unsupervised dim light image enhancement algorithms. The larger the PSNR value, the closer the enhanced dim light image is to the true normal light image. As shown in Fig. 2, both the SCI method and the EnlightenGAN method have small PSNR values, and their enhancement results differ considerably from the real normal light images, because these two methods only focus on the forward mapping from the dim light image to the normal light image, neglect how the reverse mapping from the normal light image to the dim light image can promote the learning of the forward mapping, and do not enhance the dim light image with a brightness-independent representation reflecting the intrinsic properties of the photographed objects. As can be seen from Fig. 2, by exploiting the correlation between the forward and reverse mappings and learning the brightness-independent representations of the dim light and normal light images, the method of the invention obtains enhancement results closer to the real normal light images.
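PSNR, the metric compared in Fig. 2, can be computed as follows for images with pixel values in [0, 1]; the peak value and the decibel formula are the standard definition, not specific to the patent.

```python
import math

import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; larger values mean the enhanced
    image is closer to the reference normal light image."""
    mse = float(np.mean((reference - test) ** 2))
    return math.inf if mse == 0.0 else 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01 and PSNR = 20 dB.
value = psnr(np.zeros((8, 8)), np.full((8, 8), 0.1))
```

For 8-bit images the same formula applies with `peak=255.0`.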
An unsupervised dim light image enhancement device, the device comprising: a processor and a memory, wherein the memory stores program instructions, and the processor calls the program instructions stored in the memory to cause the device to perform the following method steps:

constructing a brightness-mask-guided dim light image generation module, which estimates a brightness mask from the dim light image I_L and uses the mask to darken the reference normal light image into a synthetic dim light image; the brightness mask is constrained with a dim light image self-reconstruction loss function, so that the brightness mask models the dim light characteristics of the dim light image;

constructing a dim light image enhancement module based on a brightness-independent representation, which enhances the dim light image by learning the brightness-independent representation and combining it with the brightness information of the reference normal light image; the reconstructed normal light image is constrained with a normal light image self-reconstruction loss function, so that the decoder implicitly models the brightness information of the normal light image;

inputting the synthesized normal light image I_{L→H} into the brightness-mask-guided dim light image generation module to obtain the cyclic reconstruction result I_{L→H→L} of the dim light image I_L, and feeding the synthesized dim light image I_{H→L} sequentially into the encoder E_L and the decoder G_{L→H} to obtain the cyclic reconstruction result I_{H→L→H} of the normal light image I_H; meanwhile, a cycle consistency loss function is constructed to constrain the cycle reconstruction results;

and constructing a multi-loss-function mechanism from the adversarial loss function, the self-reconstruction loss functions, the feature perception loss function and the cycle consistency loss function.
The brightness-mask-guided dim light image generation module works as follows:

A brightness mask is estimated from the dim light image I_L, and the synthetic dim light image I_{H→L} is obtained by pixel-wise multiplication of the mask with the reference normal light image I_H:

M = G_M(I_L)
I_{H→L} = M ⊙ I_H

where I_L and I_H denote the unpaired dim light image and normal light image respectively, M denotes the brightness mask of the dim light image I_L, ⊙ denotes pixel-wise multiplication, and G_M denotes the brightness mask estimation network.

The brightness mask is constrained with the dim light image self-reconstruction loss function:

L_SR^L = E_{I_L ∼ p_data(I_L)} [ ‖ M ⊙ I_{L→H} − I_L ‖_1 ]

where I_{L→H} denotes the enhancement result of the dim light image I_L, ‖·‖_1 denotes the L1 distance, p_data(I_L) denotes the pixel-value distribution of dim light images, i.e., the dim light image I_L obeys the distribution p_data(I_L), and E[·] denotes averaging over that distribution.
The dim light image enhancement module based on the brightness-independent representation is as follows:
firstly, two encoders are adopted to extract the brightness-independent representations of the dim light image I_L and the reference normal light image I_H, respectively:

Z_L = E_L(I_L)

Z_H = E_H(I_H)

wherein E_L and E_H respectively denote the encoders of the dim light image I_L and the reference normal light image I_H, and Z_L and Z_H respectively denote their brightness-independent representations.
After the brightness-independent representations {Z_L, Z_H} are extracted, two decoders {G_{L→H}, G_{H→H}} are adopted to generate the normal light images {I_{L→H}, I_{H→H}} from them. By reconstructing the normal light image I_{H→H} from the brightness-independent representation Z_H, the brightness information is implicitly modeled by the decoder G_{H→H}. For the dim light image I_L, the parameters of the decoder G_{H→H} are shared with the decoder G_{L→H}, so that G_{L→H} can generate the normal light image I_{L→H} from the brightness-independent representation Z_L. The decoders G_{L→H} and G_{H→H} generate the normal light images {I_{L→H}, I_{H→H}} as follows:

I_{L→H} = G_{L→H}(Z_L)

I_{H→H} = G_{H→H}(Z_H)

wherein G_{L→H} and G_{H→H} denote structurally identical, parameter-shared decoders, and I_{L→H} and I_{H→H} respectively denote the enhancement result of the dim light image I_L and the reconstruction result of the normal light image I_H.
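The parameter-sharing arrangement between G_{H→H} and G_{L→H} can be illustrated with a toy decoder whose parameters live in one shared object, so that an update to either is seen by both. The element-wise affine map here is a hypothetical stand-in, not the patent's actual decoder architecture.

```python
# Sketch of parameter sharing between two decoders: G_HH and G_LH
# reference the same parameter dict, so they are structurally identical
# and any parameter update affects both. The element-wise affine map is
# a toy stand-in for the real decoder network.

class TinyDecoder:
    def __init__(self, params):
        self.params = params                      # shared, not copied

    def __call__(self, z):
        s, b = self.params["scale"], self.params["bias"]
        return [s * v + b for v in z]

shared = {"scale": 2.0, "bias": 0.1}
G_HH = TinyDecoder(shared)                        # decodes Z_H
G_LH = TinyDecoder(shared)                        # same parameters, decodes Z_L

shared["scale"] = 3.0                             # one "training update"
out_L, out_H = G_LH([1.0]), G_HH([1.0])           # both see the update
```

This sharing is what lets G_{L→H} inherit the brightness information that G_{H→H} learned to model from normal light images, without ever seeing a paired normal light version of I_L.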
The expression of the normal light image self-reconstruction loss function is as follows:

L_self^H = E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→H} − I_H ‖_1 ]

wherein p_data(I_H) denotes the pixel-value distribution of normal light images, i.e. the normal light image I_H obeys the distribution p_data(I_H).
The cyclic consistency reconstruction is as follows:

I_{L→H→L} = M ⊙ I_{L→H}

I_{H→L→H} = G_{L→H}(E_L(I_{H→L}))

wherein I_{L→H→L} and I_{H→L→H} respectively denote the cyclic reconstruction results of the dim light image I_L and the normal light image I_H.
The expression of the cycle consistency loss function is as follows:

L_cyc = E_{I_L ∼ p_data(I_L)} [ ‖ I_{L→H→L} − I_L ‖_1 ] + E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→L→H} − I_H ‖_1 ]
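A minimal numeric sketch of a cycle consistency penalty of this kind: the mean L1 distance between each image and its cyclic reconstruction, summed over the two directions. The flattened toy "images" below are illustrative values, not data from the patent.

```python
# Numeric sketch of a cycle consistency penalty: mean L1 distance
# between each image and its cyclic reconstruction, summed over the
# dim-light and normal-light directions.

def l1_mean(x, y):
    """Mean absolute difference between two equal-length sequences."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

I_L, I_LHL = [0.1, 0.2, 0.3], [0.1, 0.2, 0.3]   # perfect dim-light cycle
I_H, I_HLH = [0.9, 0.8, 0.7], [0.9, 0.8, 0.4]   # imperfect normal-light cycle

loss_cyc = l1_mean(I_LHL, I_L) + l1_mean(I_HLH, I_H)
```

Only the normal-light direction contributes here, since the dim-light cycle reconstructs its input exactly; in training both terms would be averaged over batches of unpaired images.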
in the embodiment of the present invention, except for the specific description of the model of each device, the model of other devices is not limited, as long as the device can perform the above functions.
Those skilled in the art will appreciate that the drawings are only schematic illustrations of preferred embodiments, and that the above-described embodiments of the present invention are provided for description only and do not represent the merits of the embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. An unsupervised dim light image enhancement method, comprising:
constructing a brightness-mask-guided dim light image generation module, and darkening a reference normal light image by estimating a brightness mask from the dim light image; a dim light image self-reconstruction loss function is adopted to constrain the brightness mask, so that the brightness mask models the dark-light characteristics of the dim light image;
constructing a dim light image enhancement module based on a brightness-independent representation, and enhancing the dim light image by learning the brightness-independent representation and combining it with the brightness information of a reference normal light image; the reconstructed normal light image is constrained by a normal light image self-reconstruction loss function, so that the decoder implicitly models the brightness information of the normal light image;
based on the generated dim light image I_{H→L} and normal light image I_{L→H}, obtaining the cyclic reconstruction result I_{H→L→H} of the reference normal light image I_H and the cyclic reconstruction result I_{L→H→L} of the dim light image I_L through cycle-consistent reconstruction; meanwhile, constructing a cycle consistency loss function to constrain the cyclic reconstruction results;
and constructing a multi-loss-function mechanism from the adversarial loss function, the self-reconstruction loss functions, the feature perception loss function, and the cycle consistency loss function.
2. The unsupervised dim light image enhancement method of claim 1, wherein the brightness-mask-guided dim light image generation module is:

a brightness mask is estimated from the dim light image I_L and multiplied pixel by pixel with the reference normal light image I_H to synthesize the dim light image I_{H→L}:

M = G_M(I_L)

I_{H→L} = M ⊙ I_H
3. The unsupervised dim light image enhancement method of claim 1, wherein the constraining of the brightness mask by the dim light image self-reconstruction loss function is:

L_self^L = E_{I_L ∼ p_data(I_L)} [ ‖ M ⊙ I_{L→H} − I_L ‖_1 ]
4. The unsupervised dim light image enhancement method of claim 1, wherein the dim light image enhancement module based on the brightness-independent representation is:
firstly, two encoders are adopted to extract the brightness-independent representations of the dim light image I_L and the reference normal light image I_H, respectively:

Z_L = E_L(I_L)

Z_H = E_H(I_H)

wherein E_L and E_H respectively denote the encoders of the dim light image I_L and the reference normal light image I_H, and Z_L and Z_H respectively denote their brightness-independent representations;
after the brightness-independent representations {Z_L, Z_H} are extracted, two decoders {G_{L→H}, G_{H→H}} are adopted to generate the normal light images {I_{L→H}, I_{H→H}} from them; by reconstructing the normal light image I_{H→H} from the brightness-independent representation Z_H, the brightness information of the normal light image is implicitly modeled by the decoder G_{H→H}; for the dim light image I_L, the parameters of the decoder G_{H→H} are shared with the decoder G_{L→H}, so that G_{L→H} can generate the normal light image I_{L→H} from the brightness-independent representation Z_L; the decoders G_{L→H} and G_{H→H} generate the normal light images {I_{L→H}, I_{H→H}} as follows:

I_{L→H} = G_{L→H}(Z_L)

I_{H→H} = G_{H→H}(Z_H)

wherein G_{L→H} and G_{H→H} denote structurally identical, parameter-shared decoders, and I_{L→H} and I_{H→H} respectively denote the enhancement result of the dim light image I_L and the reconstruction result of the normal light image I_H.
5. The unsupervised dim light image enhancement method according to claim 1, wherein the normal light image self-reconstruction loss function is expressed as follows:

L_self^H = E_{I_H ∼ p_data(I_H)} [ ‖ I_{H→H} − I_H ‖_1 ]
6. The unsupervised dim light image enhancement method according to claim 1, wherein the cyclic consistency reconstruction is as follows:

I_{L→H→L} = M ⊙ I_{L→H}

I_{H→L→H} = G_{L→H}(E_L(I_{H→L}))

wherein I_{L→H→L} and I_{H→L→H} respectively denote the cyclic reconstruction results of the dim light image I_L and the normal light image I_H.
8. An unsupervised dim light image enhancement device, the device comprising: a processor and a memory, wherein the memory stores program instructions, and the processor calls the program instructions stored in the memory to cause the device to perform the method steps of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211004861.8A CN115375598B (en) | 2022-08-22 | 2022-08-22 | Method and device for enhancing unsupervised dim light image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115375598A true CN115375598A (en) | 2022-11-22 |
CN115375598B CN115375598B (en) | 2024-04-05 |
Family
ID=84068020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211004861.8A Active CN115375598B (en) | 2022-08-22 | 2022-08-22 | Method and device for enhancing unsupervised dim light image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115375598B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019071981A1 (en) * | 2017-10-12 | 2019-04-18 | 北京大学深圳研究生院 | Image enhancement method based on multi-exposure generation and re-fusion frame |
US20190213717A1 (en) * | 2018-01-05 | 2019-07-11 | Canon Kabushiki Kaisha | Image processing method, imaging apparatus using the same, image processing apparatus, storage medium, and lens apparatus |
US20220036534A1 (en) * | 2020-07-31 | 2022-02-03 | Adobe Inc. | Facial reconstruction network |
CN114399431A (en) * | 2021-12-06 | 2022-04-26 | 北京理工大学 | Dim light image enhancement method based on attention mechanism |
CN114627006A (en) * | 2022-02-28 | 2022-06-14 | 复旦大学 | Progressive image restoration method based on depth decoupling network |
Non-Patent Citations (2)
Title |
---|
BO PENG等: "LVE-S2D: Low-Light Video Enhancement From Static to Dynamic", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, vol. 12, no. 12, 14 July 2022 (2022-07-14), pages 8342 - 8352, XP011929593, DOI: 10.1109/TCSVT.2022.3190916 * |
HUANG Luyao; YE Shaozhen: "Research on low-illumination image enhancement algorithm based on GAN", Journal of Fuzhou University (Natural Science Edition), vol. 48, no. 05, 30 September 2020 (2020-09-30), pages 551 - 557 *
Also Published As
Publication number | Publication date |
---|---|
CN115375598B (en) | 2024-04-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||