CN110728643A - Low-illumination noisy image optimization method based on convolutional neural network - Google Patents
Low-illumination noisy image optimization method based on convolutional neural network
- Publication number
- CN110728643A (application CN201910993809.1A)
- Authority
- CN
- China
- Prior art keywords
- layer
- neural network
- image
- deconvolution
- illumination
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses a convolutional neural network-based low-illumination noisy image optimization method, which comprises the following steps: S1, constructing an LLED-Net convolutional neural network comprising 10 convolutional layers and 10 corresponding mirrored deconvolution layers, with skip connections linking each convolutional layer to its corresponding mirrored deconvolution layer; S2, collecting normal-illumination images, artificially synthesizing corresponding low-illumination noisy images, performing data enhancement and quantity expansion on the images to obtain training data, training the convolutional neural network model with the training data, and dynamically adjusting the learning rate of the network model during training to obtain a trained LLED-Net convolutional neural network; and S3, reconstructing and optimizing acquired real low-illumination noisy images with the trained LLED-Net convolutional neural network model to reconstruct high-quality images. Even when the noise intensity and brightness of a low-illumination noisy image are unknown, the method can automatically learn the characteristics of the image with the model, improve image brightness, remove image noise, and reconstruct a high-quality image.
Description
Technical Field
The invention relates to the field of image processing, in particular to a low-illumination noisy image optimization method based on a convolutional neural network.
Background
Capturing high-quality pictures and videos with an imaging device such as a camera plays a significant role in many situations, but not all captured images have good quality. Due to insufficient illumination during shooting and interference from electrical noise, mechanical noise, channel noise, and other noise during transmission, the captured image is generally dark overall and degraded by image noise, making it difficult to clearly identify objects or textures. It is therefore necessary to improve the quality of low-brightness pictures.
At present, the mainstream algorithms at home and abroad for improving low-brightness noisy images are Histogram Equalization (HE), Contrast-Limited Adaptive Histogram Equalization (CLAHE), and Gamma Correction (GC). These methods improve the picture quality of low-brightness noisy images, but their results remain unsatisfactory and each has its own disadvantages, easily causing color distortion and large white blocks in the image.
Disclosure of Invention
The invention aims to provide a convolutional neural network-based low-illumination noisy image optimization method that, even when the noise intensity and brightness of a low-illumination noisy image are unknown, can automatically learn the characteristics of the image with the model, improve image brightness, remove image noise, and reconstruct a high-quality image.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a convolutional neural network-based low-illumination noisy image optimization method comprises the following steps:
step S1, constructing an LLED-Net convolutional neural network; the LLED-Net convolutional neural network is of a convolution-deconvolution structure and comprises N convolutional layers and N corresponding mirrored deconvolution layers, and each convolutional layer is connected to its corresponding mirrored deconvolution layer by a skip connection, wherein a skip connection means that the output of a convolutional layer is transmitted both to the next convolutional layer and to the corresponding mirrored deconvolution layer; the LLED-Net convolutional neural network uses the structural similarity (SSIM) loss as its loss function;
step S2, collecting normal-illumination images, artificially synthesizing corresponding low-illumination noisy images, performing data enhancement and quantity expansion on the low-illumination noisy images to obtain training data, training the LLED-Net convolutional neural network model with the training data, and dynamically adjusting the learning rate of the network model during training to obtain a trained LLED-Net convolutional neural network;
and step S3, reconstructing and optimizing acquired real low-illumination noisy images with the trained LLED-Net convolutional neural network model to reconstruct high-quality images.
Preferably, the LLED-Net convolutional neural network comprises ten convolutional layers for processing images and ten corresponding mirrored deconvolution layers. The convolutional layers act as feature extractors; after forwarding through the convolutional layers, details in the images are restored by the connected 10 deconvolution layers, and the number of deconvolution-layer feature maps mirrors the number of feature maps in the corresponding convolutional layers. The LLED-Net convolutional neural network model uses skip connections from each convolutional layer to its corresponding mirrored deconvolution layer; the convolutional feature maps transmitted layer by layer are summed element-wise and used as the input of the deconvolution layer, corrected through the skip connection, and then passed to the next deconvolution layer.
Preferably, the structure of the LLED-Net convolutional neural network further comprises:
the ten convolutional layers are sequentially recorded as a first convolutional layer, a second convolutional layer … to a tenth convolutional layer from the input end to the output end;
the ten deconvolution layers are sequentially recorded as a first layer deconvolution layer, a second layer deconvolution layer … to a tenth layer deconvolution layer from the input end to the output end;
the i-th convolutional layer is skip-connected to the corresponding (11-i)-th deconvolution layer, where 1 ≤ i ≤ 10.
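The pairing rule above can be sketched as a quick self-check (an illustrative sketch only, not part of the claimed method; the function name is hypothetical):

```python
def skip_target(i: int, n_layers: int = 10) -> int:
    """Mirror deconvolution layer index for convolutional layer i, i.e. 11 - i."""
    assert 1 <= i <= n_layers
    return n_layers + 1 - i

# The first convolutional layer pairs with the last deconvolution layer, and so on;
# applying the mapping twice returns the original index, confirming the mirror symmetry.
pairs = {i: skip_target(i) for i in range(1, 11)}
```

Because the mapping is its own inverse, the ten skip connections form five symmetric pairs around the network's midpoint.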
Preferably, the structure of the LLED-Net convolutional neural network further comprises:
the number of convolution kernels in the first to fourth convolutional layers is 128, in the fifth to seventh convolutional layers 256, and in the eighth to tenth convolutional layers 512;
the number of convolution kernels in the first and second deconvolution layers is 512, in the third to fifth deconvolution layers 256, in the sixth to ninth deconvolution layers 128, and in the tenth deconvolution layer 3.
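Written out as lists, the kernel counts above show a near-mirror symmetry: except for the final 3-channel (RGB) output layer, each deconvolution layer's width matches a convolutional layer's width in reverse order. A sketch (the offset-by-one reading is our interpretation):

```python
# Kernel counts per layer as stated in the text (index 0 = first layer).
conv_kernels = [128] * 4 + [256] * 3 + [512] * 3          # convolutional layers 1-10
deconv_kernels = [512] * 2 + [256] * 3 + [128] * 4 + [3]  # deconvolution layers 1-10

# Deconvolution layer j (1-based) has the same width as convolutional layer 10 - j
# for j = 1..9; the tenth layer outputs the 3 color channels of the reconstruction.
mirror_ok = all(deconv_kernels[j] == conv_kernels[8 - j] for j in range(9))
```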
Preferably, the convolutional neural network-based low-illumination noisy image optimization method further includes the following processing: zero padding is applied after each convolution or deconvolution operation; each convolutional or deconvolution layer is activated with a rectified linear unit (ReLU) after its operation; and the convolution kernel size of all convolutional and deconvolution layers is set to 3 × 3.
Preferably, the step S1 further includes:
taking the structural similarity (SSIM) loss as the loss function of the LLED-Net convolutional neural network, wherein the structural similarity formula is as follows:

SSIM(x, y) = ((2μ_x μ_y + c_1)(2σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

wherein μ_x is the mean of x; μ_y is the mean of y; σ_x^2 is the variance of x; σ_y^2 is the variance of y; σ_xy is the covariance of x and y; c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are constants used to maintain stability; L is the dynamic range of the pixel values; k_1 = 0.01 and k_2 = 0.03;
The loss function is formulated as follows:

Loss = (1/N) · Σ_{i=1}^{N} (1 − SSIM(x̂_i, y_i))

wherein N represents the number of training samples; x̂_i is the network's reconstruction of the i-th artificially synthesized low-illumination noisy image x; X represents the synthetic low-illumination noisy image dataset from which x is drawn; y represents a normal-illumination image; Y represents the normal-illumination image dataset.
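A minimal NumPy sketch of the SSIM loss above, using global image statistics (practical SSIM implementations average the same expression over local windows; the function names are ours):

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two images with pixel values in [0, L]."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def ssim_loss(outputs, targets):
    """Mean (1 - SSIM) over N training pairs, matching the loss formula above."""
    return sum(1.0 - ssim_global(o, t) for o, t in zip(outputs, targets)) / len(outputs)
```

Identical images give SSIM = 1 and hence zero loss; any mismatch drives the loss above zero, so minimizing it pushes reconstructions toward the normal-illumination targets.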
Preferably, the step S2 further includes:
step S21, selecting a plurality of noise-free images shot under normal illumination, rotating all the noise-free images clockwise by 90 degrees, 180 degrees, and 270 degrees, and horizontally flipping them, thereby performing data enhancement and data expansion on all the images to obtain training data to be processed;
step S22, adding Gaussian noise to each image processed in step S21, then applying a nonlinear adjustment to turn it into a low-brightness image; the processed image becomes a low-quality image with random brightness and random Gaussian noise;
step S23, randomly cropping 41 × 41-pixel patches from each low-quality image processed in step S22, and taking the obtained patch set as the training data set;
step S24, inputting the training data set obtained in step S23 into the LLED-Net convolutional neural network for forward propagation; each pass over 4000 samples is one generation (epoch), every two generations the learning rate is reduced to 0.1 of its previous value, and the number of iterations is 10 generations, finally obtaining the trained LLED-Net convolutional neural network model.
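The schedule in step S24 (learning rate divided by 10 every two generations, 10 generations in total) can be sketched as follows; the base rate 1e-3 is an assumed value not stated in the text:

```python
def learning_rate(epoch: int, base_lr: float = 1e-3) -> float:
    """Learning rate used during epoch `epoch` (0-based): divided by 10 every 2 epochs."""
    return base_lr * (0.1 ** (epoch // 2))

schedule = [learning_rate(e) for e in range(10)]  # 10 generations in total
```

Whether the drop happens at the start or end of the second epoch is an interpretation; the sketch applies it at each even-numbered epoch boundary.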
Preferably, the step S22 further includes:
the noise intensity σ of the Gaussian noise is randomly selected in the range (0, 25); the brightness adjustment exponent γ of each picture is randomly chosen in the range (2, 5).
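A NumPy sketch of the synthesis above, with σ drawn from (0, 25) on a 0-255 scale and γ from (2, 5). Applying noise before the nonlinear darkening follows step S22, though the exact pipeline details (clipping, value scale) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img):
    """Turn a clean image in [0, 1] into a random low-illumination noisy image."""
    sigma = rng.uniform(0.0, 25.0) / 255.0           # noise std, rescaled to [0, 1]
    noisy = img + rng.normal(0.0, sigma, img.shape)  # add Gaussian noise
    gamma = rng.uniform(2.0, 5.0)                    # darkening exponent
    return np.clip(noisy, 0.0, 1.0) ** gamma         # x**gamma darkens x in [0, 1]
```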
Compared with the prior art, the invention has the following beneficial effects: the method constructs an end-to-end convolutional neural network and artificially synthesizes training data, requires no manual intervention, and autonomously learns the main characteristics of the images in the training data to build an optimal model for removing noise and improving image brightness; the brightness and noise intensity of the image to be processed need not be known, an unknown low-illumination noisy image can be rapidly processed with the trained model to reconstruct a high-quality image, and the method generalizes well and is highly robust; the processing effect is excellent, and the brightness, color fidelity, image texture, and degree of noise removal of the reconstructed image all improve on the prior art.
Drawings
FIG. 1 is a schematic flow chart of a low-illumination noisy image optimization method based on a convolutional neural network according to the present invention;
FIG. 2 is a structural diagram of an overall model in the low-illumination noisy image optimization method based on a convolutional neural network according to the present invention;
FIG. 3 is a low illumination noise image of the present invention;
FIG. 4 is an image processed by the LLED-Net convolutional neural network model of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in figs. 1-4, the present invention discloses a convolutional neural network-based low-illumination noisy image optimization method, comprising the following steps:
step S1, constructing an LLED-Net convolutional neural network; the LLED-Net convolutional neural network comprises N (e.g., 10) convolutional layers and N corresponding mirrored deconvolution layers, and each convolutional layer is connected to its corresponding mirrored deconvolution layer by a skip connection, as shown in fig. 2; the network uses the structural similarity (SSIM) loss as its loss function, as shown in fig. 1.
Step S2, collecting normal-illumination images, artificially synthesizing corresponding low-illumination noisy images, and performing data enhancement and quantity extension on them to obtain training data (the training data is a set of low-illumination noisy images); the LLED-Net convolutional neural network model is trained with the training data while the learning rate of the network model is dynamically adjusted, yielding the trained LLED-Net convolutional neural network.
Step S3, reconstructing and optimizing acquired real low-illumination noisy images with the trained LLED-Net convolutional neural network model to reconstruct high-quality images.
In this embodiment, the LLED-Net convolutional neural network model is a convolution-deconvolution structure. The 10 convolutional layers process the images, and the number of feature maps increases layer by layer; the convolutional layers act as feature extractors that extract the main features of objects in the image, improve image brightness, and remove image noise. After forwarding through the convolutional layers, 10 deconvolution layers are connected to restore the details in the image; the number of deconvolution-layer feature maps mirrors the number of feature maps in the corresponding convolutional layers, and the deconvolution layers reconstruct the high-quality image. A skip connection passes the output of a convolutional layer not only to the next convolutional layer but also to the corresponding mirrored deconvolution layer; that is, the invention uses skip connections from each convolutional layer to its corresponding mirrored deconvolution layer. The convolutional feature maps transmitted layer by layer are summed element-wise with the deconvolution features and used as the input of the deconvolution layer; the result, corrected through the skip connection, is passed to the next deconvolution layer. This avoids vanishing and exploding gradients during training of the convolutional neural network, makes training more stable, accelerates network convergence, and shortens training time.
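The element-wise summation described above can be sketched as (an illustrative sketch; the function name is ours):

```python
import numpy as np

def skip_merge(conv_feat, deconv_feat):
    """Sum a forwarded convolutional feature map with the deconvolution feature
    map of matching (mirror) shape; the result feeds the next deconvolution layer."""
    assert conv_feat.shape == deconv_feat.shape, "mirror layers must share shapes"
    return conv_feat + deconv_feat
```

Because the gradient flows unchanged through the identity branch of the sum, skip connections of this kind counteract vanishing gradients, which is the stability benefit claimed above.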
Further, the LLED-Net convolutional neural network structure is as follows:
(1) As shown in fig. 2, the 10 convolutional layers are numbered convolutional layer 1, convolutional layer 2, …, convolutional layer 10 from input to output, and the 10 deconvolution layers are numbered deconvolution layer 11, deconvolution layer 12, …, deconvolution layer 20 from input to output. Correspondingly, convolutional layer 1 is skip-connected to deconvolution layer 20, convolutional layer 2 to deconvolution layer 19, convolutional layer 3 to deconvolution layer 18, …, and convolutional layer 10 to deconvolution layer 11.
(2) Each convolutional layer convolves the picture with its convolution kernels: convolutional layers 1-4 have 128 kernels, layers 5-7 have 256, and layers 8-10 have 512. Each deconvolution layer deconvolves the picture with its deconvolution kernels: deconvolution layers 11-12 have 512 kernels, the same as convolutional layers 8-10; layers 13-15 have 256, the same as convolutional layers 5-7; layers 16-19 have 128, the same as convolutional layers 1-4; and layer 20 has 3 kernels and reconstructs the picture.
In this embodiment, zero padding is performed after each convolution or deconvolution operation. Each convolutional or deconvolution layer is then activated with a rectified linear unit (ReLU); that is, every convolution operation and every deconvolution operation is followed by ReLU activation. The convolution kernel size of all convolutional and deconvolution layers is set to 3 × 3.
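With 3 × 3 kernels, one pixel of zero padding keeps the spatial size unchanged, which is presumably why padding follows every operation; the standard output-size formula makes this concrete (pad = 1 for "same" padding is our assumption, since the text does not state the padding width):

```python
def conv_out_size(w: int, kernel: int = 3, pad: int = 1, stride: int = 1) -> int:
    """Spatial output size of a convolution: floor((w - kernel + 2*pad) / stride) + 1."""
    return (w - kernel + 2 * pad) // stride + 1
```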
In step S1, the method further includes:
taking the Structural Similarity (SSIM) loss as the loss function of the LLED-Net convolutional neural network, wherein the formula of the structural similarity is as follows:

SSIM(x, y) = ((2μ_x μ_y + c_1)(2σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

wherein μ_x is the mean of x; μ_y is the mean of y; σ_x^2 is the variance of x; σ_y^2 is the variance of y; σ_xy is the covariance of x and y; c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are constants used to maintain stability; L is the dynamic range of the pixel values; k_1 = 0.01 and k_2 = 0.03.
In addition, the loss function is formulated as follows:

Loss = (1/N) · Σ_{i=1}^{N} (1 − SSIM(x̂_i, y_i))

wherein N represents the number of training samples; x̂_i is the network's reconstruction of the i-th single artificially synthesized low-illumination noisy image x; X represents the synthetic low-illumination noisy image dataset from which x is drawn; y represents a single normal-illumination image; Y represents the normal-illumination image dataset.
In step S2, the method further includes:
Step S21, selecting a plurality of (for example, 300) noise-free images shot under normal illumination, rotating all the images clockwise by 90°, 180°, and 270°, and then horizontally flipping them; after this data enhancement and data expansion of all the images, the training data to be processed are obtained.
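The rotations and flips of step S21 give eight variants per image; a NumPy sketch (whether the flip is applied to every rotation or only to the originals is not specified, so the 8x scheme here is an assumption):

```python
import numpy as np

def augment(img):
    """Eight variants: clockwise rotations by 0/90/180/270 degrees plus the
    horizontal flip of each (np.rot90 with k=-1 rotates clockwise)."""
    rots = [np.rot90(img, k=-k) for k in range(4)]
    return rots + [np.fliplr(r) for r in rots]
```

Under this scheme, 300 source images would expand to 2400 training images.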
Step S22, adding Gaussian noise to each image processed in step S21, with the noise intensity σ randomly selected in the range (0, 25); each picture is then adjusted nonlinearly into a low-brightness picture, with the brightness adjustment exponent γ randomly selected in the range (2, 5). Each processed picture is a low-quality picture with random brightness and random Gaussian noise. The processing of the pictures is implemented with MATLAB software.
Step S23, randomly cropping 41 × 41-pixel patches from each picture processed in step S22, and using the obtained patch set as the training data set.
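The random 41 × 41 cropping of step S23 can be sketched as (the function name is ours; the sketch cuts one patch per call, whereas the number of patches taken per image is not stated):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_patch(img, size=41):
    """Cut one random size x size patch from an H x W (x C) image."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]
```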
Step S24, inputting the training data set obtained in step S23 into the LLED-Net convolutional neural network for forward propagation; each pass over 4000 samples is one generation (epoch), every two generations the learning rate is reduced to 0.1 of its previous value, and the number of iterations is 10 generations; the trained LLED-Net convolutional neural network model is finally obtained.
In step S3, the method further includes:
and inputting the collected low-illumination noise picture into the trained LLED-Net convolutional neural network model to obtain an optimized reconstructed image, wherein compared with the original image, the brightness, the color fidelity, the texture detail and the image quality of the reconstructed image are greatly improved, as shown in fig. 3.
In conclusion, the invention constructs an end-to-end convolutional neural network and artificially synthesizes training data, requires no manual intervention, and autonomously learns the main characteristics of the images in the training data to build an optimal model for removing noise and improving image brightness; the brightness and noise intensity of the image to be processed need not be known, an unknown low-illumination noisy image can be rapidly processed with the trained model to reconstruct a high-quality image, with good generalization and high robustness; the processing effect is excellent, and the brightness, color fidelity, image texture, and degree of noise removal of the reconstructed image are greatly improved over the prior art.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.
Claims (8)
1. A low-illumination noisy image optimization method based on a convolutional neural network, characterized by comprising the following steps:
step S1, constructing an LLED-Net convolutional neural network; the LLED-Net convolutional neural network is of a convolution-deconvolution structure and comprises N convolutional layers and N corresponding mirrored deconvolution layers, and each convolutional layer is connected to its corresponding mirrored deconvolution layer by a skip connection, wherein a skip connection means that the output of a convolutional layer is transmitted both to the next convolutional layer and to the corresponding mirrored deconvolution layer; the LLED-Net convolutional neural network uses the structural similarity (SSIM) loss as its loss function;
step S2, collecting normal-illumination images, artificially synthesizing corresponding low-illumination noisy images, performing data enhancement and quantity expansion on the low-illumination noisy images to obtain training data, training the LLED-Net convolutional neural network model with the training data, and dynamically adjusting the learning rate of the network model during training to obtain a trained LLED-Net convolutional neural network;
and step S3, reconstructing and optimizing acquired real low-illumination noisy images with the trained LLED-Net convolutional neural network model to reconstruct high-quality images.
2. The convolutional neural network-based low-illumination noisy image optimization method of claim 1,
the LLED-Net convolutional neural network comprises ten convolutional layers for processing images and ten corresponding mirrored deconvolution layers;
the convolutional layers act as feature extractors; after forwarding through the convolutional layers, details in the images are restored by the connected 10 deconvolution layers, and the number of deconvolution-layer feature maps mirrors the number of feature maps in the corresponding convolutional layers;
the LLED-Net convolutional neural network model uses skip connections from each convolutional layer to its corresponding mirrored deconvolution layer; the convolutional feature maps transmitted layer by layer are summed element-wise and used as the input of the deconvolution layer, corrected through the skip connection, and then passed to the next deconvolution layer.
3. The convolutional neural network-based low-illumination noisy image optimization method of claim 2,
the structure of the LLED-Net convolutional neural network further comprises:
the ten convolutional layers are sequentially recorded as a first convolutional layer, a second convolutional layer … to a tenth convolutional layer from the input end to the output end;
the ten deconvolution layers are sequentially recorded as a first layer deconvolution layer, a second layer deconvolution layer … to a tenth layer deconvolution layer from the input end to the output end;
the i-th convolutional layer is skip-connected to the corresponding (11-i)-th deconvolution layer, where 1 ≤ i ≤ 10.
4. The convolutional neural network-based low-illumination noisy image optimization method of claim 3,
the structure of the LLED-Net convolutional neural network further comprises:
the number of convolution kernels in the first to fourth convolutional layers is 128, in the fifth to seventh convolutional layers 256, and in the eighth to tenth convolutional layers 512;
the number of convolution kernels in the first and second deconvolution layers is 512, in the third to fifth deconvolution layers 256, in the sixth to ninth deconvolution layers 128, and in the tenth deconvolution layer 3.
5. The convolutional neural network-based low-illumination noisy image optimization method of claim 1,
further comprising the following processes:
zero padding is applied after each convolution or deconvolution operation;
each convolutional or deconvolution layer is activated with a rectified linear unit (ReLU) after its operation;
the convolution kernel size of all convolutional and deconvolution layers is set to 3 × 3.
6. The convolutional neural network-based low-illumination noisy image optimization method of claim 1,
the step S1 further includes:
taking the structural similarity (SSIM) loss as the loss function of the LLED-Net convolutional neural network, wherein the structural similarity formula is as follows:

SSIM(x, y) = ((2μ_x μ_y + c_1)(2σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

wherein μ_x is the mean of x; μ_y is the mean of y; σ_x^2 is the variance of x; σ_y^2 is the variance of y; σ_xy is the covariance of x and y; c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are constants used to maintain stability; L is the dynamic range of the pixel values; k_1 = 0.01 and k_2 = 0.03;
The loss function is formulated as follows:

Loss = (1/N) · Σ_{i=1}^{N} (1 − SSIM(x̂_i, y_i))

wherein N represents the number of training samples; x̂_i is the network's reconstruction of the i-th artificially synthesized low-illumination noisy image x; X represents the synthetic low-illumination noisy image dataset from which x is drawn; y represents a normal-illumination image; Y represents the normal-illumination image dataset.
7. The convolutional neural network-based low-illumination noisy image optimization method of claim 1,
the step S2 further includes:
step S21, selecting a plurality of noise-free images shot under normal illumination, rotating all of the noise-free images clockwise by 90 degrees, 180 degrees and 270 degrees and flipping them horizontally, thereby performing data enhancement and data expansion on all of the images to obtain the training data to be processed;
step S22, adding Gaussian noise to each image processed in step S21, then performing a nonlinear adjustment to darken the image, so that each processed image becomes a low-quality image with random brightness and random Gaussian noise;
step S23, randomly cropping 41 × 41-pixel patches from each low-quality image processed in step S22, and taking the resulting patch set as the training data set;
step S24, inputting the training data set obtained in step S23 into the LLED-Net convolutional neural network to perform forward propagation; wherein one epoch (generation) consists of traversing 4000 samples, the learning rate is reduced to 0.1 of its previous value every two epochs, and training runs for 10 epochs, finally obtaining the trained LLED-Net convolutional neural network model.
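The decay policy in step S24 can be sketched as a simple step schedule. The initial learning rate below is an assumption; the claim only fixes the decay factor (0.1), the interval (every two epochs), and the 10-epoch run:

```python
def lr_at_epoch(epoch, base_lr=1e-3):
    # Learning rate drops to 0.1 of its previous value every two epochs
    # ("generations"); base_lr is an assumed starting value.
    return base_lr * (0.1 ** (epoch // 2))

schedule = [lr_at_epoch(e) for e in range(10)]  # the claimed 10-epoch run
print(schedule[0], schedule[2], schedule[9])
```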
8. The convolutional neural network-based low-illumination noisy image optimization method of claim 7,
the step S22 further includes:
the noise intensity σ of the Gaussian noise is randomly selected in the range (0, 25);
the gamma value used for the brightness adjustment is randomly selected in the range (2, 5).
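Putting claims 7 and 8 together, the synthesis pipeline can be sketched as follows. The pixel range [0, 255] and the noise-then-gamma ordering follow the claims; the function names and the 64 × 64 example image are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    # Step S21: the three clockwise rotations plus a horizontal flip of each
    # orientation give an eightfold data expansion.
    views = [np.rot90(img, -k) for k in range(4)]  # 0/90/180/270 deg clockwise
    return views + [np.fliplr(v) for v in views]

def synthesize_low_quality(img):
    # Step S22 with the claim-8 ranges: Gaussian noise with sigma drawn from
    # (0, 25), then a random gamma from (2, 5) to darken the image.
    sigma = rng.uniform(0.0, 25.0)
    noisy = np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 255.0)
    gamma = rng.uniform(2.0, 5.0)
    return 255.0 * (noisy / 255.0) ** gamma

def random_crop_41(img):
    # Step S23: one random 41 x 41 training patch.
    i = rng.integers(0, img.shape[0] - 40)
    j = rng.integers(0, img.shape[1] - 40)
    return img[i:i + 41, j:j + 41]

clean = rng.uniform(0.0, 255.0, (64, 64))
patches = [random_crop_41(synthesize_low_quality(v)) for v in augment(clean)]
print(len(patches), patches[0].shape)  # 8 (41, 41)
```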
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910993809.1A CN110728643A (en) | 2019-10-18 | 2019-10-18 | Low-illumination band noise image optimization method based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110728643A true CN110728643A (en) | 2020-01-24 |
Family
ID=69221560
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910993809.1A Pending CN110728643A (en) | 2019-10-18 | 2019-10-18 | Low-illumination band noise image optimization method based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110728643A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170213321A1 (en) * | 2016-01-22 | 2017-07-27 | Siemens Healthcare Gmbh | Deep Unfolding Algorithm For Efficient Image Denoising Under Varying Noise Conditions |
CN108876737A (en) * | 2018-06-06 | 2018-11-23 | 武汉大学 | A kind of image de-noising method of joint residual error study and structural similarity |
CN109242788A (en) * | 2018-08-21 | 2019-01-18 | 福州大学 | One kind being based on coding-decoding convolutional neural networks low-light (level) image optimization method |
Non-Patent Citations (1)
Title |
---|
LIU Chao et al.: "Restoration of low-light-level images under ultra-low illumination with a deep convolutional auto-encoder network", Optics and Precision Engineering * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2607158A (en) * | 2021-03-08 | 2022-11-30 | Nvidia Corp | Neural network training technique |
CN114004761A (en) * | 2021-10-29 | 2022-02-01 | 福州大学 | Image optimization method integrating deep learning night vision enhancement and filtering noise reduction |
CN114372941A (en) * | 2021-12-16 | 2022-04-19 | 佳源科技股份有限公司 | Low-illumination image enhancement method, device, equipment and medium |
CN114372941B (en) * | 2021-12-16 | 2024-04-26 | 佳源科技股份有限公司 | Low-light image enhancement method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106709875B (en) | Compressed low-resolution image restoration method based on joint depth network | |
CN114140353B (en) | Swin-Transformer image denoising method and system based on channel attention | |
CN110599409B (en) | Convolutional neural network image denoising method based on multi-scale convolutional groups and parallel | |
Zhang et al. | Rethinking noise synthesis and modeling in raw denoising | |
CN110728643A (en) | Low-illumination band noise image optimization method based on convolutional neural network | |
CN107590779B (en) | Image denoising and deblurring method based on image block clustering dictionary training | |
CN107481278B (en) | Image bit depth expansion method and device based on combination frame | |
CN111028163A (en) | Convolution neural network-based combined image denoising and weak light enhancement method | |
CN113450290B (en) | Low-illumination image enhancement method and system based on image inpainting technology | |
CN108830812B (en) | Video high frame rate reproduction method based on grid structure deep learning | |
CN108830809B (en) | Image denoising method based on expansion convolution | |
CN111915513B (en) | Image denoising method based on improved adaptive neural network | |
CN111369466B (en) | Image distortion correction enhancement method of convolutional neural network based on deformable convolution | |
CN110796622B (en) | Image bit enhancement method based on multi-layer characteristics of series neural network | |
CN110610467B (en) | Multi-frame video compression noise removing method based on deep learning | |
CN113096029A (en) | High dynamic range image generation method based on multi-branch codec neural network | |
US11948278B2 (en) | Image quality improvement method and image processing apparatus using the same | |
Maleky et al. | Noise2noiseflow: Realistic camera noise modeling without clean images | |
CN112819705A (en) | Real image denoising method based on mesh structure and long-distance correlation | |
CN114926336A (en) | Video super-resolution reconstruction method and device, computer equipment and storage medium | |
CN112019704B (en) | Video denoising method based on prior information and convolutional neural network | |
CN117911302A (en) | Underwater low-illumination image enhancement method based on conditional diffusion model | |
CN116757959A (en) | HDR image reconstruction method based on Raw domain | |
CN116645281A (en) | Low-light-level image enhancement method based on multi-stage Laplace feature fusion | |
CN116452431A (en) | Weak light image enhancement method based on multi-branch progressive depth network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200124 |
|