CN115829868A - Underwater dim light image enhancement method based on illumination and noise residual error image - Google Patents

Underwater dim light image enhancement method based on illumination and noise residual image

Info

Publication number
CN115829868A
CN115829868A (application CN202211512578.6A; granted publication CN115829868B)
Authority
CN
China
Prior art keywords
image
residual error
map
underwater
noise
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211512578.6A
Other languages
Chinese (zh)
Other versions
CN115829868B (en)
Inventor
王宏海
王瑞星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SANYA UNIVERSITY
Original Assignee
SANYA UNIVERSITY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SANYA UNIVERSITY filed Critical SANYA UNIVERSITY
Priority to CN202211512578.6A priority Critical patent/CN115829868B/en
Publication of CN115829868A publication Critical patent/CN115829868A/en
Application granted granted Critical
Publication of CN115829868B publication Critical patent/CN115829868B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an underwater dim light image enhancement method based on an illumination and noise residual map, which comprises the following steps. First, the network framework is set as a multi-scale residual network model. Second, the data set is divided into training data and test data. Third, the input image is down-sampled by the encoder of the multi-scale residual neural network model to obtain a high-dimensional feature map. Fourth, the feature map is fed into a noise decoder and an illumination-map decoder, which output an optimized illumination map and a noise residual map. Fifth, the enhanced image is calculated according to the formula I_t = (I − N_t) / S_t. Sixth, the model is trained with L2 loss functions calculated in different color domains of the image. Seventh, a captured input image is passed through the trained residual neural network, and the enhanced image is obtained from the same formula I_t = (I − N_t) / S_t. The invention not only enhances underwater illumination but also effectively removes the shadows cast by underwater impurities and dust.

Description

Underwater dim light image enhancement method based on illumination and noise residual error image
Technical Field
The invention relates to an underwater dim light image enhancement method based on an illumination and noise residual error map.
Background
When video images are captured under sea water, illumination weakens with depth, producing low-light scenes. Underwater photography also suffers from suspended impurities and noise. A common approach is to build paired data sets for dim-light enhancement and train a model with deep learning. Because of the black-box nature of deep networks, the results of such methods are often unstable, and no method is currently designed specifically for enhancing underwater dim-light scenes.
Chinese patent application CN202210395296.6 discloses an underwater image enhancement method combining deep learning with traditional image enhancement techniques: step 1, input paired degraded and clear underwater images and divide the image data set into a training set and a test set; step 2, preprocess the training set and test set obtained in step 1; step 3, analyze the mean difference between the paired degraded and clear underwater images of step 1; step 4, construct an underwater image enhancement network model based on the analysis of step 3; step 5, train the network model of step 4 with the preprocessed training set to obtain the optimal enhancement model; step 6, feed the preprocessed test set into the trained model of step 5 and output the enhanced underwater images. This method lacks a good data-acquisition scheme and does not remove the shadows of underwater impurities and dust well, so the final image quality is not ideal.
Therefore, an image enhancement method that both enhances underwater illumination and removes the shadows of underwater impurities and dust has good application prospects.
Disclosure of Invention
In view of the prior art, the invention provides an underwater dim light image enhancement method based on an illumination and noise residual map for enhancing underwater low-illumination images.
The technical scheme adopted by the invention for solving the technical problems is as follows:
An underwater dim light image enhancement method based on an illumination and noise residual map comprises the following steps:
in the first step, the network framework is set as a multi-scale residual network model comprising an encoder, a noise decoder and an illumination-map decoder;
in the second step, the data set is divided into training data and test data, and the training parameters of the network are set;
in the third step, the input image is down-sampled by the encoder of the multi-scale residual neural network model to obtain a high-dimensional feature map;
in the fourth step, the feature map is fed into the noise decoder and the illumination-map decoder, which output an optimized illumination map and a noise residual map;
in the fifth step, the enhanced image is calculated according to the formula I_t = (I − N_t) / S_t, where I_t is the enhanced image of the input image I, N_t is the optimized noise residual map, and S_t is the optimized illumination map;
in the sixth step, L2 loss functions are calculated in different color domains of the image and used for training to obtain a trained model;
in the seventh step, a captured input image is passed through the trained residual neural network, which predicts the optimized illumination map and the optimized noise residual map, and the enhanced image is then obtained from the formula I_t = (I − N_t) / S_t.
Further, in the sixth step, L2 = (image shot without water − enhanced underwater image)², where the loss is calculated separately on the RGB, YUV and LAB domains to ensure the final color-detail consistency of the image.
Preferably, the activation layers of the multi-scale residual neural network model use the PReLU function.
The beneficial effects of the invention are as follows. The invention provides a new enhancement method and a data-acquisition scheme for photography in dark underwater environments, and the training data set designed by the invention is more effective for training underwater dim-light enhancement. The results score higher than existing methods on PSNR, SSIM and other metrics; compared with traditional methods, the network produces richer color, restores clearer detail, and effectively removes the shadows of impurities in the water. The model is very stable and exhibits almost none of the uncontrollable artifacts typical of deep learning models.
Drawings
FIG. 1 is a frame diagram of the present invention;
FIG. 2 is a schematic diagram of a residual neural network of the present invention;
FIG. 3 is a flow chart of the present invention;
fig. 4 is an effect diagram of the present invention.
Detailed Description
For a better understanding of the present invention, embodiments of the present invention are explained in detail below with reference to fig. 1 to 4.
Various objects are placed in a large fish tank to simulate scenes under sea water. A GoPro camera is then fixed in position and two shots are taken: one under good illumination and one in dim light. The well-lit shot is adjusted by experts after capture to produce a near-perfect learning target, so that the dim-light input image can be regressed toward this target.
The specific steps of the invention as shown in fig. 3 are as follows:
in a first step, the multi-scale residual neural network model of the present invention comprises one encoder and two decoders, one being an optimized illumination branch and the other being an optimized noise residual map branch, i.e. one being a noise decoder and one being a illumination map decoder. The activation layer of the multi-scale residual neural network model uses the PRelu function.
In a second step, the data set is made and preprocessed, divided into training data and test data, and the network training parameters are set.
Next, as shown in FIG. 3, training is carried out with the configured neural network model: the network learns an optimized illumination map and an optimized noise residual map, applies them to the original image to obtain the enhanced underwater shot, and during back-propagation trains the network parameters by minimizing the L2 loss. The steps are as follows. In the third step, the input image is down-sampled by the encoder of the multi-scale residual neural network model to obtain a high-dimensional feature map. In the fourth step, the feature map is fed into the noise decoder and the illumination-map decoder, which output an optimized illumination map and a noise residual map. In the fifth step, the enhanced image is calculated as (input image − noise residual map) / optimized illumination map. In the sixth step, the L2 loss is calculated in the RGB, YUV and LAB color domains, and the model is trained with the loss L2 = (image shot without water − enhanced image)² to obtain a trained model. In the seventh step, a captured input image is passed through the trained residual neural network, which predicts the optimized illumination map and noise residual map, from which the enhanced image is obtained.
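The restoration step described above, enhanced image = (input − noise residual) / illumination map, can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the patent's code: the array shapes, the eps guard against zero illumination, and the toy pixel values are our assumptions.

```python
import numpy as np

def enhance(image, noise_residual, illumination, eps=1e-6):
    """Apply I_t = (I - N_t) / S_t per pixel and per channel.

    image, noise_residual, illumination: float arrays of shape (H, W, 3)
    with values in [0, 1]; eps guards against a zero illumination value.
    """
    denoised = image - noise_residual           # branch A: subtract the noise residual
    enhanced = denoised / (illumination + eps)  # branch B: divide by the illumination map
    return np.clip(enhanced, 0.0, 1.0)

# Toy example: a dark, noisy pixel brightened by a low illumination estimate.
I = np.full((2, 2, 3), 0.12)   # dark input
N = np.full((2, 2, 3), 0.02)   # predicted noise residual
S = np.full((2, 2, 3), 0.25)   # predicted illumination
out = enhance(I, N, S)          # (0.12 - 0.02) / 0.25 = 0.4
```

The division is element-wise, so each channel is rescaled independently, which is what later lets the three-channel illumination map correct color as well as brightness.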
As shown in fig. 2, the residual neural network of the invention down-samples the input image by factors of 2, 4 and 8 through two layers of 3×3 convolution, performs feature-extraction learning at these different scales, and outputs branch A as the underwater noise-and-impurity residual map and branch B as the underwater illumination map.
Specifically, the structure in fig. 2 is as follows. The residual neural network model is suited to image enhancement tasks under low illumination. The network is trained end-to-end and can simultaneously handle denoising, brightening, color correction and related problems. The model body is designed on the basis of U-Net, adopting multi-scale fusion and coarse-to-fine image reconstruction.
The input to the residual neural network is a low-quality, noisy, low-illumination image, which is first down-sampled at multiple scales before entering the network. The resulting multi-scale image pyramid contains information about the picture at different scales: smaller pictures carry color information, while larger pictures carry structure information. The residual neural network model includes one encoder and two decoders, A and B.
In the encoder part (left column in fig. 2), each layer of modules corresponds to one scale of the image pyramid. After extracting features with convolution layers, each module is concatenated with the picture of the corresponding scale and passed to the next encoder layer. Each encoder layer uses average pooling for down-sampling.
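The average-pooling pyramid described above can be illustrated with a minimal NumPy sketch. The function names and the assumption of even, power-of-two spatial dimensions are ours, not the patent's:

```python
import numpy as np

def avg_pool2(img):
    """Down-sample an (H, W, C) image by 2 with 2x2 average pooling.

    H and W are assumed even; each output pixel is the mean of a 2x2 block.
    """
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def build_pyramid(img, levels=3):
    """Return the multi-scale input pyramid [img, img/2, img/4, img/8]."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(avg_pool2(pyramid[-1]))
    return pyramid

img = np.random.rand(32, 32, 3)
pyr = build_pyramid(img)
shapes = [p.shape for p in pyr]  # (32,32,3), (16,16,3), (8,8,3), (4,4,3)
```

Each pyramid level would then be concatenated with the feature maps of the matching encoder layer, as the text describes.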
After the encoder, the network obtains a high-dimensional feature map of the original input picture. The feature map is then fed to the decoders A and B of the two branches. The two have the same structure, stacks of multi-scale convolution modules, but serve different functions: denoising and brightening respectively. In decoder A, the denoising module treats the noise as a residual, following the residual-learning approach, and the residual output by the decoder is subtracted from the original image to obtain a clean, denoised picture.
Similarly, the illumination-map decoder B outputs an illumination map of the picture based on Retinex theory. Once the illumination map is obtained, the denoised clean picture is divided by it to obtain the brightened result. Since the illumination map has three channels, it can correct the color degradation of the original image while raising brightness. During the decoding in B, the pictures of all scales of the input image pyramid are also concatenated with the inputs of B's decoder modules to realize multi-scale information fusion.
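A toy example of why a three-channel illumination map corrects color as well as brightness: dividing each channel by its own illumination estimate undoes a channel-dependent attenuation, such as the blue-green cast of water. The specific attenuation values below are invented for illustration:

```python
import numpy as np

# A uniform grey scene tinted by water: red is attenuated more than green/blue.
true_scene = np.full((4, 4, 3), 0.5)
illumination = np.array([0.3, 0.6, 0.7])  # per-channel illumination (R, G, B)
observed = true_scene * illumination       # dark input with a colour cast

# Per-channel Retinex-style division restores both brightness and colour.
recovered = observed / illumination
```

A single-channel (grey) illumination map would brighten the image but leave the cast in place; the three-channel map removes it.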
Thus, by using picture information at different scales together with residual learning and skip connections, the residual neural network model completes end-to-end denoising and dim-light enhancement.
The specific embodiment is as follows. We define the problem as passing the input image I through an enhancement function F to obtain the enhanced image I_t, i.e. I_t = F(I), where F is our deep learning framework model. We design a brand-new multi-scale residual network to predict the underwater illumination map and the noise residual map. The network is divided into two branches: one is the optimized underwater illumination branch (the upper branch of the framework diagram) and the other is the optimized underwater noise-and-impurity residual map branch (the lower branch of the framework diagram). Training and test data sets are then constructed, with 1800 image pairs as training data and 200 pairs as test data; the training data are used to train the deep learning model, and the test set is used to evaluate the network. In the data-acquisition stage, a waterproof camera is fixed in position, objects of different scenes are arranged, and shots are taken in the fish tank both with and without water, yielding well-aligned data pairs.
After the data set is finalized, we optimize the residual neural network with the Adam optimizer. The learning rate is set to 0.01, reduced to 0.001 at 1000 epochs (after 1000 full passes over the data) and to 0.0001 at 2000 epochs (after 2000 full passes).
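The step schedule just described can be written as a simple function. This is a sketch of the stated schedule, not the authors' training code:

```python
def learning_rate(epoch):
    """Step learning-rate schedule: 0.01 initially, 0.001 after 1000 epochs,
    0.0001 after 2000 epochs, as stated in the text."""
    if epoch < 1000:
        return 0.01
    if epoch < 2000:
        return 0.001
    return 0.0001
```

In a framework such as PyTorch this would typically be handled by a built-in step scheduler rather than a hand-written function; the point here is only the shape of the decay.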
Training is carried out with the set network parameters and the multi-scale residual network; during back-propagation, the network model is optimized by minimizing the L2 loss calculated in the different color domains. The process is as follows:
after an input image passes through a residual error neural network of two branches, the output result is an optimized illumination map and a noise residual error map, an enhanced image is obtained by calculation according to a formula (input image-noise residual error map)/optimized illumination map, and a trained model is obtained after 3000 epochs are trained by using an L2 loss function, wherein the L2 loss function is as follows: l2= (image shot without water-enhanced underwater image) ^2, wherein the image is calculated by L2 on RGB, YUV and LAB domains respectively to ensure the final color detail consistency of the image. In the using process, the user only needs to transmit a shot input image into a trained residual error neural network, and uses the residual error neural network to predict an optimized illumination map and an optimized noise residual error map, and an enhanced image can be obtained according to the contents of a formula.
Fig. 4 is a comparison graph of the effects of the present invention.
The above embodiments are summarized as follows. 1. The network framework is a multi-scale residual network that efficiently learns image features at different stages; the model is divided into two modules, an optimized illumination branch and an optimized noise residual map branch, with PReLU activation layers. 2. The data set is made and preprocessed, divided into training data and test data, and the network training parameters are set. 3. Training follows the configured neural network model: the network learns an optimized illumination map and an optimized noise residual map, applies them to the original image to obtain the enhanced underwater shot, and during back-propagation trains the network parameters by minimizing the L2 loss calculated separately in the different color domains (L2 on the RGB domain plus L2 on the LAB domain plus L2 on the YUV domain). 4. The test image is fed into the trained neural network model; the two branches yield the optimized illumination map and the optimized noise residual map, and the image restoration formula gives the enhanced underwater image.
The principles and embodiments of the invention have been described with specific examples, which serve only to help understand the method and its core idea. The foregoing is only a preferred embodiment of the invention; since the wording is necessarily finite while possible concrete structures are not, those skilled in the art may make modifications, refinements or changes without departing from the principle of the invention, and the technical features described above may be combined in suitable ways. Such modifications, variations, combinations or adaptations fall within the scope of the invention as defined by the claims.

Claims (3)

1. An underwater dim light image enhancement method based on an illumination and noise residual error map is characterized by comprising the following steps:
in the first step, the network framework is set as a multi-scale residual network model comprising an encoder, a noise decoder and an illumination-map decoder;
in the second step, the data set is divided into training data and test data, and the training parameters of the network are set;
in the third step, the input image is down-sampled by the encoder of the multi-scale residual neural network model to obtain a high-dimensional feature map;
in the fourth step, the feature map is fed into the noise decoder and the illumination-map decoder, which output an optimized illumination map and a noise residual map;
in the fifth step, the enhanced image is calculated according to the formula I_t = (I − N_t) / S_t, where I_t is the enhanced image of the input image I, N_t is the optimized noise residual map, and S_t is the optimized illumination map;
in the sixth step, L2 loss functions are calculated in different color domains of the image and used for training to obtain a trained model;
in the seventh step, a captured input image is passed through the trained residual neural network, which predicts the optimized illumination map and the optimized noise residual map, and the enhanced image is then obtained from the formula I_t = (I − N_t) / S_t.
2. The underwater dim light image enhancement method based on an illumination and noise residual map according to claim 1, characterized in that in the sixth step L2 = (image shot without water − enhanced underwater image)², the loss being calculated separately on the RGB, YUV and LAB domains to ensure the final color-detail consistency of the image.
3. The underwater dim light image enhancement method based on an illumination and noise residual map according to claim 1, characterized in that the activation layers of the multi-scale residual neural network model use the PReLU function.
CN202211512578.6A 2022-11-28 2022-11-28 Underwater dim light image enhancement method based on illumination and noise residual image Active CN115829868B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211512578.6A CN115829868B (en) 2022-11-28 2022-11-28 Underwater dim light image enhancement method based on illumination and noise residual image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211512578.6A CN115829868B (en) 2022-11-28 2022-11-28 Underwater dim light image enhancement method based on illumination and noise residual image

Publications (2)

Publication Number Publication Date
CN115829868A (en) 2023-03-21
CN115829868B (en) 2023-10-03

Family

ID=85532691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211512578.6A Active CN115829868B (en) 2022-11-28 2022-11-28 Underwater dim light image enhancement method based on illumination and noise residual image

Country Status (1)

Country Link
CN (1) CN115829868B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745595A (en) * 2024-02-18 2024-03-22 珠海金山办公软件有限公司 Image processing method, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028163A (en) * 2019-11-28 2020-04-17 湖北工业大学 Convolution neural network-based combined image denoising and weak light enhancement method
CN113034413A (en) * 2021-03-22 2021-06-25 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error codec
CN113450290A (en) * 2021-09-01 2021-09-28 中科方寸知微(南京)科技有限公司 Low-illumination image enhancement method and system based on image inpainting technology
WO2021230708A1 (en) * 2020-05-15 2021-11-18 Samsung Electronics Co., Ltd. Image processing method, electronic device and readable storage medium
CN114359073A (en) * 2021-12-16 2022-04-15 华南理工大学 Low-illumination image enhancement method, system, device and medium
CN114757863A (en) * 2022-04-15 2022-07-15 西安理工大学 Underwater image enhancement method integrating deep learning and traditional image enhancement technology

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028163A (en) * 2019-11-28 2020-04-17 湖北工业大学 Convolution neural network-based combined image denoising and weak light enhancement method
WO2021230708A1 (en) * 2020-05-15 2021-11-18 Samsung Electronics Co., Ltd. Image processing method, electronic device and readable storage medium
CN113034413A (en) * 2021-03-22 2021-06-25 西安邮电大学 Low-illumination image enhancement method based on multi-scale fusion residual error codec
CN113450290A (en) * 2021-09-01 2021-09-28 中科方寸知微(南京)科技有限公司 Low-illumination image enhancement method and system based on image inpainting technology
CN114359073A (en) * 2021-12-16 2022-04-15 华南理工大学 Low-illumination image enhancement method, system, device and medium
CN114757863A (en) * 2022-04-15 2022-07-15 西安理工大学 Underwater image enhancement method integrating deep learning and traditional image enhancement technology

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WENJIE XU et al.: "Deep Residual Convolutional Network for Natural Image Denoising and Brightness Enhancement", 2018 International Conference on Platform Technology and Service (PlatCon) *
XIAOGANG XU et al.: "SNR-Aware Low-light Image Enhancement", 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) *
亓法国; 张海洋; 柳淳; 赵长明; 张子龙: "An image denoising algorithm based on a dual-branch improved codec", Applied Optics, no. 05 *
刘小娜: "Research and application of low-illumination image enhancement algorithms based on deep networks", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117745595A (en) * 2024-02-18 2024-03-22 珠海金山办公软件有限公司 Image processing method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN115829868B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
CN111968044A (en) Low-illumination image enhancement method based on Retinex and deep learning
CN109636754A (en) Based on the pole enhancement method of low-illumination image for generating confrontation network
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN111047529B (en) Video processing method based on machine learning
CN110717868B (en) Video high dynamic range inverse tone mapping model construction and mapping method and device
CN112200732B (en) Video deblurring method with clear feature fusion
CN113284061B (en) Underwater image enhancement method based on gradient network
CN111047543A (en) Image enhancement method, device and storage medium
CN109785252A (en) Based on multiple dimensioned residual error dense network nighttime image enhancing method
CN111612722A (en) Low-illumination image processing method based on simplified Unet full-convolution neural network
CN111931671A (en) Face recognition method for illumination compensation in underground coal mine adverse light environment
CN115240022A (en) Low-illumination image enhancement method using long exposure compensation
CN115829868A (en) Underwater dim light image enhancement method based on illumination and noise residual error image
CN116757986A (en) Infrared and visible light image fusion method and device
CN116012260A (en) Low-light image enhancement method based on depth Retinex
CN117351340A (en) Underwater image enhancement algorithm based on double-color space
CN117611467A (en) Low-light image enhancement method capable of balancing details and brightness of different areas simultaneously
CN114862707A (en) Multi-scale feature recovery image enhancement method and device and storage medium
CN114897718B (en) Low-light image enhancement method capable of balancing context information and space detail simultaneously
CN116109510A (en) Face image restoration method based on structure and texture dual generation
CN114463189A (en) Image information analysis modeling method based on dense residual UNet
Li et al. An electrical equipment image enhancement approach based on Zero-DCE model for power IoTs edge service
CN117726541B (en) Dim light video enhancement method and device based on binarization neural network
Yang et al. Multi-scale extreme exposure images fusion based on deep learning
CN115984137B (en) Dim light image recovery method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant