CN110378848B - Image defogging method based on derivative map fusion strategy - Google Patents

Image defogging method based on derivative map fusion strategy

Info

Publication number
CN110378848B
CN110378848B
Authority
CN
China
Prior art keywords
image
derivative
module
hazy
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910609244.2A
Other languages
Chinese (zh)
Other versions
CN110378848A (en)
Inventor
郭璠
赵鑫
唐琎
吴志虎
肖晓明
高琰
邹北骥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201910609244.2A priority Critical patent/CN110378848B/en
Publication of CN110378848A publication Critical patent/CN110378848A/en
Application granted granted Critical
Publication of CN110378848B publication Critical patent/CN110378848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image defogging method based on a derivative map fusion strategy, which comprises the following steps: Step A: constructing a sample set from a large number of original hazy images and their corresponding haze-free images; Step B: extracting derivative maps of the original hazy image from five angles, so as to strengthen the method's recovery of distant-scene and near-scene detail, eliminate color cast, and enhance contrast; Step C: building a U-shaped convolutional neural network; Step D: concatenating the derivative maps with the original hazy images obtained in Step A as input and taking the haze-free images as output to train the network built in Step C; Step E: using the network trained in Step D, concatenating the original hazy image with its corresponding derivative maps as input to predict the haze-free image. The invention achieves a good defogging effect.

Description

Image defogging method based on derivative map fusion strategy
Technical Field
The invention belongs to the field of image information processing, and particularly relates to an image defogging method based on a derivative map fusion strategy.
Background
Image defogging aims to restore scene detail by estimating an unknown haze-free image from a given hazy image. Defogging algorithms are used in many situations, such as everyday photography, automatic surveillance systems, satellite remote sensing, outdoor object recognition, and visual navigation in low-visibility environments. However, the quality of images captured under severe weather such as haze is easily degraded: the contrast of the captured image drops and color degradation is serious. Such hazy images often lack visual vividness and clarity. Image defogging techniques are therefore urgently needed not only in daily photography but also in many computer vision applications.
Existing defogging work mainly focuses on predicting the transmission map and the atmospheric light value of the original hazy image using prior knowledge or assumptions about fog, and then computing the defogged image with an atmospheric scattering model. For example, He et al. [IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011] proposed the dark channel prior from the observation that, in most haze-free images, at least one color channel has very low pixel values. With this assumption the transmission map can be conveniently computed, and the defogged result is then obtained from the atmospheric scattering model. Although He's method achieves a good defogging effect in most cases, it handles sky regions poorly and tends to over-enhance contrast. In recent years, researchers [IEEE Transactions on Image Processing, 2016] proposed predicting the transmission map with a deep learning method and then computing the defogged image with the atmospheric scattering model, but predicting the transmission map by deep learning also suffers from inaccurate transmission map estimation.
Patent application No. CN108921805A (Chen Yuming et al.) proposes an image and video defogging method. The method divides the original hazy image into several rectangular sub-blocks of equal size, introduces adaptive degree factors and a neighborhood dark channel pooling operation to compute the transmission map, and then computes the haze-free image according to the atmospheric scattering model. The patent application with publication number CN109146810A (Huang et al.) proposes an image defogging method based on end-to-end deep learning. The method adopts a deep defogging network that directly predicts the transmission map and the atmospheric light value from the hazy image, and then computes the haze-free image according to the atmospheric scattering model.
The above methods all rely on predictions of transmission map and atmospheric light values, so inaccuracies in either transmission map or atmospheric value predictions can lead to problems of incomplete defogging or local color shift.
Against this background, it is very important to develop an image defogging method that is robust and can effectively remove fog of various concentrations across various scenes.
Disclosure of Invention
The invention aims to solve the problems of existing defogging methods, which rely on a mechanism of transmission map and atmospheric light prediction and therefore suffer from incomplete defogging, local color cast, and similar artifacts. It provides an image defogging method based on a derivative map fusion strategy that extracts derivative maps of the hazy image from multiple angles and fuses them effectively through a defogging network, thereby realizing image defogging with a good effect on images of various scenes and various fog concentrations.
The technical scheme adopted by the invention is as follows:
an image defogging method based on a derivative map fusion strategy comprises the following steps:
Step A: constructing a sample set from a large number of original hazy images and corresponding haze-free images;
Step B: extracting derivative maps of the original hazy image from several angles, so as to strengthen the method's recovery of distant-scene and near-scene detail, eliminate color cast, and enhance contrast;
Step C: building a defogging network based on a convolutional neural network;
Step D: for each group of samples, taking the concatenation of all the derivative maps obtained in step B with the original hazy image as input and the haze-free image as output, and training the defogging network built in step C;
Step E: for the image to be defogged, taking the concatenation of the derivative maps obtained in step B with the original image to be defogged as the input of the defogging network trained in step D, and predicting the defogged haze-free image.
Further, the specific processing procedure of step B is as follows:
the invention extracts the following five derivative images from the original foggy image from five angles of improving the long-range and close-range details, eliminating the color cast of the image caused by atmospheric light, enhancing the brightness of the image and enhancing the contrast of the image.
1) Exposure map I_EM
The exposure map measures the exposure of the image and is used in the invention to improve the recovery of distant-scene information by the defogging method.
First, the original hazy image I_hazy is converted from the RGB color space to the HSI color space, yielding its hue (H), saturation (S), and brightness (I) component maps;
then, the exposure evaluation chart EM is calculated from the brightness component chart I according to the following formula to measure the exposure of different areas:
Figure GDA0002940134150000021
where I(x, y) is the brightness information at (x, y) in the brightness component map I, and σ is the standard deviation of the pixel values of all pixels in the brightness component map I;
Finally, the exposure map I'_EM is calculated from the exposure evaluation map EM and the original hazy image I_hazy so as to highlight distant-scene detail, according to the following formula:
I'_EM = (1 - EM) * I_hazy
where 1 - EM denotes the result matrix obtained by subtracting each element of EM from 1; its dimensions are the same as those of EM, and its element in row x, column y equals 1 minus the element in row x, column y of EM. The operator * denotes the Hadamard (element-wise) product of two matrices, i.e., each element of I'_EM equals the product of the corresponding elements of 1 - EM and I_hazy: the element in row x, column y of I'_EM equals the element in row x, column y of 1 - EM multiplied by the element in row x, column y of I_hazy;
Since the calculated exposure map I'_EM may contain values that overflow the valid range, I'_EM is first rescaled so that its values lie between 0 and 255, and the value type is then converted to uint8 (8-bit unsigned integer), giving the final exposure map I_EM; MATLAB provides a function, uint8(number), that converts a numeric value to the uint8 type.
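For illustration, a minimal NumPy/OpenCV sketch of this derivative map is given below. The exact exposure evaluation formula is rendered only as an image in the original, so the Gaussian well-exposedness form used for EM and the OpenCV helpers are assumptions; the intensity component is computed as the mean of the R, G, B channels, as in the HSI model.

    import cv2
    import numpy as np

    def exposure_map(i_hazy_bgr, sigma=0.25):
        """Sketch of the exposure-map derivative I_EM (assumptions noted in comments)."""
        # Brightness (intensity) component of the HSI model: mean of the three channels in [0, 1].
        intensity = i_hazy_bgr.astype(np.float64).mean(axis=2) / 255.0

        # Exposure evaluation map EM. The original formula is shown only as an image;
        # a Gaussian "well-exposedness" measure centred at 0.5 is assumed here.
        em = np.exp(-((intensity - 0.5) ** 2) / (2.0 * sigma ** 2))

        # I'_EM = (1 - EM) * I_hazy, applied channel-wise (Hadamard product).
        i_em = (1.0 - em)[..., None] * i_hazy_bgr.astype(np.float64)

        # Rescale to [0, 255] and cast to uint8, mirroring the interval scaling in the text.
        i_em = cv2.normalize(i_em, None, 0, 255, cv2.NORM_MINMAX)
        return i_em.astype(np.uint8)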
2) Saliency map I_SM
Fog concentration increases with the distance between the target scene and the imaging lens. To improve the defogging effect on low-concentration fog in near scenes, the invention provides a saliency map that highlights near-scene detail information.
First, the original hazy image is smoothed by Gaussian filtering to obtain a filtered image gf. The filtered result gf is then converted from the RGB color space to the LAB color space, yielding its luminance information map L and the component maps A and B of the two color channels, where A spans colors from dark green (low values) through gray (middle values) to bright pink (high values), and B spans colors from bright blue (low values) through gray (middle values) to yellow (high values). The saliency map S is calculated according to the following formula:
S = (L - mean_L)^2 + (A - mean_A)^2 + (B - mean_B)^2
the three variables mean _ L, mean _ A and mean _ B are respectively the mean values of pixel values of all pixel points in L, A and B, the subtraction in the formula represents that each element in the matrix subtracts one value respectively, the square of the matrix represents that each element in the matrix is squared respectively, and the element in the obtained result matrix is equal to the square of the corresponding element in the original matrix; finally, a saliency characteristic map S and an original foggy image I are obtainedhazyIs subjected to Hadamard multiplication to obtain a result graph I 'of close-range enhancement'SMAnd is prepared from'SMConversion to the fluid 8 type to obtain significance map I with values between 0 and 255SM
3) White balance map I_BM
Due to the influence of atmospheric illumination, the original hazy image exhibits a partial color cast; the invention uses white balancing to alleviate this problem. The white balance map is computed in three main steps.
The first step: compute the means avgR, avgG, and avgB of the pixel values of all pixels on the three color components of the original hazy image I_hazy in the RGB color space, namely the R, G, and B components, together with the mean GrayValue of the pixel values of all pixels of the (color) image I_hazy;
The second step: compute the ratios of GrayValue to avgR, avgG, and avgB to obtain scaleR, scaleG, and scaleB, respectively;
The third step: multiply the R, G, and B components of the original hazy image I_hazy in the RGB color space by scaleR, scaleG, and scaleB respectively (i.e., multiply the pixel value of every pixel in each component by the corresponding scale factor) to obtain the color-corrected result, and finally truncate the corrected values so that they fall in the interval 0-255; the new R, G, and B components obtained in this way form the white balance map I_BM of the original hazy image I_hazy.
4) Gamma correction map I_GM
Because some regions of the hazy image appear darker due to the interference of fog, the invention uses a gamma transform to correct the original hazy image I_hazy and enhance the overall image brightness, obtaining the gamma correction map I_GM according to the following formula:
I_GM = α * I_hazy^β
where β is the gamma coefficient used to control the degree of brightness correction; I_hazy^β denotes raising each element of the matrix I_hazy to the power β, i.e., each element of I_hazy^β equals the β-th power of the corresponding element of I_hazy; α is a linear transformation factor used to adjust the overall brightness of the image after the power transform;
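A small sketch of this transform follows; working on intensities normalised to [0, 1] before applying the power is an assumption made to avoid overflow, and the default α and β values are taken from the embodiment.

    import numpy as np

    def gamma_correction_map(i_hazy, alpha=1.0, beta=1.5):
        """Sketch of the gamma-correction derivative I_GM = alpha * I_hazy ** beta.
        alpha=1.0 and beta=1.5 follow the values quoted in the embodiment."""
        img = i_hazy.astype(np.float64) / 255.0   # normalisation to [0, 1] is an assumption
        i_gm = alpha * np.power(img, beta)        # element-wise power, then linear scaling
        return np.clip(i_gm * 255.0, 0, 255).astype(np.uint8)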
5) Fog mask removal map I_VM
Fog attenuates the atmospheric light and the light reflected from target objects, producing a hazy image with low contrast; the fog can therefore be regarded as a fog mask (veil) overlaid on the original haze-free image. The invention computes a non-uniform fog mask map and removes this fog mask from the original hazy image, so as to enhance the contrast and visibility of the original hazy image. The fog mask removal map is obtained as follows:
a) Perform Gaussian filtering on each channel of the original image in the RGB color space to obtain a blurred image
The three color components of the original hazy image I_hazy in the RGB color space, namely the R, G, and B components, are extracted separately; each component image is convolved to obtain a new R, G, B component, and these new components form I_conv. The convolution kernel F(x, y) must satisfy that the sum of all its elements equals 1.
b) Obtaining a non-uniform fog mask
To obtain the non-uniform fog mask map of the original hazy image, the invention first computes the means of the pixel values of all pixels on the R, G, and B components of I_conv, denoted R_mean, G_mean, and B_mean, and then calculates each channel of the result map LDeHaze, in which the global fog mask has been removed, by the following formulas:
LDeHaze_R(x, y) = 255 - I_hazy(x, y) × R_mean
LDeHaze_G(x, y) = 255 - I_hazy(x, y) × G_mean
LDeHaze_B(x, y) = 255 - I_hazy(x, y) × B_mean
where LDeHaze_R, LDeHaze_G, and LDeHaze_B are the three components of LDeHaze in the RGB color space, respectively.
The LDeHaze is converted from the RGB color space to the YCbCr color space and the luminance component Y is extracted as the non-uniform fog mask map RegionalHaze of the original foggy image.
c) Removing the fog mask
To remove the fog mask, logarithms of the original hazy image I_hazy and of the non-uniform fog mask map are taken respectively and subtracted, and the result is passed through an exponential transform to obtain the fog-mask-removed map I_dehaze, as given by the formula:
I_dehaze = exp(log I_hazy - log RegionalHaze)
the logarithm taking and the exponential transformation are respectively to take the logarithm of each element in the matrix and carry out exponential calculation to obtain the processed matrix.
To increase the contrast of the image after fog-mask removal, the invention processes the fog-mask-removed map I_dehaze with an adaptive contrast enhancement algorithm to improve the image contrast and obtain the final fog mask removal map I_VM.
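The sketch below strings these steps together; the Gaussian kernel size, the normalisation of the channel means to [0, 1], and the simple min-max stretch standing in for the adaptive contrast enhancement are all assumptions.

    import cv2
    import numpy as np

    def fog_mask_removal_map(i_hazy_bgr, kernel_size=15):
        """Sketch of the fog-mask-removal derivative I_VM (parameters are assumptions)."""
        img = i_hazy_bgr.astype(np.float64)

        # a) Blur each channel with a normalised (sum-to-one) kernel to obtain I_conv.
        i_conv = cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

        # b) Per-channel means of I_conv (rescaled to [0, 1], an assumption), then
        #    LDeHaze_c(x, y) = 255 - I_hazy_c(x, y) * c_mean for each channel c.
        means = i_conv.reshape(-1, 3).mean(axis=0) / 255.0
        ldehaze = np.clip(255.0 - img * means, 0, 255).astype(np.uint8)

        # Non-uniform fog mask RegionalHaze = luminance (Y) channel of LDeHaze in YCbCr.
        regional_haze = cv2.cvtColor(ldehaze, cv2.COLOR_BGR2YCrCb)[..., 0].astype(np.float64)

        # c) Remove the fog mask in the log domain: I_dehaze = exp(log I_hazy - log RegionalHaze).
        eps = 1e-6
        i_dehaze = np.exp(np.log(img + eps) - np.log(regional_haze[..., None] + eps))

        # The final adaptive contrast enhancement is approximated by a min-max stretch.
        i_vm = cv2.normalize(i_dehaze, None, 0, 255, cv2.NORM_MINMAX)
        return i_vm.astype(np.uint8)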
Further, the defogging network built in step C is a U-shaped convolutional neural network. The invention constructs this U-shaped convolutional neural network for defogging from cascaded residual blocks (CRB) and channel compression blocks (CCB).
1) Construction of cascaded residual modules (CRBs)
In a neural network, the shallow layers may suffer from vanishing or exploding gradients during training, and adopting a residual module design effectively alleviates this phenomenon; to extract features effectively and speed up the flow of information through the network, the invention provides a cascaded residual block, namely the CRB module. The CRB module comprises two residual modules; the residual branch of each residual module contains, connected in sequence, a first convolution layer, a first instance normalization layer, a first ReLU activation function, a second convolution layer, a second instance normalization layer, and a second ReLU activation function, and the output of the residual module is the sum of its original input and the output of the residual branch.
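A PyTorch sketch of this structure is given below; the kernel size, channel width, and 'same' padding are assumptions, since the text specifies only the layer ordering.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """One residual module: conv -> InstanceNorm -> ReLU -> conv -> InstanceNorm -> ReLU,
        added to the identity input."""
        def __init__(self, channels):
            super().__init__()
            self.branch = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.InstanceNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.InstanceNorm2d(channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            # Output = original input + output of the residual branch.
            return x + self.branch(x)

    class CRB(nn.Module):
        """Cascaded residual block: two residual modules in series."""
        def __init__(self, channels):
            super().__init__()
            self.blocks = nn.Sequential(ResidualBlock(channels), ResidualBlock(channels))

        def forward(self, x):
            return self.blocks(x)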
2) Construction of channel compression modules (CCBs)
To effectively combine low-level features with high-level semantic features in the network, the invention provides a channel compression block, namely the CCB module. The CCB module has two inputs: input 1 is a low-level feature map and input 2 is a high-level feature map. The two feature maps are gathered together by a concatenation operation, fused by a convolution, the convolved features are processed by instance normalization, and the result is finally output through a ReLU activation function.
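A matching PyTorch sketch of the CCB follows; the kernel size and output width are assumptions.

    import torch
    import torch.nn as nn

    class CCB(nn.Module):
        """Channel compression block: concatenate low- and high-level features, fuse them with
        a convolution, then apply instance normalisation and ReLU."""
        def __init__(self, low_channels, high_channels, out_channels):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Conv2d(low_channels + high_channels, out_channels, kernel_size=3, padding=1),
                nn.InstanceNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, low_feat, high_feat):
            # Input 1: low-level features; input 2: high-level features (spatially aligned).
            return self.fuse(torch.cat([low_feat, high_feat], dim=1))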
3) Construction of U-shaped convolutional neural network
The defogging network of the invention is a U-shaped convolutional neural network whose input is the five derivative maps (exposure map I_EM, saliency map I_SM, white balance map I_BM, gamma correction map I_GM, and fog mask removal map I_VM) concatenated with the original hazy image, giving an input data matrix with 18 channels: the concatenation operation splices the five derivative maps and the original hazy image along the channel dimension, and since each image has three channels, concatenating the six images yields an 18-channel matrix. The U-shaped convolutional neural network comprises an encoding layer and a decoding layer.
a) Encoding layer design. The encoding layer extracts features from the input data matrix formed by concatenating the five derivative maps with the original hazy image; the first 4 modules of the ResNet-50 network structure serve as the four encoding modules of the encoding layer and extract features from the input data matrix stage by stage. The first module of ResNet-50, resnet-1, is a feature extraction module that extracts 64 features with a 7×7 convolution kernel and a stride of 2; the second module, resnet-2, has 3 residual modules (resnet-2a to resnet-2c), each of which extracts 64 features with a 1×1 convolution, then extracts 64 features with a 3×3 convolution kernel, and finally expands the number of features to 256 with a 1×1 convolution; the third module, resnet-3, is similar to resnet-2 but has 4 residual modules (resnet-3a to resnet-3d), in which the feature number is first 128 and is then expanded to 512; the fourth module, resnet-4, is also similar to resnet-2 but has 6 residual modules (resnet-4a to resnet-4f), in which the feature number is first 256 and is then expanded to 1024. The ResNet-50 network structure is prior art, see https://blog.csdn.net/seven_layer_progress/article/details/69360488; resnet-1 corresponds to the Part 1 convolution layer and the Part 2 max pooling layer.
b) Decoding layer design. The decoding layer is responsible for fusing high-level and low-level features to predict the weights of the derivative maps. The first decoding module upsamples the feature map produced by resnet-4, feeds the upsampled result and the feature map produced by resnet-3 into the CCB module as input 1 and input 2 respectively, and passes the result from the CCB module to the CRB module, producing the output Decode-1 of the first decoding module. The second decoding module upsamples Decode-1, feeds the upsampled result and the feature map produced by resnet-2 into the CCB module as input 1 and input 2 respectively, and passes the result from the CCB module to the CRB module, producing the output Decode-2 of the second decoding module. Decode-2 is then upsampled, the upsampled result and the feature map produced by resnet-1 are fed into the CCB module as input 1 and input 2 respectively, and the output of the CCB module is passed to the CRB module, producing the output Decode-3 of the third decoding module. In the invention, Decode-3 is taken as the predicted weight map of the derivative maps. The Hadamard product of the input data matrix obtained by concatenating the five derivative maps with the original hazy image and the predicted weight map gives the weighted derivative map output derived_out; derived_out is fed into a CRB module for fusion to obtain the fused weighted derivative map, and a final convolution applied to the fused weighted derivative map yields the predicted haze-free image. The parameters of this convolution are: 18 input channels, 3 output channels, a 3×3 kernel, padding 1, and stride 1.
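The final weighting-and-fusion step can be sketched as follows in PyTorch; only the last convolution's parameters are given in the text, so the shape compatibility between Decode-3 and the 18-channel input and the width of the fusion block are assumptions.

    import torch
    import torch.nn as nn

    class DerivativeFusionHead(nn.Module):
        """Sketch of the final fusion stage: the predicted weight map (Decode-3) multiplies the
        18-channel cascade element-wise, a fusion block (e.g. a CRB) fuses the weighted maps,
        and a final convolution produces the 3-channel haze-free prediction."""
        def __init__(self, fusion_block: nn.Module):
            super().__init__()
            self.fusion = fusion_block
            # Final convolution parameters taken from the text:
            # 18 -> 3 channels, 3x3 kernel, padding 1, stride 1.
            self.to_rgb = nn.Conv2d(18, 3, kernel_size=3, padding=1, stride=1)

        def forward(self, cascaded_input, decode3_weights):
            # Hadamard product between the 18-channel input matrix and the predicted weight map.
            derived_out = cascaded_input * decode3_weights
            fused = self.fusion(derived_out)
            return self.to_rgb(fused)

    # Assumed usage: the hazy image and the five derivative maps are (N, 3, H, W) tensors.
    # cascaded = torch.cat([hazy, i_em, i_sm, i_bm, i_gm, i_vm], dim=1)   # (N, 18, H, W)
    # pred = DerivativeFusionHead(fusion_block=nn.Identity())(cascaded, decode3)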
Further, the specific processing procedure of step D is as follows:
1) Design of the mixed loss function
The invention provides a mixed loss function for optimizing the U-shaped convolutional neural network. The mixed loss function comprises two parts: an absolute-value error loss and a perceptual loss. The absolute-value error loss measures how well the haze-free image is recovered, and the perceptual loss evaluates the overall structural information of the haze-free image.
Loss = λ_1 L_1 + λ_2 L_perceptual
Figure GDA0002940134150000071
Figure GDA0002940134150000072
where Loss is the mixed loss, L_1 and L_perceptual are the absolute-value error loss and the perceptual loss respectively, and λ_1 and λ_2 are the weight coefficients of the two losses; F_i is the haze-free image predicted by the U-shaped convolutional neural network for the hazy image in the i-th group of samples, and G_i is the true haze-free image in the i-th group of samples. To compute the perceptual loss, features of the predicted haze-free image and of the true haze-free image are extracted by a feature extraction module and compared at different levels; V_1(F_i) and V_1(G_i) denote the first feature maps of the predicted and true haze-free images extracted by the feature extraction module, and V_2(F_i) and V_2(G_i) denote the corresponding second feature maps. N is the number of sample groups.
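A PyTorch sketch of this mixed loss is given below. The exact formulas appear only as images in the original, so the use of a mean absolute error for L_1, L1 distances between the feature maps for the perceptual term, and the 0.5/0.5 weights quoted in the embodiment are assumptions.

    import torch
    import torch.nn.functional as F

    def mixed_loss(pred, target, feature_extractor, lambda1=0.5, lambda2=0.5):
        """Sketch of Loss = lambda1 * L_1 + lambda2 * L_perceptual.
        `feature_extractor` returns two feature maps (V1, V2) for an image."""
        # Absolute-value error between the predicted and ground-truth haze-free images.
        l1 = F.l1_loss(pred, target)

        # Perceptual loss: compare features of both images at two levels.
        v1_p, v2_p = feature_extractor(pred)
        v1_t, v2_t = feature_extractor(target)
        perceptual = F.l1_loss(v1_p, v1_t) + F.l1_loss(v2_p, v2_t)

        return lambda1 * l1 + lambda2 * perceptual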
2) Multi-scale training strategy
The invention adopts a multi-scale training strategy so that the network acquires scale invariance. During training, SGD is used as the optimizer for the weights of the U-shaped convolutional neural network, and training proceeds over multiple rounds: in the first n rounds, the input data matrix formed by concatenating the derivative maps with the original hazy image is scaled to a first size and the initial learning rate is 0.001; from round n+1 onward, the input data matrix is scaled to a second size and the learning rate is reduced to 0.0001. This staged training improves network convergence and strengthens the network's ability to predict on images of different scales.
Further, the specific processing procedure of step E is as follows:
The convolutional defogging network trained in step D is loaded; the five derivative maps of the original hazy image are extracted, the original hazy image is concatenated with the five derivative maps, the result is input into the defogging network, and the defogged result is output.
Furthermore, the defogging method is executed through the GPU to realize real-time calculation and real-time defogging.
Beneficial effects:
the existing defogging algorithm mostly depends on prior knowledge or hypothesis related to foggy day images, and a defogged result is calculated through an atmospheric scattering model on the basis. Such a model is prone to cause problems such as incomplete defogging and color cast. The invention provides a new method for realizing effective defogging by fusing the derivative diagram information of the original fogging image, which does not adopt an atmospheric scattering model, but adopts a U-shaped convolution network to realize the extraction of the salient feature areas of each part of the derivative diagram, thereby avoiding the problems of the existing method and realizing the effective defogging treatment on the original fogging image. The method has the advantages of good defogging effect, high automation degree, higher fidelity to the detail information of the long-range view and the short-range view of the image, and capability of enhancing the contrast of the image on the basis of not changing the color effect of the original image. In addition, the method can generate high-quality fog-free images through the depth defogging network fusion of derivative images from different angles, so that the method can be widely applied to relevant applications of image defogging.
Drawings
FIG. 1 is a schematic diagram of a method in an embodiment of the invention;
FIG. 2 is a flow chart of a method in an embodiment of the invention;
FIG. 3 shows an original hazy image and its derivative maps obtained in example 1 of the present invention; fig. 3(a) is the original hazy image, fig. 3(b) is the exposure map, fig. 3(c) is the gamma correction map, fig. 3(d) is the saliency map, fig. 3(e) is the white balance map, and fig. 3(f) is the fog mask removal map.
FIG. 4 is a diagram showing cascaded residual modules of the defogging network according to embodiment 1;
FIG. 5 is a block diagram showing a channel compression module of the defogging network according to embodiment 1;
FIG. 6 is a defogging network in accordance with embodiment 1;
FIG. 7 is a graph showing an image to be defogged and a defogging result in example 1; in which fig. 7(a) is an image to be defogged, and fig. 7(b) is a defogging result diagram.
FIG. 8 shows the original hazy image and its derivative maps in test 1; fig. 8(a) is the original hazy image, fig. 8(b) is the exposure map, fig. 8(c) is the gamma correction map, fig. 8(d) is the saliency map, fig. 8(e) is the white balance map, and fig. 8(f) is the fog mask removal map.
FIG. 9 is a graph of the original fogging image and the defogging result in test 1; fig. 9(a) is an original fogging image, and fig. 9(b) is a defogging result image.
FIG. 10 shows the original hazy image and its derivative maps in test 2; fig. 10(a) is the original hazy image, fig. 10(b) is the exposure map, fig. 10(c) is the gamma correction map, fig. 10(d) is the saliency map, fig. 10(e) is the white balance map, and fig. 10(f) is the fog mask removal map.
FIG. 11 is a graph of the original fogging image and the defogging result in test 2; in which fig. 11(a) is an original fogging image, and fig. 11(b) is a defogging result image.
Detailed Description
The invention is further illustrated by the following description in conjunction with the accompanying drawings:
example 1:
In this embodiment, the overall processing flow for any fog-containing image is shown in FIG. 1, and the defogging result is generated according to the following steps:
Step A: construct a sample set from a large number of original hazy images and corresponding haze-free images;
Step B: extract the following five derivative maps from the original hazy image, from the five angles of improving distant-scene detail, improving near-scene detail, eliminating the color cast caused by atmospheric light, enhancing image brightness, and enhancing image contrast:
1) Exposure map I_EM
First, the original hazy image I_hazy (FIG. 3(a)) is converted from the RGB color space to the HSI color space, yielding its hue (H), saturation (S), and brightness (I) component maps;
then, the exposure evaluation chart EM is calculated from the brightness component chart I according to the following formula to measure the exposure of different areas:
Figure GDA0002940134150000091
where I (x, y) is luminance information at (x, y) in the luminance component map I, σ is a standard deviation of pixel values of all pixel points on the luminance component map I, and σ is set to 0.25 in this embodiment.
Finally, the exposure map I'_EM is calculated from the exposure evaluation map EM and the original hazy image I_hazy so as to highlight distant-scene detail, according to the following formula:
I'_EM = (1 - EM) * I_hazy
where 1 - EM denotes the result matrix obtained by subtracting each element of EM from 1; its dimensions are the same as those of EM, and its element in row x, column y equals 1 minus the element in row x, column y of EM. The operator * denotes the Hadamard (element-wise) product of two matrices, i.e., each element of I'_EM equals the product of the corresponding elements of 1 - EM and I_hazy;
Since the calculated exposure map I'_EM may contain values that overflow the valid range, I'_EM is first rescaled so that its values lie between 0 and 255, and the value type is then converted to uint8 (8-bit unsigned integer), giving the final exposure map I_EM; the resulting exposure map is shown in FIG. 3(b). MATLAB provides a function, uint8(number), that converts a numeric value to the uint8 type.
2) Saliency map I_SM
First, the original hazy image is smoothed by Gaussian filtering (in this embodiment a 3 × 3 Gaussian kernel is used, and the filtering is performed by convolution) to obtain a filtered image gf. The filtered result gf is then converted from the RGB color space to the LAB color space, yielding its luminance information map L and the component maps A and B of the two color channels, where A spans colors from dark green (low values) through gray (middle values) to bright pink (high values), and B spans colors from bright blue (low values) through gray (middle values) to yellow (high values). The saliency map S is calculated according to the following formula:
S = (L - mean_L)^2 + (A - mean_A)^2 + (B - mean_B)^2
where mean_L, mean_A, and mean_B are the means of the pixel values of all pixels in L, A, and B respectively; the subtraction in the formula subtracts a scalar from every element of the matrix, and squaring a matrix squares each of its elements. Finally, the Hadamard product of the saliency feature map S and the original hazy image I_hazy gives the near-scene-enhanced result map I'_SM, and I'_SM is converted to the uint8 type to obtain the saliency map I_SM with values between 0 and 255, as shown in FIG. 3(d).
3) White balance map I_BM
The first step: compute the means avgR, avgG, and avgB of the pixel values of all pixels on the three color components of the original hazy image I_hazy in the RGB color space, namely the R, G, and B components, together with the mean GrayValue of the pixel values of all pixels of the (color) image I_hazy;
The second step: compute the ratios of GrayValue to avgR, avgG, and avgB to obtain scaleR, scaleG, and scaleB, respectively;
The third step: multiply the R, G, and B components of the original hazy image I_hazy in the RGB color space by scaleR, scaleG, and scaleB respectively (i.e., multiply the pixel value of every pixel in each component by the corresponding scale factor) to obtain the color-corrected result, and finally truncate the corrected values so that they fall in the interval 0-255; the new R, G, and B components obtained in this way form the white balance map I_BM of the original hazy image I_hazy, as shown in FIG. 3(e);
4) Gamma correction map I_GM
The original hazy image I_hazy is corrected with a gamma transform to obtain the gamma correction map I_GM, according to the following formula:
I_GM = α * I_hazy^β
where β is the gamma coefficient used to control the degree of brightness correction, set to 1.5 in this embodiment; I_hazy^β denotes raising each element of the matrix I_hazy to the power β, i.e., each element of I_hazy^β equals the β-th power of the corresponding element of I_hazy; α is a linear transformation factor used to adjust the overall brightness of the image after the power transform, set to 1 in this embodiment; the resulting gamma correction map is shown in FIG. 3(c);
5) Fog mask removal map I_VM
a) Perform Gaussian filtering on each channel of the original image in the RGB color space to obtain a blurred image
The three color components of the original hazy image I_hazy in the RGB color space, namely the R, G, and B components, are extracted separately; each component image is convolved to obtain a new R, G, B component, and these new components form I_conv. The convolution kernel F(x, y) must satisfy that the sum of all its elements equals 1.
b) Obtaining a non-uniform fog mask
To obtain the non-uniform fog mask map of the original hazy image, the means of the pixel values of all pixels on the R, G, and B components of I_conv are first computed, denoted R_mean, G_mean, and B_mean, and then each channel of the result map LDeHaze, in which the global fog mask has been removed, is calculated by the following formulas:
LDeHaze_R(x, y) = 255 - I_hazy(x, y) × R_mean
LDeHaze_G(x, y) = 255 - I_hazy(x, y) × G_mean
LDeHaze_B(x, y) = 255 - I_hazy(x, y) × B_mean
where LDeHaze_R, LDeHaze_G, and LDeHaze_B are the three components of LDeHaze in the RGB color space, respectively.
The LDeHaze is converted from the RGB color space to the YCbCr color space and the luminance component Y is extracted as the non-uniform fog mask map RegionalHaze of the original foggy image.
c) Removing the fog mask
To remove the fog mask, logarithms of the original hazy image I_hazy and of the non-uniform fog mask map are taken respectively and subtracted, and the result is passed through an exponential transform to obtain the fog-mask-removed map I_dehaze, as given by the formula:
I_dehaze = exp(log I_hazy - log RegionalHaze)
Here, taking the logarithm and applying the exponential transform operate element-wise on the matrix, giving the processed matrix.
To increase the contrast of the image after fog-mask removal, this embodiment processes the fog-mask-removed map I_dehaze with an adaptive contrast enhancement algorithm to improve the image contrast, obtaining the final fog mask removal map I_VM, as shown in FIG. 3(f).
And C: building a defogging network; the defogging network is a U-shaped convolution neural network. The present embodiment proposes cascading a residual block (CRB) and a Channel Compression Block (CCB) to construct a U-shaped convolutional neural network for defogging.
1) Construction of cascaded residual modules (CRBs)
In a neural network, the shallow layers may suffer from vanishing or exploding gradients during training, and this embodiment adopts a residual module design to effectively alleviate this phenomenon; to extract features effectively and speed up the flow of information through the network, this embodiment provides a cascaded residual block, namely the CRB module, as shown in fig. 4. The CRB module comprises two residual modules; the residual branch of each residual module contains, connected in sequence, a first convolution layer, a first instance normalization layer, a first ReLU activation function, a second convolution layer, a second instance normalization layer, and a second ReLU activation function, and the output of the residual module is the sum of its original input and the output of the residual branch.
2) Construction of channel compression modules (CCBs)
To effectively combine low-level features with high-level semantic features in the network, this embodiment provides a channel compression block, namely the CCB module, as shown in fig. 5. The CCB module has two inputs: input 1 is a low-level feature map and input 2 is a high-level feature map. The two feature maps are gathered together by a concatenation operation, fused by a convolution, the convolved features are processed by instance normalization, and the result is finally output through a ReLU activation function.
3) Construction of U-shaped convolutional neural network
As shown in FIG. 6, the defogging network of this embodiment is a U-shaped convolutional neural network whose input is the five derivative maps (exposure map I_EM, saliency map I_SM, white balance map I_BM, gamma correction map I_GM, and fog mask removal map I_VM) concatenated with the original hazy image, giving an input data matrix with 18 channels: the concatenation operation splices the five derivative maps and the original hazy image along the channel dimension, and since each image has three channels, concatenating the six images yields an 18-channel matrix. The U-shaped convolutional neural network comprises an encoding layer and a decoding layer.
a) Encoding layer design. The encoding layer extracts features from the input data matrix formed by concatenating the five derivative maps with the original hazy image; the first 4 modules of the ResNet-50 network structure serve as the four encoding modules of the encoding layer and extract features from the input data matrix stage by stage. The first module of ResNet-50, resnet-1, is a feature extraction module that extracts 64 features with a 7×7 convolution kernel and a stride of 2; the second module, resnet-2, has 3 residual modules (resnet-2a to resnet-2c), each of which extracts 64 features with a 1×1 convolution, then extracts 64 features with a 3×3 convolution kernel, and finally expands the number of features to 256 with a 1×1 convolution; the third module, resnet-3, is similar to resnet-2 but has 4 residual modules (resnet-3a to resnet-3d), in which the feature number is first 128 and is then expanded to 512; the fourth module, resnet-4, is also similar to resnet-2 but has 6 residual modules (resnet-4a to resnet-4f), in which the feature number is first 256 and is then expanded to 1024. The ResNet-50 network structure is prior art, see https://blog.csdn.net/seven_layer_progress/article/details/69360488; resnet-1 corresponds to the Part 1 convolution layer and the Part 2 max pooling layer.
b) Decoding layer design. The decoding layer is responsible for fusing high-level and low-level features to predict the weights of the derivative maps. The first decoding module upsamples the feature map produced by resnet-4, feeds the upsampled result and the feature map produced by resnet-3 into the CCB module as input 1 and input 2 respectively, and passes the result from the CCB module to the CRB module, producing the output Decode-1 of the first decoding module. The second decoding module upsamples Decode-1, feeds the upsampled result and the feature map produced by resnet-2 into the CCB module as input 1 and input 2 respectively, and passes the result from the CCB module to the CRB module, producing the output Decode-2 of the second decoding module. Decode-2 is then upsampled, the upsampled result and the feature map produced by resnet-1 are fed into the CCB module as input 1 and input 2 respectively, and the output of the CCB module is passed to the CRB module, producing the output Decode-3 of the third decoding module. This embodiment takes Decode-3 as the predicted weight map of the derivative maps. The Hadamard product of the input data matrix obtained by concatenating the five derivative maps with the original hazy image and the predicted weight map gives the weighted derivative map output derived_out; derived_out is input into a CRB module to fuse the derivative maps, and a final convolution applied to the fused result yields the predicted haze-free image. The parameters of this convolution are: 18 input channels, 3 output channels, a 3×3 kernel, padding 1, and stride 1.
Step D: for each group of samples, taking the cascade results of all the derivative images and the original foggy images acquired in the step B as input, taking the fogless images as output, and training the defogging network constructed in the step C;
according to the mixed loss function designed by the embodiment, the loss between the predicted output fog-free image and the real fog-free image of the network is calculated, and the SGD is used as an optimizer to optimize the network weight. The mixing loss function designed by the embodiment comprises two parts: absolute value error loss and perceptual loss. The absolute value error loss is used for measuring the fog-free image recovery degree, and the perception loss is used for evaluating the overall structure information of the fog-free image.
Loss = λ_1 L_1 + λ_2 L_perceptual
Figure GDA0002940134150000131
Figure GDA0002940134150000132
where Loss is the mixed loss, L_1 and L_perceptual are the absolute-value error loss and the perceptual loss respectively, and λ_1 and λ_2 are the weight coefficients of the two losses, both set to 0.5 in this embodiment; F_i is the haze-free image predicted by the U-shaped convolutional neural network for the hazy image in the i-th group of samples, and G_i is the true haze-free image in the i-th group of samples. To compute the perceptual loss, features of the predicted haze-free image and of the true haze-free image are extracted by a feature extraction module and compared at different levels; V_1(F_i) and V_1(G_i) denote the first feature maps of the predicted and true haze-free images extracted by the feature extraction module, and V_2(F_i) and V_2(G_i) denote the corresponding second feature maps. In this embodiment a VGG network trained on the ImageNet data set is used as the feature extraction module; the VGG network is formed by stacking a series of 3 × 3 convolution layers and can be used for nonlinear feature extraction. V_1(F_i) and V_1(G_i) are the feature maps computed by the first convolution layer of the VGG network for the predicted and true haze-free images respectively, and V_2(F_i) and V_2(G_i) are the feature maps computed by the third convolution layer of the VGG network for the predicted and true haze-free images respectively. N is the number of sample groups.
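A torchvision-based sketch of such a feature extractor is given below; the choice of VGG-16, the exact layer indices for the first and third convolution layers, and the weights-loading API (which depends on the torchvision version) are assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    class VGGFeatures(nn.Module):
        """Sketch of the perceptual-loss feature extractor: an ImageNet-pretrained VGG whose
        first and third convolution layers provide V1 and V2."""
        def __init__(self):
            super().__init__()
            vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
            self.slice1 = vgg[:2]    # up to the first conv + ReLU  -> V1
            self.slice2 = vgg[2:7]   # up to the third conv + ReLU  -> V2 (assumed indexing)
            for p in self.parameters():
                p.requires_grad = False

        def forward(self, x):
            v1 = self.slice1(x)
            v2 = self.slice2(v1)
            return v1, v2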
This embodiment trains the network for 25 rounds. In the first 15 rounds, the input data formed by concatenating the derivative maps with the original hazy image is resized to 256 × 256 and the initial learning rate is 0.001. From the 16th round onward, the input data formed by concatenating the derivative maps with the original hazy image is resized to 448 × 608 and the learning rate is reduced to 0.0001.
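The staged schedule can be sketched as follows; a data loader yielding (18-channel cascade, haze-free target) pairs and the SGD momentum value are assumptions, and `loss_fn` stands for the mixed loss sketched earlier.

    import torch
    import torch.nn.functional as F
    from torch.optim import SGD

    def train(net, loader, loss_fn, epochs=25, switch_epoch=15):
        """Sketch of the multi-scale schedule: 25 rounds of SGD, 256x256 inputs with lr=0.001
        for the first 15 rounds, then 448x608 inputs with lr=0.0001."""
        optimizer = SGD(net.parameters(), lr=1e-3, momentum=0.9)
        for epoch in range(epochs):
            size = (256, 256) if epoch < switch_epoch else (448, 608)
            if epoch == switch_epoch:
                for group in optimizer.param_groups:
                    group["lr"] = 1e-4
            for x, y in loader:
                x = F.interpolate(x, size=size, mode="bilinear", align_corners=False)
                y = F.interpolate(y, size=size, mode="bilinear", align_corners=False)
                loss = loss_fn(net(x), y)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()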
Step E: the five derivative maps of the image to be defogged are extracted and concatenated with the original image to be defogged as the input of the defogging network trained in step D, and the defogged haze-free image is predicted. For example, for the image to be defogged shown in fig. 7(a), the predicted haze-free image, i.e., the defogging result, is shown in fig. 7(b).
Furthermore, the defogging method is executed through the GPU to realize real-time calculation and real-time defogging.

Claims (9)

1. An image defogging method based on a derivative map fusion strategy is characterized by comprising the following steps:
step A: constructing a sample set based on a large number of original hazy images and corresponding haze-free images;
step B: extracting derivative maps of the original hazy image from the five angles of improving distant-scene detail, improving near-scene detail, eliminating the color cast caused by atmospheric light, enhancing image brightness, and enhancing image contrast, to obtain five derivative maps, namely the exposure map I_EM, the saliency map I_SM, the white balance map I_BM, the gamma correction map I_GM, and the fog mask removal map I_VM;
step C: building a defogging network based on a convolutional neural network, wherein the defogging network is a U-shaped convolutional neural network;
step D: for each group of samples, taking the concatenation of all the derivative maps obtained in step B with the original hazy image as input and the haze-free image as output, and training the defogging network built in step C;
step E: for the image to be defogged, taking the concatenation of the derivative maps obtained in step B with the original image to be defogged as the input of the defogging network trained in step D, and predicting the defogged haze-free image.
2. The image defogging method based on the derivative map fusion strategy as claimed in claim 1, wherein the exposure map I_EM is extracted by the following steps:
first, converting the original hazy image I_hazy from the RGB color space to the HSI color space to obtain its brightness component map I;
then, the exposure evaluation chart EM is calculated from the luminance component chart I according to the following formula:
Figure FDA0002940134140000011
wherein, EM (x, y) and I (x, y) are respectively exposure information and luminance information at points (x, y) in the exposure evaluation graph EM and the luminance component graph I, and σ is a standard deviation of pixel values of all pixel points on the luminance component graph I;
then calculating an exposure map from the exposure evaluation map EM and the original hazy image I_hazy:
I'_EM = (1 - EM) * I_hazy
wherein the element in row x, column y of 1 - EM equals 1 - EM(x, y), and * denotes the Hadamard product of two matrices;
finally, rescaling I'_EM so that its values lie between 0 and 255 and then converting the value type to uint8, so as to obtain the final exposure map I_EM.
3. The image defogging method based on the derivative map fusion strategy as claimed in claim 1, wherein the saliency map I_SM is extracted by the following steps:
first, performing Gaussian filtering on the original hazy image I_hazy to obtain a filtered image gf;
then converting the filtered result gf from the RGB color space to the LAB color space to obtain its luminance information map L and the component maps A and B of the two color channels, and then calculating the saliency feature map S according to the following formula:
S = (L - mean_L)^2 + (A - mean_A)^2 + (B - mean_B)^2
wherein mean_L, mean_A, and mean_B are the means of the pixel values of all pixels in L, A, and B respectively; the subtraction in the formula subtracts a scalar from every element of the matrix, and squaring a matrix squares each of its elements, so each element of the result matrix equals the square of the corresponding element of the original matrix;
finally, taking the Hadamard product of the saliency feature map S and the original hazy image I_hazy to obtain the near-scene-enhanced result map I'_SM, and converting I'_SM to the uint8 type to obtain the saliency map I_SM.
4. The image defogging method based on the derivative map fusion strategy as claimed in claim 1, wherein the white balance map I_BM is extracted by the following steps:
the first step: computing the means avgR, avgG, and avgB of the pixel values of all pixels on the three color components of the original hazy image I_hazy in the RGB color space, namely the R, G, and B components, together with the mean GrayValue of the pixel values of all pixels of I_hazy;
the second step: computing the ratios of GrayValue to avgR, avgG, and avgB to obtain scaleR, scaleG, and scaleB, respectively;
the third step: multiplying the R, G, and B components of the original hazy image I_hazy in the RGB color space by scaleR, scaleG, and scaleB respectively to obtain the color-corrected result, and finally truncating the corrected values so that they fall in the interval 0-255, thereby obtaining new R, G, and B components which form the white balance map I_BM of the original hazy image I_hazy.
5. The image defogging method based on the derivative map fusion strategy as claimed in claim 1, wherein the gamma correction map I_GM is extracted from the original hazy image I_hazy according to the following formula:
I_GM = α * I_hazy^β
wherein β is the gamma coefficient used to control the degree of brightness correction; I_hazy^β denotes raising each element of the matrix I_hazy to the power β; α is a linear transformation factor used to adjust the overall brightness of the image after the power transform.
6. The image defogging method based on the derivative map fusion strategy according to claim 1, wherein the fog mask removal map I_VM is extracted by the following steps:
a) extracting separately the three color components of the original hazy image I_hazy in the RGB color space, namely the R, G, and B components, convolving each component image to obtain a new R, G, B component, and forming I_conv from the new R, G, and B components, wherein the convolution kernel F(x, y) must satisfy that the sum of all its elements equals 1;
b) first obtaining the means of the pixel values of all pixels on the R, G, and B components of I_conv, denoted R_mean, G_mean, and B_mean, and then calculating the three components of the result map LDeHaze in the RGB color space, in which the global fog mask has been removed, by the following formulas:
LDeHaze_R(x, y) = 255 - I_hazy(x, y) × R_mean
LDeHaze_G(x, y) = 255 - I_hazy(x, y) × G_mean
LDeHaze_B(x, y) = 255 - I_hazy(x, y) × B_mean
converting LDeHaze from the RGB color space to the YCbCr color space, and extracting the luminance component Y as the non-uniform fog mask map RegionalHaze of the original hazy image;
c) calculating the fog-mask-removed map I_dehaze by the following formula:
I_dehaze = exp(log I_hazy - log RegionalHaze)
wherein taking the logarithm and applying the exponential transform operate element-wise on the matrix, giving the processed matrix;
processing the fog-mask-removed map I_dehaze with an adaptive contrast enhancement algorithm to obtain the final fog mask removal map I_VM.
7. The image defogging method based on the derivative graph fusion strategy according to claim 1, wherein the defogging network built in the step C is a U-shaped convolution neural network; the input of the method is an input data matrix obtained by cascading five derivative images and an original foggy image; the U-shaped convolutional neural network comprises an encoding layer and a decoding layer;
a) designing a coding layer; the first 4 modules of the Resnet _50 network structure, namely, the four coding modules of the Resnet-1, the Resnet-2, the Resnet-3 and the Resnet-4 are used as coding layers to extract the characteristics of an input data matrix step by step;
b) designing a decoding layer; the decoding layer comprises four decoding modules; the first decoding module firstly up-samples the characteristic diagram obtained by resnet-4, then takes the result obtained by up-sampling and the characteristic diagram obtained by resnet-3 as input 1 and input 2 respectively and sends the input 1 and the input 2 into the CCB module, and the result obtained by the CCB module is input into the CRB module again to obtain output Decode-1 of the first decoding module; the second decoding module firstly performs up-sampling on Decode-1, then takes the result obtained by the up-sampling and the feature map obtained by resnet-2 as input 1 and input 2 respectively and sends the input 1 and input 2 to the CCB module, and then inputs the result obtained by the CCB module to the CRB module to obtain output Decode-2 of the second decoding module; then, performing up-sampling on Decode-2, taking the result obtained by the up-sampling and a feature map obtained by resnet-1 as input 1 and input 2 respectively, sending the input 1 and the input 2 into a CCB module, and inputting the output obtained by the CCB module into a CRB module to obtain output Decode-3 of a third decoding module; taking Decode-3 as a weight map of the predicted derivative map; solving a Hadamard product of an input data matrix obtained by cascading five derivative graphs and an original foggy image and a weight graph of a predicted derivative graph to obtain a weighted derivative graph output derived _ out, inputting the derived _ out into a CRB module for fusion to obtain a fused weighted derivative graph, and finally performing convolution operation on the fused weighted derivative graph to obtain a result, namely the predicted fogless image;
the CCB module is a channel compression module: it concatenates input 1 and input 2, fuses the two features by convolution, applies instance normalization to the convolved features, and finally outputs the features through a ReLU activation function;
the CRB module is a cascade residual module comprising two residual modules connected in series; the residual branch of each residual module comprises, connected in sequence, a first convolution layer, a first instance normalization layer, a first ReLU activation function, a second convolution layer, a second instance normalization layer and a second ReLU activation function, and the output of each residual module is the sum of its original input and the output of its residual branch.
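As a hedged illustration of the two building blocks named in this claim, a minimal PyTorch sketch of the CCB and CRB modules follows; the kernel sizes, channel counts and padding are assumptions, since the claim only fixes the ordering concatenation, convolution, instance normalization and ReLU for the CCB, and two serial residual modules whose branch repeats convolution/instance normalization/ReLU twice for the CRB:

import torch
import torch.nn as nn

class CCB(nn.Module):
    """Channel compression block: fuses two feature maps of equal spatial size."""
    def __init__(self, ch1, ch2, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(ch1 + ch2, out_ch, kernel_size=3, padding=1),  # fuse by convolution
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, in1, in2):
        return self.fuse(torch.cat([in1, in2], dim=1))  # concatenate along channels

class ResidualBlock(nn.Module):
    """One residual module: conv/IN/ReLU twice on the branch, added to the input."""
    def __init__(self, ch):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return x + self.branch(x)  # output = original input + residual branch output

class CRB(nn.Module):
    """Cascade residual block: two residual modules connected in series."""
    def __init__(self, ch):
        super().__init__()
        self.blocks = nn.Sequential(ResidualBlock(ch), ResidualBlock(ch))
    def forward(self, x):
        return self.blocks(x)

In the decoder described above, each decoding module would then combine an up-sampled feature map (input 1) with the corresponding encoder feature map (input 2) through one CCB followed by one CRB.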
8. The image defogging method based on the derivative map fusion strategy according to claim 1, wherein in step D the following mixed loss function is adopted when training the defogging network:
Loss = λ1·L1 + λ2·L_perceptual

L1 = (1/N) Σ_{i=1..N} |F_i - G_i|

L_perceptual = (1/N) Σ_{i=1..N} ( |V1(F_i) - V1(G_i)| + |V2(F_i) - V2(G_i)| )
wherein Loss is the mixed loss, L1 and L_perceptual are the absolute-value error loss and the perceptual loss respectively, and λ1 and λ2 are the weight coefficients of the two losses; F_i is the fog-free image predicted by the defogging network for the hazy image in the i-th group of samples, and G_i is the real fog-free image in the i-th group of samples; V1(F_i) and V1(G_i) respectively denote the first feature maps of the predicted fog-free image and the real fog-free image extracted by the feature extraction module, and V2(F_i) and V2(G_i) respectively denote the second feature maps of the predicted fog-free image and the real fog-free image extracted by the feature extraction module; N is the number of sample groups.
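A minimal PyTorch sketch of this mixed loss is given below for illustration. The claim does not identify the feature extraction module, so a frozen VGG-16 is assumed here, with its relu2_2 and relu3_3 outputs standing in for the first and second feature maps V1 and V2; the mean absolute error used for both terms, the omission of ImageNet normalization before the VGG, and the default weights lambda1 and lambda2 are likewise assumptions:

import torch
import torch.nn as nn
import torchvision.models as models

class MixedLoss(nn.Module):
    def __init__(self, lambda1=1.0, lambda2=0.1):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False                  # frozen feature extraction module
        self.feat1 = vgg[:9]    # assumed first feature extraction point (relu2_2)
        self.feat2 = vgg[:16]   # assumed second feature extraction point (relu3_3)
        self.l1 = nn.L1Loss()
        self.lambda1, self.lambda2 = lambda1, lambda2

    def forward(self, F_pred, G_true):
        loss_l1 = self.l1(F_pred, G_true)            # absolute-value error term
        loss_per = self.l1(self.feat1(F_pred), self.feat1(G_true)) \
                 + self.l1(self.feat2(F_pred), self.feat2(G_true))  # perceptual term
        return self.lambda1 * loss_l1 + self.lambda2 * loss_per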
9. The image defogging method based on the derivative map fusion strategy according to claim 1, wherein step D adopts a multi-scale training strategy so that the defogging network has scale invariance; in the training process, SGD is used as the optimizer to optimize the weights of the defogging network and multiple rounds of training are carried out; for the first n rounds, the input data matrix formed by concatenating the derivative maps with the original hazy image is scaled to a first size and the initial learning rate is 0.001; from round n+1 onwards, the input data matrix formed by concatenating the derivative maps with the original hazy image is scaled to a second size and the learning rate is reduced to 0.0001.
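The multi-scale training strategy of this claim could be sketched as the following PyTorch training loop; the concrete first and second sizes (256 and 512 here), the switch-over round n, and the net, loader and mixed_loss objects are placeholders, while SGD, the 0.001 initial learning rate and the 0.0001 learning rate from round n+1 follow the claim:

import torch
import torch.nn.functional as F_nn

def train(net, loader, mixed_loss, n_rounds_first=50, n_rounds_total=80):
    optimizer = torch.optim.SGD(net.parameters(), lr=1e-3)   # initial learning rate 0.001
    for epoch in range(n_rounds_total):
        if epoch == n_rounds_first:                          # from round n+1 onwards
            for g in optimizer.param_groups:
                g["lr"] = 1e-4                               # learning rate reduced to 0.0001
        size = 256 if epoch < n_rounds_first else 512        # assumed first/second sizes
        for inputs, target in loader:                        # inputs: derivative maps + hazy image
            inputs = F_nn.interpolate(inputs, size=(size, size), mode="bilinear",
                                      align_corners=False)
            target = F_nn.interpolate(target, size=(size, size), mode="bilinear",
                                      align_corners=False)
            optimizer.zero_grad()
            loss = mixed_loss(net(inputs), target)
            loss.backward()
            optimizer.step()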
CN201910609244.2A 2019-07-08 2019-07-08 Image defogging method based on derivative map fusion strategy Active CN110378848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910609244.2A CN110378848B (en) 2019-07-08 2019-07-08 Image defogging method based on derivative map fusion strategy

Publications (2)

Publication Number Publication Date
CN110378848A CN110378848A (en) 2019-10-25
CN110378848B true CN110378848B (en) 2021-04-20

Family

ID=68252251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910609244.2A Active CN110378848B (en) 2019-07-08 2019-07-08 Image defogging method based on derivative map fusion strategy

Country Status (1)

Country Link
CN (1) CN110378848B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112581379A (en) * 2019-09-30 2021-03-30 华为技术有限公司 Image enhancement method and device
CN110766640B (en) * 2019-11-05 2022-03-25 中山大学 Image defogging method based on depth semantic segmentation
CN111192258A (en) * 2020-01-02 2020-05-22 广州大学 Image quality evaluation method and device
CN113240589A (en) * 2021-04-01 2021-08-10 重庆兆光科技股份有限公司 Image defogging method and system based on multi-scale feature fusion
CN113643323B (en) * 2021-08-20 2023-10-03 中国矿业大学 Target detection system under urban underground comprehensive pipe rack dust fog environment
CN114926359B (en) * 2022-05-20 2023-04-07 电子科技大学 Underwater image enhancement method combining bicolor space recovery and multi-stage decoding structure
TWI831643B (en) * 2023-03-13 2024-02-01 鴻海精密工業股份有限公司 Method and relevant equipment for traffic sign recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016206087A1 (en) * 2015-06-26 2016-12-29 北京大学深圳研究生院 Low-illumination image processing method and device
US10430913B2 (en) * 2017-06-30 2019-10-01 Intel Corporation Approximating image processing functions using convolutional neural networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127702A (en) * 2016-06-17 2016-11-16 兰州理工大学 An image defogging algorithm based on deep learning
CN109146810A (en) * 2018-08-08 2019-01-04 国网浙江省电力有限公司信息通信分公司 An image defogging method based on end-to-end deep learning
CN109509156A (en) * 2018-10-31 2019-03-22 聚时科技(上海)有限公司 An image defogging processing method based on a generative adversarial model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Airlight Estimation Based on Distant Region Segmentation; Yi Wang et al.; 2019 IEEE International Symposium on Circuits and Systems (ISCAS); 2019-05-29; pp. 1-5 *
Gated Context Aggregation Network for Image Dehazing and Deraining; Dongdong Chen et al.; 2019 IEEE Winter Conference on Applications of Computer Vision (WACV); 2019-01-11; pp. 1375-1383 *
Single image dehazing algorithm based on fusion strategy; Guo Fan et al.; Journal on Communications; 2014-07-31; Vol. 35, No. 7; pp. 199-207, 214 *
Image dehazing algorithm based on haze mask theory; Xie Bin et al.; Computer Engineering & Science; 2012-06-30; Vol. 34, No. 6; pp. 83-87 *

Also Published As

Publication number Publication date
CN110378848A (en) 2019-10-25

Similar Documents

Publication Publication Date Title
CN110378848B (en) Image defogging method based on derivative map fusion strategy
CN112288658B (en) Underwater image enhancement method based on multi-residual joint learning
CN107424133B (en) Image defogging method and device, computer storage medium and mobile terminal
CN114119378A (en) Image fusion method, and training method and device of image fusion model
KR102261532B1 (en) Method and system for image dehazing using single scale image fusion
TWI808406B (en) Image dehazing method and image dehazing apparatus using the same
CN104091310A (en) Image defogging method and device
Wang et al. Joint iterative color correction and dehazing for underwater image enhancement
CN109255758A (en) Image enchancing method based on full 1*1 convolutional neural networks
CN112712481B (en) Structure-texture sensing method aiming at low-light image enhancement
CN113870124B (en) Weak supervision-based double-network mutual excitation learning shadow removing method
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
CN114862698B (en) Channel-guided real overexposure image correction method and device
Bi et al. Haze removal for a single remote sensing image using low-rank and sparse prior
CN111553856B (en) Image defogging method based on depth estimation assistance
Shutova et al. NTIRE 2023 challenge on night photography rendering
CN110009574A (en) A kind of method that brightness, color adaptively inversely generate high dynamic range images with details low dynamic range echograms abundant
CN114004766A (en) Underwater image enhancement method, system and equipment
CN113554739A (en) Relighting image generation method and device and electronic equipment
CN115526803A (en) Non-uniform illumination image enhancement method, system, storage medium and device
CN110189262B (en) Image defogging method based on neural network and histogram matching
CN110175967B (en) Image defogging processing method, system, computer device and storage medium
CN107392870A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN116843559A (en) Underwater image enhancement method based on image processing and deep learning
CN110648297A (en) Image defogging method and system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant