CN112070691A - Image defogging method based on U-Net - Google Patents

Image defogging method based on U-Net

Info

Publication number
CN112070691A
Authority
CN
China
Prior art keywords
image
residual
fog
defogging
deep learning
Prior art date
Legal status
Granted
Application number
CN202010868578.4A
Other languages
Chinese (zh)
Other versions
CN112070691B (en)
Inventor
李佐勇
冯婷
余兆钗
曹新容
王传胜
Current Assignee
Minjiang University
Original Assignee
Minjiang University
Priority date
Filing date
Publication date
Application filed by Minjiang University
Priority to CN202010868578.4A
Publication of CN112070691A
Application granted
Publication of CN112070691B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 5/00 Image enhancement or restoration > G06T 5/73 Deblurring; Sharpening
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/04 Architecture, e.g. interconnection topology > G06N 3/045 Combinations of networks
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00 Computing arrangements based on biological models > G06N 3/02 Neural networks > G06N 3/08 Learning methods
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 7/00 Image analysis > G06T 7/90 Determination of colour characteristics
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10024 Color image
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL > G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/30 Subject of image; Context of image processing > G06T 2207/30181 Earth observation > G06T 2207/30192 Weather; Meteorology

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image defogging method based on U-Net, which estimates the residual between a fog-free image and its foggy counterpart with a deep learning model and then adds the estimated residual image to the input foggy image to obtain the defogging result. The defogging effect of the invention is superior to that of existing image defogging algorithms.

Description

Image defogging method based on U-Net
Technical Field
The invention relates to the technical field of foggy image processing, in particular to an image defogging method based on U-Net.
Background
Hazy weather occurs more and more frequently as global pollution increases. Visibility is low in foggy weather, which inconveniences people's daily lives. In today's era of widespread artificial intelligence, fog in images also limits computer vision applications such as autonomous driving, video surveillance, and remote sensing satellite imaging. Image defogging, which aims to restore a foggy image to a fog-free one by removing the fog, has therefore become an important research topic in computer vision. Existing image defogging methods fall mainly into two categories: traditional methods based on prior knowledge and methods based on deep learning.
Conventional prior-knowledge-based methods typically estimate the atmospheric light value and the transmittance map of the foggy image, and then use the classical atmospheric scattering model to recover the corresponding fog-free image. For example, He et al. first proposed the dark channel prior for estimating the transmittance map, but the sky regions of its defogging results are prone to color distortion. Zhu et al. proposed the color attenuation prior, built a linear model to estimate the scene depth, and derived the transmittance map from the estimated depth. Prior-based methods generally have low time complexity, but priors that hold universally are hard to find, so their defogging quality is unstable.
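For reference, the classical atmospheric scattering model referred to above is commonly written as I(x) = J(x)·t(x) + A·(1 - t(x)), where I is the observed foggy image, J the fog-free scene radiance, t the transmittance map, and A the atmospheric light. A minimal NumPy sketch of the recovery step used by such prior-based methods, assuming t and A have already been estimated by some prior (the estimation itself is method-specific and not shown), might look as follows:

```python
import numpy as np

def recover_scene_radiance(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy:              H x W x 3 float array in [0, 1]
    transmission:      H x W transmittance map estimated by some prior
    atmospheric_light: length-3 vector A estimated from the foggy image
    t_min:             lower bound on t to avoid amplifying noise in dense fog
    """
    t = np.clip(transmission, t_min, 1.0)[..., np.newaxis]
    dehazed = (hazy - atmospheric_light) / t + atmospheric_light
    return np.clip(dehazed, 0.0, 1.0)
```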
Deep-learning-based methods typically first design a deep neural network to learn image features and then train it to estimate the fog-free image. For example, DeHazeNet takes a hazy image as input and outputs a transmittance map; the maximum intensity in the input image is then taken as the atmospheric light value, and the defogging result is derived with the classical atmospheric scattering model. DeHazeNet requires no prior, but it suffers from inaccurate estimation of the atmospheric light value. Li et al. proposed an end-to-end defogging network called AOD-Net, which takes a foggy image as input and directly outputs the defogging result, but its defogging quality leaves room for improvement.
Disclosure of Invention
In view of the above, the present invention is directed to an image defogging method based on U-Net, which has a defogging effect superior to that of the existing image defogging algorithms.
The invention is realized by the following scheme: the residual between a fog-free image and a foggy image is estimated by a deep learning model, and the estimated residual image is then added to the input foggy image to obtain the defogging result.
Further, estimating the residual between the fog-free image and the foggy image with the deep learning model specifically comprises:
constructing a deep learning model comprising an encoding module, a bottleneck module and a decoding module;
in the encoding module, using a mixed convolution combining standard convolution and hole (dilated) convolution to enlarge the receptive field and better extract shallow features of the image;
in the bottleneck module, using residual blocks to prevent network performance degradation;
and extracting deep features of the image through the decoding module to obtain the residual image.
Further, the activation function and the normalization operation used in the deep learning model are the Parametric Rectified Linear Unit (PReLU) and Group Normalization (GN), respectively.
Further, the bottleneck module comprises seven residual modules, each of which adopts the structure of the residual network ResNet.
Further, the loss function adopted by the deep learning model during training is:
L = L_MSE + λ·L_SV
where λ is a constant coefficient; the first term L_MSE is the mean square error, which measures the loss between the residual image estimated by the model and the actual residual image; the second term L_SV is the absolute-difference loss between the S and V components of the image.
Further, the constant coefficient has a value of 0.1.
Further, the mean square error L_MSE is calculated as follows:
L_MSE = (1/m) · Σ_{i=1}^{m} ||C(I_i) - r_i||²
where m is the total number of image pixels, i is the pixel index, I_i is the vector formed by the RGB components of the i-th pixel of the input foggy image, C(I_i) is the vector formed by the RGB components of the i-th pixel of the residual image generated from image I by the deep learning model, and r_i is the vector formed by the RGB components of the i-th pixel of the actual residual image between I_i and the ideal fog-free image.
Further, the absolute-difference loss L_SV between the S and V components of the image is calculated as follows:
L_SV = (1/(W·H)) · Σ_{i=1}^{W} Σ_{j=1}^{H} |S(i,j) - V(i,j)|
where W and H are the width and height of the image, S and V are the saturation and value components in HSV color space, and (i,j) denotes the position of a pixel in the image.
Compared with the prior art, the invention has the following beneficial effects: the method estimates the residual between the fog-free image and the foggy image with a deep learning model and then adds the residual image to the foggy image to obtain the defogging result. The invention uses a mixed convolution to enlarge the receptive field and extract image features more effectively, uses residual blocks to prevent the degradation of network performance caused by vanishing gradients, and designs a new loss function that takes into account the absolute difference between the S and V color components of foggy and fog-free images in HSV color space, so that its image defogging effect is clearly superior to that of existing algorithms.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a deep learning model according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of an encoding module according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of a residual error network according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the absolute differences between the S and V components in HSV color space according to an embodiment of the present invention. Here, (a) is the foggy image, (b) its S component, (c) its V component, and (d) the absolute difference image between S and V; (e) is the fog-free image, (f) its S component, (g) its V component, and (h) the absolute difference image between S and V.
Fig. 6 shows defogging results on indoor synthesized foggy images according to an embodiment of the invention. Here, (a) is the foggy image, (b) MSCNN, (c) DeHazeNet, (d) AOD-Net, (e) GFN, (f) the algorithm of the invention, and (g) the fog-free image.
Fig. 7 shows defogging results on outdoor synthesized foggy images according to an embodiment of the invention. Here, (a) is the foggy image, (b) MSCNN, (c) DeHazeNet, (d) AOD-Net, (e) GFN, (f) the algorithm of the invention, and (g) the fog-free image.
Fig. 8 shows defogging results on real foggy images according to an embodiment of the invention. Here, (a) is the foggy image, (b) MSCNN, (c) DeHazeNet, (d) AOD-Net, (e) GFN, and (f) the algorithm of the invention.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
As shown in Fig. 1, the present embodiment provides an image defogging method based on U-Net, which estimates the residual between a fog-free image and a foggy image through a deep learning model, and then adds the residual image to the input foggy image to obtain the defogging result.
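As a minimal illustration of this residual formulation (the PyTorch framing and the function names are assumptions for illustration, not taken from the patent), the inference step reduces to adding the network output to its input:

```python
import torch

def defog(model, hazy):
    """Apply a trained residual-estimation network to a foggy image.

    model: a network that maps a foggy image to the residual (fog-free minus foggy)
    hazy:  1 x 3 x H x W tensor with values in [0, 1]
    """
    with torch.no_grad():
        residual = model(hazy)        # residual image estimated by the deep learning model
        dehazed = hazy + residual     # defogging result = foggy input + residual
    return torch.clamp(dehazed, 0.0, 1.0)
```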
In this embodiment, estimating the residual between the fog-free image and the foggy image with the deep learning model specifically comprises:
constructing a deep learning model comprising an encoding module, a bottleneck module and a decoding module, as shown in Fig. 2;
in the encoding module, using a mixed convolution combining standard convolution and hole (dilated) convolution to enlarge the receptive field and better extract shallow features of the image;
in the bottleneck module, using residual blocks to prevent network performance degradation;
and extracting deep features of the image through the decoding module to obtain the residual image.
In the present embodiment, the activation function and the normalization operation used in the deep learning model are the Parametric Rectified Linear Unit (PReLU) and Group Normalization (GN), respectively.
In the present embodiment, the structure of the encoding module is shown in Fig. 3. Unlike U-Net with standard convolutions, the encoding module of this embodiment uses a hybrid convolution that combines standard convolution and hole (dilated) convolution to extract shallow image features more effectively. The advantage of the hybrid convolution is that it achieves a larger receptive field with the same number of parameters. Fig. 3 illustrates the hybrid convolution: the left colored square shows the image area covered by a 3×3 standard convolution around the center pixel, the middle colored square the area covered by a 3×3 hole convolution, and the right colored square the area covered by the 3×3 hybrid convolution.
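A hedged sketch of one such hybrid convolution block is given below; the channel counts, the dilation rate, and the fusion of the two branches by concatenation are assumptions, since this text only states that standard and hole (dilated) convolutions are combined and that PReLU and group normalization are used.

```python
import torch
import torch.nn as nn

class HybridConvBlock(nn.Module):
    """Standard 3x3 convolution + 3x3 dilated convolution, fused by a 1x1 convolution (assumed)."""

    def __init__(self, in_ch, out_ch, dilation=2, groups=8):
        super().__init__()
        # out_ch must be divisible by `groups` for GroupNorm
        self.std_conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.dil_conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                                  padding=dilation, dilation=dilation)
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)
        self.norm = nn.GroupNorm(groups, out_ch)
        self.act = nn.PReLU(out_ch)

    def forward(self, x):
        features = torch.cat([self.std_conv(x), self.dil_conv(x)], dim=1)
        return self.act(self.norm(self.fuse(features)))
```

With the same number of 3×3 weights, the dilated branch covers a 5×5 neighborhood at dilation 2, which is where the enlarged receptive field comes from.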
Generally speaking, increasing the depth of a neural network tends to improve its performance, but a network that is too deep makes weight optimization much harder and easily causes vanishing gradients and performance degradation. To solve this problem, He et al. proposed the well-known residual network ResNet [K.M. He, X.Y. Zhang, S.Q. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.]. ResNet introduces the skip connection shown in Fig. 4, which directly connects the input x_i to the output x_{i+1}. A conventional convolutional neural network maps the relationship between the input x_i and the output x_{i+1} as:
x_{i+1} = F(x_i, w_i)
where w_i is the weight of the i-th layer and F(·) is a nonlinear transformation. Unlike a traditional convolutional network, the building block of ResNet maps the relationship between input and output as:
x_{i+1} = F(x_i, w_i) + h(x_i)
where h(·) denotes the identity skip connection. This skip connection lets ResNet learn the residual between input and output rather than a direct mapping between them, thereby improving learning performance. The algorithm of this embodiment likewise uses an encoder-decoder structure to learn a residual image instead of directly learning the mapping from the foggy input image to the fog-free output image. In addition, this embodiment embeds seven residual blocks in the bottleneck structure to prevent performance degradation caused by vanishing gradients.
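A minimal sketch of one such residual block, consistent with the mapping x_{i+1} = F(x_i, w_i) + h(x_i) above with h(·) taken as the identity (the layer composition and the use of PReLU and group normalization are assumptions in line with the rest of this description):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x, with an identity skip connection h(x) = x."""

    def __init__(self, channels, groups=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GroupNorm(groups, channels),
            nn.PReLU(channels),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GroupNorm(groups, channels),
        )
        self.act = nn.PReLU(channels)

    def forward(self, x):
        return self.act(self.body(x) + x)   # skip connection: the block learns the residual F(x)
```

In the bottleneck described here, seven such blocks would simply be stacked in sequence, e.g. nn.Sequential(*[ResidualBlock(c) for _ in range(7)]).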
In this embodiment, since the loss function of the network determines the efficiency and effectiveness of training, an appropriate loss function can improve network performance. The loss function adopted by the deep learning model during training is:
L = L_MSE + λ·L_SV
where λ is a constant coefficient; the first term L_MSE is the mean square error, which measures the loss between the residual image estimated by the deep learning model of this embodiment and the actual residual image; the second term L_SV is the absolute-difference loss between the S and V components of the image.
In the present embodiment, the constant coefficient has a value of 0.1.
In the present embodiment, the mean square error L_MSE is calculated as follows:
L_MSE = (1/m) · Σ_{i=1}^{m} ||C(I_i) - r_i||²
where m is the total number of image pixels, i is the pixel index, I_i is the vector formed by the RGB components of the i-th pixel of the input foggy image, C(I_i) is the vector formed by the RGB components of the i-th pixel of the residual image generated from image I by the deep learning model, and r_i is the vector formed by the RGB components of the i-th pixel of the actual residual image between I_i and the ideal fog-free image.
In this embodiment, the absolute-difference loss L_SV between the S and V components of the image is calculated as follows:
L_SV = (1/(W·H)) · Σ_{i=1}^{W} Σ_{j=1}^{H} |S(i,j) - V(i,j)|
where W and H are the width and height of the image, S and V are the saturation and value components in HSV color space, and (i,j) denotes the position of a pixel in the image.
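A hedged PyTorch sketch of this loss is given below; it assumes that L_SV is evaluated on the defogging result reconstructed from the predicted residual, that λ = 0.1 as stated above, and that S and V are obtained from RGB via the usual HSV conversion (the exact tensors on which L_SV is computed are not fully specified in this text).

```python
import torch

def sv_absolute_difference(rgb):
    """Mean |S - V| of an RGB batch (B x 3 x H x W, values in [0, 1]),
    using the HSV definitions V = max(R, G, B) and S = (max - min) / max."""
    v, _ = rgb.max(dim=1)
    m, _ = rgb.min(dim=1)
    s = (v - m) / v.clamp(min=1e-6)
    return (s - v).abs().mean()

def defogging_loss(pred_residual, true_residual, hazy, lam=0.1):
    """L = L_MSE + lambda * L_SV, with lambda = 0.1 as in the description."""
    # Mean squared error between the estimated and the actual residual image
    # (proportional to the per-pixel formulation given above).
    l_mse = torch.mean((pred_residual - true_residual) ** 2)
    # Absolute S-V difference of the reconstructed defogging result (assumed).
    dehazed = torch.clamp(hazy + pred_residual, 0.0, 1.0)
    l_sv = sv_absolute_difference(dehazed)
    return l_mse + lam * l_sv
```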
Preferably, the design of the second term L_SV derives from an observation made by Zhu et al. on foggy and fog-free images [Q.S. Zhu, J.M. Mai, L. Shao, A fast single image haze removal algorithm using color attenuation prior, IEEE Transactions on Image Processing 24(11) (2015) 3522-3533.]: a foggy image generally has a significantly larger absolute difference between saturation and brightness in HSV color space than a fog-free image. To verify this observation, Fig. 5 gives two sample images, one indoor and one outdoor. Panels (a)-(d) of Fig. 5 show the original foggy image, its S (saturation) component in HSV color space, its V (value) component, and the absolute difference image between the S and V components, respectively. Similarly, panels (e)-(h) show the original fog-free image, its S component, its V component, and the absolute difference image between S and V. Comparing (d) and (h) in Fig. 5 shows that the absolute difference image between the S and V components is generally whiter in the foggy case than in the fog-free case. The average gray values of the two images in the first row of Fig. 5(d) and Fig. 5(h) are 201.6 and 153.9, respectively, and those of the two images in the second row are 131.1 and 73.7. These data show that the mean absolute difference between the S and V components is higher under foggy conditions than under fog-free conditions. From this phenomenon it is inferred that the smaller the mean absolute difference between the S and V components of the defogging result produced by the algorithm of this embodiment, the better the defogging effect.
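The observation itself is easy to reproduce. A small OpenCV sketch (the file names are placeholders) that computes the mean absolute difference between the S and V channels of an image on the same 0-255 scale as the gray values quoted above:

```python
import cv2
import numpy as np

def mean_sv_difference(path):
    """Mean absolute difference between the S and V channels of an image (0-255 scale)."""
    bgr = cv2.imread(path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    s = hsv[:, :, 1].astype(np.float64)
    v = hsv[:, :, 2].astype(np.float64)
    return float(np.abs(s - v).mean())

# Foggy images are expected to give noticeably larger values than fog-free ones:
# print(mean_sv_difference("foggy.png"), mean_sv_difference("fog_free.png"))
```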
To verify the defogging performance of the algorithm, the experiments use the Synthetic Objective Testing Set (SOTS), a subset of the public RESIDE image dataset, together with 6 representative real foggy images from different scenes. SOTS consists of 550 indoor images (50 original fog-free images and their 500 corresponding synthesized foggy images) and 1000 outdoor images (500 original fog-free images and their 500 corresponding synthesized foggy images).
To quantitatively evaluate the image defogging effect, the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) are used as quantitative measures of the defogging results on the RESIDE dataset. Computing PSNR and SSIM requires an ideal fog-free image as a reference, so these measures suit synthetic datasets that include fog-free images but not real foggy images without corresponding ideal fog-free references. For this reason, the present embodiment uses the information entropy E to quantitatively evaluate the defogging effect on the real foggy images.
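These measures are standard and readily available off the shelf; a hedged sketch using scikit-image and NumPy (the patent does not specify any particular implementation) is given below.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim(dehazed, reference):
    """PSNR and SSIM of a defogging result against its ideal fog-free reference
    (both uint8 arrays of shape H x W x 3)."""
    psnr = peak_signal_noise_ratio(reference, dehazed, data_range=255)
    ssim = structural_similarity(reference, dehazed, channel_axis=-1, data_range=255)
    return psnr, ssim

def information_entropy(gray):
    """Shannon entropy E of an 8-bit grayscale image, used when no reference image exists."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```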
The algorithm of this embodiment is compared quantitatively with four existing deep-learning-based image defogging algorithms: DeHazeNet, AOD-Net, MSCNN, and GFN. Table 1 lists, for each method, the average PSNR and SSIM values of the defogging results over all indoor and all outdoor foggy images of the SOTS synthetic dataset. Table 2 lists the information entropy E of the defogging results of each method on the six real foggy images, together with the mean value. Higher PSNR, SSIM, and E values indicate a better defogging effect. As shown in Tables 1 and 2, the PSNR and SSIM values of the proposed algorithm are clearly higher than those of the other algorithms, so its defogging effect is clearly better. Its average information entropy on the 6 real foggy images is also higher than that of the other algorithms, demonstrating a good defogging effect.
TABLE 1
[Average PSNR and SSIM values of each method's defogging results on the indoor and outdoor images of the SOTS dataset; the values are given as an image in the original document.]
TABLE 2
Image   DeHazeNet   AOD-Net   MSCNN    GFN      Algorithm of the invention
1       7.5597      7.5192    7.3059   7.7201   7.6816
2       7.4881      7.6198    7.4877   7.8138   7.7271
3       7.6463      7.4383    7.6977   7.5803   7.5593
4       7.7155      7.3306    7.5690   7.4284   7.7293
5       7.6823      7.5210    7.4915   7.8295   7.8389
6       6.7900      6.7953    6.8408   7.2526   7.1165
Mean    7.4803      7.3707    7.3988   7.6041   7.6088
To further qualitatively compare the image defogging effects of the different methods, Figs. 6 and 7 show the defogging results of the five methods (MSCNN, DeHazeNet, AOD-Net, GFN, and the algorithm of the invention) applied to 8 representative indoor and outdoor synthesized foggy images, respectively, and Fig. 8 shows their defogging results on the 6 real foggy images.
As can be seen from Figs. 6 and 7, the defogging results obtained by MSCNN and AOD-Net leave significant haze. The results of DeHazeNet contain less haze than those of MSCNN and AOD-Net. GFN largely removes the fog, but the background of its defogging results is unnatural, e.g. the sky region in the last image of Fig. 7. The algorithm of the invention achieves the best defogging effect, with results very similar to the ideal fog-free images. Fig. 8 shows the defogging results on the 6 real foggy images. It can be seen that DeHazeNet and GFN leave less haze, but their defogging results are darker in color, while MSCNN and AOD-Net leave significant haze. The algorithm of the invention achieves a better compromise between removing the fog and keeping the colors of the defogging result natural.
The foregoing is directed to preferred embodiments of the present invention; other and further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow. Any simple modification, equivalent change, or refinement of the above embodiments made according to the technical essence of the present invention falls within the protection scope of the technical solution of the present invention.

Claims (8)

1. An image defogging method based on U-Net, characterized in that the residual between a fog-free image and a foggy image is estimated by a deep learning model, and the estimated residual image is then added to the input foggy image to obtain the defogging result.
2. The U-Net based image defogging method according to claim 1, wherein estimating the residual between the fog-free image and the foggy image with the deep learning model specifically comprises:
constructing a deep learning model comprising an encoding module, a bottleneck module and a decoding module;
in the encoding module, using a mixed convolution combining standard convolution and hole (dilated) convolution to enlarge the receptive field and better extract shallow features of the image;
in the bottleneck module, using residual blocks to prevent network performance degradation;
and extracting deep features of the image through the decoding module to obtain the residual image.
3. The U-Net based image defogging method according to claim 1, wherein the activation function and the normalization operation used in the deep learning model are the Parametric Rectified Linear Unit (PReLU) and Group Normalization (GN), respectively.
4. The U-Net based image defogging method according to claim 2, wherein the bottleneck module comprises seven residual modules, each of which adopts the structure of the residual network ResNet.
5. The U-Net based image defogging method according to claim 1, wherein the loss function adopted by the deep learning model during training is:
L = L_MSE + λ·L_SV
where λ is a constant coefficient; the first term L_MSE is the mean square error, which measures the loss between the residual image estimated by the model and the actual residual image; the second term L_SV is the absolute-difference loss between the S and V components of the image.
6. The U-Net based image defogging method according to claim 5, wherein the constant coefficient has a value of 0.1.
7. The U-Net based image defogging method according to claim 5, wherein the mean square error L_MSE is calculated as follows:
L_MSE = (1/m) · Σ_{i=1}^{m} ||C(I_i) - r_i||²
where m is the total number of image pixels, i is the pixel index, I_i is the vector formed by the RGB components of the i-th pixel of the input foggy image, C(I_i) is the vector formed by the RGB components of the i-th pixel of the residual image generated from image I by the deep learning model, and r_i is the vector formed by the RGB components of the i-th pixel of the actual residual image between I_i and the ideal fog-free image.
8. The U-Net based image defogging method according to claim 5, wherein the absolute-difference loss L_SV between the S and V components of the image is calculated as follows:
L_SV = (1/(W·H)) · Σ_{i=1}^{W} Σ_{j=1}^{H} |S(i,j) - V(i,j)|
where W and H are the width and height of the image, S and V are the saturation and value components in HSV color space, and (i,j) denotes the position of a pixel in the image.
CN202010868578.4A 2020-08-26 2020-08-26 Image defogging method based on U-Net Active CN112070691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010868578.4A CN112070691B (en) 2020-08-26 2020-08-26 Image defogging method based on U-Net

Publications (2)

Publication Number Publication Date
CN112070691A true CN112070691A (en) 2020-12-11
CN112070691B CN112070691B (en) 2024-02-06

Family

ID=73659800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010868578.4A Active CN112070691B (en) 2020-08-26 2020-08-26 Image defogging method based on U-Net

Country Status (1)

Country Link
CN (1) CN112070691B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146810A (en) * 2018-08-08 2019-01-04 国网浙江省电力有限公司信息通信分公司 A kind of image defogging method based on end-to-end deep learning
CN110378849A (en) * 2019-07-09 2019-10-25 闽江学院 Image defogging rain removing method based on depth residual error network
CN110570371A (en) * 2019-08-28 2019-12-13 天津大学 image defogging method based on multi-scale residual error learning

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114049274A (en) * 2021-11-13 2022-02-15 哈尔滨理工大学 Defogging method for single image
CN114529464A (en) * 2022-01-14 2022-05-24 电子科技大学 Underwater image recovery method based on deep learning
CN116129379A (en) * 2022-12-28 2023-05-16 国网安徽省电力有限公司芜湖供电公司 Lane line detection method in foggy environment
CN116129379B (en) * 2022-12-28 2023-11-07 国网安徽省电力有限公司芜湖供电公司 Lane line detection method in foggy environment

Also Published As

Publication number Publication date
CN112070691B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN112070691B (en) Image defogging method based on U-Net
CN110378849B (en) Image defogging and rain removing method based on depth residual error network
CN107292830B (en) Low-illumination image enhancement and evaluation method
CN108269244B (en) Image defogging system based on deep learning and prior constraint
CN108596853A (en) Underwater picture Enhancement Method based on bias light statistical model and transmission map optimization
CN109447917B (en) Remote sensing image haze eliminating method based on content, characteristics and multi-scale model
CN111709888B (en) Aerial image defogging method based on improved generation countermeasure network
CN109523480A (en) A kind of defogging method, device, computer storage medium and the terminal of sea fog image
CN107958465A (en) A kind of single image to the fog method based on depth convolutional neural networks
CN109410171A (en) A kind of target conspicuousness detection method for rainy day image
CN105447833A (en) Foggy weather image reconstruction method based on polarization
CN106296618A (en) A kind of color image defogging method based on Gaussian function weighted histogram regulation
CN111861896A (en) UUV-oriented underwater image color compensation and recovery method
CN106846258A (en) A kind of single image to the fog method based on weighted least squares filtering
CN105894507B (en) Image quality evaluating method based on amount of image information natural scene statistical nature
Bansal et al. A review of image restoration based image defogging algorithms
CN112070688A (en) Single image defogging method for generating countermeasure network based on context guidance
Huang et al. Haze removal algorithm for optical remote sensing image based on multi-scale model and histogram characteristic
CN109118450A (en) A kind of low-quality images Enhancement Method under the conditions of dust and sand weather
CN107451975B (en) A kind of view-based access control model weights similar picture quality clarification method
CN111598793A (en) Method and system for defogging image of power transmission line and storage medium
CN110349113B (en) Adaptive image defogging method based on dark primary color priori improvement
CN109685735B (en) Single picture defogging method based on fog layer smoothing prior
CN104915933A (en) Foggy day image enhancing method based on APSO-BP coupling algorithm
CN113487509B (en) Remote sensing image fog removal method based on pixel clustering and transmissivity fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant