CN111161159B - Image defogging method and device based on combination of priori knowledge and deep learning - Google Patents


Info

Publication number: CN111161159B
Application number: CN201911226437.6A
Authority: CN (China)
Prior art keywords: image, layer, base layer, convolution, layer image
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN111161159A
Inventors: 郑超兵, 伍世虔, 徐望明, 方顺, 陈思摇
Current and original assignee: Wuhan University of Science and Engineering (WUSE)
Application filed by Wuhan University of Science and Engineering (WUSE); priority to CN201911226437.6A
Publication of application: CN111161159A; application granted; publication of grant: CN111161159B

Classifications

    • G06T7/90 — Image data processing; image analysis; determination of colour characteristics
    • G06N3/045 — Computing arrangements based on biological models; neural networks; architecture; combinations of networks
    • G06T5/00 — Image enhancement or restoration
    • G06T5/70 — Image enhancement or restoration; denoising; smoothing


Abstract

The invention relates to an image defogging method and device based on the combination of priori knowledge and deep learning. The method comprises the following steps: decomposing an input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b; processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b}; constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t; and, based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e to obtain a defogged image J.

Description

Image defogging method and device based on combination of priori knowledge and deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method and device based on combination of priori knowledge and deep learning.
Background
In foggy weather, particles floating in the air, such as fog or dust, cause images to fade and blur: contrast and softness are reduced, image quality is severely degraded, and applications such as video monitoring and analysis, target recognition, urban traffic, aerial photography, and military and national defense are limited. Clear processing of foggy images therefore bears directly on daily life and is of great practical significance to people's production and living.
Existing defogging algorithms fall mainly into three categories: non-model-based defogging algorithms, model-based defogging algorithms, and deep-learning-based defogging algorithms. Non-model-based defogging algorithms improve the image mainly by directly stretching its contrast. Common methods include histogram equalization, homomorphic filtering, algorithms based on the Retinex model, and algorithms based on improvements of the Retinex model. These methods perform defogging according to the principles of optical imaging, so the contrast among the image colors becomes more balanced and the image looks softer, but the contrast of the resulting image is not effectively enhanced; such methods weaken dark or bright regions of the original image and blur its key details.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image defogging method and device based on combination of priori knowledge and deep learning.
In a first aspect, the present invention provides an image defogging method based on the combination of priori knowledge and deep learning, comprising:
decomposing an input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e to obtain a defogged image J.
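Taken together, these four steps can be sketched end to end as follows. This is a minimal illustrative sketch, not the patent's code: the helper names decompose_wgif, estimate_atmospheric_light and predict_transmission are hypothetical stand-ins for the three preparatory steps (sketches of each are given in the detailed description below), and eta is the sky-region clamping constant introduced later.

```python
import numpy as np

def dehaze(Z: np.ndarray, eta: float = 6.0) -> np.ndarray:
    """Illustrative end-to-end pipeline of the claimed method.

    Z: H x W x 3 foggy image in [0, 255]. decompose_wgif,
    estimate_atmospheric_light and predict_transmission are
    hypothetical helpers standing in for the claimed steps.
    """
    Z_b, Z_e = decompose_wgif(Z)              # weighted guided filter split
    A_c = estimate_atmospheric_light(Z_b)     # quadtree search on base layer
    t = predict_transmission(Z_b)             # deep CNN on base layer (H x W)
    t_tilde = np.maximum(t, 1.0 / eta)        # clamp: avoid amplifying sky noise
    J = (Z_b - A_c) / t_tilde[..., None] + A_c + Z_e
    return np.clip(J, 0, 255)
```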
Preferably, decomposing the original foggy image Z with the weighted guided filter to obtain the base layer image Z_b specifically comprises the following steps:
acquiring the pixel gray value Z(x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at the point (x, y) of the original foggy image Z with the following formulas:

$$a_p(x,y)=\frac{\sigma_{Z,\rho}^{2}(x,y)}{\sigma_{Z,\rho}^{2}(x,y)+\lambda/\Gamma_Y(x,y)},\qquad b_p(x,y)=\mu_{Z,\rho}(x,y)\,\big(1-a_p(x,y)\big),$$

wherein μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window of radius ρ centered at point (x, y) in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in the window of radius ρ centered at point (x, y) in the original foggy image Z; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y), with Y = max{Z_r, Z_g, Z_b}, where Z_r, Z_g, Z_b are respectively the R, G, B values at point (x, y) of the foggy image Z; and λ is a constant greater than 1;
obtaining the base layer image Z_b with the following formula:

$$Z_b(x,y)=a_p(x,y)\,Z(x,y)+b_p(x,y),$$

wherein Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y).
Preferably, processing the base layer image Z_b with the quadtree search method to obtain the global atmospheric light component image A_c specifically comprises the following steps:
Step A: equally divide the base layer image Z_b into four rectangular regions Z_{b-i} (i ∈ {1, 2, 3, 4}), the length and width of each Z_{b-i} being 1/2 of the length and width of Z_b;
Step B: define the score of each rectangular region as the mean of the pixel gray values in the region minus the standard deviation of the pixel gray values in the region;
Step C: select the rectangular region with the highest score and further divide it into four rectangular regions;
Step D: repeat steps B and C, iteratively updating the highest-scoring rectangular region n times to obtain a final subdivided region Z_{b-end};
obtain the atmospheric light component image A_c with the following formula:

$$A_c\ (c\in\{r,g,b\})=\operatorname*{arg\,min}_{(x,y)\in Z_{b\text{-}end}}\big\|\big(Z_{b\text{-}end\text{-}r}(x,y),\,Z_{b\text{-}end\text{-}g}(x,y),\,Z_{b\text{-}end\text{-}b}(x,y)\big)-(255,255,255)\big\|,$$

wherein (Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) is the color vector at point (x, y) of the final subdivided region Z_{b-end} and (255, 255, 255) is the pure-white vector; that is, A_c is taken as the color vector of the pixel of Z_{b-end} closest to pure white.
Preferably, constructing the deep convolutional neural network and processing the base layer image Z_b with it to obtain the transmittance image t specifically comprises the following steps:
down-sample the base layer image Z_b with first convolutional layers to obtain a low-resolution base layer image Z_b' and then extract low-level features, wherein the formula for down-sampling the base layer image Z_b with the first convolutional layers is:

$$F_c^{i}[x',y']=\sigma\Big(b_c^{i}+\sum_{x,y,c'}w_{c,c'}^{i}\big[x-sx',\,y-sy'\big]\,F_{c'}^{i-1}[x,y]\Big),$$

wherein F_c^i[x', y'] is the low-level feature of the i-th of the first convolutional layers on the low-resolution base layer image Z_b' under channel index c (with F^0 = Z_b); x is the abscissa and y the ordinate of the base layer image Z_b; x' is the abscissa and y' the ordinate of the low-resolution base layer image Z_b'; w_{c,c'}^i is the convolution weight array of the i-th of the first convolutional layers under channel indices c and c'; b_c^i is the bias vector of the i-th of the first convolutional layers under channel index c; σ(·) = max(·, 0) denotes the ReLU activation function, and zero padding is used as the boundary condition for all of the first convolutional layers; s is the stride of the convolution kernel of the first convolutional layers;
input the obtained low-level features F_c^i into second convolutional layers with n_L = 2 layers to extract local features L, wherein the convolution kernels of the second convolutional layers have size 3 × 3 and stride 1;
input the low-level features F_c^i into third convolutional layers with n_{G1} = 2 layers, convolution kernel size 3 × 3 and stride 2, and then into fully connected layers with n_{G2} = 2 layers, obtaining global features G;
add the local features L and the global features G and input the sum into an activation function, obtaining the hybrid feature map F_L = σ(L + G) corresponding to the low-level features F_c^i;
apply convolution processing to the hybrid feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features F_c^i, up-sample it, and output the transmittance image t.
Preferably, based on the atmospheric scattering model, the original foggy image Z is restored from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e, and the formula giving the defogged image J is:

$$J(x,y)=\frac{Z_b(x,y)-A_c}{\tilde t(x,y)}+A_c+Z_e(x,y),$$

wherein J(x, y) is the pixel gray value at point (x, y) of the defogged image J, Z_e(x, y) is the pixel gray value at point (x, y) of the detail layer image Z_e, Z_b(x, y) is the pixel gray value at point (x, y) of the base layer image Z_b, t(x, y) is the transmittance at point (x, y) of the transmittance image t, and

$$\tilde t(x,y)=\max\{t(x,y),\,1/\eta\},$$

where η is a constant greater than zero.
Preferably, after applying convolution processing to the hybrid feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features F_c^i, up-sampling it and outputting the transmittance image t, the method further comprises:
constructing a loss function as follows:

$$L=L_r+w_c L_c,$$

wherein L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as

$$L_r=\sum_{x,y}\sum_{c\in\{r,g,b\}}\big(J_c(x,y)-Z_c(x,y)\big)^{2},$$

and L_c is expressed as

$$L_c=\sum_{x,y}\angle\big(J(x,y),\,Z(x,y)\big),$$

where J is the defogged image, Z is the foggy image, c ∈ {r, g, b} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y);
performing parameter-tuning processing on the deep convolutional neural network with the loss function.
In a second aspect, the invention provides an image defogging device based on combination of priori knowledge and deep learning, wherein the device comprises a memory and a processor;
the memory for storing a computer program;
the processor is configured to, when executing the computer program, implement the image defogging method based on the combination of the priori knowledge and the deep learning.
The image defogging method and device based on the combination of priori knowledge and deep learning have the following beneficial effects: the foggy image is decomposed with a weighted guided filter into a detail layer image and a base layer image; the base layer image is then processed with a quadtree search method and a deep neural network to obtain a global atmospheric light component image and a transmittance image; finally, the foggy image is restored from the global atmospheric light component image, the transmittance image and the detail layer image to obtain the defogged image. Because the global atmospheric light component image and the transmittance image are estimated or calculated from the base layer image, amplification of image noise is avoided and the foggy image can be defogged better.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an image defogging method based on combination of priori knowledge and deep learning according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, the single-image defogging method based on the combination of priori knowledge and deep learning described in the present invention comprises the following steps:
decomposing an input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c;
constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e to obtain a defogged image J.
Specifically, decomposing the input original foggy image Z with the weighted guided filter to obtain the detail layer image Z_e and the base layer image Z_b comprises the following steps:
acquiring the pixel gray value Z(x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at the point (x, y) of the original foggy image Z with the following formulas:

$$a_p(x,y)=\frac{\sigma_{Z,\rho}^{2}(x,y)}{\sigma_{Z,\rho}^{2}(x,y)+\lambda/\Gamma_Y(x,y)},\qquad b_p(x,y)=\mu_{Z,\rho}(x,y)\,\big(1-a_p(x,y)\big),$$

wherein μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window of radius ρ centered at point (x, y) in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in that window; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y); and λ is a constant greater than 1;
obtaining the base layer image Z_b with the following formula:

$$Z_b(x,y)=a_p(x,y)\,Z(x,y)+b_p(x,y),$$

wherein Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y), Z(x, y) is the pixel gray value at any point (x, y) of the original foggy image Z, and a_p(x, y) and b_p(x, y) are the weighted filter coefficients at point (x, y) of the original foggy image Z. The detail layer image Z_e is then the residual Z_e(x, y) = Z(x, y) − Z_b(x, y).
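As an illustration, a minimal NumPy sketch of this decomposition follows, assuming the coefficient form given above. The edge-aware weight Γ_Y is approximated here by a normalized local variance, which is an assumption rather than the patent's exact definition; rho and lam play the roles of ρ and λ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose_wgif(Z: np.ndarray, rho: int = 8, lam: float = 2.0):
    """Split a grayscale image (apply per channel for color) into base
    and detail layers with a weighted-guided-filter-style sketch."""
    Z = Z.astype(np.float64)
    size = 2 * rho + 1
    mu = uniform_filter(Z, size)                                 # mu_{Z,rho}
    var = np.maximum(uniform_filter(Z * Z, size) - mu * mu, 0.0) # sigma^2_{Z,rho}

    # Edge-aware weight Gamma_Y, approximated by normalized local
    # variance (an assumption): large on structure, small on flat areas.
    gamma_Y = (var + 1e-6) / (var.mean() + 1e-6)

    a = var / (var + lam / gamma_Y)      # a_p(x, y)
    b = mu * (1.0 - a)                   # b_p(x, y)
    Z_b = a * Z + b                      # base layer
    Z_e = Z - Z_b                        # detail layer (residual)
    return Z_b, Z_e
```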
Specifically, processing the base layer image Z_b with the quadtree search method to obtain the global atmospheric light component image A_c comprises the following steps:
Step A: equally divide the base layer image Z_b into four rectangular regions Z_{b-i} (i ∈ {1, 2, 3, 4}), the length and width of each Z_{b-i} being 1/2 of the length and width of Z_b;
Step B: define the score of each rectangular region as the mean of the pixel gray values in the region minus the standard deviation of the pixel gray values in the region;
Step C: select the rectangular region with the highest score and further divide it into four rectangular regions;
Step D: repeat steps B and C, iteratively updating the highest-scoring rectangular region n times to obtain a final subdivided region Z_{b-end};
obtain the atmospheric light component image A_c with the following formula:

$$A_c\ (c\in\{r,g,b\})=\operatorname*{arg\,min}_{(x,y)\in Z_{b\text{-}end}}\big\|\big(Z_{b\text{-}end\text{-}r}(x,y),\,Z_{b\text{-}end\text{-}g}(x,y),\,Z_{b\text{-}end\text{-}b}(x,y)\big)-(255,255,255)\big\|,$$

wherein (Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) is the color vector of a pixel of the final subdivided region Z_{b-end} and (255, 255, 255) is the pure-white vector; that is, A_c is taken as the color vector of the pixel of Z_{b-end} closest to pure white.
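A sketch of this quadtree search follows; the scoring (mean minus standard deviation) and the closest-to-white selection implement the steps above, with the iteration count n left as a free parameter.

```python
import numpy as np

def estimate_atmospheric_light(Z_b: np.ndarray, n: int = 5) -> np.ndarray:
    """Quadtree search on the base layer image (H x W x 3), as described above."""
    region = Z_b.astype(np.float64)
    for _ in range(n):
        h, w = region.shape[:2]
        if h < 2 or w < 2:
            break
        quads = [region[:h // 2, :w // 2], region[:h // 2, w // 2:],
                 region[h // 2:, :w // 2], region[h // 2:, w // 2:]]
        # Score = mean gray value minus standard deviation of gray values.
        scores = [q.mean() - q.std() for q in quads]
        region = quads[int(np.argmax(scores))]
    # Pick the pixel whose color vector is closest to pure white (255, 255, 255).
    pixels = region.reshape(-1, 3)
    dist = np.linalg.norm(pixels - np.array([255.0, 255.0, 255.0]), axis=1)
    return pixels[int(np.argmin(dist))]            # A_c, c in {r, g, b}
```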
Specifically, constructing the deep convolutional neural network and processing the base layer image Z_b with it to obtain the transmittance image t comprises the following steps:
down-sample the base layer image Z_b with first convolutional layers to obtain a low-resolution base layer image Z_b' and then extract low-level features, wherein the formula for down-sampling the base layer image Z_b with the first convolutional layers is:

$$F_c^{i}[x',y']=\sigma\Big(b_c^{i}+\sum_{x,y,c'}w_{c,c'}^{i}\big[x-sx',\,y-sy'\big]\,F_{c'}^{i-1}[x,y]\Big),$$

wherein F_c^i[x', y'] is the low-level feature of the i-th convolutional layer on the low-resolution base layer image Z_b' under channel index c (with F^0 = Z_b); x is the abscissa and y the ordinate of the base layer image Z_b; x' is the abscissa and y' the ordinate of the low-resolution base layer image Z_b'; w_{c,c'}^i is the convolution weight array of the i-th convolutional layer under channel indices c and c'; b_c^i is the bias vector of the i-th convolutional layer under channel index c; σ(·) = max(·, 0) denotes the ReLU activation function, and zero padding is used as the boundary condition for all convolutional layers; s is the stride of the convolution kernel;
input the obtained low-level features F_c^i into second convolutional layers with n_L = 2 layers to extract local features L, wherein the convolution kernels of the second convolutional layers have size 3 × 3 and stride 1;
input the low-level features F_c^i into third convolutional layers with n_{G1} = 2 layers, convolution kernel size 3 × 3 and stride 2, and then into fully connected layers with n_{G2} = 2 layers, obtaining global features G;
add the local features L and the global features G and input the sum into an activation function, obtaining the hybrid feature map F_L = σ(L + G) corresponding to the low-level features F_c^i;
apply convolution processing to the hybrid feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features F_c^i, up-sample it, and output the transmittance image t(x, y).
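The following PyTorch sketch shows a network of this shape: strided "first" convolutional layers for low-level features, a local path (two 3 × 3, stride-1 convolutions), a global path (two stride-2 convolutions followed by two fully connected layers), fusion by addition and ReLU, and a final 1 × 1 convolution up-sampled to the transmittance map. Channel widths, the pooling before the fully connected layers, and the sigmoid output range are assumptions; the patent fixes only the kernel sizes, strides and layer counts stated above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransmissionNet(nn.Module):
    """Sketch of the transmittance estimator described above (widths assumed)."""
    def __init__(self, ch: int = 16):
        super().__init__()
        # "First" convolutional layers: stride-2 down-sampling + ReLU.
        self.low = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        # "Second" layers: n_L = 2 convs, 3x3 kernels, stride 1 -> local features L.
        self.local = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=1, padding=1))
        # "Third" layers: n_G1 = 2 convs (3x3, stride 2), then n_G2 = 2 FC layers -> G.
        self.glob_conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.glob_fc = nn.Sequential(
            nn.Linear(ch, ch), nn.ReLU(), nn.Linear(ch, ch))
        self.head = nn.Conv2d(ch, 1, 1)  # preliminary transmittance feature map

    def forward(self, z_b: torch.Tensor) -> torch.Tensor:
        f = self.low(z_b)                               # low-level features F^i_c
        L = self.local(f)                               # local features L
        G = self.glob_fc(self.glob_conv(f).flatten(1))  # global features G
        fused = F.relu(L + G[:, :, None, None])         # F_L = sigma(L + G)
        t = torch.sigmoid(self.head(fused))             # transmittance in (0, 1)
        # Up-sample back to the input resolution.
        return F.interpolate(t, size=z_b.shape[-2:], mode='bilinear',
                             align_corners=False)
```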
Specifically, based on the atmospheric scattering model, the original foggy image Z is restored from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e, and the formula giving the defogged image J is:

$$J(x,y)=\frac{Z_b(x,y)-A_c}{\tilde t(x,y)}+A_c+Z_e(x,y),$$

wherein J(x, y) is the pixel gray value at point (x, y) of the defogged image J, Z_e(x, y) is the pixel gray value at point (x, y) of the detail layer image Z_e, Z_b(x, y) is the pixel gray value at point (x, y) of the base layer image Z_b, t(x, y) is the transmittance output at point (x, y), and

$$\tilde t(x,y)=\max\{t(x,y),\,1/\eta\},$$

where η is a constant greater than zero.
In this step, restoration of the foggy image is carried out according to the atmospheric scattering model Z(x, y) = t(x, y) J(x, y) + A_c (1 − t(x, y)), where J(x, y) is the defogged image and (x, y) is the spatial coordinate of a pixel point. The restored image obtained directly from this model can be expressed as

$$J(x,y)=\frac{Z(x,y)-A_c}{\max\{t(x,y),\,t_0\}}+A_c,$$

wherein t_0 is a parameter that ensures the processing effect in dense-fog regions. Writing Z(x, y) = J(x, y) + n(x, y), with J the noiseless image and n the noise, direct restoration gives

$$\hat J(x,y)=\frac{J(x,y)+n(x,y)-A_c}{t(x,y)}+A_c=\Big(\frac{J(x,y)-A_c}{t(x,y)}+A_c\Big)+\frac{n(x,y)}{t(x,y)},$$

that is, the noise is amplified by the factor 1/t(x, y). Considering that the noise is mainly contained in the detail layer image Z_e, the present invention restores the image as

$$J(x,y)=\frac{Z_b(x,y)-A_c}{t(x,y)}+A_c+Z_e(x,y),$$

the atmospheric light component A_c being obtained by step 2.1 and the transmittance t by step 2.2. In order to reduce the effect of noise on the restored image, the transmittance is clamped as

$$\tilde t(x,y)=\max\{t(x,y),\,1/\eta\},$$

where η is a constant; η = 6 in this embodiment. Experiments show that, with t(x, y) ∈ [0, 1], points where t(x, y) < 1/η are sky-region pixel points, and clamping the denominator to 1/η prevents the noise of the sky region from being amplified.
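A sketch of this restoration step, directly implementing the clamped-transmittance formula above:

```python
import numpy as np

def restore(Z_b: np.ndarray, Z_e: np.ndarray, A_c: np.ndarray,
            t: np.ndarray, eta: float = 6.0) -> np.ndarray:
    """Restore the defogged image J from base layer, detail layer, A_c and t.

    Z_b, Z_e: H x W x 3 base and detail layers; A_c: length-3 atmospheric
    light vector; t: H x W transmittance map in [0, 1]; eta = 6 as in the
    embodiment. The clamp max(t, 1/eta) keeps sky noise from being amplified.
    """
    t_tilde = np.maximum(t, 1.0 / eta)[..., None]   # broadcast over channels
    J = (Z_b - A_c) / t_tilde + A_c + Z_e
    return np.clip(J, 0, 255)
```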
Specifically, after applying convolution processing to the hybrid feature map F_L to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features F_c^i, up-sampling it and outputting the transmittance image t, the method further comprises:
constructing a loss function as follows:

$$L=L_r+w_c L_c,$$

wherein L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as

$$L_r=\sum_{x,y}\sum_{c\in\{r,g,b\}}\big(J_c(x,y)-Z_c(x,y)\big)^{2},$$

and L_c is expressed as

$$L_c=\sum_{x,y}\angle\big(J(x,y),\,Z(x,y)\big),$$

where J is the defogged image, Z is the foggy image, c ∈ {r, g, b} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y). Although the similarity of the original image and the defogged image can be measured by the reconstruction error, consistency of the angles of their color vectors cannot be ensured; the color loss function is therefore added to ensure that the color vectors of the same pixel point before and after defogging keep consistent angles.
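A PyTorch sketch of this loss, assuming the squared-error and vector-angle forms reconstructed above; the weight w_c is a free hyperparameter.

```python
import torch

def dehaze_loss(J: torch.Tensor, Z: torch.Tensor, w_c: float = 0.5) -> torch.Tensor:
    """L = L_r + w_c * L_c for batches of B x 3 x H x W images in [0, 1].

    L_r: per-pixel squared reconstruction error (averaged over pixels).
    L_c: angle between the 3-D color vectors of J and Z at each pixel.
    """
    L_r = ((J - Z) ** 2).sum(dim=1).mean()
    cos = torch.nn.functional.cosine_similarity(J, Z, dim=1, eps=1e-6)
    L_c = torch.acos(cos.clamp(-1 + 1e-6, 1 - 1e-6)).mean()
    return L_r + w_c * L_c
```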
The image defogging method and device based on the combination of priori knowledge and deep learning have the following beneficial effects: the foggy image is decomposed with a weighted guided filter into a detail layer image and a base layer image; the base layer image is then processed with a quadtree search method and a deep neural network to obtain a global atmospheric light component image and a transmittance image; finally, the foggy image is restored from the global atmospheric light component image, the transmittance image and the detail layer image to obtain the defogged image. Because the global atmospheric light component image and the transmittance image are estimated or calculated from the base layer image, amplification of image noise is avoided and the foggy image can be defogged better. In addition, a loss function is calculated with the defogged image for feedback parameter tuning, and a color loss term is added to the loss function to improve the robustness of the algorithm.
In another embodiment of the invention, an image defogging device based on combination of a priori knowledge and deep learning comprises a memory and a processor. The memory is used for storing the computer program. The processor is configured to implement the image defogging method based on the combination of the priori knowledge and the deep learning when the computer program is executed.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example" or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art will appreciate that various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (6)

1. An image defogging method based on the combination of priori knowledge and deep learning, characterized by comprising the following steps:
decomposing an input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network and processing the base layer image Z_b with it to obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e to obtain a defogged image J;
wherein, based on the atmospheric scattering model, the original foggy image Z is restored from the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e, and the formula giving the defogged image J is:

$$J(x,y)=\frac{Z_b(x,y)-A_c}{\tilde t(x,y)}+A_c+Z_e(x,y),$$

wherein J(x, y) is the pixel gray value at point (x, y) of the defogged image J, Z_e(x, y) is the pixel gray value at point (x, y) of the detail layer image Z_e, Z_b(x, y) is the pixel gray value at point (x, y) of the base layer image Z_b, t(x, y) is the transmittance at point (x, y) of the transmittance image t, and

$$\tilde t(x,y)=\max\{t(x,y),\,1/\eta\},$$

where η is a constant greater than zero.
2. The image defogging method based on the combination of priori knowledge and deep learning according to claim 1, characterized in that decomposing the input original foggy image Z with the weighted guided filter to obtain the base layer image Z_b specifically comprises the following steps:
acquiring the pixel gray value Z(x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at the point (x, y) of the original foggy image Z with the following formulas:

$$a_p(x,y)=\frac{\sigma_{Z,\rho}^{2}(x,y)}{\sigma_{Z,\rho}^{2}(x,y)+\lambda/\Gamma_Y(x,y)},\qquad b_p(x,y)=\mu_{Z,\rho}(x,y)\,\big(1-a_p(x,y)\big),$$

wherein μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window of radius ρ centered at point (x, y) in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in the window of radius ρ centered at point (x, y) in the original foggy image Z; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y), with Y = max{Z_r, Z_g, Z_b}, where Z_r, Z_g, Z_b are respectively the R, G, B values at point (x, y) of the foggy image Z; and λ is a constant greater than 1;
obtaining the base layer image Z_b with the following formula:

$$Z_b(x,y)=a_p(x,y)\,Z(x,y)+b_p(x,y),$$

wherein Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y).
3. The image defogging method based on the combination of priori knowledge and deep learning according to claim 1, characterized in that processing the base layer image Z_b with the quadtree search method to obtain the global atmospheric light component image A_c specifically comprises the following steps:
Step A: equally divide the base layer image Z_b into four rectangular regions Z_{b-i} (i ∈ {1, 2, 3, 4}), the length and width of each Z_{b-i} being 1/2 of the length and width of Z_b;
Step B: define the score of each rectangular region as the mean of the pixel gray values in the region minus the standard deviation of the pixel gray values in the region;
Step C: select the rectangular region with the highest score and further divide it into four rectangular regions;
Step D: repeat steps B and C, iteratively updating the highest-scoring rectangular region n times to obtain a final subdivided region Z_{b-end};
obtain the atmospheric light component image A_c with the following formula:

$$A_c\ (c\in\{r,g,b\})=\operatorname*{arg\,min}_{(x,y)\in Z_{b\text{-}end}}\big\|\big(Z_{b\text{-}end\text{-}r}(x,y),\,Z_{b\text{-}end\text{-}g}(x,y),\,Z_{b\text{-}end\text{-}b}(x,y)\big)-(255,255,255)\big\|,$$

wherein (Z_{b-end-r}(x, y), Z_{b-end-g}(x, y), Z_{b-end-b}(x, y)) is the color vector at point (x, y) of the final subdivided region Z_{b-end} and (255, 255, 255) is the pure-white vector; that is, A_c is taken as the color vector of the pixel of Z_{b-end} closest to pure white.
4. The image defogging method based on the combination of priori knowledge and deep learning according to claim 1, characterized in that constructing the deep convolutional neural network and processing the base layer image Z_b with it to obtain the transmittance image t specifically comprises the following steps:
down-sample the base layer image Z_b with first convolutional layers to obtain a low-resolution base layer image Z_b' and then extract low-level features, wherein the formula for down-sampling the base layer image Z_b with the first convolutional layers is:

$$F_c^{i}[x',y']=\sigma\Big(b_c^{i}+\sum_{x,y,c'}w_{c,c'}^{i}\big[x-sx',\,y-sy'\big]\,F_{c'}^{i-1}[x,y]\Big),$$

wherein F_c^i[x', y'] is the low-level feature of the i-th of the first convolutional layers on the low-resolution base layer image Z_b' under channel index c (with F^0 = Z_b); x is the abscissa and y the ordinate of the base layer image Z_b; x' is the abscissa and y' the ordinate of the low-resolution base layer image Z_b'; w_{c,c'}^i is the convolution weight array of the i-th of the first convolutional layers under channel indices c and c'; b_c^i is the bias vector of the i-th of the first convolutional layers under channel index c; σ(·) = max(·, 0) denotes the ReLU activation function, and zero padding is used as the boundary condition for all of the first convolutional layers; s is the stride of the convolution kernel of the first convolutional layers;
input the obtained low-level features F_c^i into second convolutional layers with n_L = 2 layers to extract local features L, wherein the convolution kernels of the second convolutional layers have size 3 × 3 and stride 1;
input the low-level features F_c^i into third convolutional layers with n_{G1} = 2 layers, convolution kernel size 3 × 3 and stride 2, and then into fully connected layers with n_{G2} = 2 layers, obtaining global features G;
add the local features L and the global features G and input the sum into an activation function, obtaining the hybrid feature map F_L = σ(L + G) corresponding to the low-level features F_c^i;
apply convolution processing to the hybrid feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features F_c^i, up-sample it, and output the transmittance image t.
5. The image defogging method based on the combination of priori knowledge and deep learning according to claim 4, characterized in that, after applying convolution processing to the hybrid feature map F_L = σ(L + G) to obtain the preliminary atmospheric refractive index feature map corresponding to the low-level features F_c^i, up-sampling it and outputting the transmittance image t, the method further comprises:
constructing a loss function as follows:

$$L=L_r+w_c L_c,$$

wherein L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as

$$L_r=\sum_{x,y}\sum_{c\in\{r,g,b\}}\big(J_c(x,y)-Z_c(x,y)\big)^{2},$$

and L_c is expressed as

$$L_c=\sum_{x,y}\angle\big(J(x,y),\,Z(x,y)\big),$$

where J is the defogged image, Z is the foggy image, c ∈ {r, g, b} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y);
performing parameter-tuning processing on the deep convolutional neural network with the loss function.
6. An image defogging device based on the combination of priori knowledge and deep learning, characterized by comprising a memory and a processor;
the memory being used for storing a computer program;
the processor being configured, when executing the computer program, to implement the image defogging method based on the combination of priori knowledge and deep learning according to any one of claims 1 to 5.
CN201911226437.6A 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning Active CN111161159B (en)

Priority Applications (1)

Application Number: CN201911226437.6A — Priority/Filing Date: 2019-12-04 — Publication: CN111161159B (en) — Title: Image defogging method and device based on combination of priori knowledge and deep learning


Publications (2)

Publication Number Publication Date
CN111161159A CN111161159A (en) 2020-05-15
CN111161159B (en) 2023-04-18

Family

ID=70556359

Family Applications (1)

Application Number: CN201911226437.6A (Active) — Publication: CN111161159B (en) — Priority Date: 2019-12-04 — Filing Date: 2019-12-04 — Title: Image defogging method and device based on combination of priori knowledge and deep learning

Country Status (1)

Country Link
CN (1) CN111161159B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861939B * 2020-07-30 2022-04-29 Sichuan University — Single image defogging method based on unsupervised learning
CN111932365B * 2020-08-11 2021-09-10 Shanghai Huarui Bank Co., Ltd. — Financial credit investigation system and method based on block chain
CN114331874A * 2021-12-07 2022-04-12 Xi'an University of Posts and Telecommunications — Unmanned aerial vehicle aerial image defogging method and device based on residual detail enhancement

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749052A * 2017-10-24 2018-03-02 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences — Image defogging method and system based on deep learning neural network

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9754356B2 * 2013-04-12 2017-09-05 Agency For Science, Technology And Research — Method and system for processing an input image based on a guidance image and weights determined therefrom
WO2016159884A1 (en) * 2015-03-30 2016-10-06 Agency For Science, Technology And Research Method and device for image haze removal
KR102461144B1 (en) * 2015-10-16 2022-10-31 삼성전자주식회사 Image haze removing device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
余春艳; 林晖翔; 徐小丹; 叶鑫焱. Parameter estimation of the foggy-weather degradation model and CUDA design. Journal of Computer-Aided Design & Computer Graphics, 2018, No. 2. *
谢伟; 周玉钦; 游敏. Improved guided filtering fused with gradient information. Journal of Image and Graphics, 2016, No. 9. *
陈永; 郭红光; 艾亚鹏. Multi-scale deep learning single image dehazing based on dual-domain decomposition. Acta Optica Sinica, 2019, No. 2. *

Also Published As

Publication number Publication date
CN111161159A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
Li et al. Single image de-hazing using globally guided image filtering
Santra et al. Learning a patch quality comparator for single image dehazing
CN111161159B (en) Image defogging method and device based on combination of priori knowledge and deep learning
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
CN109993804A (en) A kind of road scene defogging method generating confrontation network based on condition
CN110136075B (en) Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle
Kapoor et al. Fog removal in images using improved dark channel prior and contrast limited adaptive histogram equalization
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN111539246B (en) Cross-spectrum face recognition method and device, electronic equipment and storage medium thereof
CN107292830A (en) Low-light (level) image enhaucament and evaluation method
Guo et al. Joint raindrop and haze removal from a single image
Qian et al. CIASM-Net: a novel convolutional neural network for dehazing image
CN115063318A (en) Adaptive frequency-resolved low-illumination image enhancement method and related equipment
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN113436124A (en) Single-image defogging method applied to marine foggy environment
CN115937019A (en) Non-uniform defogging method combining LSD (local Scale decomposition) quadratic segmentation and deep learning
CN115187474A (en) Inference-based two-stage dense fog image defogging method
CN113487509B (en) Remote sensing image fog removal method based on pixel clustering and transmissivity fusion
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN112614063B (en) Image enhancement and noise self-adaptive removal method for low-illumination environment in building
CN113962889A (en) Thin cloud removing method, device, equipment and medium for remote sensing image
Li et al. DLT-Net: deep learning transmittance network for single image haze removal
CN112132757B (en) General image restoration method based on neural network
Senthilkumar et al. A review on haze removal techniques

Legal Events

Code — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant