CN111161159A - Image defogging method and device based on combination of priori knowledge and deep learning - Google Patents

Image defogging method and device based on combination of priori knowledge and deep learning Download PDF

Info

Publication number
CN111161159A
CN111161159A CN201911226437.6A CN201911226437A CN111161159A CN 111161159 A CN111161159 A CN 111161159A CN 201911226437 A CN201911226437 A CN 201911226437A CN 111161159 A CN111161159 A CN 111161159A
Authority
CN
China
Prior art keywords
image
layer
convolution
base layer
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911226437.6A
Other languages
Chinese (zh)
Other versions
CN111161159B (en
Inventor
郑超兵
伍世虔
徐望明
方顺
陈思摇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN201911226437.6A priority Critical patent/CN111161159B/en
Publication of CN111161159A publication Critical patent/CN111161159A/en
Application granted granted Critical
Publication of CN111161159B publication Critical patent/CN111161159B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image defogging method and device based on combination of priori knowledge and deep learning. The method comprises the following steps: decomposing the input original foggy image Z by using a weighted guide filter to obtain a detail layer image ZeAnd base layer image Zb(ii) a Using a quadtree search method to search the base layer image ZbProcessing to obtain a global atmosphere light component image AcWherein c belongs to { r, g, b }; constructing a deep convolutional neural network on the base layer image ZbProcessing is carried out to obtain a transmissivity image t; utilizing the global atmospheric light component image A based on an atmospheric scattering modelcThe transmittance image t and the detail layer image ZeAnd restoring the original foggy image Z to obtain a defogged image J.

Description

Image defogging method and device based on combination of priori knowledge and deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method and device based on combination of priori knowledge and deep learning.
Background
In the foggy weather, floating particles in the sun, fog or dust cause the picture to fade and be blurred, the contrast and softness are reduced, the image quality is severely restricted, and the applications of video monitoring and analysis, target identification, urban traffic, aerial photography, military and national defense and the like are limited. Therefore, the method has a direct relationship with the civil life to the clear processing of the foggy image, and has great practical significance to the production and the life of people.
The existing defogging algorithms are mainly divided into three categories: non-model based defogging algorithms, and deep learning based defogging algorithms. The defogging algorithm based on the non-model mainly achieves the aim of improving the image by directly stretching the contrast of the image. The common methods are as follows: histogram equalization, homomorphic filtering algorithms, Retinex model-based algorithms, and Retinex model-improved-based algorithms. The methods carry out defogging according to the optical imaging principle, so that the contrast among image colors is more balanced, the image effect is softer, but the obtained image cannot be effectively enhanced in contrast, the method weakens dark or bright areas in the original image, and blurs the key points of the image.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image defogging method and device based on combination of priori knowledge and deep learning.
In a first aspect, the present invention provides an image defogging method based on a combination of priori knowledge and deep learning, including:
decomposing the input original foggy image Z by using a weighted guide filter to obtain a detail layer image ZeAnd base layer image Zb
Using a quadtree search method to search the base layer image ZbProcessing to obtain a global atmosphere light component image AcWherein c belongs to { r, g, b };
constructing a deep convolutional neural network pair to the base layerImage ZbProcessing is carried out to obtain a transmissivity image t;
utilizing the global atmospheric light component image A based on an atmospheric scattering modelcThe transmittance image t and the detail layer image ZeAnd restoring the original foggy image Z to obtain a defogged image J.
Preferably, the input original fog image Z is decomposed by a weighted guided filter to obtain a base layer image ZbThe method specifically comprises the following steps:
acquiring a pixel gray value Z (x, y) at any point (x, y) in the original foggy image Z;
obtaining a weighted filter coefficient a at the point (x, y) in the Z of the original foggy image by using the following formulap(x, y) and bp(x,y):
Figure BDA0002302364150000021
Wherein, muZ,ρ(x, y) is the mean of the pixel grey values of the window centered at point (x, y) in the original hazy image Z, with ρ being the radius;
Figure BDA0002302364150000022
is the variance of the pixel gray value of the window with point (x, y) as the center and rho as the radius in the original foggy image Z; gamma-shapedY(x, Y) is the luminance component of the original hazy image Z at point (x, Y), Y ═ max { Z {r,Zg,Zb},Zr,Zg,ZbR, G, B values at the point (x, y) in the hazy image Z, respectively; λ is a constant greater than 1;
obtaining the base layer image Z using the following formulab
Zb(x,y)=ap(x,y)Z(x,y)+bp(x,y);
Wherein Z isb(x, y) is the base layer image ZbOf the pixel grey value at point (x, y).
Preferably, the base layer image Z is searched by using a quadtree search methodbProcessing to obtain global atmosphere lightPartial image AcThe method specifically comprises the following steps:
step A, base layer image ZbAre equally divided into four rectangular areas Zb-i(i∈{1,2,3,4}),Zb-iRespectively has a length and a width of Zb1/2 for the length and width of;
b, defining the fraction of each rectangular area as the average value of the pixel gray value minus the standard deviation of the pixel gray value in the area;
step C, selecting the rectangular area with the highest score and further dividing the rectangular area into four rectangular areas;
step D, repeating the steps B and C, and carrying out iterative updating on the rectangular area with the highest score for n times to obtain a final subdivision area Zb-end
Obtaining the atmospheric light component image A by using the following formulac:
|Ac(c∈{r,g,b})|=min|((Zb-end-r(x,y),Zb-end-g(x,y),Zb-end-b(x,y))-(255,255,255))|,
In the formula (Z)b-end-r(x,y),Zb-end-g(x,y),Zb-end-b(x, y)) is the final subdivided region Zb-endThe color vector at the midpoint (x, y), (255 ) is a pure white vector.
Preferably, the deep convolutional neural network is constructed on the base layer image ZbThe transmittance image t is obtained by processing, and the method specifically comprises the following steps:
using the first convolution layer to pair the basic image ZbDown-sampling to obtain low-resolution base layer image Zb' post-extracting low-level features, wherein the base image Z is aligned using a first convolution layerbThe formula for downsampling is:
Figure BDA0002302364150000031
wherein the content of the first and second substances,
Figure BDA0002302364150000032
convolution for the ith layer in the first convolution layerLayer at the index of channel c, the low resolution base layer image ZbThe low-level feature of'; x is a base image ZbY is the base image ZbThe ordinate of (a); x' is a low resolution base layer image Zb'abscissa, y' is the low resolution base layer image ZbThe ordinate of `;
Figure BDA0002302364150000033
convolution weight arrays of the ith convolution layer in the first convolution layer under the index channels of the layers c and c';
Figure BDA0002302364150000034
a deviation vector of the ith convolution layer in the first convolution layer under the c layer index channel is obtained; σ (·) max (·,0) denotes a ReLU activation function and zero padding is used as a boundary condition for all of the first convolutional layers; s is the step length of the convolution kernel of the first convolution layer;
low layer characteristics to be obtained
Figure BDA0002302364150000041
The number of input layers is nLExtracting a local feature L from a second convolution layer of 2, wherein the size of a convolution kernel in the second convolution layer is 3 multiplied by 3, and the step size is 1;
low layer feature obtained by laminating a second convolution layer
Figure BDA0002302364150000042
The number of input layers is nG1After the third convolution layer with convolution kernel size of 3 × 3 and step length of 2 is input, the number of layers n is inputG2Obtaining a global feature G of the 2 full connection layer;
adding the local feature L and the global feature G, and inputting the sum into an activation function to obtain a low-level feature
Figure BDA0002302364150000047
Corresponding hybrid feature map FL=σ(L+G);
For mixed feature map FLObtaining sum-low by performing convolution processing for σ (L + G)Layer characteristics
Figure BDA0002302364150000043
And (3) performing up-sampling on the corresponding preliminary atmospheric refractive index characteristic graph, and outputting to obtain a transmittance image t.
Preferably, the global atmospheric light component image A is utilized based on an atmospheric scattering modelcThe transmittance image t and the detail layer image ZeAnd recovering the original foggy image Z, wherein the formula for obtaining the defogged image J is as follows:
Figure BDA0002302364150000044
wherein J (x, y) is the gray value of the pixel at the point (x, y) in the defogged image J, and Ze(x, y) is the detail layer image ZeThe gray value of the pixel at the middle point (x, y), Zb(x, y) is the base layer image ZbA pixel gray value at a midpoint (x, y), t (x, y) being a transmittance at a midpoint (x, y) of the transmittance image t,
Figure BDA0002302364150000045
where η is a constant greater than zero.
Preferably, the pair of mixed feature maps FLObtaining the feature of the lower layer by convolution processing with sigma (L + G)
Figure BDA0002302364150000046
And then, performing up-sampling on the atmospheric refractive index characteristic diagram, and outputting a transmittance image t, wherein the method further comprises the following steps:
the loss function is constructed as follows:
L=Lr+wcLc
wherein L isrTo reconstruct the loss function, LcAs a function of color loss, wcTo be assigned to the color loss function LcWeight of, LrIs shown as
Figure BDA0002302364150000051
LcIs shown as
Figure BDA0002302364150000052
J is a defogged image, Z is a fogging image, c belongs to (R, G, B) as a channel index, and ∠ (J (x, y), Z (x, y)) represents the included angle of the three-dimensional color vectors of the fogging image and the defogged image at a pixel point (x, y);
and performing parameter adjustment processing on the deep convolutional neural network by using the loss function.
In a second aspect, the invention provides an image defogging device based on combination of priori knowledge and deep learning, wherein the device comprises a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the image defogging method according to any one of claims 1 to 6 based on the combination of the prior knowledge and the deep learning.
The image defogging method and the image defogging device based on the combination of the prior knowledge and the deep learning have the advantages that the weighted guide filter is used for decomposing the foggy image to obtain a detail layer image and a basic layer image, then a quad-tree search method and a deep neural network are used for processing the basic layer image to obtain a global atmosphere light component image and a transmissivity image, and finally the global atmosphere light component image, the transmissivity image and the detail layer image are used for restoring the foggy image to obtain the defogged image. Because the global atmosphere light component image and the transmissivity image are estimated or calculated by utilizing the base layer image, the amplification of image noise is avoided, and the defogging treatment can be better carried out on the foggy image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an image defogging method based on combination of a priori knowledge and deep learning according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, the single-image defogging method based on the combination of the prior knowledge and the deep learning described in the present invention includes the following steps:
decomposing the input original foggy image Z by using a weighted guide filter to obtain a detail layer image ZeAnd base layer image Zb
Using a quadtree search method to search the base layer image ZbProcessing to obtain a global atmosphere light component image Ac
Constructing a deep convolutional neural network on the base layer image ZbProcessing is carried out to obtain a transmissivity image t;
utilizing the global atmospheric light component image A based on an atmospheric scattering modelcThe transmittance image t and the detail layer image ZeAnd restoring the original foggy image Z to obtain a defogged image J.
Specifically, the original input foggy image Z is decomposed by using a weighted guide filter to obtain a detail layer image ZeAnd base layer image ZbThe method specifically comprises the following steps:
acquiring a pixel gray value Z (x, y) at any point (x, y) in the original foggy image Z;
obtaining a weighted filter coefficient a at the point (x, y) in the Z of the original foggy image by using the following formulap(x, y) and bp(x,y):
Figure BDA0002302364150000061
bp(x,y)=(1-ap(x,y))μZ,ρ(x,y);
Wherein, muZ,ρ(x, y) is the mean of the pixel grey values of the window centered at point (x, y) in the original hazy image Z, with ρ being the radius;
Figure BDA0002302364150000062
is the variance of the pixel gray value of the window with point (x, y) as the center and rho as the radius in the original foggy image Z;
ΓY(x, y) is the luminance component of point (x, y) of the original hazy image Z;
λ is a constant greater than 1;
obtaining the base layer image Z using the following formulab
Zb(x,y)=ap(x,y)Z(x,y)+bp(x,y);
Wherein Z isb(x, y) is the base layer image ZbThe pixel gray value at point (x, y);
z (x, y) is a pixel gray value at any point (x, y) in the original foggy image Z;
ap(x, y) and bp(x, y) weighted filter coefficients at point (x, y) in the original hazy image Z.
Specifically, the base layer image Z is searched by using a quadtree search methodbProcessing to obtain a global atmosphere light component image AcThe method specifically comprises the following steps:
step A, base layer image ZbAre equally divided into four rectangular areas Zb-i(i∈{1,2,3,4}),Zb-iRespectively has a length and a width of Zb1/2 for the length and width of;
b, defining the fraction of each rectangular area as the average value of the pixel gray value minus the standard deviation of the pixel gray value in the area;
step C, selecting the rectangular area with the highest score and further dividing the rectangular area into four rectangular areas;
repeating steps B and C and overlapping the rectangular region with the highest fractionUpdating n times to obtain the final subdivision area Zb-end
Obtaining the atmospheric light component image A by using the following formulac:
|Ac(c∈{r,g,b})|=min|((Zb-end-r(x,y),Zb-end-g(x,y),Zb-end-b(x,y))-(255,255,255))|,
In the formula (Z)b-end-r(x,y),Zb-end-g(x,y),Zb-end-b(x, y)) is the final subdivided region Zb-endThe color vector of any pixel point (255 ) is a pure white vector.
Specifically, the deep convolutional neural network is constructed on the base layer image ZbThe processing to obtain the transmittance image t specifically includes the following steps:
using the first convolution layer to pair the basic image ZbDown-sampling to obtain low-resolution base layer image Zb' post-extracting low-level features, wherein the base image Z is aligned using a first convolution layerbThe formula for downsampling is:
Figure BDA0002302364150000081
wherein the content of the first and second substances,
Figure BDA0002302364150000082
the low resolution base layer image Z for the ith convolutional layer at the index of channel cbThe low-level feature of';
x is a base image ZbThe abscissa of (a);
y is a base image ZbThe ordinate of (a);
x' is a low resolution base layer image ZbThe abscissa of';
y' is the low resolution base layer image ZbThe ordinate of `;
Figure BDA0002302364150000083
convolution weight under index channel of c and c' layers for i-th convolution layerAn array;
Figure BDA0002302364150000084
a deviation vector of the ith convolution layer under the c layer index channel;
σ (·) max (·,0) denotes the ReLU activation function and zero padding is used as a boundary condition for all convolutional layers;
s is the step size of the convolution kernel;
low layer characteristics to be obtained
Figure BDA0002302364150000085
The number of input layers is nLExtracting a local feature L from a second convolution layer of 2, wherein the size of a convolution kernel in the second convolution layer is 3 multiplied by 3, and the step size is 1;
low layer feature obtained by laminating a second convolution layer
Figure BDA0002302364150000086
The number of input layers is nG1After the third convolution layer with convolution kernel size of 3 × 3 and step length of 2 is input, the number of layers n is inputG2Obtaining a global feature G of the 2 full connection layer;
adding the local feature L and the global feature G, and inputting the sum into an activation function to obtain a low-level feature
Figure BDA0002302364150000087
Corresponding hybrid feature map FL=σ(L+G);
For mixed feature map FLObtaining the feature of the lower layer by convolution processing with sigma (L + G)
Figure BDA0002302364150000088
And (3) performing up-sampling on the corresponding preliminary atmospheric refractive index characteristic graph, and outputting to obtain a transmittance image t (x, y).
Specifically, the global atmospheric light component image A is utilized based on an atmospheric scattering modelcThe transmittance image t and the detail layer image ZeMisting the originalAnd recovering the image Z, wherein the formula for obtaining the defogged image J is as follows:
Figure BDA0002302364150000091
wherein J (x, y) is the gray value of the pixel at the point (x, y) in the defogged image J,
Ze(x, y) is the detail layer image ZeThe gray value of the pixel at the midpoint (x, y),
Zb(x, y) is the base layer image ZbThe gray value of the pixel at the midpoint (x, y),
t (x, y) is the output resulting in a transmittance image,
Figure BDA0002302364150000092
where t (x, y) is the output resulting transmittance image and η is a constant greater than zero.
In the step, according to an atmosphere scattering model Z (x, y) ═ t (x, y) J (x, y) + Ac(1-t (x, y)), J (x, y) is a defogged image, and (x, y) is the space coordinate of a pixel point, the recovery of the fogging image is carried out, the obtained recovered image can be expressed as,
Figure BDA0002302364150000093
where t0 is a parameter for ensuring the processing effect of the dense fog region, Z (x, y) ═ J (x, y) + n (x, y), J is a noiseless image, and n is noise, that is, the image processing method is
Figure BDA0002302364150000094
The noise will be amplified, taking into account that the noise is mainly contained in the detail layer image ZeAnd noise on the image, the present invention recovers the later image as,
Figure BDA0002302364150000095
the atmospheric light component a is obtained by step 2.1, the transmittance t is obtained by step 2.2,
Figure BDA0002302364150000096
in order to reduce the effect of noise on the restored image, which is shown as,
Figure BDA0002302364150000097
η is a constant, in this example η equals 6, and it is found by experiment that t (x, y) is ∈ [0,1 ∈]When t (x, y) < 1/η, pixel points of the sky region of (x, y) data,
Figure BDA0002302364150000098
noise of the sky area is prevented from being amplified;
specifically, the pair of mixed feature maps FLObtaining the feature of the lower layer by convolution processing with sigma (L + G)
Figure BDA0002302364150000099
And then, performing up-sampling on the atmospheric refractive index characteristic diagram, and outputting a transmittance image t, wherein the method further comprises the following steps:
the loss function is constructed as follows:
L=Lr+wcLc
wherein L isrTo reconstruct the loss function, LcAs a function of color loss, wcTo be assigned to the color loss function LcWeight of, LrIs shown as
Figure BDA0002302364150000101
LcIs shown as
Figure BDA0002302364150000102
J is a defogged image, Z is a foggy image, c belongs to (R, G, B) as a channel index, ∠ (J (x, y), Z (x, y)) represents an included angle of three-dimensional color vectors of the foggy image and the defogged image at a pixel point (x, y)), and although a reconstruction error can measure the similarity of the original image and the defogged image, the consistent angles of the color vectors cannot be ensured, so that a color error function is added to ensure the consistent angles of the color vectors of the same pixel point of the image before and after defogging.
The image defogging method and the image defogging device based on the combination of the prior knowledge and the deep learning have the advantages that the weighted guide filter is used for decomposing the foggy image to obtain a detail layer image and a basic layer image, then a quad-tree search method and a deep neural network are used for processing the basic layer image to obtain a global atmosphere light component image and a transmissivity image, and finally the global atmosphere light component image, the transmissivity image and the detail layer image are used for restoring the foggy image to obtain the defogged image. Because the global atmosphere light component image and the transmissivity image are estimated or calculated by utilizing the base layer image, the amplification of image noise is avoided, and the defogging treatment can be better carried out on the foggy image. And calculating a loss function by using the defogged image to perform feedback parameter adjustment, and adding color loss into the loss function to improve the robustness of the algorithm.
In another embodiment of the invention, an image defogging device based on combination of a priori knowledge and deep learning comprises a memory and a processor. The memory is used for storing the computer program. The processor is configured to implement the image defogging method based on the combination of the priori knowledge and the deep learning when the computer program is executed.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example" or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art will appreciate that various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (7)

1. An image defogging method based on combination of priori knowledge and deep learning is characterized by comprising the following steps:
decomposing the input original foggy image Z by using a weighted guide filter to obtain a detail layer image ZeAnd base layer image Zb
Using a quadtree search method to search the base layer image ZbProcessing to obtain a global atmosphere light component image AcWherein c belongs to { r, g, b };
constructing a deep convolutional neural network on the base layer image ZbProcessing is carried out to obtain a transmissivity image t;
utilizing the global atmospheric light component image A based on an atmospheric scattering modelcThe transmittance image t and the detail layer image ZeAnd restoring the original foggy image Z to obtain a defogged image J.
2. The image defogging method based on the combination of the prior knowledge and the deep learning as claimed in claim 1, wherein the input original foggy image Z is decomposed by using a weighted guide filter to obtain a basic layer image ZbThe method specifically comprises the following steps:
acquiring a pixel gray value Z (x, y) at any point (x, y) in the original foggy image Z;
obtaining a weighted filter coefficient a at the point (x, y) in the Z of the original foggy image by using the following formulap(x, y) and bp(x,y):
Figure FDA0002302364140000011
bp(x,y)=(1-ap(x,y))μZ,ρ(x,y);
Wherein, muZ,ρ(x, y) is the mean of the pixel grey values of the window centered at point (x, y) in the original hazy image Z, with ρ being the radius;
Figure FDA0002302364140000012
is the variance of the pixel gray value of the window with point (x, y) as the center and rho as the radius in the original foggy image Z; gamma-shapedY(x, Y) is the luminance component of the original hazy image Z at point (x, Y), Y ═ max { Z {r,Zg,Zb},Zr,Zg,ZbR, G, B values at the point (x, y) in the hazy image Z, respectively; λ is a constant greater than 1;
obtaining the base layer image Z using the following formulab
Zb(x,y)=ap(x,y)Z(x,y)+bp(x,y);
Wherein Z isb(x, y) is the base layer image ZbOf the pixel grey value at point (x, y).
3. The image defogging method based on the combination of the prior knowledge and the deep learning as claimed in claim 1, wherein said base layer image Z is searched by using a quadtree search methodbProcessing to obtain a global atmosphere light component image AcThe method specifically comprises the following steps:
step A, base layer image ZbAre equally divided into four rectangular areas Zb-i(i∈{1,2,3,4}),Zb-iRespectively has a length and a width of Zb1/2 for the length and width of;
b, defining the fraction of each rectangular area as the average value of the pixel gray value minus the standard deviation of the pixel gray value in the area;
step C, selecting the rectangular area with the highest score and further dividing the rectangular area into four rectangular areas;
step D, repeating the steps B and C, and carrying out iterative updating on the rectangular area with the highest score for n times to obtain a final subdivision area Zb-end
Obtaining the atmospheric light component image A by using the following formulac:
|Ac(c∈{r,g,b})|=min|((Zb-end-r(x,y),Zb-end-g(x,y),Zb-end-b(x,y))-(255,255,255))|,
In the formula (Z)b-end-r(x,y),Zb-end-g(x,y),Zb-end-b(x, y)) is the final subdivided region Zb-endThe color vector at the midpoint (x, y), (255 ) is a pure white vector.
4. The image defogging method based on the combination of the prior knowledge and the deep learning as claimed in claim 1, wherein the construction of the deep convolutional neural network is performed on the basic layer image ZbThe transmittance image t is obtained by processing, and the method specifically comprises the following steps:
using the first convolution layer to pair the basic image ZbDown-sampling to obtain low-resolution base layer image Zb' post-extracting low-level features, wherein the base image Z is aligned using a first convolution layerbThe formula for downsampling is:
Figure FDA0002302364140000021
wherein the content of the first and second substances,
Figure FDA0002302364140000022
the low resolution base layer image Z for the ith of the first convolutional layer at the index of channel cbThe low-level feature of'; x is a base image ZbY is the base image ZbThe ordinate of (a); x' is a low resolution base layer image Zb'abscissa, y' is the low resolution base layer image ZbThe ordinate of `;
Figure FDA0002302364140000031
convolution weight arrays of the ith convolution layer in the first convolution layer under the index channels of the layers c and c';
Figure FDA0002302364140000032
a deviation vector of the ith convolution layer in the first convolution layer under the c layer index channel is obtained; σ (·) max (·,0) denotes a ReLU activation function and zero padding is used as a boundary condition for all of the first convolutional layers; s is the step length of the convolution kernel of the first convolution layer;
low layer characteristics to be obtained
Figure FDA0002302364140000033
The number of input layers is nLExtracting a local feature L from a second convolution layer of 2, wherein the size of a convolution kernel in the second convolution layer is 3 multiplied by 3, and the step size is 1;
low layer feature obtained by laminating a second convolution layer
Figure FDA0002302364140000034
The number of input layers is nG1After the third convolution layer with convolution kernel size of 3 × 3 and step length of 2 is input, the number of layers n is inputG2Obtaining a global feature G of the 2 full connection layer;
adding the local feature L and the global feature G, and inputting the sum into an activation function to obtain a low-level feature
Figure FDA0002302364140000035
Corresponding hybrid feature map FL=σ(L+G);
For mixed feature map FLObtaining the feature of the lower layer by convolution processing with sigma (L + G)
Figure FDA0002302364140000037
And (3) performing up-sampling on the corresponding preliminary atmospheric refractive index characteristic graph, and outputting to obtain a transmittance image t.
5. The image defogging method based on the combination of the prior knowledge and the deep learning as claimed in claim 1, wherein the atmosphere scattering model is based on and utilizedThe global atmospheric light component image AcThe transmittance image t and the detail layer image ZeAnd recovering the original foggy image Z, wherein the formula for obtaining the defogged image J is as follows:
Figure FDA0002302364140000036
wherein J (x, y) is the gray value of the pixel at the point (x, y) in the defogged image J, and Ze(x, y) is the detail layer image ZeThe gray value of the pixel at the middle point (x, y), Zb(x, y) is the base layer image ZbA pixel gray value at a midpoint (x, y), t (x, y) being a transmittance at a midpoint (x, y) of the transmittance image t,
Figure FDA0002302364140000041
where η is a constant greater than zero.
6. The image defogging method based on the combination of the prior knowledge and the deep learning as claimed in claim 4, wherein the pair of mixed feature maps FLObtaining the feature of the lower layer by convolution processing with sigma (L + G)
Figure FDA0002302364140000042
And then, performing up-sampling on the atmospheric refractive index characteristic diagram, and outputting a transmittance image t, wherein the method further comprises the following steps:
the loss function is constructed as follows:
L=Lr+wcLc
wherein L isrTo reconstruct the loss function, LcAs a function of color loss, wcTo be assigned to the color loss function LcWeight of, LrIs shown as
Figure FDA0002302364140000043
LcIs shown as
Figure FDA0002302364140000044
J is a defogged image, Z is a fogging image, c belongs to (R, G, B) as a channel index, and ∠ (J (x, y), Z (x, y)) represents the included angle of the three-dimensional color vectors of the fogging image and the defogged image at a pixel point (x, y);
and performing parameter adjustment processing on the deep convolutional neural network by using the loss function.
7. An image defogging device based on combination of priori knowledge and deep learning is characterized by comprising a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the image defogging method according to any one of claims 1 to 6 based on the combination of the prior knowledge and the deep learning.
CN201911226437.6A 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning Active CN111161159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911226437.6A CN111161159B (en) 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911226437.6A CN111161159B (en) 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning

Publications (2)

Publication Number Publication Date
CN111161159A true CN111161159A (en) 2020-05-15
CN111161159B CN111161159B (en) 2023-04-18

Family

ID=70556359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911226437.6A Active CN111161159B (en) 2019-12-04 2019-12-04 Image defogging method and device based on combination of priori knowledge and deep learning

Country Status (1)

Country Link
CN (1) CN111161159B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861939A (en) * 2020-07-30 2020-10-30 四川大学 Single image defogging method based on unsupervised learning
CN111932365A (en) * 2020-08-11 2020-11-13 武汉谦屹达管理咨询有限公司 Financial credit investigation system and method based on block chain
CN114331874A (en) * 2021-12-07 2022-04-12 西安邮电大学 Unmanned aerial vehicle aerial image defogging method and device based on residual detail enhancement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292824A1 (en) * 2013-04-12 2016-10-06 Agency For Science, Technology And Research Method and System for Processing an Input Image
US20170109870A1 (en) * 2015-10-16 2017-04-20 Sogang University Research Foundation Image processing device
CN107749052A (en) * 2017-10-24 2018-03-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and system based on deep learning neutral net
US20180122051A1 (en) * 2015-03-30 2018-05-03 Agency For Science, Technology And Research Method and device for image haze removal

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160292824A1 (en) * 2013-04-12 2016-10-06 Agency For Science, Technology And Research Method and System for Processing an Input Image
US20180122051A1 (en) * 2015-03-30 2018-05-03 Agency For Science, Technology And Research Method and device for image haze removal
US20170109870A1 (en) * 2015-10-16 2017-04-20 Sogang University Research Foundation Image processing device
CN107749052A (en) * 2017-10-24 2018-03-02 中国科学院长春光学精密机械与物理研究所 Image defogging method and system based on deep learning neutral net

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
余春艳;林晖翔;徐小丹;叶鑫焱;: "雾天退化模型参数估计与CUDA设计" *
谢伟;周玉钦;游敏;: "融合梯度信息的改进引导滤波" *
陈永;郭红光;艾亚鹏;: "基于双域分解的多尺度深度学习单幅图像去雾" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861939A (en) * 2020-07-30 2020-10-30 四川大学 Single image defogging method based on unsupervised learning
CN111932365A (en) * 2020-08-11 2020-11-13 武汉谦屹达管理咨询有限公司 Financial credit investigation system and method based on block chain
CN111932365B (en) * 2020-08-11 2021-09-10 上海华瑞银行股份有限公司 Financial credit investigation system and method based on block chain
CN114331874A (en) * 2021-12-07 2022-04-12 西安邮电大学 Unmanned aerial vehicle aerial image defogging method and device based on residual detail enhancement

Also Published As

Publication number Publication date
CN111161159B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN112233038B (en) True image denoising method based on multi-scale fusion and edge enhancement
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN111161159B (en) Image defogging method and device based on combination of priori knowledge and deep learning
Li et al. Multi-scale single image dehazing using Laplacian and Gaussian pyramids
CN110796009A (en) Method and system for detecting marine vessel based on multi-scale convolution neural network model
Kapoor et al. Fog removal in images using improved dark channel prior and contrast limited adaptive histogram equalization
CN110136075B (en) Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle
Liu et al. Image de-hazing from the perspective of noise filtering
CN110675340A (en) Single image defogging method and medium based on improved non-local prior
CN107292830A (en) Low-light (level) image enhaucament and evaluation method
CN113284061B (en) Underwater image enhancement method based on gradient network
Guo et al. Joint raindrop and haze removal from a single image
Qian et al. CIASM-Net: a novel convolutional neural network for dehazing image
CN114627269A (en) Virtual reality security protection monitoring platform based on degree of depth learning target detection
CN115937019A (en) Non-uniform defogging method combining LSD (local Scale decomposition) quadratic segmentation and deep learning
CN112164010A (en) Multi-scale fusion convolution neural network image defogging method
CN115187474A (en) Inference-based two-stage dense fog image defogging method
Zhou et al. Sparse representation with enhanced nonlocal self-similarity for image denoising
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN115035011A (en) Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy
Li et al. DLT-Net: deep learning transmittance network for single image haze removal
CN113962889A (en) Thin cloud removing method, device, equipment and medium for remote sensing image
CN117078553A (en) Image defogging method based on multi-scale deep learning
Zhang et al. A compensation textures dehazing method for water alike area
CN114140361A (en) Generation type anti-network image defogging method fusing multi-stage features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant