CN111161159A - Image defogging method and device based on combination of priori knowledge and deep learning - Google Patents
Image defogging method and device based on combination of priori knowledge and deep learning
- Publication number
- CN111161159A (application number CN201911226437.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- layer
- convolution
- base layer
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
Abstract
The invention relates to an image defogging method and device based on the combination of priori knowledge and deep learning. The method comprises the following steps: decomposing the input original foggy image Z with a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b; processing the base layer image Z_b with a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b}; constructing a deep convolutional neural network to process the base layer image Z_b and obtain a transmittance image t; and, based on an atmospheric scattering model, restoring the original foggy image Z by using the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an image defogging method and device based on combination of priori knowledge and deep learning.
Background
In foggy weather, particles such as fog or dust floating in the air cause images to fade and blur, reduce contrast and softness, and severely degrade image quality, limiting applications such as video monitoring and analysis, target identification, urban traffic, aerial photography, and military and national defense. Clear processing of foggy images therefore bears directly on daily life and has great practical significance for production and living.
The existing defogging algorithms fall mainly into three categories: non-model-based defogging algorithms, model-based defogging algorithms, and deep-learning-based defogging algorithms. Non-model-based defogging algorithms improve the image mainly by directly stretching its contrast. Common methods include histogram equalization, homomorphic filtering, algorithms based on the Retinex model, and algorithms based on improvements of the Retinex model. These methods perform defogging according to the optical imaging principle, so the contrast among image colors becomes more balanced and the image effect softer; however, the contrast of the resulting image is not effectively enhanced, dark or bright areas of the original image are weakened, and the key content of the image is blurred.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image defogging method and device based on combination of priori knowledge and deep learning.
In a first aspect, the present invention provides an image defogging method based on a combination of priori knowledge and deep learning, including:
decomposing the input original foggy image Z by using a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b by using a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network to process the base layer image Z_b and obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z by using the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
Preferably, decomposing the input original foggy image Z by using a weighted guided filter to obtain the base layer image Z_b specifically comprises the following steps:
acquiring a pixel gray value Z (x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at point (x, y) in the original foggy image Z by using the following formulas:
a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ / Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) μ_{Z,ρ}(x, y);
where μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in that window; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y), with Y = max{Z_r, Z_g, Z_b}, where Z_r, Z_g, Z_b are the R, G, B values at point (x, y) in the foggy image Z; and λ is a constant greater than 1;
obtaining the base layer image Z_b by using the following formula:
Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);
where Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y).
Preferably, processing the base layer image Z_b by using a quadtree search method to obtain the global atmospheric light component image A_c specifically comprises the following steps:
step A: equally divide the base layer image Z_b into four rectangular regions Z_b-i (i ∈ {1, 2, 3, 4}), each with length and width 1/2 of those of Z_b;
step B: define the score of each rectangular region as the mean of the pixel gray values in the region minus their standard deviation;
step C: select the rectangular region with the highest score and further divide it into four rectangular regions;
step D: repeat steps B and C, iteratively updating the highest-scoring rectangular region n times to obtain the final subdivided region Z_b-end;
obtaining the atmospheric light component image A_c by using the following formula:
|A_c − (255, 255, 255)| = min |(Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) − (255, 255, 255)|, c ∈ {r, g, b},
where (Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) is the color vector at point (x, y) in the final subdivided region Z_b-end, and (255, 255, 255) is the pure-white vector; that is, A_c is taken as the color vector in Z_b-end closest to pure white.
Preferably, constructing a deep convolutional neural network to process the base layer image Z_b and obtain the transmittance image t specifically comprises the following steps:
down-sampling the base layer image Z_b with the first convolution layers to obtain a low-resolution base layer image Z_b′ and then extracting low-level features, where the formula of the first convolution layers is:
F_c^i[Z_b′](x′, y′) = σ( b_c^i + Σ_{c′} Σ_{x,y} w_{c,c′}^i(x − s·x′, y − s·y′) · F_{c′}^{i−1}[Z_b′](x, y) ),
where F_c^i[Z_b′] is the low-level feature of the i-th convolution layer among the first convolution layers under channel index c of the low-resolution base layer image Z_b′; x and y are the abscissa and ordinate of the base image Z_b; x′ and y′ are the abscissa and ordinate of the low-resolution base layer image Z_b′; w_{c,c′}^i is the convolution weight array of the i-th convolution layer among the first convolution layers under channel indices c and c′; b_c^i is the bias vector of the i-th convolution layer among the first convolution layers under channel index c; σ(·) = max(·, 0) denotes the ReLU activation function, and zero padding is used as the boundary condition for all of the first convolution layers; s is the stride of the convolution kernels of the first convolution layers;
inputting the obtained low-level features into a second convolution layer with n_L = 2 layers to extract a local feature L, where the convolution kernels of the second convolution layer are of size 3 × 3 with stride 1;
inputting the low-level features obtained from the second convolution layer into a third convolution layer with n_G1 layers, 3 × 3 kernels, and stride 2, and then into n_G2 = 2 fully connected layers, to obtain a global feature G;
adding the local feature L and the global feature G and feeding the sum into an activation function to obtain the hybrid feature map F_L = σ(L + G) corresponding to the low-level features;
convolving the hybrid feature map F_L = σ(L + G) to obtain a preliminary atmospheric refractive index feature map corresponding to the low-level features, then up-sampling it and outputting the transmittance image t.
Preferably, based on the atmospheric scattering model, the original foggy image Z is restored by using the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e, and the formula for obtaining the defogged image J is:
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y),
where J(x, y) is the pixel gray value at point (x, y) in the defogged image J; Z_e(x, y) is the pixel gray value at point (x, y) in the detail layer image Z_e; Z_b(x, y) is the pixel gray value at point (x, y) in the base layer image Z_b; t(x, y) is the transmittance at point (x, y) of the transmittance image t; and η is a constant greater than zero.
Preferably, after convolving the hybrid feature map F_L = σ(L + G) to obtain the atmospheric refractive index feature map corresponding to the low-level features, up-sampling it, and outputting the transmittance image t, the method further comprises:
the loss function is constructed as follows:
L = L_r + w_c L_c,
where L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as L_r = Σ_{x,y} Σ_c (J_c(x, y) − Z_c(x, y))², and L_c is expressed as L_c = Σ_{x,y} ∠(J(x, y), Z(x, y));
J is the defogged image, Z is the foggy image, c ∈ {r, g, b} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y);
and performing parameter adjustment processing on the deep convolutional neural network by using the loss function.
In a second aspect, the invention provides an image defogging device based on combination of priori knowledge and deep learning, wherein the device comprises a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the image defogging method based on the combination of priori knowledge and deep learning according to any one of claims 1 to 6.
The image defogging method and the image defogging device based on the combination of the prior knowledge and the deep learning have the advantages that the weighted guide filter is used for decomposing the foggy image to obtain a detail layer image and a basic layer image, then a quad-tree search method and a deep neural network are used for processing the basic layer image to obtain a global atmosphere light component image and a transmissivity image, and finally the global atmosphere light component image, the transmissivity image and the detail layer image are used for restoring the foggy image to obtain the defogged image. Because the global atmosphere light component image and the transmissivity image are estimated or calculated by utilizing the base layer image, the amplification of image noise is avoided, and the defogging treatment can be better carried out on the foggy image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an image defogging method based on combination of a priori knowledge and deep learning according to an embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, the single-image defogging method based on the combination of the prior knowledge and the deep learning described in the present invention includes the following steps:
decomposing the input original foggy image Z by using a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b by using a quadtree search method to obtain a global atmospheric light component image A_c;
constructing a deep convolutional neural network to process the base layer image Z_b and obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z by using the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
Specifically, decomposing the original input foggy image Z by using a weighted guided filter to obtain the detail layer image Z_e and the base layer image Z_b comprises the following steps:
acquiring a pixel gray value Z (x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at point (x, y) in the original foggy image Z by using the following formulas:
a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ / Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) μ_{Z,ρ}(x, y);
where μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z;
σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in that window;
Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y);
λ is a constant greater than 1;
obtaining the base layer image Z_b by using the following formula:
Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);
where Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y);
Z(x, y) is the pixel gray value at any point (x, y) in the original foggy image Z;
a_p(x, y) and b_p(x, y) are the weighted filter coefficients at point (x, y) in the original foggy image Z.
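The base/detail decomposition above can be sketched in a few lines of NumPy. This is only an illustrative implementation, not the patent's exact filter: the window mean and variance use an integral-image box filter, the luminance weight Γ_Y is approximated by the image itself, and the values of ρ and λ are chosen purely for demonstration (the patent requires λ > 1 but fixes no value).

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via integral images (edge-padded)."""
    pad = np.pad(img, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for inclusion-exclusion
    H, W = img.shape
    n = 2 * r + 1
    s = c[n:n+H, n:n+W] - c[:H, n:n+W] - c[n:n+H, :W] + c[:H, :W]
    return s / (n * n)

def weighted_guided_decompose(Z, rho=4, lam=2.0):
    """Split a grayscale image Z (float) into base and detail layers with a
    self-guided weighted filter, following the patent's formulas
    a = sigma^2 / (sigma^2 + lam / Gamma) and b = (1 - a) * mu."""
    mu = box_mean(Z, rho)                                  # window mean
    var = np.maximum(box_mean(Z * Z, rho) - mu * mu, 0.0)  # window variance
    gamma = np.maximum(Z, 1e-6)    # illustrative stand-in for the luminance weight
    a = var / (var + lam / gamma)  # a_p(x, y)
    b = (1.0 - a) * mu             # b_p(x, y)
    Zb = a * Z + b                 # base layer Z_b
    Ze = Z - Zb                    # detail layer Z_e
    return Zb, Ze
```

By construction the two layers sum back to the input, which is what lets the recovery step add the detail layer back after defogging the base layer.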
Specifically, processing the base layer image Z_b by using a quadtree search method to obtain the global atmospheric light component image A_c comprises the following steps:
step A: equally divide the base layer image Z_b into four rectangular regions Z_b-i (i ∈ {1, 2, 3, 4}), each with length and width 1/2 of those of Z_b;
step B: define the score of each rectangular region as the mean of the pixel gray values in the region minus their standard deviation;
step C: select the rectangular region with the highest score and further divide it into four rectangular regions;
step D: repeat steps B and C, iteratively updating the highest-scoring rectangular region n times to obtain the final subdivided region Z_b-end;
obtaining the atmospheric light component image A_c by using the following formula:
|A_c − (255, 255, 255)| = min |(Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) − (255, 255, 255)|, c ∈ {r, g, b},
where (Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) is the color vector of any pixel point in the final subdivided region Z_b-end, and (255, 255, 255) is the pure-white vector.
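Steps A through D and the white-vector selection can be prototyped as follows. This is a sketch: the region score is mean − standard deviation of the gray values as defined above, the iteration count n is a free parameter, and ties between equally scoring regions are broken arbitrarily.

```python
import numpy as np

def quadtree_atmospheric_light(Zb_rgb, n_iters=4):
    """Quadtree search for the atmospheric light on the base layer
    (H x W x 3 float array in [0, 255]): repeatedly keep the quadrant with
    the highest score mean(gray) - std(gray), then pick the pixel of the
    final region whose color vector is closest to pure white (255,255,255)."""
    region = Zb_rgb
    for _ in range(n_iters):
        H, W = region.shape[:2]
        if H < 2 or W < 2:
            break
        quads = [region[:H//2, :W//2], region[:H//2, W//2:],
                 region[H//2:, :W//2], region[H//2:, W//2:]]
        grays = [q.mean(axis=2) for q in quads]
        scores = [g.mean() - g.std() for g in grays]
        region = quads[int(np.argmax(scores))]
    pixels = region.reshape(-1, 3)
    d = np.linalg.norm(pixels - 255.0, axis=1)   # distance to pure white
    return pixels[int(np.argmin(d))]
```

Scoring by mean minus standard deviation favors bright, flat regions (typically sky) over bright but textured ones, which is why the search converges on a plausible atmospheric-light patch.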
Specifically, constructing a deep convolutional neural network to process the base layer image Z_b and obtain the transmittance image t comprises the following steps:
down-sampling the base layer image Z_b with the first convolution layers to obtain a low-resolution base layer image Z_b′ and then extracting low-level features, where the formula of the first convolution layers is:
F_c^i[Z_b′](x′, y′) = σ( b_c^i + Σ_{c′} Σ_{x,y} w_{c,c′}^i(x − s·x′, y − s·y′) · F_{c′}^{i−1}[Z_b′](x, y) ),
where F_c^i[Z_b′] is the low-level feature of the i-th convolution layer under channel index c of the low-resolution base layer image Z_b′;
x is the abscissa of the base image Z_b;
y is the ordinate of the base image Z_b;
x′ is the abscissa of the low-resolution base layer image Z_b′;
y′ is the ordinate of the low-resolution base layer image Z_b′;
σ(·) = max(·, 0) denotes the ReLU activation function, and zero padding is used as the boundary condition for all convolution layers;
s is the stride of the convolution kernels;
inputting the obtained low-level features into a second convolution layer with n_L = 2 layers to extract a local feature L, where the convolution kernels of the second convolution layer are of size 3 × 3 with stride 1;
inputting the low-level features obtained from the second convolution layer into a third convolution layer with n_G1 layers, 3 × 3 kernels, and stride 2, and then into n_G2 = 2 fully connected layers, to obtain a global feature G;
adding the local feature L and the global feature G and feeding the sum into an activation function to obtain the hybrid feature map F_L = σ(L + G) corresponding to the low-level features;
convolving the hybrid feature map F_L = σ(L + G) to obtain a preliminary atmospheric refractive index feature map corresponding to the low-level features, then up-sampling it and outputting the transmittance image t(x, y).
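The strided convolution with ReLU and the local/global fusion F_L = σ(L + G) described above can be sketched directly in NumPy. This is a naive reference implementation of the two formulas (explicit loops, padding left to the caller), not the trained network of the patent; all shapes and parameter values are placeholders.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def strided_conv(F_in, w, b, s):
    """One strided convolution layer following the patent's formula:
    F_c[x', y'] = sigma(b_c + sum over c', dx, dy of
                        w_{c,c'}[dx, dy] * F_in_{c'}[s*x' + dx, s*y' + dy]).
    Shapes: F_in (C_in, H, W), w (C_out, C_in, k, k), b (C_out,)."""
    C_out, C_in, k, _ = w.shape
    _, H, W = F_in.shape
    Ho, Wo = (H - k) // s + 1, (W - k) // s + 1
    out = np.empty((C_out, Ho, Wo))
    for c in range(C_out):
        for i in range(Ho):
            for j in range(Wo):
                patch = F_in[:, i*s:i*s+k, j*s:j*s+k]
                out[c, i, j] = b[c] + np.sum(w[c] * patch)
    return relu(out)

def fuse(L, G):
    """Hybrid feature map F_L = sigma(L + G): a local feature map L of
    shape (C, H, W) plus a global feature vector G of shape (C,),
    broadcast over spatial positions, then passed through ReLU."""
    return relu(L + G[:, None, None])
```

A stride of s = 2 halves the spatial resolution per layer, which is how the base layer is reduced to Z_b′ before the local and global branches split.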
Specifically, based on the atmospheric scattering model, the original foggy image Z is restored by using the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e, and the formula for obtaining the defogged image J is:
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y),
where J(x, y) is the pixel gray value at point (x, y) in the defogged image J, Z_e(x, y) is the pixel gray value at point (x, y) in the detail layer image Z_e, Z_b(x, y) is the pixel gray value at point (x, y) in the base layer image Z_b, and t(x, y) is the output transmittance image.
In this step, according to the atmospheric scattering model Z(x, y) = t(x, y) J(x, y) + A_c (1 − t(x, y)), where J(x, y) is the defogged image and (x, y) are the spatial coordinates of a pixel point, the foggy image is restored, and the restored image can be expressed as
J(x, y) = (Z(x, y) − A_c) / max(t(x, y), t0) + A_c,
where t0 is a parameter that ensures the processing effect in dense-fog regions. Since Z(x, y) = J(x, y) + n(x, y), where J is a noiseless image and n is noise, dividing by the transmittance amplifies the noise. Considering that the noise is mainly contained in the detail layer image Z_e, the present invention instead restores the image as
J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), 1/η) + A_c + Z_e(x, y),
where the atmospheric light component A_c is obtained by step 2.1 and the transmittance t by step 2.2, and the clamp max(t(x, y), 1/η) reduces the effect of noise on the restored image. η is a constant; in this embodiment η = 6. Experiments show that, with t(x, y) ∈ [0, 1], clamping the transmittance for pixel points in sky regions where t(x, y) < 1/η prevents the noise of the sky region from being amplified.
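The recovery formula with the clamped transmittance max(t, 1/η) is a one-step array operation. The sketch below assumes A_c is a per-channel vector and that the base layer, detail layer, and transmittance have already been computed; η = 6 follows the embodiment above.

```python
import numpy as np

def recover(Zb, Ze, A, t, eta=6.0):
    """Restore the haze-free image from the base layer only, then add the
    detail layer back, per the recovery formula
        J = (Z_b - A_c) / max(t, 1/eta) + A_c + Z_e.
    Clamping t at 1/eta keeps sky-region noise from being amplified.
    Zb, Ze: (H, W, 3); A: (3,); t: (H, W, 1) broadcastable."""
    t_clamped = np.maximum(t, 1.0 / eta)
    return (Zb - A) / t_clamped + A + Ze
```

Because only the base layer is divided by the transmittance, the noise concentrated in the detail layer Z_e is passed through unamplified and simply added back at the end.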
specifically, the pair of mixed feature maps FLObtaining the feature of the lower layer by convolution processing with sigma (L + G)And then, performing up-sampling on the atmospheric refractive index characteristic diagram, and outputting a transmittance image t, wherein the method further comprises the following steps:
the loss function is constructed as follows:
L = L_r + w_c L_c,
where L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as L_r = Σ_{x,y} Σ_c (J_c(x, y) − Z_c(x, y))², and L_c is expressed as L_c = Σ_{x,y} ∠(J(x, y), Z(x, y));
J is the defogged image, Z is the foggy image, and c ∈ {r, g, b} is the channel index; ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel point (x, y). Although the reconstruction error measures the similarity between the original image and the defogged image, it cannot guarantee that the angles of the color vectors remain consistent, so a color error function is added to keep the color-vector angle of the same pixel point consistent before and after defogging.
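A hedged sketch of the loss L = L_r + w_c·L_c between two images: a mean squared reconstruction error plus the mean per-pixel angle between 3-D color vectors. The normalization (mean versus sum) and the value of w_c are assumptions here, since the text does not fix them.

```python
import numpy as np

def reconstruction_loss(J, J_ref):
    """Mean squared error over all pixels and channels (sketch of L_r)."""
    return np.mean((J - J_ref) ** 2)

def color_angle_loss(J, J_ref, eps=1e-8):
    """Mean angle between the 3-D color vectors of the two images at each
    pixel (sketch of L_c). Keeping this angle small preserves hue even
    when brightness changes."""
    a = J.reshape(-1, 3)
    b = J_ref.reshape(-1, 3)
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))

def total_loss(J, J_ref, wc=0.5):
    """Combined loss L = L_r + w_c * L_c; wc is an illustrative weight."""
    return reconstruction_loss(J, J_ref) + wc * color_angle_loss(J, J_ref)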
The image defogging method and the image defogging device based on the combination of the prior knowledge and the deep learning have the advantages that the weighted guide filter is used for decomposing the foggy image to obtain a detail layer image and a basic layer image, then a quad-tree search method and a deep neural network are used for processing the basic layer image to obtain a global atmosphere light component image and a transmissivity image, and finally the global atmosphere light component image, the transmissivity image and the detail layer image are used for restoring the foggy image to obtain the defogged image. Because the global atmosphere light component image and the transmissivity image are estimated or calculated by utilizing the base layer image, the amplification of image noise is avoided, and the defogging treatment can be better carried out on the foggy image. And calculating a loss function by using the defogged image to perform feedback parameter adjustment, and adding color loss into the loss function to improve the robustness of the algorithm.
In another embodiment of the invention, an image defogging device based on combination of a priori knowledge and deep learning comprises a memory and a processor. The memory is used for storing the computer program. The processor is configured to implement the image defogging method based on the combination of the priori knowledge and the deep learning when the computer program is executed.
The reader should understand that in the description of this specification, reference to the description of the terms "one embodiment," "some embodiments," "an example," "a specific example" or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Those skilled in the art will appreciate that various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined. Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.
Claims (7)
1. An image defogging method based on combination of priori knowledge and deep learning is characterized by comprising the following steps:
decomposing the input original foggy image Z by using a weighted guided filter to obtain a detail layer image Z_e and a base layer image Z_b;
processing the base layer image Z_b by using a quadtree search method to obtain a global atmospheric light component image A_c, where c ∈ {r, g, b};
constructing a deep convolutional neural network to process the base layer image Z_b and obtain a transmittance image t;
based on an atmospheric scattering model, restoring the original foggy image Z by using the global atmospheric light component image A_c, the transmittance image t, and the detail layer image Z_e to obtain a defogged image J.
2. The image defogging method based on the combination of priori knowledge and deep learning according to claim 1, wherein decomposing the input original foggy image Z by using a weighted guided filter to obtain the base layer image Z_b specifically comprises the following steps:
acquiring a pixel gray value Z (x, y) at any point (x, y) in the original foggy image Z;
obtaining the weighted filter coefficients a_p(x, y) and b_p(x, y) at point (x, y) in the original foggy image Z by using the following formulas:
a_p(x, y) = σ²_{Z,ρ}(x, y) / (σ²_{Z,ρ}(x, y) + λ / Γ_Y(x, y)),
b_p(x, y) = (1 − a_p(x, y)) μ_{Z,ρ}(x, y);
where μ_{Z,ρ}(x, y) is the mean of the pixel gray values in the window centered at point (x, y) with radius ρ in the original foggy image Z; σ²_{Z,ρ}(x, y) is the variance of the pixel gray values in that window; Γ_Y(x, y) is the luminance component of the original foggy image Z at point (x, y), with Y = max{Z_r, Z_g, Z_b}, where Z_r, Z_g, Z_b are the R, G, B values at point (x, y) in the foggy image Z; and λ is a constant greater than 1;
obtaining the base layer image Z_b by using the following formula:
Z_b(x, y) = a_p(x, y) Z(x, y) + b_p(x, y);
where Z_b(x, y) is the pixel gray value of the base layer image Z_b at point (x, y).
3. The image defogging method based on the combination of priori knowledge and deep learning according to claim 1, wherein processing the base layer image Z_b by using a quadtree search method to obtain the global atmospheric light component image A_c specifically comprises the following steps:
step A: equally divide the base layer image Z_b into four rectangular regions Z_b-i (i ∈ {1, 2, 3, 4}), each with length and width 1/2 of those of Z_b;
step B: define the score of each rectangular region as the mean of the pixel gray values in the region minus their standard deviation;
step C: select the rectangular region with the highest score and further divide it into four rectangular regions;
step D: repeat steps B and C, iteratively updating the highest-scoring rectangular region n times to obtain the final subdivided region Z_b-end;
obtaining the atmospheric light component image A_c by using the following formula:
|A_c − (255, 255, 255)| = min |(Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) − (255, 255, 255)|, c ∈ {r, g, b},
where (Z_b-end-r(x, y), Z_b-end-g(x, y), Z_b-end-b(x, y)) is the color vector at point (x, y) in the final subdivided region Z_b-end, and (255, 255, 255) is the pure-white vector.
4. The image defogging method based on the combination of priori knowledge and deep learning according to claim 1, wherein constructing a deep convolutional neural network to process the base layer image Z_b and obtain the transmittance image t specifically comprises the following steps:
down-sampling the base layer image Z_b with the first convolution layers to obtain a low-resolution base layer image Z_b′ and then extracting low-level features, where the formula of the first convolution layers is:
F_c^i[Z_b′](x′, y′) = σ( b_c^i + Σ_{c′} Σ_{x,y} w_{c,c′}^i(x − s·x′, y − s·y′) · F_{c′}^{i−1}[Z_b′](x, y) ),
where F_c^i[Z_b′] is the low-level feature of the i-th convolution layer among the first convolution layers under channel index c of the low-resolution base layer image Z_b′; x and y are the abscissa and ordinate of the base image Z_b; x′ and y′ are the abscissa and ordinate of the low-resolution base layer image Z_b′; w_{c,c′}^i is the convolution weight array of the i-th convolution layer among the first convolution layers under channel indices c and c′; b_c^i is the bias vector of the i-th convolution layer among the first convolution layers under channel index c; σ(·) = max(·, 0) denotes the ReLU activation function, and zero padding is used as the boundary condition for all of the first convolution layers; s is the stride of the convolution kernels of the first convolution layers;
inputting the obtained low-level features into a second convolutional layer with n_L = 2 layers to extract the local features L, wherein the convolution kernels in the second convolutional layer are of size 3 × 3 with stride 1;
inputting the obtained low-level features into a third convolutional layer with n_G1 layers, convolution kernel size 3 × 3 and stride 2, and then into n_G2 = 2 fully connected layers to obtain the global features G;
adding the local features L and the global features G and inputting the sum into an activation function to obtain the mixed feature map F_L = σ(L + G) corresponding to the low-level features;
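The fusion step F_L = σ(L + G) amounts to broadcasting the global feature vector G over every spatial position of the local feature map L, then applying ReLU. A minimal sketch (function name is illustrative):

```python
import numpy as np

def fuse_local_global(local_feat, global_feat):
    """F_L = sigma(L + G): add a global feature vector G of shape (C,)
    to a local feature map L of shape (H, W, C) at every pixel,
    then apply the ReLU activation max(., 0)."""
    return np.maximum(local_feat + global_feat, 0.0)
```

Because G has no spatial dimensions, the addition injects the same scene-level context (e.g. overall haze density) into every location of the local feature map.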
5. The image defogging method based on the combination of prior knowledge and deep learning as claimed in claim 1, wherein, based on the atmospheric scattering model, the original foggy image Z is restored by using the global atmospheric light component image A_c, the transmittance image t and the detail layer image Z_e, and the formula for obtaining the defogged image J is:

J(x, y) = (Z_b(x, y) − A_c) / max(t(x, y), η) + A_c + Z_e(x, y),

where J(x, y) is the gray value of the pixel at point (x, y) in the defogged image J; Z_e(x, y) is the gray value of the pixel at point (x, y) in the detail layer image Z_e; Z_b(x, y) is the gray value of the pixel at point (x, y) in the base layer image Z_b; t(x, y) is the transmittance at point (x, y) of the transmittance image t, clamped from below as max(t(x, y), η), where η is a constant greater than zero.
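A minimal sketch of this recovery step, assuming the arrangement implied by the symbol definitions: the haze model is inverted on the base layer using A_c and the clamped transmittance, and the detail layer is then added back. Function and variable names are illustrative.

```python
import numpy as np

def recover(Zb, Ze, t, A, eta=0.1):
    """Recover the defogged image from base layer Zb (H x W x 3),
    detail layer Ze (H x W x 3), transmittance t (H x W) and
    atmospheric light A (3,).

    J = (Zb - A) / max(t, eta) + A + Ze
    The transmittance is clamped below by eta > 0 so the division
    cannot blow up where t is near zero (dense haze).
    """
    t_clamped = np.maximum(t, eta)[..., None]  # broadcast over channels
    return (Zb - A) / t_clamped + A + Ze
```

With a synthetic hazy base layer Zb = B·t + A·(1 − t), this inversion returns the clean base B exactly, which is the round-trip property the atmospheric scattering model guarantees.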
6. The image defogging method based on the combination of prior knowledge and deep learning as claimed in claim 4, wherein after performing convolution processing on the mixed feature map F_L = σ(L + G) to obtain the atmospheric refractive index feature map, up-sampling it, and outputting the transmittance image t, the method further comprises the following steps:
the loss function is constructed as follows:
L = L_r + w_c · L_c,
where L_r is the reconstruction loss function, L_c is the color loss function, and w_c is the weight assigned to the color loss function L_c; L_r is expressed as L_r = Σ_c Σ_{(x, y)} (J_c(x, y) − Z_c(x, y))², and L_c is expressed as L_c = Σ_{(x, y)} ∠(J(x, y), Z(x, y));
where J is the defogged image, Z is the foggy image, c ∈ {R, G, B} is the channel index, and ∠(J(x, y), Z(x, y)) denotes the angle between the three-dimensional color vectors of the foggy image and the defogged image at pixel (x, y);
and adjusting the parameters of the deep convolutional neural network by using the loss function.
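The combined loss above can be sketched as follows. The reductions (means rather than sums) and the numerical safeguards are assumptions not fixed by the claim; the angular color term follows directly from the ∠ definition.

```python
import numpy as np

def total_loss(J, Z, wc=0.5):
    """Combined loss L = L_r + w_c * L_c for images in [0, 1].

    L_r: mean squared reconstruction error over all pixels and channels.
    L_c: mean angle (radians) between the 3-D color vectors of J and Z
         at each pixel, penalizing color shifts independently of
         brightness.
    """
    lr = np.mean((J - Z) ** 2)
    eps = 1e-8  # avoids division by zero for black pixels
    dot = np.sum(J * Z, axis=2)
    norms = np.linalg.norm(J, axis=2) * np.linalg.norm(Z, axis=2) + eps
    cos = np.clip(dot / norms, -1.0, 1.0)
    lc = np.mean(np.arccos(cos))
    return lr + wc * lc
```

The angular term is scale-invariant: uniformly brightening a pixel leaves ∠(J, Z) unchanged, so L_c targets hue/saturation drift that a pure MSE term can trade away.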
7. An image defogging device based on the combination of prior knowledge and deep learning, characterized by comprising a memory and a processor;
the memory for storing a computer program;
the processor, when executing the computer program, is configured to implement the image defogging method based on the combination of prior knowledge and deep learning according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911226437.6A CN111161159B (en) | 2019-12-04 | 2019-12-04 | Image defogging method and device based on combination of priori knowledge and deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111161159A true CN111161159A (en) | 2020-05-15 |
CN111161159B CN111161159B (en) | 2023-04-18 |
Family
ID=70556359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911226437.6A Active CN111161159B (en) | 2019-12-04 | 2019-12-04 | Image defogging method and device based on combination of priori knowledge and deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111161159B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160292824A1 (en) * | 2013-04-12 | 2016-10-06 | Agency For Science, Technology And Research | Method and System for Processing an Input Image |
US20170109870A1 (en) * | 2015-10-16 | 2017-04-20 | Sogang University Research Foundation | Image processing device |
CN107749052A (en) * | 2017-10-24 | 2018-03-02 | 中国科学院长春光学精密机械与物理研究所 | Image defogging method and system based on deep learning neutral net |
US20180122051A1 (en) * | 2015-03-30 | 2018-05-03 | Agency For Science, Technology And Research | Method and device for image haze removal |
2019-12-04: Application CN201911226437.6A filed in China; patent granted as CN111161159B, status Active.
Non-Patent Citations (3)
Title |
---|
Yu Chunyan; Lin Huixiang; Xu Xiaodan; Ye Xinyan: "Parameter Estimation of the Foggy-Weather Degradation Model and CUDA Design" *
Xie Wei; Zhou Yuqin; You Min: "Improved Guided Filtering Fusing Gradient Information" *
Chen Yong; Guo Hongguang; Ai Yapeng: "Multi-Scale Deep Learning for Single Image Dehazing Based on Dual-Domain Decomposition" *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111861939A (en) * | 2020-07-30 | 2020-10-30 | 四川大学 | Single image defogging method based on unsupervised learning |
CN111932365A (en) * | 2020-08-11 | 2020-11-13 | 武汉谦屹达管理咨询有限公司 | Financial credit investigation system and method based on block chain |
CN111932365B (en) * | 2020-08-11 | 2021-09-10 | 上海华瑞银行股份有限公司 | Financial credit investigation system and method based on block chain |
CN114331874A (en) * | 2021-12-07 | 2022-04-12 | 西安邮电大学 | Unmanned aerial vehicle aerial image defogging method and device based on residual detail enhancement |
Also Published As
Publication number | Publication date |
---|---|
CN111161159B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112233038B (en) | True image denoising method based on multi-scale fusion and edge enhancement | |
CN111915530B (en) | End-to-end-based haze concentration self-adaptive neural network image defogging method | |
CN111161159B (en) | Image defogging method and device based on combination of priori knowledge and deep learning | |
Li et al. | Multi-scale single image dehazing using Laplacian and Gaussian pyramids | |
CN110796009A (en) | Method and system for detecting marine vessel based on multi-scale convolution neural network model | |
Kapoor et al. | Fog removal in images using improved dark channel prior and contrast limited adaptive histogram equalization | |
CN110136075B (en) | Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle | |
Liu et al. | Image de-hazing from the perspective of noise filtering | |
CN110675340A (en) | Single image defogging method and medium based on improved non-local prior | |
CN107292830A (en) | Low-light (level) image enhaucament and evaluation method | |
CN113284061B (en) | Underwater image enhancement method based on gradient network | |
Guo et al. | Joint raindrop and haze removal from a single image | |
Qian et al. | CIASM-Net: a novel convolutional neural network for dehazing image | |
CN114627269A (en) | Virtual reality security protection monitoring platform based on degree of depth learning target detection | |
CN115937019A (en) | Non-uniform defogging method combining LSD (local Scale decomposition) quadratic segmentation and deep learning | |
CN112164010A (en) | Multi-scale fusion convolution neural network image defogging method | |
CN115187474A (en) | Inference-based two-stage dense fog image defogging method | |
Zhou et al. | Sparse representation with enhanced nonlocal self-similarity for image denoising | |
CN113421210B (en) | Surface point Yun Chong construction method based on binocular stereoscopic vision | |
CN115035011A (en) | Low-illumination image enhancement method for self-adaptive RetinexNet under fusion strategy | |
Li et al. | DLT-Net: deep learning transmittance network for single image haze removal | |
CN113962889A (en) | Thin cloud removing method, device, equipment and medium for remote sensing image | |
CN117078553A (en) | Image defogging method based on multi-scale deep learning | |
Zhang et al. | A compensation textures dehazing method for water alike area | |
CN114140361A (en) | Generation type anti-network image defogging method fusing multi-stage features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||