CN113240588A - Image defogging and exposure method based on enhanced atmospheric scattering model - Google Patents


Info

Publication number: CN113240588A
Authority: CN (China)
Legal status: Granted; Active
Application number: CN202110044013.9A
Other languages: Chinese (zh)
Other versions: CN113240588B
Inventors: 鞠铭烨, 王欣, 刘菊萍, 张登银
Original and current assignee: Nanjing University of Posts and Telecommunications
Application filed by Nanjing University of Posts and Telecommunications
Priority application: CN202110044013.9A
Publications: CN113240588A (application), CN113240588B (grant)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image


Abstract

The invention discloses an image defogging and exposure method based on an enhanced atmospheric scattering model, belonging to the technical field of image processing. The method provides an image defogging system based on the enhanced atmospheric scattering model, which comprises the following modules: an image input module, for inputting an original image; an image processing module, for defogging the original image; and an image output module, for outputting the defogged image corresponding to the original image. The method first inputs an original image through the image input module; then defogs the original image through the image processing module; and finally outputs the defogged image corresponding to the original image through the image output module. The method has better modeling capability and higher performance in terms of defogging efficiency and recovery quality.

Description

Image defogging and exposure method based on enhanced atmospheric scattering model
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to an image defogging and exposure method based on an enhanced atmospheric scattering model.
Background
Haze is a common natural phenomenon in the real world. For information acquisition tasks such as image capture, haze severely reduces the contrast between the captured image information and the real scene information and alters the inherent colors of the image. Beyond haze, any weather that degrades the quality of the acquired image reduces this contrast, especially when outdoor scenes are shot in more severe weather. A low-contrast image captured under such conditions (hereinafter called a foggy image) usually does not carry enough information to guarantee the normal operation of a vision system, so recovering the foggy image to ensure reliable extraction of image information is of great importance.
The most intuitive way to restore a foggy image is the traditional enhancement approach, which restores the image by increasing local or global contrast; traditional enhancement methods include Retinex, histogram equalization, and the like. However, these methods ignore the physical degradation process of the foggy image, so the visual quality of their results is limited. To make up for this shortcoming, image defogging methods based on the Atmospheric Scattering Model (ASM) have been proposed; they can produce high-quality results by making full use of additional information. The ASM is widely used to describe the degradation process of a single foggy image, and is mathematically modeled as follows:
I(x,y)=A·ρ(x,y)·t(x,y)+A·(1-t(x,y))
In the above formula, (x, y) represents the position coordinates in the foggy image I, A represents the atmospheric light value, ρ(x, y) represents the scene reflectivity in a fog-free scene, and t(x, y) represents the atmospheric transmittance. When the particles suspended in the atmosphere are uniformly distributed in space, t(x, y) = e^(−β·d(x, y)), where β represents the scattering coefficient (also called the fog concentration influence coefficient) and d(x, y) represents the scene depth.
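As a concrete illustration of the ASM above, the following sketch synthesizes a foggy image from a fog-free image and a depth map (a minimal sketch assuming NumPy arrays with values in [0, 1]; the function name and default parameters are illustrative, not taken from the patent):

```python
import numpy as np

def synthesize_haze(J, depth, A=0.9, beta=1.0):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    where J = A*rho is the fog-free radiance and t = exp(-beta*d).

    J     : fog-free image, shape (H, W) or (H, W, 3), values in [0, 1]
    depth : scene depth d(x, y), shape (H, W)
    A     : atmospheric light value
    beta  : scattering coefficient (fog concentration influence coefficient)
    """
    t = np.exp(-beta * depth)          # atmospheric transmittance t(x, y)
    if J.ndim == 3:                    # broadcast t over the color channels
        t = t[..., None]
    return J * t + A * (1.0 - t)
```

With zero depth, t = 1 everywhere and the image is unchanged; as depth grows, every pixel tends toward the atmospheric light A, which is exactly the contrast loss described above.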
The above image defogging methods based on the atmospheric scattering model include dark channel methods, haze-line methods, and the like. Unfortunately, these methods require high costs and other additional preconditions, which limits their usefulness in many applications.
Single-image defogging has made important progress in the past decades. With continuous innovation in information technology, current image defogging methods are mainly classified into prior-based techniques and learning-based techniques. The core idea of prior-based methods is to estimate the imaging parameters using latent prior knowledge so as to reduce the uncertainty of the scene depth, and then recover a fog-free result with an Atmospheric Scattering Model (ASM) based method. The HL (Haze-Lines) algorithm uses haze lines to determine the atmospheric light and the atmospheric transmittance, thereby achieving image defogging. The IDGCP algorithm (Image Defogging via Gamma Correction Prior) uses a gamma correction prior to synthesize a virtual transformation of the foggy image, and then performs global defogging by extracting the scene depth from the transformation and the foggy image. Images processed by these methods are clear with little loss of detail, but if the parameters estimated during defogging are inaccurate, the results are unsatisfactory.
The core idea of learning-based methods is to use a neural network to extract features of the foggy image, without a hand-designed feature extractor. Inspired by well-known image priors, the DehazeNet algorithm builds a system on a convolutional neural network and achieves end-to-end image defogging. The Multi-Scale Convolutional Neural Network (MSCNN) algorithm establishes a convolutional neural network (CNN) that improves image recovery performance by learning more features. Foggy images processed by these methods show good results, but the methods place high demands on the processing power and memory of the computing platform and are costly.
Therefore, it is important to propose an image defogging technology with higher robustness to eliminate the adverse effect of the low-contrast image and reconstruct the low-contrast image blur information.
Disclosure of Invention
The purpose of the invention is as follows: to overcome defects in the prior art such as unsatisfactory defogging results (for example, images that remain dim after defogging) and the large computational overhead of defogging algorithms, the invention provides an image defogging and exposure method based on an enhanced atmospheric scattering model.
The technical scheme is as follows: to achieve the above object, the image defogging and exposure method based on the enhanced atmospheric scattering model provides an image defogging system based on the enhanced atmospheric scattering model, which comprises the following modules:
an image input module: used for reading an original image I;
an image processing module: used for defogging the original image I;
an image output module: used for outputting the defogged image I′ corresponding to the original image I;
the method comprises the following steps:
s1, firstly, reading an original image I through an image input module;
s2, defogging the original image I through the image processing module;
s3 finally, the image output module outputs the defogged image I' corresponding to the original image I.
Further, the original image I is not limited to only a foggy image.
Further, the image processing module includes an EASM module that provides an EASM model whose expression is as follows:
I(x,y) = A·(1−α(x,y))·ρ(x,y)·t(x,y) + A·(1−t(x,y))    formula (1)
In the above formula (1), (x,y) refers to the position coordinates of a point in the original image I, A refers to the atmospheric light value, ρ(x,y) refers to the scene reflectivity of the point with position coordinates (x,y) in the fog-free scene, t(x,y) refers to the atmospheric transmittance of the point with position coordinates (x,y), and α(x,y) refers to the light absorption coefficient of the point with position coordinates (x,y).
Further,
let
[the substitution formula appears as an image in the original document]
where t_min is the minimum value of t(x,y);
then formula (1) becomes formula (2):
[formula (2) appears as an image in the original document]
In formula (2), (x,y) refers to the position coordinates of a point in the original image I, A refers to the atmospheric light value of the original image I, ρ(x,y) refers to the scene reflectivity of the point with position coordinates (x,y) in the fog-free scene, t(x,y) refers to the atmospheric transmittance of the point with position coordinates (x,y), α(x,y) refers to the light absorption coefficient of the point with position coordinates (x,y), and t_min is the minimum value of t(x,y).
Further,
formula (2) is converted to formula (3):
[formula (3) appears as an image in the original document]
In formula (3), Ā is the average of the atmospheric light values of the original image I over the R, G, B color channels, calculated by formula (4):
Ā = (A_R + A_G + A_B) / 3    formula (4)
In formula (4), c is the color channel index, c ∈ {R, G, B} refers to the three color channels R, G, B, A_R is the atmospheric light value of the original image I on the R color channel, A_G is the atmospheric light value of the original image I on the G color channel, and A_B is the atmospheric light value of the original image I on the B color channel;
in formula (3), (x,y) refers to the position coordinates of a point in the original image I, and Ī(x,y) is the local mean of the point with position coordinates (x,y) over the R, G, B color channels, calculated by formula (5):
Ī(x,y) = (1 / (3·|Ω(x,y)|)) · Σ_{c∈{R,G,B}} Σ_{(x′,y′)∈Ω(x,y)} I_c(x′,y′)    formula (5)
In formula (5), Ω(x,y) is the set of points in the neighborhood centered on coordinates (x,y), |Ω(x,y)| is the total number of points in the set Ω(x,y), c is the color channel index, c ∈ {R, G, B} refers to the three color channels R, G, B, and I_R(x′,y′), I_G(x′,y′), I_B(x′,y′) with (x′,y′) ∈ Ω(x,y) refer to the component values of the point with position coordinates (x′,y′) on the R, G and B color channels respectively; these component values are obtained directly through software;
in formula (3), ρ̄(x,y) is the average of the scene reflectivities of the point with position coordinates (x,y) over the R, G, B color channels in the fog-free scene, calculated by formula (6):
ρ̄(x,y) = (1 / (3·|Ω(x,y)|)) · Σ_{c∈{R,G,B}} Σ_{(x′,y′)∈Ω(x,y)} ρ_c(x′,y′)    formula (6)
In formula (6), Ω(x,y) is the set of points in the neighborhood centered on coordinates (x,y), |Ω(x,y)| is the total number of points in the set Ω(x,y), c is the color channel index, c ∈ {R, G, B} refers to the three color channels R, G, B, and ρ_R(x′,y′), ρ_G(x′,y′), ρ_B(x′,y′) with (x′,y′) ∈ Ω(x,y) refer to the component values of the point with position coordinates (x′,y′) on the R, G and B color channels in the fog-free scene; these component values are obtained directly through software.
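Formulas (4) and (5) amount to simple averages; a minimal NumPy sketch (the function names, the window radius r, and the border clipping are illustrative assumptions, not from the patent):

```python
import numpy as np

def atmospheric_mean(A_R, A_G, A_B):
    """Formula (4): average atmospheric light over the R, G, B channels."""
    return (A_R + A_G + A_B) / 3.0

def local_mean(I, x, y, r=1):
    """Formula (5): mean of image I over the neighborhood Omega(x, y)
    (a (2r+1) x (2r+1) window, clipped at the borders) and the 3 channels.
    I has shape (H, W, 3)."""
    H, W, _ = I.shape
    patch = I[max(0, x - r):min(H, x + r + 1),
              max(0, y - r):min(W, y + r + 1), :]
    return float(patch.mean())   # 1/(3*|Omega|) * sum over points and channels
```

Replacing the image I by the reflectivity ρ gives formula (6) unchanged.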
Further,
let
[the substitution appears as an image in the original document]
and fit the logarithmic function with a rational function according to formula (7):
[formula (7) appears as an image in the original document]
In formula (7), p1 and p2 are parameters of the rational function;
according to the substitution above and formula (7), formula (3) is converted to formula (8):
[formula (8) appears as an image in the original document]
Formula (8) is then rearranged into formula (9):
λ1·(t(x,y))² + λ2(x,y)·t(x,y) + λ3(x,y) = 0    formula (9)
In formula (9), the coefficients λ1, λ2(x,y) and λ3(x,y) are defined as in the original document (their definitions appear as an image);
solving formula (9) for t(x,y), t(x,y) is then expressed as formula (10):
t(x,y) = (−λ2(x,y) + √(λ2(x,y)² − 4·λ1·λ3(x,y))) / (2·λ1)    formula (10)
(the root lying in (0, 1] is taken);
then, using a guided filter, formula (10) is converted to formula (11):
t(x,y) = F( (−λ2(x,y) + √(λ2(x,y)² − 4·λ1·λ3(x,y))) / (2·λ1) )    formula (11)
In formula (11), F(·) represents the guided filter operator;
formula (11) is abbreviated as formula (12):
t(x,y) = Φ(Ā, Ī(x,y); p1, p2)    formula (12)
In formula (12), Ā is the average of the atmospheric light values of the original image I over the R, G, B color channels, calculated by formula (4); Ī(x,y) is the local mean of the point with position coordinates (x,y) over the R, G, B color channels, calculated by formula (5); p1 and p2 are both obtained by the fit in formula (7); Φ(·) is the calculation formula for the atmospheric transmittance of the point with position coordinates (x,y);
combining formula (12) with formula (2) yields the scene reflectivity recovery formula for the point with position coordinates (x,y) in the fog-free scene, see formula (13):
ρ(x,y) = SF(I(x,y); A, Ā, Ī(x,y), p1, p2, t_min)    formula (13)
In formula (13), SF(·) is the scene reflectivity recovery formula and A is the atmospheric light value of the original image I; Ā is the average of the atmospheric light values of the original image I over the R, G, B color channels, calculated by formula (4); Ī(x,y) is the local mean of the point with position coordinates (x,y) over the R, G, B color channels, calculated by formula (5); p1 and p2 are both obtained by the fit in formula (7).
Further,
the image processing module also comprises an IDE module, which provides a global stretching strategy and computes t_min by the golden-section method, a one-dimensional search algorithm, see formula (14):
t_min = argmin_{t_min} | Ψ( SF( (I)↓δ; A, Ā, (Ī)↓δ, p1, p2, t_min ) ) − ε |    formula (14)
In formula (14), argmin{·} is the minimizing function; Ψ(·) is the saturation operator function; ε is the preset saturation of the defogging result; SF(·) is the scene reflectivity recovery formula; A is the atmospheric light value of the original image I; ↓δ is the downsampling operator with coefficient δ, and (I)↓δ and (Ī)↓δ denote downsampling the corresponding image or matrix in brackets to 1/δ of its original resolution; Ā is the average of the atmospheric light values of the original image I over the R, G, B color channels, calculated by formula (4); p1 and p2 are both obtained by the fit in formula (7).
Further, t_min obtained from formula (14) is substituted into formula (13) to obtain ρ(x,y), the mathematical model corresponding to the defogged image I′; the defogged image I′ is ρ(x,y).
Further, δ = 4 and ε = 0.02.
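The golden-section method named in formula (14) is a standard one-dimensional search; the following generic sketch minimizes any unimodal objective (the bracket, tolerance, and the quadratic test objective are illustrative; the patent's actual objective is the saturation gap |Ψ(SF(...)) − ε|):

```python
import math

def golden_section_min(f, lo, hi, tol=1e-5):
    """Minimize a unimodal function f on [lo, hi] by golden-section search."""
    g = (math.sqrt(5.0) - 1.0) / 2.0     # inverse golden ratio, ~0.618
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                      # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - g * (b - a)
            fc = f(c)
        else:                            # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + g * (b - a)
            fd = f(d)
    return (a + b) / 2.0
```

For formula (14), f(t_min) would evaluate the saturation gap on the δ-times downsampled image, so each of the few dozen iterations is cheap, consistent with the low running-time cost claimed for the IDE module.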
Further,
S2 includes the following steps:
S21, firstly, the image processing module receives the original image I;
S22, the EASM module performs a mathematical transformation on the original image I to obtain the corresponding mathematical transformation formula, and then performs mathematical processing on that formula;
S23, then the IDE module, which provides the global stretching strategy based on the golden-section method in a one-dimensional search algorithm, determines t_min;
S24, finally, the defogged image I′ corresponding to the mathematically processed model is output to the image output module for the subsequent output step.
Has the advantages that: compared with the prior art, the invention has the following advantages:
1. In the prior art, the defogging effect is often not obvious and the image remains dim after defogging; the method effectively remedies these defects, so that the defogged image is closer to the real information, more accurate, and better defogged;
2. The prior art incurs a high running-time cost for image defogging, leading to large operating overhead; the method greatly reduces the time cost and improves defogging efficiency;
3. The method introduces the light absorption coefficient α into the traditional atmospheric scattering model (ASM) to obtain the enhanced atmospheric scattering model (EASM). The EASM overcomes defects of the traditional ASM, such as dim images and poor visual effect after defogging, simulates foggy scenes better, and makes the defogged image conform better to the real scene information; the EASM has better modeling capability for foggy images, and the method achieves higher performance in both processing efficiency and recovery quality;
4. The method provides the IDE module, which, based on a global stretching strategy, can simultaneously defog and expose a single foggy image without post-processing, effectively reducing the running-time cost and improving the processing efficiency for foggy images. The IDE module needs no training process and no extra information related to scene depth, and the tools and calculation formulas it uses are simple to operate, ensuring the efficiency and strong robustness of the method; the IDE module also improves the visibility of the hazy image.
Drawings
FIG. 1 is a diagram of a system architecture provided by the method of the present invention.
FIG. 2 is a flow chart of the method steps of the present invention.
Fig. 3 is an original image I in an embodiment of the method of the invention.
Fig. 4 is a dark channel image corresponding to the original image I of fig. 3.
Fig. 5 is a diagram of the defogging effect obtained after the defogging treatment of fig. 3 by using the HL method.
Fig. 6 is a diagram of the defogging effect obtained after the defogging treatment of fig. 3 by using the IDGCP method.
Fig. 7 is a graph showing the defogging effect obtained after the defogging treatment of fig. 3 by the DehazeNet method.
Fig. 8 is a graph of the defogging effect obtained after the defogging process of fig. 3 is performed by the MSCNN method.
FIG. 9 is a graph showing the defogging effect obtained after the defogging treatment of FIG. 3 by the method of the present invention.
FIG. 10 is a quantitative visual comparison of the defogging effects of five methods (HL, IDGCP, DehazeNet, MSCNN, and the method of the invention) after processing the image in FIG. 3.
FIG. 11 is a schematic diagram of component values of points in an image on an R color channel, a G color channel, and a B color channel, respectively, obtained by MATLAB software.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
The first embodiment is as follows:
an enhanced atmospheric scattering model-based image defogging and exposure method of the embodiment provides an enhanced atmospheric scattering model-based image defogging system, and referring to fig. 1, the system comprises the following modules:
an image input module: used for reading an original image I, where the original image I refers to image information acquired from an outdoor scene under haze or worse severe weather; it is not limited to a foggy image;
an image processing module: the image processing device is used for carrying out defogging treatment on the original image I to obtain a defogged image I';
an image output module: the defogged image I' corresponding to the original image I is output;
therefore, based on the above enhanced atmospheric scattering model image defogging system, the method of the embodiment includes the following steps: firstly, reading an original image I through an image input module; then, the image input module sends the input original image I to the image processing module, and the image processing module carries out defogging processing on the original image I; and finally, the image processing module sends the defogged image I 'to the image output module, and the image output module outputs the defogged image I' corresponding to the original image I.
Example two:
in this embodiment, based on the first embodiment, the image processing module includes an EASM module, and the EASM module is configured to perform mathematical processing transformation on an original image I to obtain a defogged image I' corresponding to the original image I.
The EASM module provides an EASM model, namely an Enhanced Atmospheric Scattering Model (EASM) obtained by improving the traditional Atmospheric Scattering Model (ASM). The traditional ASM is as follows:
I(x,y) = A·ρ(x,y)·t(x,y) + A·(1−t(x,y))
In the above formula, (x,y) represents the spatial position coordinates of the original image I, A represents the atmospheric light value, ρ(x,y) represents the scene reflectivity of the point with position coordinates (x,y) in the fog-free scene, and t(x,y) represents the atmospheric transmittance of the point with position coordinates (x,y). When the particles suspended in the atmosphere are uniformly distributed in space, t(x,y) = e^(−β·d(x,y)), where β represents the scattering coefficient (also referred to as the fog concentration influence coefficient) and d(x,y) represents the scene depth of the point with position coordinates (x,y).
The model expression of the enhanced atmospheric scattering model EASM is given in formula (1):
I(x,y) = A·(1−α(x,y))·ρ(x,y)·t(x,y) + A·(1−t(x,y))    formula (1)
The model expression (1) of the EASM is obtained by adding the light absorption coefficient α(x,y) to the atmospheric scattering model ASM. In formula (1), (x,y) denotes the spatial position coordinates of the original image I, α(x,y) denotes the light absorption coefficient of the point with position coordinates (x,y), A denotes the atmospheric light value, ρ(x,y) denotes the scene reflectivity of the point with position coordinates (x,y) in the fog-free scene, and t(x,y) denotes the atmospheric transmittance of the point with position coordinates (x,y); when the particles suspended in the atmosphere are uniformly distributed in space, t(x,y) = e^(−β·d(x,y)), where β denotes the scattering coefficient (also referred to as the fog concentration influence coefficient) and d(x,y) denotes the scene depth of the point with position coordinates (x,y).
When the image processing module receives the original image I, firstly, the original image I is subjected to mathematical replacement through the enhanced atmospheric scattering model EASM to obtain a mathematical conversion formula corresponding to the original image I, and then, the mathematical conversion formula is further processed to obtain a mathematical model corresponding to the defogged image of the original image I, so that the defogged image of the original image I is obtained.
Example three:
in the embodiment, an image defogging and exposure method based on an enhanced atmospheric scattering model is provided, which comprises the following steps of based on the second embodiment,
the model expression of the enhanced atmospheric scattering model EASM refers to equation (1):
I(x,y) = A·(1−α(x,y))·ρ(x,y)·t(x,y) + A·(1−t(x,y))    formula (1)
In the above formula (1), (x,y) refers to the spatial position coordinates of the original image I, A refers to the atmospheric light value, α(x,y) refers to the light absorption coefficient of the point with position coordinates (x,y), ρ(x,y) refers to the scene reflectivity of the point with position coordinates (x,y) in the fog-free scene, and t(x,y) refers to the atmospheric transmittance of the point with position coordinates (x,y);
let
[the substitution formula appears as an image in the original document]
where t_min is the minimum value of t(x,y); t(x,y) represents the atmospheric transmittance of the point with position coordinates (x,y), and when the particles suspended in the atmosphere are uniformly distributed in space, t(x,y) = e^(−β·d(x,y)), where β represents the scattering coefficient and d(x,y) represents the scene depth of the point with position coordinates (x,y); hence t_min = e^(−β·max(d));
according to the above substitution, formula (1) is converted to formula (2):
[formula (2) appears as an image in the original document]
In formula (2), (x,y) refers to the spatial position coordinates of the original image I, A refers to the atmospheric light value, ρ(x,y) refers to the scene reflectivity of the point with position coordinates (x,y) in the fog-free scene, t(x,y) refers to the atmospheric transmittance of the point with position coordinates (x,y), and t_min is the minimum value of t(x,y);
In the above formula (2), the atmospheric light value A of the original image I is obtained as follows: the original image I is processed by the He method to obtain the dark channel image corresponding to the original image I, and the dark channel image is then processed to obtain the atmospheric light value;
the He method is adopted because, from statistics over a large number of fog-free images, it was found that in local regions of the non-sky part of a natural image, some pixel points have a very low brightness value (close to 0) in at least one color channel. The expression of the known foggy-image ASM model is: I(x,y) = A·ρ(x,y)·t(x,y) + A·(1−t(x,y)); the product of the atmospheric light value A and the scene reflectivity ρ(x,y) in the fog-free scene is denoted J(x,y), and J(x,y) = A·ρ(x,y) represents the fog-free image. Accordingly, the dark channel prior model assumption is obtained, namely that for any natural fog-free image the dark channel satisfies:
J^dark(x,y) = min_{(x′,y′)∈Ω(x,y)} ( min_{c∈{R,G,B}} J_c(x′,y′) ) ≈ 0
where Ω(x,y) is the set of pixel points centered at coordinates (x,y), c is the color channel index, and J_R, J_G, J_B refer to the three color channels of the fog-free image expressed in RGB. Under the dark channel prior assumption, the dark channel pixel values of a fog-free image are usually small, essentially approaching 0, whereas the dark channel pixel values of a foggy image are essentially close to the fog density values, which makes it easier to calculate the atmospheric light value.
As described above, the atmospheric light value a of the original image I is obtained by: firstly, obtaining a dark channel image of an original image I according to a He method; then, the dark channel image is divided into pixel points, and each pixel point has a corresponding imageThe prime value; then arranging all pixel points in the dark channel image according to the sequence of pixel values from large to small; finally, selecting pixel points with pixel values in a certain proportion in front, setting n selected pixel points, corresponding the spatial position coordinates of the pixel points in the dark channel image to the original image I to obtain the pixel points of the corresponding spatial position coordinates in the original image I, and setting the pixel values of the pixel points in R, G, B three color channels as XR、Xc、XBThen the atmospheric light value of the original image I in the R color channel
Figure BDA0002896462240000102
Atmospheric light value of original image I in G color channel
Figure BDA0002896462240000103
Figure BDA0002896462240000104
Atmospheric light value of original image I in B color channel
Figure BDA0002896462240000105
Thus obtaining the atmospheric light value of the original image I
A = (AR, AG, AB)
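The atmospheric-light estimation described above (dark channel image, sorting of pixel values, averaging the top-n pixel points per channel) can be sketched as follows. The patch size and the selection ratio are assumed values, since the patent only speaks of "a certain proportion" of pixel points.

```python
import numpy as np

def dark_channel(img, patch=15):
    """He's dark channel: per-pixel min over R, G, B, then a min filter over a patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

def atmospheric_light(img, patch=15, ratio=0.001):
    """Average the RGB values of the pixels with the largest dark-channel values."""
    dc = dark_channel(img, patch)
    n = max(1, int(dc.size * ratio))              # n selected pixel points
    idx = np.argsort(dc.ravel())[-n:]             # brightest dark-channel positions
    return img.reshape(-1, 3)[idx].mean(axis=0)   # A = (AR, AG, AB)
```

The nested-loop min filter is written for clarity, not speed; production code would use an erosion operator.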
When the image processing module receives the original image I, firstly, the original image I is subjected to mathematical replacement through the formula (2) to obtain a mathematical conversion formula corresponding to the original image I, and then, the mathematical conversion formula is further processed to obtain a mathematical model corresponding to the image of the original image I after defogging, so that the defogged image of the original image I is obtained.
Example four
In this embodiment, based on the third embodiment, formula (2) is converted to formula (3):
Figure BDA0002896462240000111
in the above-mentioned formula (3),
Figure BDA0002896462240000112
is the average of the atmospheric light values of the original image I over the R, G, B three color channels, i.e. the mean of AR, AG and AB;
Figure BDA0002896462240000113
calculated by equation (4):
Ā = (1/3)·Σ_{c ∈ {R, G, B}} Ac = (AR + AG + AB)/3    formula (4)
in the above formula (4), c is the color channel index, c ∈ {R, G, B} means the three R, G, B color channels are selected, AR is the atmospheric light value of the original image I on the R color channel, AG is the atmospheric light value of the original image I on the G color channel, and AB is the atmospheric light value of the original image I on the B color channel;
according to the method, firstly, the dark channel image of the original image I is obtained by the He method; then the dark channel image is divided into pixel points, each pixel point having a corresponding pixel value; next, all pixel points in the dark channel image are arranged in order of pixel value from large to small, a certain proportion of the pixel points with the largest pixel values are selected (let n pixel points be selected), and their spatial positions in the dark channel image are mapped onto the original image I; finally, the pixel points at the corresponding spatial positions in the original image I are obtained, whose pixel values in the R, G, B color channels are denoted XR, XG, XB respectively; then the atmospheric light value of the original image I in the R color channel is
Figure BDA0002896462240000115
the atmospheric light value of the original image I in the G color channel
Figure BDA0002896462240000116
Atmospheric light value of original image I in B color channel
Figure BDA0002896462240000117
Figure BDA0002896462240000118
Thus obtaining
Figure BDA0002896462240000119
Namely, it is
Figure BDA00028964622400001110
Figure BDA00028964622400001111
When an original image I is input into the image processing module, the module outputs the atmospheric light values of the original image I on the three color channels and then averages them to obtain
Figure BDA00028964622400001112
In the above formula (3), (x, y) refers to the position coordinates of the point in the original image I,
Figure BDA00028964622400001113
refers to the local mean of a point with position coordinates (x, y) on R, G, B three color channels,
Figure BDA00028964622400001114
calculated by the formula (5),
Ī(x, y) = (1/(3·|Ω(x, y)|))·Σ_{c ∈ {R, G, B}} Σ_{(x', y') ∈ Ω(x, y)} Ic(x', y')    formula (5)
in the above formula (5), Ω(x, y) is the set of points in the neighborhood centered on coordinates (x, y), |Ω(x, y)| is the total number of points in the set Ω(x, y), c is the color channel index, c ∈ {R, G, B} means the three R, G, B color channels are selected, and IR(x', y'), IG(x', y'), IB(x', y'), with (x', y') ∈ Ω(x, y), are the component values of the point with position coordinates (x', y') on the R, G and B color channels respectively; these component values are obtained directly through software, specifically MATLAB,
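The local mean of formula (5) — averaging I over the three color channels and over the neighborhood Ω(x, y) — can be sketched directly; the neighborhood radius is an assumed parameter, since the patent does not fix the size of Ω.

```python
import numpy as np

def local_mean(img, radius=1):
    """Formula (5): average the three RGB components of I over the neighborhood
    Omega(x, y) centered at each pixel (radius is an assumed choice)."""
    chan_mean = img.mean(axis=2)                  # average over c in {R, G, B}
    pad = np.pad(chan_mean, radius, mode='edge')  # edge-replicate at the borders
    k = 2 * radius + 1
    out = np.empty(chan_mean.shape)
    for y in range(chan_mean.shape[0]):
        for x in range(chan_mean.shape[1]):
            out[y, x] = pad[y:y + k, x:x + k].mean()
    return out
```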
referring to fig. 11: fig. 11 shows the component values of the points of an image on the R, G and B color channels, respectively, as obtained by MATLAB; fig. 11(a) is an original image, fig. 11(b) shows its component values on the R color channel, fig. 11(c) those on the G color channel, and fig. 11(d) those on the B color channel.
In the above formula (3), (x, y) refers to the position coordinates of the point in the original image I,
Figure BDA0002896462240000121
is the average of the scene reflectivity over the R, G, B color channels of the point with position coordinates (x, y) in the fog-free scene, calculated by formula (6):
ρ̄(x, y) = (1/(3·|Ω(x, y)|))·Σ_{c ∈ {R, G, B}} Σ_{(x', y') ∈ Ω(x, y)} ρc(x', y')    formula (6)
in the above formula (6), Ω(x, y) is the set of points in the neighborhood centered on coordinates (x, y), |Ω(x, y)| is the total number of points in the set Ω(x, y), c is the color channel index, c ∈ {R, G, B} means the three R, G, B color channels are selected, and ρR(x', y'), ρG(x', y'), ρB(x', y'), with (x', y') ∈ Ω(x, y), are the component values in the fog-free scene of the point with position coordinates (x', y') on the R, G and B color channels respectively; these component values are obtained directly through software, specifically MATLAB.
By MATLAB simulation calculation, the following is obtained:
Figure BDA0002896462240000123
In this embodiment, let
Figure BDA0002896462240000124
Fitting a logarithmic function with a rational function according to equation (7):
Figure BDA0002896462240000131
in the above formula (7), p1 and p2 are parameters of the rational function and are obtained by MATLAB simulation fitting;
substituting
Figure BDA0002896462240000132
and formula (7) into formula (3) converts it to formula (8):
Figure BDA0002896462240000133
the variables of equation (8) have the same meaning as the variables of equation (3);
continuing to convert equation (8) to equation (9):
λ1·(t(x, y))^2 + λ2(x, y)·t(x, y) + λ3(x, y) = 0    equation (9)
In the above-mentioned formula (9),
Figure BDA0002896462240000134
then, t (x, y) is solved according to equation (9), and t (x, y) is expressed by equation (10):
Figure BDA0002896462240000135
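Since equation (9) is a per-pixel quadratic in t(x, y), equation (10) amounts to picking its physically valid root. The closed forms of λ1, λ2(x, y), λ3(x, y) are given by the patent's equation images; the sketch below only shows the root selection, with the in-range test and the lower clamp t_floor as assumptions.

```python
import numpy as np

def solve_transmission(lam1, lam2, lam3, t_floor=0.05):
    """Solve lam1*t^2 + lam2*t + lam3 = 0 (equation (9)) per pixel and keep the
    root lying in (0, 1]; t_floor is an assumed lower clamp on transmittance."""
    disc = np.sqrt(np.maximum(lam2 ** 2 - 4.0 * lam1 * lam3, 0.0))
    r1 = (-lam2 + disc) / (2.0 * lam1)
    r2 = (-lam2 - disc) / (2.0 * lam1)
    t = np.where((r1 > 0.0) & (r1 <= 1.0), r1, r2)  # prefer the in-range root
    return np.clip(t, t_floor, 1.0)
```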
to avoid assuming the atmospheric reflectivity is constant, which can cause image depth discontinuities, a guided filter is employed to convert formula (10) to formula (11):
Figure BDA0002896462240000136
in the above equation (11), F(·) represents the guided filter operator;
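F(·) is the guided filter of He et al.; a compact sketch using a naive box mean is given below. The window radius r and the regularizer eps are assumed values not taken from the patent.

```python
import numpy as np

def box_mean(a, r):
    """Naive box mean over a (2r+1)x(2r+1) window with edge padding."""
    pad = np.pad(a, r, mode='edge')
    k = 2 * r + 1
    out = np.empty(a.shape, dtype=float)
    for y in range(a.shape[0]):
        for x in range(a.shape[1]):
            out[y, x] = pad[y:y + k, x:x + k].mean()
    return out

def guided_filter(guide, src, r=8, eps=1e-3):
    """Guided filter F(.): smooths src while preserving the edges of guide."""
    mg, ms = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mg * ms      # local covariance of guide and src
    var = box_mean(guide * guide, r) - mg ** 2    # local variance of guide
    a = cov / (var + eps)
    b = ms - a * mg
    return box_mean(a, r) * guide + box_mean(b, r)
```

In practice the box means would be computed with integral images, giving O(1) cost per pixel.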
equation (11) is simplified to equation (12):
Figure BDA0002896462240000137
in the above equation (12), φ(·) is the calculation formula of the atmospheric transmittance of the point with position coordinates (x, y); it is a function of the five variables ln(tmin),
Figure BDA0002896462240000138
p1 and p2, where
Figure BDA0002896462240000139
is the average of the atmospheric light values of the original image I over the R, G, B three color channels,
Figure BDA00028964622400001310
calculating by formula (4);
Figure BDA00028964622400001311
is the local mean of a point with position coordinates (x, y) on R, G, B three color channels,
Figure BDA00028964622400001312
calculated by formula (5); p1 and p2 are obtained by fitting formula (7); φ(·) is the calculation formula of the atmospheric transmittance of the point with position coordinates (x, y);
then, combining the formula (12) and the formula (2), obtaining a scene reflectivity recovery formula of a point with the position coordinate (x, y) under the fog-free scene, wherein the recovery formula of the scene reflectivity ρ (x, y) is used for scene defogging and exposure, and the recovery formula of the scene reflectivity ρ (x, y) refers to the formula (13):
Figure BDA0002896462240000141
in the above formula (13), SF(·) is the scene reflectivity recovery formula (SARF) for defogging and exposure; it is a function of the 7 variables A, I, ln(tmin),
Figure BDA0002896462240000142
p1 and p2:
where A refers to the atmospheric light value of the original image I, obtained as follows: firstly, the dark channel image of the original image I is obtained by the He method; then the dark channel image is divided into pixel points, each pixel point having a corresponding pixel value; next, all pixel points in the dark channel image are arranged in order of pixel value from large to small; finally, a certain proportion of the pixel points with the largest pixel values are selected (let n pixel points be selected), and their spatial position coordinates in the dark channel image are mapped onto the original image I to obtain the pixel points at the corresponding spatial position coordinates in the original image I, whose pixel values in the R, G, B color channels are denoted XR, XG, XB respectively; then the atmospheric light value of the original image I in the R color channel is
Figure BDA0002896462240000143
Atmospheric light value of original image I in G color channel
Figure BDA0002896462240000144
Atmospheric light value of original image I in B color channel
Figure BDA0002896462240000145
Thus obtaining the atmospheric light value of the original image I
Figure BDA0002896462240000146
Figure BDA0002896462240000147
Can be obtained by calculation of formula (4);
original image I: an original image I of size i × j is input; the original image I is mathematically transformed in the image processing module to obtain the matrix of the original image I,
Figure BDA0002896462240000148
where [R_i,j, G_i,j, B_i,j] are the component values of the point with position coordinates (i, j) in the original image I on the R, G, B three color channels; the component values of the points in the original image I on the three color channels can be obtained directly through software, in this embodiment MATLAB;
Figure BDA0002896462240000149
can be obtained by calculation of formula (5);
parameter p1And p2Can be obtained by fitting of equation (7).
The EASM module of this embodiment processes the original image I through the above steps: when the image processing module receives the original image I, it performs the mathematical substitution on the original image I to obtain the mathematical model corresponding to the defogged image, I' = ρ(x, y), thereby obtaining the defogged image of the original image I.
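The exact SF(·) of formula (13) is defined by the patent's equation images; as a structural stand-in, the sketch below inverts the plain ASM expression quoted earlier (I(x, y) = A·ρ(x, y)·t(x, y) + A·(1 − t(x, y))) for ρ, with the transmittance floor as an assumed safeguard against division by near-zero values.

```python
import numpy as np

def recover_reflectance(I, A, t, t_floor=0.05):
    """Invert the plain ASM I = A*rho*t + A*(1 - t) for the scene reflectance rho.
    The patent's SF(.) of formula (13) is the EASM refinement of this step,
    with the extra variables ln(t_min), p1, p2 entering through phi(.)."""
    t = np.clip(t, t_floor, 1.0)[..., None]   # broadcast over the RGB channels
    rho = (I - A * (1.0 - t)) / (A * t)
    return np.clip(rho, 0.0, 1.0)
```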
Example five:
based on the fourth embodiment, the image processing module further includes an IDE module, configured to further process the output of the EASM model to obtain the mathematical model corresponding to the defogged image of the original image I, thereby obtaining the defogged image of the original image I. The gray world algorithm has a limitation: when a local color block contains sky or a monochromatic part, its average reflectivity differs from the value 0.5 set in the equation, and in that case tmin is estimated erroneously. To obtain a globally optimal result that resists over-enhancement and over-saturation, a global stretching strategy using the golden-section method of a one-dimensional search algorithm is designed; it determines tmin from the information of the entire image rather than from individual pixels. Thus the IDE module provides a global stretching strategy based on the golden-section method in a one-dimensional search algorithm to determine tmin.
The IDE module determines tmin through the global stretching strategy of the golden-section method in a one-dimensional search algorithm, see formula (14):
Figure BDA0002896462240000151
in the above formula (14), argmin{·} is the minimum function; Ψ(·) is the saturation operator function; ε is the preset saturation of the defogging result; SF(·) is the scene reflectivity recovery formula;
a refers to the atmospheric light value of the original image I;
Figure BDA0002896462240000153
is the average of the atmospheric light values of the original image I over R, G, B three color channels,
Figure BDA0002896462240000154
calculating by formula (4);
↓δ is the down-sampling operator with coefficient δ; (I)↓δ means down-sampling the bracketed image or matrix I to 1/δ of its original resolution;
Original image I: if the size of the original image I is i × j, the original image I is mathematically transformed in the image processing module to obtain the matrix of the original image I,
Figure BDA0002896462240000152
where [R_i,j, G_i,j, B_i,j] are the component values of the point with position coordinates (i, j) in the original image I on the R, G, B three color channels; the component values of the points in the original image I on the three color channels can be obtained directly through software, in this embodiment MATLAB;
Figure BDA0002896462240000161
refers to down-sampling the bracketed image or matrix
Figure BDA0002896462240000162
to 1/δ of its original resolution,
Figure BDA0002896462240000163
Figure BDA0002896462240000164
wherein
Figure BDA0002896462240000165
refers to the local mean of the point with position coordinates (i, j) in the original image I over the R, G, B three color channels,
Figure BDA0002896462240000166
calculated by formula (5);
p1 and p2 are obtained by fitting formula (7);
after the IDE module obtains tmin by search, tmin is substituted into
Figure BDA0002896462240000167
Figure BDA0002896462240000168
then the mathematical model corresponding to the defogged image I' is obtained, with I' = ρ(x, y).
In this embodiment, ε is the preset saturation of the defogging result; 2% of the pixel values are clipped in the shadow and highlight portions, i.e. ε is set to 0.02. Let δ = 4; then ↓δ is the down-sampling operator with coefficient 4, and (I)↓δ down-samples the bracketed image or matrix to 1/4 of its original resolution;
Figure BDA0002896462240000169
refers to down-sampling the bracketed image or matrix to 1/4 of its original resolution.
The IDE module of this embodiment processes the original image I through the above steps: the tmin calculation formula (14) is combined with the scene reflectivity ρ(x, y) recovery formula (13) to obtain the mathematical model corresponding to the defogged image I', I' = ρ(x, y); the defogged image of the original image I is thus obtained and output by the image output module.
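The one-dimensional golden-section search at the heart of the IDE module's global stretching strategy can be sketched as follows. In real use the objective would be the saturation criterion of formula (14), evaluated on the 1/δ down-sampled image; the quadratic objective in the test is only a stand-in to demonstrate the search.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-3):
    """One-dimensional golden-section search for the minimizer of f on [lo, hi]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c = b - inv_phi * (b - a)   # interior probe near a
        d = a + inv_phi * (b - a)   # interior probe near b
        if f(c) < f(d):
            b = d                   # minimum lies in [a, d]
        else:
            a = c                   # minimum lies in [c, b]
    return (a + b) / 2.0
```

Each iteration shrinks the bracket by the golden ratio, so the search over tmin needs only a few dozen evaluations of the (down-sampled) defogging result.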
Example six:
in this embodiment, based on the fifth embodiment and referring to fig. 2, the image defogging and exposure method based on the enhanced atmospheric scattering model specifically comprises the following steps:
firstly, the original image I is input through the image input module; then the image processing module defogs the original image I; finally, the defogged image I' corresponding to the original image I is output through the image output module. Upon receiving the original image I, the EASM module in the image processing module performs the mathematical transformation on it, and the IDE module then provides the global stretching strategy of the golden-section method in a one-dimensional search algorithm to determine tmin; finally, the mathematically processed model is taken as the mathematical model of the defogged image.
The method comprises the following steps:
step 1, setting an original image I, wherein the original image I refers to image information obtained by outdoor scene information acquisition under a haze weather condition or a worse severe weather condition, and the original image I is not limited to a foggy image;
step 2, inputting the original image I into an image processing module, and firstly, carrying out mathematical transformation on the original image I by an EASM module, specifically:
the original image I is represented as:
I(x, y) = A·(1 − α(x, y))·ρ(x, y)·t(x, y) + A·(1 − t(x, y))    formula (1)
In the above formula (1), (x, y) denotes the spatial position coordinates in the original image I, α(x, y) denotes the light absorption coefficient of the point with position coordinates (x, y), A denotes the atmospheric light value, ρ(x, y) denotes the scene reflectance of the point with position coordinates (x, y) in a fog-free scene, and t(x, y) denotes the atmospheric transmittance of the point with position coordinates (x, y); when the particles suspended in the atmosphere are uniformly distributed over space, t(x, y) = e^(−β·d(x, y)), where β denotes the scattering coefficient (also called the fog concentration influence coefficient) and d(x, y) denotes the scene depth of the point with position coordinates (x, y).
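Formula (1) can be exercised as a forward model, synthesizing a hazy observation from known scene quantities under the homogeneous-haze assumption t = e^(−β·d); all parameter values below are illustrative, not from the patent.

```python
import numpy as np

def easm_forward(rho, alpha, A, beta, depth):
    """Formula (1): I = A*(1 - alpha)*rho*t + A*(1 - t), with t = exp(-beta*d)
    for homogeneously distributed haze particles.
    rho: HxWx3 scene reflectance; alpha, depth: HxW; A: length-3 atmospheric light."""
    t = np.exp(-beta * depth)[..., None]          # atmospheric transmittance
    return A * (1.0 - alpha)[..., None] * rho * t + A * (1.0 - t)
```

Setting α(x, y) = 0 reduces formula (1) to the classical ASM, which is the consistency check used below.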
Let the above-mentioned
Figure BDA0002896462240000171
where tmin is the minimum value of t(x, y), and t(x, y) denotes the atmospheric transmittance of the point with position coordinates (x, y) in the original image I; when the particles suspended in the atmosphere are uniformly distributed over space, t(x, y) = e^(−β·d(x, y)), where β denotes the scattering coefficient and d(x, y) denotes the scene depth of the point with position coordinates (x, y), so tmin = e^(−β·max(d)).
According to the above
Figure BDA0002896462240000172
Converting equation (1) to equation (2):
Figure BDA0002896462240000173
in the above formula (2), (x, y) refers to the spatial position coordinates in the original image I, A refers to the atmospheric light value, ρ(x, y) refers to the scene reflectivity of the point with position coordinates (x, y) in the fog-free scene, t(x, y) refers to the atmospheric transmittance of the point with position coordinates (x, y), and tmin is the minimum value of t(x, y);
step 3, in the formula (2), the atmospheric light value a of the original image I is obtained by the following method: calculating the original image I by a He method to obtain a dark channel image corresponding to the original image I, and then processing the dark channel image to obtain an atmospheric light value;
the He method is adopted because He, after statistics over a large number of fog-free images, found that for some pixel points in local areas of the non-sky part of a natural image, the brightness value of at least one color channel is very low (close to 0); the expression of the known hazy-image ASM model is I(x, y) = A·ρ(x, y)·t(x, y) + A·(1 − t(x, y)); denoting the product of the atmospheric light value A and the scene reflectance ρ(x, y) in a fog-free scene by J(x, y), J(x, y) = A·ρ(x, y) represents the fog-free image; accordingly, the dark channel prior model assumption is obtained, namely that for any natural fog-free image the dark channel satisfies:
J^dark(x, y) = min_{(x', y') ∈ Ω(x, y)} ( min_{c ∈ {R, G, B}} J^c(x', y') ) ≈ 0
wherein Ω(x, y) is the set of pixels in the neighborhood centered on coordinates (x, y), c is the color channel index, and JR, JG, JB are the three color channels of the fog-free image expressed in RGB; under the dark channel prior model assumption, the dark channel pixel values of a fog-free image are usually small, substantially approaching 0, whereas the dark channel pixel values of a foggy image are substantially close to the fog density values, which makes it easier to calculate the atmospheric light value.
As described above, the atmospheric light value A of the original image I is obtained as follows: firstly, the dark channel image of the original image I is obtained by the He method; then the dark channel image is divided into pixel points, each pixel point having a corresponding pixel value; next, all pixel points in the dark channel image are arranged in order of pixel value from large to small; finally, a certain proportion of the pixel points with the largest pixel values are selected (let n pixel points be selected), and their spatial position coordinates in the dark channel image are mapped onto the original image I to obtain the pixel points at the corresponding spatial position coordinates in the original image I, whose pixel values in the R, G, B color channels are denoted XR, XG, XB respectively; then the atmospheric light value of the original image I in the R color channel is
AR = (1/n)·Σ XR, the sum running over the n selected pixel points
Atmospheric light value of original image I in G color channel
AG = (1/n)·Σ XG
Atmospheric light value of original image I in B color channel
AB = (1/n)·Σ XB
Thus obtaining the atmospheric light value of the original image I
A = (AR, AG, AB)
Step 4, converting the formula (2) into a formula (3):
Figure BDA0002896462240000187
in the above-mentioned formula (3),
Figure BDA0002896462240000188
is the average of the atmospheric light values of the original image I over the R, G, B three color channels, i.e. the mean of AR, AG and AB;
in the step 5, the step of the method is that,
Figure BDA0002896462240000191
is the average of the atmospheric light values of the original image I over the three R, G, B color channels,
Figure BDA0002896462240000192
calculated by equation (4):
Ā = (1/3)·Σ_{c ∈ {R, G, B}} Ac = (AR + AG + AB)/3    formula (4)
in the above formula (4), c is the color channel index, c ∈ {R, G, B} means the three R, G, B color channels are selected, AR is the atmospheric light value of the original image I on the R color channel, AG is the atmospheric light value of the original image I on the G color channel, and AB is the atmospheric light value of the original image I on the B color channel;
according to the method, firstly, the dark channel image of the original image I is obtained by the He method; then the dark channel image is divided into pixel points, each pixel point having a corresponding pixel value; next, all pixel points in the dark channel image are arranged in order of pixel value from large to small, a certain proportion of the pixel points with the largest pixel values are selected (let n pixel points be selected), and their spatial positions in the dark channel image are mapped onto the original image I; finally, the pixel points at the corresponding spatial positions in the original image I are obtained, whose pixel values in the R, G, B color channels are denoted XR, XG, XB respectively; then the atmospheric light value of the original image I in the R color channel is
Figure BDA0002896462240000194
Atmospheric light value of original image I on G color channel
Figure BDA0002896462240000195
Atmospheric light value of original image I in B color channel
Figure BDA0002896462240000196
Figure BDA0002896462240000197
Thus obtaining
Figure BDA0002896462240000198
Namely, it is
Figure BDA0002896462240000199
Figure BDA00028964622400001910
Step 6, (x, y) refers to the position coordinates of the points in the original image I,
Figure BDA00028964622400001911
refers to the local mean of a point with position coordinates (x, y) on R, G, B three color channels,
Figure BDA00028964622400001912
calculated by the formula (5),
Ī(x, y) = (1/(3·|Ω(x, y)|))·Σ_{c ∈ {R, G, B}} Σ_{(x', y') ∈ Ω(x, y)} Ic(x', y')    formula (5)
in the above formula (5), Ω(x, y) is the set of points in the neighborhood centered on coordinates (x, y), |Ω(x, y)| is the total number of points in the set Ω(x, y), c is the color channel index, c ∈ {R, G, B} means the three R, G, B color channels are selected, and IR(x', y'), IG(x', y'), IB(x', y'), with (x', y') ∈ Ω(x, y), are the component values of the point with position coordinates (x', y') on the R, G and B color channels respectively; these component values are obtained directly through software, specifically MATLAB.
Step 7, in the above formula (3), (x, y) refers to the position coordinates of the midpoint in the original image I,
Figure BDA0002896462240000201
is the average of the scene reflectivity over the R, G, B color channels of the point with position coordinates (x, y) in the fog-free scene, calculated by formula (6):
ρ̄(x, y) = (1/(3·|Ω(x, y)|))·Σ_{c ∈ {R, G, B}} Σ_{(x', y') ∈ Ω(x, y)} ρc(x', y')    formula (6)
in the above formula (6), Ω(x, y) is the set of points in the neighborhood centered on coordinates (x, y), |Ω(x, y)| is the total number of points in the set Ω(x, y), c is the color channel index, c ∈ {R, G, B} means the three R, G, B color channels are selected, and ρR(x', y'), ρG(x', y'), ρB(x', y'), with (x', y') ∈ Ω(x, y), are the component values in the fog-free scene of the point with position coordinates (x', y') on the R, G and B color channels respectively; these component values are obtained directly through software, specifically MATLAB.
By MATLAB simulation calculation, the following is obtained:
Figure BDA0002896462240000203
In this embodiment, let
Figure BDA0002896462240000204
Step 8, fitting a logarithmic function by using a rational function according to the formula (7):
Figure BDA0002896462240000205
in the above formula (7), p1 and p2 are parameters of the rational function and are obtained by MATLAB simulation fitting;
step 9, substituting
Figure BDA0002896462240000206
and formula (7) into formula (3) converts formula (3) to formula (8):
Figure BDA0002896462240000207
the variables of equation (8) have the same meaning as the variables of equation (3);
continuing to convert equation (8) to equation (9):
λ1·(t(x, y))^2 + λ2(x, y)·t(x, y) + λ3(x, y) = 0    equation (9)
In the above-mentioned formula (9),
Figure BDA0002896462240000208
then, t (x, y) is solved according to equation (9), and t (x, y) is expressed by equation (10):
Figure BDA0002896462240000211
to avoid assuming that the atmospheric reflectivity is constant, resulting in image depth discontinuities in some cases, a guided filter is used, with which equation (10) can be converted to equation (11):
Figure BDA0002896462240000212
in the above equation (11), F(·) represents the guided filter operator;
equation (11) is simplified to equation (12):
Figure BDA0002896462240000213
in the formula (12), φ(·) is the calculation formula of the atmospheric transmittance of the point with position coordinates (x, y); it is a function of the five variables ln(tmin),
Figure BDA0002896462240000214
p1 and p2, where
Figure BDA0002896462240000215
is the average of the atmospheric light values of the original image I over R, G, B three color channels,
Figure BDA0002896462240000216
calculating by formula (4);
Figure BDA0002896462240000217
is the local mean of a point with position coordinates (x, y) on R, G, B three color channels,
Figure BDA0002896462240000218
calculated by formula (5); p1 and p2 are obtained by fitting formula (7); φ(·) is the calculation formula of the atmospheric transmittance of the point with position coordinates (x, y);
then, combining the formula (12) and the formula (2), obtaining a scene reflectivity recovery formula of a point with the position coordinate (x, y) under the fog-free scene, wherein the recovery formula of the scene reflectivity ρ (x, y) is used for scene defogging and exposure, and the recovery formula of the scene reflectivity ρ (x, y) refers to the formula (13):
Figure BDA0002896462240000219
in the above formula (13), SF(·) is the scene reflectivity recovery formula (SARF) for defogging and exposure; it is a function of the 7 variables A, I, ln(tmin),
Figure BDA00028964622400002110
p1 and p2:
where A refers to the atmospheric light value of the original image I, obtained as follows: firstly, the dark channel image of the original image I is obtained by the He method; then the dark channel image is divided into pixel points, each pixel point having a corresponding pixel value; next, all pixel points in the dark channel image are arranged in order of pixel value from large to small; finally, a certain proportion of the pixel points with the largest pixel values are selected (let n pixel points be selected), and their spatial position coordinates in the dark channel image are mapped onto the original image I to obtain the pixel points at the corresponding spatial position coordinates in the original image I, whose pixel values in the R, G, B color channels are denoted XR, XG, XB respectively; then the atmospheric light value of the original image I in the R color channel is
Figure BDA0002896462240000221
Atmospheric light value of original image I in G color channel
Figure BDA0002896462240000222
Atmospheric light value of original image I in B color channel
Figure BDA0002896462240000223
Thus obtaining the atmospheric light value of the original image I
Figure BDA0002896462240000224
Original image I: inputting an original image I, wherein the size of the original image I is I multiplied by j, performing mathematical transformation on the original image I in an image processing module to obtain a matrix of the original image I,
Figure BDA0002896462240000225
wherein [R_{i,j}, G_{i,j}, B_{i,j}] refers to the component values of the point with position coordinates (i, j) in the original image I on the R, G, B color channels, respectively; these component values can be obtained directly through software, in this embodiment through MATLAB software;
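The matrix form described above can be mimicked in a few lines of Python (a toy stand-in for the MATLAB step; the 2 × 2 pixel values are invented for illustration):

```python
def image_to_matrix(pixels, rows, cols):
    """Arrange a flat, row-major list of (R, G, B) pixel values into the
    i x j matrix form used above, so that I[i][j] == [R_ij, G_ij, B_ij]
    (0-based indices here, 1-based in the formulas)."""
    assert len(pixels) == rows * cols
    return [[list(pixels[r * cols + c]) for c in range(cols)]
            for r in range(rows)]

# A toy 2 x 2 "image"; in the embodiment these component values would
# be read from the original image I with MATLAB.
I = image_to_matrix([(255, 0, 0), (0, 255, 0),
                     (0, 0, 255), (255, 255, 255)], 2, 2)
```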
Figure BDA0002896462240000226
can be obtained by calculation of formula (4);
Ī(x, y), the local mean of the point with position coordinates (x, y) over the R, G, B color channels, can be obtained by calculation of formula (5);
the parameters p1 and p2 can be obtained by fitting formula (7);
The original image I is processed through the above method steps to obtain the mathematical model I' = ρ(x, y) of the corresponding defogged image.
Example seven:
In this embodiment, the image defogging and exposure method based on an enhanced atmospheric scattering model further includes, on the basis of the sixth embodiment and with reference to fig. 2, the following steps:
The image processing module also comprises an IDE module, which further processes the output of the EASM model to obtain the mathematical model of the defogged image corresponding to the original image I, and thus the defogged image itself. A limitation of the gray-world algorithm must be avoided: when a local color block contains sky or a monochromatic part, the average reflectivity of that block differs from the value 0.5 assumed in the equation, and t_min would then be estimated erroneously. To obtain a globally optimal result that resists over-enhancement and over-saturation, a global stretching strategy using the golden-section method in a one-dimensional search algorithm is designed; it determines t_min using information from the entire image rather than individual pixels. The IDE module therefore applies this golden-section global stretching strategy to determine t_min.
The IDE module determines t_min through the global stretching strategy of the golden-section method in a one-dimensional search algorithm; see formula (14):
[formula (14): equation image]
In the above formula (14), argmin{·} is the minimization function; Ψ(·) is a saturation operator function; ε is the preset saturation of the defogging result; SF(·) is the scene reflectivity recovery formula; A refers to the atmospheric light value of the original image I;
↓δ is the down-sampling operator with coefficient δ, and (I)↓δ means down-sampling the image or matrix in parentheses to 1/δ of its original resolution. An original image I is set as follows: if the size of the original image I is i × j, the original image I is mathematically transformed in the image processing module to obtain the matrix of the original image I,

I = [ [R_{1,1}, G_{1,1}, B_{1,1}]  …  [R_{1,j}, G_{1,j}, B_{1,j}] ;  …  ; [R_{i,1}, G_{i,1}, B_{i,1}]  …  [R_{i,j}, G_{i,j}, B_{i,j}] ]

wherein [R_{i,j}, G_{i,j}, B_{i,j}] refers to the component values of the point with position coordinates (i, j) in the original image I on the R, G, B color channels, respectively; these component values can be obtained directly through software, in this embodiment through MATLAB software;
(Ī)↓δ denotes down-sampling the matrix Ī of local means to 1/δ of the original resolution, where Ī(i, j) refers to the local mean of the point with position coordinates (i, j) in the original image I over the R, G, B color channels, calculated by formula (5);
Ā is the average of the atmospheric light values of the original image I over the R, G, B color channels, calculated by formula (4); p1 and p2 are both obtained by fitting formula (7);
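The saturation operator Ψ(·) is not spelled out in the text; one plausible reading, sketched here purely as an assumption, is the fraction of recovered pixel values clipped at the shadow or highlight end of the range, which the search of formula (14) would drive toward the preset ε:

```python
def saturation_fraction(vals, low=0.0, high=1.0):
    """Fraction of values clipped to the shadow (<= low) or highlight
    (>= high) end of the range; a hypothetical stand-in for the
    saturation operator Ψ(·)."""
    clipped = sum(1 for v in vals if v <= low or v >= high)
    return clipped / len(vals)
```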
After the IDE module obtains t_min by this search, it is combined with the scene reflectivity recovery formula (13) to obtain the mathematical model of the defogged image I', i.e. the defogged image I' = ρ(x, y).
In this embodiment, ε is the preset saturation of the defogging result: 2% of the pixel values are clipped in the shadow and highlight portions, i.e. ε is set to 0.02. Let δ = 4; then ↓δ is the down-sampling operator with coefficient 4, and (I)↓δ and (Ī)↓δ denote down-sampling the corresponding image or matrix in parentheses to 1/4 of the original resolution.
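The down-sampling operator ↓δ can be sketched as plain decimation (the patent does not fix the resampling kernel, so keeping every δ-th sample is an assumption):

```python
def downsample(mat, delta=4):
    """(M)↓δ: reduce a 2-D matrix to 1/δ of its resolution in each
    direction by keeping every δ-th row and column."""
    return [row[::delta] for row in mat[::delta]]

# An 8 x 8 matrix shrinks to 2 x 2 with δ = 4.
small = downsample([[r * 10 + c for c in range(8)] for r in range(8)])
```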
In this embodiment, p1 and p2 are both obtained by fitting formula (7): p1 = 0.397 and p2 = 0.07747;
In summary, t_min can be calculated by the IDE module in this embodiment.
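The golden-section one-dimensional search itself can be sketched as follows; the objective below is a made-up unimodal surrogate for the bracketed expression of formula (14), which in the real method is evaluated on the down-sampled image:

```python
import math

def golden_section_min(f, lo, hi, tol=1e-4):
    """Minimize a unimodal scalar function f on [lo, hi] with the
    golden-section method used in the one-dimensional search."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    a, b = lo, hi
    while b - a > tol:
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        if f(c) < f(d):
            b = d  # the minimum lies in [a, d]
        else:
            a = c  # the minimum lies in [c, b]
    return (a + b) / 2

# Toy objective with its minimum at t_min = 0.3; the true objective of
# formula (14) measures how far the saturation of the recovered scene
# is from the preset epsilon.
t_min = golden_section_min(lambda t: (t - 0.3) ** 2, 0.0, 1.0)
```

Each iteration shrinks the search interval by the golden ratio, so the number of objective evaluations grows only logarithmically with the required precision.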
Through the above steps, the IDE module of this embodiment processes the original image I: t_min is computed by formula (14) and substituted into the scene reflectivity recovery formula (13) for ρ(x, y), yielding the mathematical model of the defogged image I' = ρ(x, y); the defogged image of the original image I is thus obtained and output by the image output module.
Example eight:
An image defogging and exposure method based on the enhanced atmospheric scattering model of this embodiment is based on the seventh embodiment. The atmospheric light value A of the original image I is obtained by the following method: firstly, the dark channel image of the original image I is obtained according to the He method; then the dark channel image is divided into pixel points, each with a corresponding pixel value; next, all pixel points in the dark channel image are arranged in descending order of pixel value; finally, the pixel points whose values rank in a given top fraction are selected (in this embodiment, the top 0.1%), giving n selected points. Their spatial position coordinates in the dark channel image are mapped to the original image I, giving the pixel points at the corresponding spatial position coordinates in the original image I; let their pixel values in the R, G, B color channels be X^R, X^G and X^B. Then the atmospheric light values of the original image I in the R, G and B color channels are

A^R = (1/n)·Σ_{k=1..n} X_k^R,  A^G = (1/n)·Σ_{k=1..n} X_k^G,  A^B = (1/n)·Σ_{k=1..n} X_k^B,

thus obtaining the atmospheric light value A = [A^R, A^G, A^B] of the original image I, whose channel average Ā is calculated by formula (4):

Ā = (1/3)·Σ_{c∈{R,G,B}} A^c    formula (4)
In the above formula (4), c is the color channel index; c ∈ {R, G, B} means the R, G, B color channels are selected; A^R, A^G and A^B are the atmospheric light values of the original image I on the R, G and B color channels, respectively;
Following this method, A^R, A^G and A^B are obtained for the original image I as above, giving A = [A^R, A^G, A^B], and the channel average Ā = (1/3)·(A^R + A^G + A^B) is then computed by formula (4).
Example nine:
In an image defogging and exposure method based on an enhanced atmospheric scattering model according to the eighth embodiment, the original image I is shown in fig. 3, where figs. 3(a) and 3(b) are fog-free images and figs. 3(c), 3(d) and 3(e) are foggy images. As described above, the dark channel image of the original image I is computed according to the He method and the atmospheric light value is estimated through the dark channel prior model, so the He method is first used to process the images in fig. 3, giving the corresponding dark channel images in fig. 4; figs. 4(a) to 4(e) are the dark channel images of figs. 3(a) to 3(e), respectively. The dark channel images figs. 4(a) and 4(b) of the fog-free images are relatively dark, whereas the dark channel images figs. 4(c), 4(d) and 4(e) of the foggy images are brighter, their pixel values closely tracking the fog concentration.
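The observation above, that dark channels of fog-free images are dark while those of foggy images are bright, can be illustrated with a toy per-pixel channel minimum (the spatial minimum filter of the He method is omitted for brevity, and the pixel values are invented):

```python
def dark_channel_mean(img):
    """Mean of the per-pixel channel minimum of a nested-list image."""
    mins = [min(px) for row in img for px in row]
    return sum(mins) / len(mins)

clear = [[(0.9, 0.2, 0.1)] * 4] * 4   # saturated colors: dark channel near 0
hazy = [[(0.8, 0.75, 0.7)] * 4] * 4   # haze lifts all channels: bright dark channel
```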
In this embodiment, the HL method, the IDGCP method, the DehazeNet method, the MSCNN method and the method of the present invention (IDE) are each used to defog the original images I of fig. 3. Fig. 5 shows the defogging results of the HL method, fig. 6 those of the IDGCP method, fig. 7 those of the DehazeNet method, fig. 8 those of the MSCNN method, and fig. 9 those of the method of the present invention; in each of figs. 5 to 9, sub-figures (a) to (e) are the defogged images corresponding to figs. 3(a) to 3(e), respectively. It can be seen that the present invention defogs the images better than the other four methods, and has a smaller running cost.
The HL method, the IDGCP method, the DehazeNet method, the MSCNN method and the method of the invention (IDE) are compared quantitatively below.
Table 1 gives a quantitative comparison of the defogging results of the five methods (HL, IDGCP, DehazeNet, MSCNN and the inventive IDE) on fig. 3, using three common indicators: the mean ratio of gradients at visible edges (R), the Natural Image Quality Evaluator (NIQE) and the Fog Aware Density Evaluator (FADE). A larger R indicates that the image contains richer information; a smaller NIQE indicates a truer and more natural restored image; a smaller FADE indicates less residual haze in the defogged image. Comparing the defogging results with the other four methods thus shows that the inventive method performs better. A visual chart of the quantitative comparison in Table 1 is given in fig. 10: fig. 10(a) compares the running times of the five methods on fig. 3(e) at an image size of 400 × 400, and figs. 10(b), 10(c) and 10(d) compare the five methods on the R, NIQE and FADE indicators for fig. 3(e), respectively.
Tables 2 and 3 compare the times required by the five methods (HL, IDGCP, DehazeNet, MSCNN and the inventive IDE) to process fig. 3 at different resolutions and at the same resolution, respectively; Table 2 covers the HL and IDGCP methods, and Table 3 covers the DehazeNet and MSCNN methods and the method of the invention. Compared with the other four methods, the defogging speed of the invention has a clear advantage, and the experiments show that, compared with the ASM, the EASM models hazy images better, and that the final IDE outperforms most state-of-the-art techniques in both processing efficiency and restoration quality.
TABLE 1
[table image]
TABLE 2 and TABLE 3
[table images]
Example ten:
In this embodiment, based on the ninth embodiment, the simulation language is MATLAB (R2016b), the operating environment is Windows 10, and the computer is configured with an Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz and 16GB RAM.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (10)

1. An image defogging and exposure method based on an enhanced atmospheric scattering model is characterized by comprising the following steps: the method provides an enhanced atmospheric scattering model-based image defogging system comprising the following modules:
an image input module: for reading an original image I;
an image processing module: for defogging the original image I;
an image output module: for outputting the defogged image I' corresponding to the original image I;
the method comprises the following steps:
S1, firstly, reading an original image I through the image input module;
S2, defogging the original image I through the image processing module;
S3, finally, outputting, by the image output module, the defogged image I' corresponding to the original image I.
2. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 1, wherein: the original image I is not limited to only a foggy image.
3. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 2, wherein: the image processing module includes an EASM module that provides an EASM model whose expression is as follows:
I(x, y) = A·(1 − α(x, y))·ρ(x, y)·t(x, y) + A·(1 − t(x, y))    formula (1)
In the above formula (1), (x, y) indicates the position coordinates of the point in the original image I, A indicates the atmospheric light value, ρ(x, y) indicates the scene reflectance of the point with position coordinates (x, y) in the fog-free scene, t(x, y) indicates the atmospheric transmittance of the point with position coordinates (x, y), and α(x, y) indicates the light absorption coefficient of the point with position coordinates (x, y).
4. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 3, wherein:
Let
[equation image]
where t_min is the minimum value of t(x, y);
then the above formula (1) is transformed into formula (2):
[formula (2): equation image]
in the above formula (2), (x, y) denotes the position coordinates of the point in the original image I, A denotes the atmospheric light value of the original image I, ρ(x, y) denotes the scene reflectance of the point with position coordinates (x, y) in the fog-free scene, t(x, y) denotes the atmospheric transmittance of the point with position coordinates (x, y), α(x, y) denotes the light absorption coefficient of the point with position coordinates (x, y), and t_min is the minimum value of t(x, y).
5. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 4, wherein:
converting equation (2) to equation (3):
[formula (3): equation image]
in the above formula (3), Ā is the average of the atmospheric light values of the original image I over the R, G, B three color channels, calculated by formula (4):
Ā = (1/3)·Σ_{c∈{R,G,B}} A^c    formula (4)
in the above formula (4), c is the color channel index; c ∈ {R, G, B} means the R, G, B color channels are selected; A^R, A^G and A^B are the atmospheric light values of the original image I on the R, G and B color channels, respectively;
in the above formula (3), (x, y) refers to the position coordinates of the point in the original image I, and Ī(x, y) is the local mean of the point with position coordinates (x, y) over the R, G, B three color channels, calculated by formula (5):
Ī(x, y) = (1 / (3·|Ω(x, y)|)) · Σ_{c∈{R,G,B}} Σ_{(x′,y′)∈Ω(x,y)} I^c(x′, y′)    formula (5)
in the above formula (5), Ω(x, y) is the set of points in the neighborhood centered on coordinates (x, y), |Ω(x, y)| is the total number of points in the set Ω(x, y), c is the color channel index, and c ∈ {R, G, B} means the R, G, B three color channels are selected; I^R(x′, y′), (x′, y′) ∈ Ω(x, y), refers to the component value on the R color channel of the point with position coordinates (x′, y′), and I^G(x′, y′) and I^B(x′, y′) likewise refer to its component values on the G and B color channels; these component values are obtained directly through software;
in the above formula (3), (x, y) refers to the position coordinates of the point in the original image I, and ρ̄(x, y) is the average of the scene reflectivities of the point with position coordinates (x, y) over the R, G, B three color channels in the fog-free scene, calculated by formula (6):
ρ̄(x, y) = (1 / (3·|Ω(x, y)|)) · Σ_{c∈{R,G,B}} Σ_{(x′,y′)∈Ω(x,y)} ρ^c(x′, y′)    formula (6)
in the above formula (6), Ω(x, y) is the set of points in the neighborhood centered on coordinates (x, y), |Ω(x, y)| is the total number of points in the set Ω(x, y), c is the color channel index, and c ∈ {R, G, B} means the R, G, B three color channels are selected; ρ^R(x′, y′), (x′, y′) ∈ Ω(x, y), refers to the component value on the R color channel of the point with position coordinates (x′, y′) in the fog-free scene, and ρ^G(x′, y′) and ρ^B(x′, y′) likewise refer to its component values on the G and B color channels in the fog-free scene; these component values are obtained directly through software.
6. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 5, wherein:
Let
[equation image]
and fit the logarithmic function with a rational function according to formula (7):
[formula (7): equation image]
in the above formula (7), p1 and p2 are both parameters of the rational function;
according to
[equation image]
and formula (7), formula (3) is converted into formula (8):
[formula (8): equation image]
converting formula (8) into formula (9):
λ1·(t(x, y))² + λ2(x, y)·t(x, y) + λ3(x, y) = 0    formula (9)
in the above formula (9),
[equation image defining λ1, λ2(x, y) and λ3(x, y)]
solving formula (9) for t(x, y), t(x, y) is then expressed as formula (10):
[formula (10): equation image]
then, using a guided filter, equation (10) is converted to equation (11):
[formula (11): equation image]
in the above formula (11), F(·) represents the guided filter operator;
equation (11) is simplified to equation (12):
[formula (12): equation image]
in the above formula (12), Ā is the average of the atmospheric light values of the original image I over the R, G, B three color channels, calculated by formula (4); Ī(x, y) is the local mean of the point with position coordinates (x, y) over the R, G, B three color channels, calculated by formula (5); p1 and p2 are both obtained by fitting formula (7); Φ(·) is the calculation formula for the atmospheric transmittance of the point with position coordinates (x, y);
combining the formula (12) and the formula (2), obtaining a scene reflectivity recovery formula of a point with the position coordinates (x, y) under the fog-free scene, and referring to the formula (13):
[formula (13): equation image]
in the above formula (13), SF(·) is the scene reflectivity recovery formula, and A is the atmospheric light value of the original image I; Ā refers to the average of the atmospheric light values of the original image I over the R, G, B three color channels, calculated by formula (4); Ī(x, y) is the local mean of the point with position coordinates (x, y) over the R, G, B three color channels, calculated by formula (5); p1 and p2 are both obtained by fitting formula (7).
7. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 6, wherein: the image processing module also comprises an IDE module, which provides a global stretching strategy of the golden-section method in a one-dimensional search algorithm to calculate t_min; see formula (14):
[formula (14): equation image]
in the above formula (14), argmin{·} is the minimization function; Ψ(·) is a saturation operator function; ε is the preset saturation of the defogging result; SF(·) is the scene reflectivity recovery formula; A refers to the atmospheric light value of the original image I; ↓δ is the down-sampling operator with coefficient δ, and (I)↓δ and (Ī)↓δ denote down-sampling the corresponding image or matrix in parentheses to 1/δ of the original resolution; Ā is the average of the atmospheric light values of the original image I over the R, G, B three color channels, calculated by formula (4); p1 and p2 are both obtained by fitting formula (7).
8. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 7, wherein: the t_min obtained from formula (14) is substituted into formula (13), yielding the mathematical model of the defogged image I', i.e. the defogged image I' = ρ(x, y).
9. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 8, wherein: let δ = 4 and let ε = 0.02.
10. The enhanced atmospheric scattering model-based image defogging and exposure method according to claim 9, wherein: the S2 includes the steps of:
S21, firstly, the image processing module receives an original image I;
S22, the EASM module performs a mathematical transformation on the original image I to obtain the corresponding mathematical transformation formula, and then performs mathematical processing on it;
S23, then t_min is determined by the IDE module, which provides a global stretching strategy of the golden-section method in a one-dimensional search algorithm;
S24, finally, the defogged image I' corresponding to the mathematically processed model is output to the image output module for the subsequent output work.
CN202110044013.9A 2021-01-13 2021-01-13 Image defogging and exposure method based on enhanced atmospheric scattering model Active CN113240588B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110044013.9A CN113240588B (en) 2021-01-13 2021-01-13 Image defogging and exposure method based on enhanced atmospheric scattering model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110044013.9A CN113240588B (en) 2021-01-13 2021-01-13 Image defogging and exposure method based on enhanced atmospheric scattering model

Publications (2)

Publication Number Publication Date
CN113240588A true CN113240588A (en) 2021-08-10
CN113240588B CN113240588B (en) 2023-03-21

Family

ID=77130017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110044013.9A Active CN113240588B (en) 2021-01-13 2021-01-13 Image defogging and exposure method based on enhanced atmospheric scattering model

Country Status (1)

Country Link
CN (1) CN113240588B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066745A (en) * 2021-10-13 2022-02-18 南京邮电大学 Image defogging method based on regional line prior

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530246A (en) * 2016-10-28 2017-03-22 大连理工大学 Image dehazing method and system based on dark channel and non-local prior
CN111640070A (en) * 2020-04-24 2020-09-08 同济大学 Image simulation method in atmospheric degradation phenomenon

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106530246A (en) * 2016-10-28 2017-03-22 大连理工大学 Image dehazing method and system based on dark channel and non-local prior
CN111640070A (en) * 2020-04-24 2020-09-08 同济大学 Image simulation method in atmospheric degradation phenomenon

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114066745A (en) * 2021-10-13 2022-02-18 南京邮电大学 Image defogging method based on regional line prior

Also Published As

Publication number Publication date
CN113240588B (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN107527332B (en) Low-illumination image color retention enhancement method based on improved Retinex
CN105976330B (en) A kind of embedded greasy weather real time video image stabilization
CN106157267B (en) Image defogging transmissivity optimization method based on dark channel prior
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN110163818B (en) Low-illumination video image enhancement method for maritime unmanned aerial vehicle
CN107767354A (en) A kind of image defogging algorithm based on dark primary priori
CN114331873B (en) Non-uniform illumination color image correction method based on region division
US20070098290A1 (en) Automatic compositing of 3D objects in a still frame or series of frames
CN105654440B (en) Quick single image defogging algorithm based on regression model and system
CN106530247B (en) A kind of multi-scale image restorative procedure based on structural information
CN108564549A (en) A kind of image defogging method based on multiple dimensioned dense connection network
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
CN110827218B (en) Airborne image defogging method based on weighted correction of HSV (hue, saturation, value) transmissivity of image
CN108182671B (en) Single image defogging method based on sky area identification
CN110570381B (en) Semi-decoupling image decomposition dark light image enhancement method based on Gaussian total variation
CN111462022B (en) Underwater image sharpness enhancement method
CN113012068A (en) Image denoising method and device, electronic equipment and computer readable storage medium
CN109493299A (en) A method of eliminating point light source illumination effect
CN113240588B (en) Image defogging and exposure method based on enhanced atmospheric scattering model
CN106469440B (en) Dark defogging parallel optimization method based on OpenCL
CN111598788A (en) Single image defogging method based on quadtree decomposition and non-local prior
CN112288726B (en) Method for detecting foreign matters on belt surface of underground belt conveyor
CN113822816A (en) Haze removing method for single remote sensing image optimized by aerial fog scattering model
CN117830134A (en) Infrared image enhancement method and system based on mixed filtering decomposition and image fusion
Lin et al. Image dehazing algorithm based on improved guided filtering

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant