CN109949239B - Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image - Google Patents


Publication number
CN109949239B
CN109949239B (application CN201910177950.4A)
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201910177950.4A
Other languages
Chinese (zh)
Other versions
CN109949239A (en
Inventor
黄富瑜
邹昌帆
国涛
王文廷
吴健
黄欣鑫
王伟奇
Current Assignee
Army Engineering University of PLA
Original Assignee
Army Engineering University of PLA
Priority date
Filing date
Publication date
Application filed by Army Engineering University of PLA filed Critical Army Engineering University of PLA
Priority to CN201910177950.4A priority Critical patent/CN109949239B/en
Publication of CN109949239A publication Critical patent/CN109949239A/en
Application granted granted Critical
Publication of CN109949239B publication Critical patent/CN109949239B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention discloses an adaptive sharpening method suitable for multi-concentration multi-scene haze images, comprising an adaptive image sharpening method based on a haze image degradation model and a dark primary color prior model. The adaptive image sharpening method comprises a dark primary color value adaptive acquisition method, an atmospheric light intensity adaptive estimation method and a sharpening coefficient adaptive calculation method, which realize parameter acquisition or data processing in an adaptive manner. In this adaptive sharpening method, the sharpening parameters are acquired entirely from the characteristics of the haze image itself, with no need to set parameters manually, and the method has high robustness in sharpening multi-concentration multi-scene haze images; the resulting sharpened image has large information content, high contrast and clear detail, shows no halation, and is significantly superior to the multi-scale Retinex enhancement sharpening method and the He dark channel prior restoration sharpening method.

Description

Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
Technical Field
The invention relates to a self-adaptive sharpening method suitable for a multi-concentration multi-scene haze image, and belongs to the technical field of image sharpening processing methods.
Background
With the continuous progress of society and the rapid development of science and technology, computer vision systems have been widely applied in fields such as urban traffic, video monitoring, intelligent navigation, remote sensing imaging and military reconnaissance; in haze weather, however, atmospheric scattering degrades the images these systems acquire. Research on image sharpening technology to reduce the influence of haze on image quality therefore has important practical value and significance for improving computer vision performance.
According to their sharpening mechanisms, existing image sharpening methods fall mainly into image enhancement methods based on image processing and image restoration methods based on physical models. The first type does not depend on a physical model and directly improves image contrast with image processing algorithms; typical methods include histogram equalization and Retinex. These methods do not consider the cause of haze image degradation, so it is difficult for them to improve haze image quality at the level of the degradation mechanism. The second type inverts the haze image degradation process by constructing a haze image generation model and finally restores the haze-free image; typical methods include the partial differential equation method, the scene-depth method and image restoration based on prior information. The partial differential equation method and the depth-based sharpening method generally assume that haze obeys a uniform distribution model and restore the haze image in a globally uniform way, but actual haze distribution is non-uniform, so the sharpening effect is poor and some detail information is lost. The image restoration method based on prior information is built on statistical analysis of actual haze images and can achieve a better haze image restoration effect.
A large number of statistical experiments on haze-free and hazy images found that the contrast of haze-free images is significantly higher than that of hazy images; based on this, a sharpening method that maximizes local contrast was proposed, which achieves sharpening through local contrast maximization, but the colors of the processed image show oversaturation and distortion. In addition, the prior knowledge that part of light propagation is independent of the surface reflection of the scene has been used to estimate scene reflectivity and obtain a haze-free image, but that method achieves a good dehazing effect only on the premise of vivid image colors and is difficult to apply to image restoration in dense fog. To address these problems, the prior art proposed an image sharpening method based on the dark channel prior theory, in which guided filtering replaces soft matting for transmittance optimization, improving processing speed; however, the colors of bright areas in the processed image are unnatural and blocking artifacts are obvious. A method combining bright and dark channels has also been adopted to estimate the atmospheric light value and transmittance, which alleviates the color distortion caused by sharpening bright areas to some extent, but does not consider the adaptive sharpening of haze images with different concentrations. Moreover, in the dark primary color prior and its improved sharpening algorithms, the sharpening coefficient ω that adjusts the sharpening effect mostly takes a fixed value and is seldom adjusted adaptively according to actual conditions. Therefore, to solve the problems that existing algorithms are difficult to apply universally to multi-scene haze images of different concentrations and to images with unclear bright areas, the invention proposes an adaptive sharpening algorithm based on the dark primary color prior theory; its sharpening parameters are acquired entirely from the characteristics of the haze image itself, no parameters need to be set manually, and it has high robustness in sharpening multi-concentration multi-scene haze images.
Disclosure of Invention
In order to solve the above problems, the invention provides an adaptive sharpening method suitable for multi-concentration multi-scene haze images, which requires no manual setting of sharpening parameters; the acquired images are clear and natural in color, and the method has good robustness in sharpening multi-concentration multi-scene haze images.
The self-adaptive image sharpening method suitable for the multi-concentration multi-scene haze image comprises a self-adaptive image sharpening method based on a haze image degradation model and a dark primary color prior model; the self-adaptive image sharpening method comprises a dark primary color value self-adaptive acquisition method, an atmosphere light intensity self-adaptive estimation method and a sharpening coefficient self-adaptive calculation method which are used for realizing parameter acquisition or data processing in a self-adaptive mode;
the self-adaptive acquisition method of the dark primary color value performs self-adaptive segmentation of the bright and dark areas of the haze image using a fast Otsu algorithm, and acquires the dark primary color values of the bright and dark areas within the segmented regions;
the adaptive estimation method of the atmospheric light intensity is to carry out adaptive estimation on the atmospheric light intensity of a bright and dark area according to the distribution condition of the bright area;
the adaptive calculation method of the sharpening coefficient calculates the sharpening coefficient adaptively by a gray level concentration method, based on statistics of the characteristics of the haze image histogram.
Further, the haze image degradation model is as follows
I(x)=J(x)t(x)+A[1-t(x)], (1)
Wherein: i (x) is the haze image intensity observed by the imaging equipment; j (x) is the fog-free image intensity, namely the image to be restored; t (x) is transmittance and reflects the haze penetration ability of light; a is the atmospheric light intensity at infinity; the reflected light of the target corresponding to J (x) t (x) enters the part of the imaging equipment after being attenuated by atmospheric scattering, and the part is attenuated exponentially with the increase of the depth of the scene; a1-t (x) ] corresponds to the part of the atmospheric light entering the imaging device after scattering, and the part increases with the increase of the scene depth, so that scene blurring and color shift distortion can be caused;
the term of the formula (1) is shifted and arranged, and the haze-free image intensity is expressed as
Figure GDA0004188579640000031
In the above formulas, only I(x) is known; the essence of image sharpening is to restore the clear image J(x) by estimating the transmittance t(x) and the atmospheric light intensity A according to the above model.
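The degradation model and its inversion can be sketched numerically; the following toy example (synthetic values, not taken from the patent) verifies that formula (2) exactly inverts formula (1) when t(x) and A are known:

```python
import numpy as np

# Forward model I = J*t + A*(1 - t) (Eq. 1) and its inversion (Eq. 2).
# All arrays are float images in [0, 1]; the scene is synthetic, chosen
# only to illustrate the algebra.
J = np.full((4, 4, 3), 0.8)          # hypothetical haze-free scene
t = np.full((4, 4, 1), 0.6)          # transmittance map
A = 0.95                             # atmospheric light at infinity

I = J * t + A * (1.0 - t)            # observed hazy image (Eq. 1)
J_rec = (I - A) / t + A              # recovered scene (Eq. 2)

assert np.allclose(J_rec, J)
```

The inversion is exact here only because t and A are known; the rest of the patent is devoted to estimating them from I alone.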
Further, the dark primary color prior model is the dark primary color prior rule obtained by statistics in the He dark channel prior restoration and sharpening method; expressed on an image, dark pixel points exist at every position of a haze-free image, these dark pixel points constitute the dark primary color J^dark, and the gray values of the dark primary color points tend to 0, satisfying
J^dark(x)=min_{y∈Ω(x)}[min_{c∈{R,G,B}}J^c(y)]→0, (3)
wherein: the superscript c denotes the RGB channels of the image; y∈Ω(x) denotes a window centered on pixel x;
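The dark primary color of formula (3) is a per-pixel channel minimum followed by a window minimum over Ω(x); the sketch below is a plain-numpy illustration in which the window size `win` is a free parameter, not a value fixed by the patent:

```python
import numpy as np

def dark_channel(I, win=3):
    """Dark primary color J_dark (Eq. 3): min over the RGB channels,
    then a minimum filter over a win x win window Omega(x).
    Pure-numpy sketch; `win` is an assumed illustrative parameter."""
    m = I.min(axis=2)                      # per-pixel channel minimum
    h, w = m.shape
    r = win // 2
    padded = np.pad(m, r, mode='edge')     # replicate borders
    out = np.empty_like(m)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + win, x:x + win].min()
    return out
```

For a haze-free image this map is close to 0 almost everywhere, which is exactly the prior the transmittance estimate below exploits.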
Assuming that the atmospheric light intensity A is known and the transmittance t(x) is constant within the window Ω(x), rearranging formula (1) and performing two minimization operations gives:
min_{y∈Ω(x)}[min_c I^c(y)/A^c]=t(x)·min_{y∈Ω(x)}[min_c J^c(y)/A^c]+1-t(x), (4)
Combining formulas (3) and (4), the estimated value of the transmittance is
t(x)=1-min_{y∈Ω(x)}[min_c I^c(y)/A^c], (5)
Taking into account the slight haziness people perceive when observing distant scenes in real life, a sharpening coefficient ω (0≤ω≤1) is introduced, and the above formula is corrected to
t(x)=1-ω·min_{y∈Ω(x)}[min_c I^c(y)/A^c], (6)
wherein: the smaller the ω value, the more fog components remain in the restored image and the weaker the sharpening capability, so that no sharpening effect is achieved when ω approaches 0; the closer the ω value is to 1, the stronger the sharpening capability, but the image color becomes oversaturated and lacks realism;
the method for estimating the atmospheric light intensity A comprises the following steps: firstly, obtaining pixels with maximum brightness of 0.1% in dark primary colors of an image, and then using the maximum value of the pixels corresponding to the pixels in the original image as an estimated value of A;
the estimated transmittance t (x) and the atmospheric light intensity A in the formula (2) can be used for obtaining a haze-free image, and in actual calculation, in order to avoid image distortion caused by too small transmittance, a lower transmittance limit t is usually set 0 The haze-free image is
Figure GDA0004188579640000044
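Formulas (6) and (7) together give the recovery step; below is a minimal sketch, assuming the normalized dark channel min_{y,c} I^c(y)/A^c has already been computed, and using illustrative values for ω and t_0:

```python
import numpy as np

def recover(I, dark_norm, A, omega=0.9, t0=0.1):
    """Recovery step (Eqs. 6-7). `dark_norm` is assumed to already hold
    min over window and channels of I^c(y)/A^c; omega and t0 defaults
    are illustrative, not values fixed by the patent."""
    t = 1.0 - omega * dark_norm          # Eq. (6): coarse transmittance
    t = np.maximum(t, t0)                # lower bound avoids distortion
    return (I - A) / t[..., None] + A    # Eq. (7): invert the model
```

Note how the `t0` floor only matters in the densest-haze pixels, where the raw estimate of t(x) would otherwise amplify noise.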
Further, the method for adaptively acquiring the dark primary color value comprises the following specific operation steps: the image pixels are classified, and the optimal segmentation threshold is determined by maximizing the between-class variance;
the first step, assuming that the image I has L gray levels, the number of the ith level pixels is n i The total pixel number is
Figure GDA0004188579640000045
The probability of occurrence of the ith pixel is p i =n i /N;
In the second step, for a set threshold k, the image is divided into two classes A and B, where the class-A gray interval is [0,k] and the class-B gray interval is [k+1,L-1]; the gray mean μ of the whole image, the class-A interval gray mean μ_A(k) and the class-B interval gray mean μ_B(k) are respectively:
μ=Σ_{i=0}^{L-1} i·p_i, μ_A(k)=Σ_{i=0}^{k} i·p_i/p_A(k), μ_B(k)=Σ_{i=k+1}^{L-1} i·p_i/p_B(k),
and the probability distributions of the two gray intervals satisfy:
p_A(k)=Σ_{i=0}^{k} p_i, p_B(k)=1-p_A(k); (9)
The between-class variance at threshold k is defined as:
σ²(k)=p_A[μ-μ_A(k)]²+p_B[μ-μ_B(k)]², (10)
According to the maximum between-class variance criterion, the threshold k is varied from 0 to L-1 to find the k value that maximizes the variance in the above formula, namely the optimal segmentation threshold T; each change of the threshold k requires recalculating the above formulas, so to reduce computation time, a number of threshold points are first screened out according to the valley gray characteristics of the haze image histogram and then substituted into the formulas, thereby realizing fast Otsu threshold segmentation;
In the third step, L relation pairs [i,n_i] are built from the gray level i and the pixel number n_i; suppose the gray level at a position on the curve [i,n_i] is T_j and the corresponding pixel number is n_j; a gray level T_j satisfying the following relation is screened as a threshold point:
n_j=min{n_i: T_j-δ≤i≤T_j+δ} and n_j≤α·N, (11)
wherein: δ is the size of the screening window, taking δ=3 pixels; α is a minimum limiting ratio, taking α=0.1%;
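The fast Otsu procedure of the second and third steps can be sketched as follows; the valley test implementing formula (11) is a reconstruction from the surrounding text (a local minimum over a window of half-width δ plus the minimum-ratio limit α), so the exact screening rule is an assumption:

```python
import numpy as np

def fast_otsu(gray, L=256, delta=3, alpha=0.001):
    """Fast Otsu sketch: screen candidate thresholds at histogram
    valleys (assumed local-minimum test over [j-delta, j+delta] with
    count <= alpha*N), then maximise the between-class variance
    (Eq. 10) over those candidates only."""
    n = np.bincount(gray.ravel(), minlength=L).astype(float)
    N = n.sum()
    p = n / N
    mu = (np.arange(L) * p).sum()                 # global gray mean
    cand = [j for j in range(delta, L - delta)
            if n[j] == n[j - delta:j + delta + 1].min()
            and n[j] <= alpha * N]                # valley screening
    best_k, best_var = 0, -1.0
    for k in cand or range(1, L - 1):             # full scan fallback
        pA = p[:k + 1].sum()
        pB = 1.0 - pA
        if pA == 0 or pB == 0:
            continue
        muA = (np.arange(k + 1) * p[:k + 1]).sum() / pA
        muB = (np.arange(k + 1, L) * p[k + 1:]).sum() / pB
        var = pA * (mu - muA) ** 2 + pB * (mu - muB) ** 2
        if var > best_var:
            best_k, best_var = k, var
    return best_k
```

On a strongly bimodal image the screened candidates lie in the valley between the two modes, so the reduced scan finds the same separating threshold as a full Otsu sweep.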
In the fourth step, threshold segmentation is performed on the haze image using the improved fast Otsu threshold method; from the threshold point screening and bright-dark area segmentation results it can be seen that only 49 threshold points remain after screening, reducing the number of threshold points participating in the calculation by more than 80%; meanwhile, the bright and dark areas are effectively separated, which facilitates the sharpening processing of the separated areas;
In the fifth step, according to formula (3), the minimum value of the three RGB channels at each pixel point is taken as the dark primary color value, namely:
J^dark(x)=min_{c∈{R,G,B}} I^c(x), (12)
If this minimum-value method is applied unchanged to haze images of different concentrations, color distortion occurs in the bright area; the minimum gray value of the bright area is higher than that of the dark area, so directly taking the channel minimum as the dark primary color value makes the calculated transmittance inaccurate, whereas the three-channel gray mean better reflects the actual situation when used as the dark primary color value of the bright area; for this purpose, the dark primary color minimum is corrected, and the dark primary color value is solved by the following formula:
J^dark(x)=min_c I^c(x) for pixels in the dark area (I^c(x)≤T^c), and J^dark(x)=(1/3)·Σ_c I^c(x) for pixels in the bright area (I^c(x)>T^c), (13)
wherein: T^c is the segmentation threshold of channel c, c∈{R,G,B}.
Further, the specific operation steps of the adaptive estimation method for the atmospheric light intensity are as follows: the atmospheric light intensity is estimated adaptively by a bright-dark partition mean method. When the bright-area pixel proportion P_b^c < 10%, indicating that bright pixels are few, the brightest 0.1% of points in the dark primary color image are taken first, and the gray mean of the corresponding pixels in the original image is used as the atmospheric light intensity estimate; when P_b^c ≥ 10%, indicating that bright pixels are many, the gray mean of all pixels in the bright area is taken as the atmospheric light intensity estimate, calculated as:
A=(1/N_b)·Σ_{x∈bright area} I(x), (14)
wherein: N_b is the number of pixels in the bright area.
Further, the specific operation steps of the adaptive calculation method for the sharpening coefficient are as follows: histogram statistics of haze images with different concentrations show that the larger the haze concentration, the more concentrated the histogram distribution, with most gray values concentrated in the interval [m_c,M_c]; using this property, a sharpening coefficient calculation method based on gray level concentration is proposed, which satisfies:
ω=max[ω_c, 1-(M_c-m_c)/(L-1)], (15)
wherein: ω_c is the minimum-value adjustment coefficient, taking ω_c=0.15; the upper and lower limits m_c and M_c satisfy:
Σ_{i=0}^{m_c} p_i=α and Σ_{i=M_c}^{L-1} p_i=α, (16)
wherein: α is the upper and lower limit adjustment coefficient, taking α=1%;
The above formulas are substituted correspondingly into the transmittance formula (6) to obtain the transmittances of the bright and dark areas of the haze image; the restored image J_0 is then obtained by combining formula (7).
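The gray-concentration rule can be sketched as below; treating m_c and M_c as the α = 1% percentile bounds and mapping a narrower interval to a larger ω (floored at ω_c = 0.15) is a reconstruction consistent with the text, not the patent's verbatim formula:

```python
import numpy as np

def sharpening_coeff(gray, omega_c=0.15, alpha=0.01, L=256):
    """Adaptive sharpening coefficient from gray-level concentration
    (reconstruction of Eqs. 15-16). m_c and M_c are taken as the
    alpha/(1-alpha) percentile bounds; a narrower [m_c, M_c] (denser
    haze, more concentrated histogram) yields a larger omega, with a
    floor at omega_c. The exact mapping is an assumption."""
    m_c, M_c = np.quantile(gray, [alpha, 1.0 - alpha])
    return max(omega_c, 1.0 - (M_c - m_c) / (L - 1))
```

A nearly single-valued histogram (dense haze) drives ω toward 1 for aggressive dehazing, while a well-spread histogram (thin haze) falls back to the gentle floor value.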
Compared with the prior art, in the adaptive sharpening method suitable for multi-concentration multi-scene haze images, a fast Otsu method is used to divide the bright and dark areas of the haze image, obtain the dark primary color values of the bright and dark areas, and adaptively estimate the atmospheric light intensity values of the different areas; the sharpening coefficient is calculated by a gray level concentration method according to the characteristics of the haze image histogram. The sharpening parameters of this adaptive image sharpening method are acquired entirely from the characteristics of the haze image itself, with no manual parameter setting, and the method has high robustness in sharpening multi-concentration multi-scene haze images; the acquired sharpened image has large information content, high contrast, clear detail and natural color, shows no halation, and is significantly superior to the multi-scale Retinex enhancement sharpening method and the He dark channel prior restoration sharpening method.
Drawings
Fig. 1 is a schematic diagram of an atmospheric scattering model proposed by McCartney.
Fig. 2 is a flow chart of the adaptive image sharpening method of the present invention.
Fig. 3 is a live view of a campus of the present invention capturing haze images.
FIG. 4 is a schematic view of the threshold point screening and bright and dark region segmentation results of the present invention;
wherein, the diagram (a) is a threshold point screening schematic diagram of the present invention, and the diagram (b) is a bright and dark region segmentation result schematic diagram of the present invention.
Detailed Description
The self-adaptive sharpening method suitable for the multi-concentration multi-scene haze image as shown in fig. 2 comprises a self-adaptive image sharpening method based on a haze image degradation model and a dark primary prior model; the self-adaptive image sharpening method consists of a dark primary color value self-adaptive acquisition (obtain dark channel image adaptively) method, an atmosphere light intensity self-adaptive estimation (estimate air light adaptively) method and a sharpening coefficient self-adaptive calculation (calculate dehazing coefficient adaptively) method which adopt a self-adaptive mode to realize parameter acquisition or data processing;
the self-adaptive acquisition method of the dark primary color value performs self-adaptive segmentation of the bright and dark areas of the haze image using a fast Otsu algorithm, and acquires the dark primary color values of the bright and dark areas within the segmented regions;
the adaptive estimation method of the atmospheric light intensity is to carry out adaptive estimation on the atmospheric light intensity of a bright and dark area according to the distribution condition of the bright area;
the adaptive calculation method of the sharpening coefficient calculates the sharpening coefficient adaptively by a gray level concentration method, based on statistics of the characteristics of the haze image histogram.
The haze image degradation model reveals the degradation mechanism of the haze image well: according to the model, the target image acquired by the imaging device mainly consists of an attenuated component of the light reflected from the target and an ambient scattered light component, as shown in fig. 1.
Simplifying the McCartney atmospheric scattering model gives the following simplified model of haze image degradation
I(x)=J(x)t(x)+A[1-t(x)], (1)
Wherein: i (x) is the haze image intensity observed by the imaging equipment; j (x) is the fog-free image intensity, namely the image to be restored; t (x) is transmittance and reflects the haze penetration ability of light; a is the atmospheric light intensity at infinity; as shown in fig. 1, J (x) t (x) corresponds to a portion of the target reflected light in fig. 1 that enters the imaging device after being attenuated by atmospheric scattering, and the portion is attenuated exponentially as the depth of the scene increases; a [1-t (x) ] corresponds to the part of the image forming apparatus shown in fig. 1 into which atmospheric light is scattered, which increases as the depth of the scene increases, causing blurring of the scene and distortion of color shift;
the term of the formula (1) is shifted and arranged, and the haze-free image intensity can be expressed as
Figure GDA0004188579640000091
In the above formulas, only I (x) is a known term, and the essence of image sharpening is to restore a clear image J (x) by estimating the transmittance t (x) and the atmospheric light intensity a according to the above model;
dark source counted by He dark primary prior restoration and definition methodThe color prior rule that shadows in natural sceneries are visible everywhere is expressed on an image, namely, dark pixel points exist at each position of a foggless image, and the dark pixel points are dark primary colors J dark The gray values of the dark primary color points tend to be 0, which satisfies
Figure GDA0004188579640000092
Wherein: superscript c denotes the RGB channels of the image; y ε Ω (x) represents a window centered on pixel x;
Assuming that the atmospheric light intensity A is known and the transmittance t(x) is constant within the window Ω(x), rearranging formula (1) and performing two minimization operations gives:
min_{y∈Ω(x)}[min_c I^c(y)/A^c]=t(x)·min_{y∈Ω(x)}[min_c J^c(y)/A^c]+1-t(x), (4)
Combining formulas (3) and (4), the estimated value of the transmittance is
t(x)=1-min_{y∈Ω(x)}[min_c I^c(y)/A^c], (5)
Taking into account the slight haziness people perceive when observing distant scenes in real life, a sharpening coefficient ω (0≤ω≤1) is introduced, and the above formula is corrected to
t(x)=1-ω·min_{y∈Ω(x)}[min_c I^c(y)/A^c], (6)
wherein: the smaller the ω value, the more fog components remain in the restored image and the weaker the sharpening capability, so that no sharpening effect is achieved when ω approaches 0; the closer the ω value is to 1, the stronger the sharpening capability, but the image color becomes oversaturated and lacks realism;
the method for estimating the atmospheric light intensity A comprises the following steps: firstly, obtaining pixels with maximum brightness of 0.1% in dark primary colors of an image, and then using the maximum value of the pixels corresponding to the pixels in the original image as an estimated value of A;
the estimated penetrationThe haze-free image can be obtained by the overrate t (x) and the atmospheric light intensity A formula (2), and in actual calculation, in order to avoid image distortion caused by the overlarge transmittance, a lower limit t of the transmittance is usually set 0 The haze-free image is
Figure GDA0004188579640000101
The method for adaptively acquiring the dark primary color value comprises the following specific operation steps: the image pixels are classified, and the optimal segmentation threshold is determined by maximizing the between-class variance;
the first step, assuming that the image I has L gray levels, the number of the ith level pixels is n i The total pixel number is
Figure GDA0004188579640000102
The probability of occurrence of the ith pixel is p i =n i /N;
In the second step, for a set threshold k, the image is divided into two classes A and B, where the class-A gray interval is [0,k] and the class-B gray interval is [k+1,L-1]; the gray mean μ of the whole image, the class-A interval gray mean μ_A(k) and the class-B interval gray mean μ_B(k) are respectively:
μ=Σ_{i=0}^{L-1} i·p_i, μ_A(k)=Σ_{i=0}^{k} i·p_i/p_A(k), μ_B(k)=Σ_{i=k+1}^{L-1} i·p_i/p_B(k),
and the probability distributions of the two gray intervals satisfy:
p_A(k)=Σ_{i=0}^{k} p_i, p_B(k)=1-p_A(k); (9)
The between-class variance at threshold k is defined as:
σ²(k)=p_A[μ-μ_A(k)]²+p_B[μ-μ_B(k)]², (10)
According to the maximum between-class variance criterion, the threshold k is varied from 0 to L-1 to find the k value that maximizes the variance in the above formula, namely the optimal segmentation threshold T; each change of the threshold k requires recalculating the above formulas, so to reduce computation time, a number of threshold points are first screened out according to the valley gray characteristics of the haze image histogram and then substituted into the formulas, thereby realizing fast Otsu threshold segmentation;
In the third step, L relation pairs [i,n_i] are built from the gray level i and the pixel number n_i; suppose the gray level at a position on the curve [i,n_i] is T_j and the corresponding pixel number is n_j; a gray level T_j satisfying the following relation is screened as a threshold point:
n_j=min{n_i: T_j-δ≤i≤T_j+δ} and n_j≤α·N, (11)
wherein: δ is the size of the screening window, taking δ=3 pixels; α is a minimum limiting ratio, taking α=0.1%;
In the fourth step, the haze image shown in fig. 3 is threshold-segmented using the improved fast Otsu threshold method, and the threshold point screening and bright-dark area segmentation results are shown in fig. 4, from which it can be seen that only 49 threshold points remain after screening, reducing the number of threshold points participating in the calculation by more than 80%; meanwhile, the bright and dark areas are effectively separated, which facilitates the sharpening processing of the separated areas;
In the fifth step, according to formula (3), the minimum value of the three RGB channels at each pixel point is taken as the dark primary color value, namely:
J^dark(x)=min_{c∈{R,G,B}} I^c(x), (12)
If this minimum-value method is applied unchanged to haze images of different concentrations, color distortion occurs in the bright area; the minimum gray value of the bright area is higher than that of the dark area, so directly taking the channel minimum as the dark primary color value makes the calculated transmittance inaccurate, whereas the three-channel gray mean better reflects the actual situation when used as the dark primary color value of the bright area; for this purpose, the dark primary color minimum is corrected, and the dark primary color value is solved by the following formula:
J^dark(x)=min_c I^c(x) for pixels in the dark area (I^c(x)≤T^c), and J^dark(x)=(1/3)·Σ_c I^c(x) for pixels in the bright area (I^c(x)>T^c), (13)
wherein: T^c is the segmentation threshold of channel c, c∈{R,G,B}.
The method for adaptively estimating the atmospheric light intensity comprises the following specific operation steps: the atmospheric light intensity is estimated adaptively by a bright-dark partition mean method. When the bright-area pixel proportion P_b^c < 10%, indicating that bright pixels are few, the brightest 0.1% of points in the dark primary color image are taken first, and the gray mean of the corresponding pixels in the original image is used as the atmospheric light intensity estimate; when P_b^c ≥ 10%, indicating that bright pixels are many, the gray mean of all pixels in the bright area is taken as the atmospheric light intensity estimate, calculated as:
A=(1/N_b)·Σ_{x∈bright area} I(x), (14)
wherein: N_b is the number of pixels in the bright area.
The method for adaptively calculating the sharpening coefficient comprises the following specific operation steps: histogram statistics of haze images with different concentrations show that the larger the haze concentration, the more concentrated the histogram distribution, with most gray values concentrated in the interval [m_c,M_c]; using this property, a sharpening coefficient calculation method based on gray level concentration is proposed, which satisfies:
ω=max[ω_c, 1-(M_c-m_c)/(L-1)], (15)
wherein: ω_c is the minimum-value adjustment coefficient, taking ω_c=0.15; the upper and lower limits m_c and M_c satisfy:
Σ_{i=0}^{m_c} p_i=α and Σ_{i=M_c}^{L-1} p_i=α, (16)
wherein: α is the upper and lower limit adjustment coefficient, taking α=1%;
The above formulas are substituted correspondingly into the transmittance formula (6) to obtain the transmittances of the bright and dark areas of the haze image; the restored image J_0 is then obtained by combining formula (7).
In the adaptive sharpening method suitable for multi-concentration multi-scene haze images, a fast Otsu method is used to divide the bright and dark areas of the haze image, obtain the dark primary color values of the bright and dark areas, and adaptively estimate the atmospheric light intensity values of the different areas; the sharpening coefficient is calculated by a gray level concentration method according to the characteristics of the haze image histogram. Subjective and objective evaluation results show that the sharpened image obtained by this adaptive method has large information content, high contrast, clear detail and natural color, shows no halation, and is significantly superior to the multi-scale Retinex enhancement sharpening method and the He dark channel prior restoration sharpening method.
The above embodiments are merely preferred embodiments of the present invention, and all changes and modifications that come within the meaning and range of equivalency of the structures, features and principles of the invention are therefore intended to be embraced therein.

Claims (1)

1. An adaptive image sharpening method for multi-concentration, multi-scene haze images, based on a haze image degradation model and a dark primary color prior model, characterized in that: the adaptive image sharpening method comprises an adaptive dark-primary-color-value acquisition method, an adaptive atmospheric-light-intensity estimation method, and an adaptive sharpening-coefficient calculation method, which acquire parameters and process data adaptively;
the adaptive dark-primary-color-value acquisition method adaptively segments the haze image into bright and dark regions using a fast OSTU (Otsu) algorithm, and obtains the dark primary color value within each segmented region;
the adaptive atmospheric-light-intensity estimation method adaptively estimates the atmospheric light intensity of the bright and dark regions according to the distribution of the bright area;
the adaptive sharpening-coefficient calculation method computes the sharpening coefficient adaptively with a gray-level concentration method, based on statistics of the haze-image histogram;
the haze image degradation model is as follows:
I(x)=J(x)t(x)+A[1-t(x)], (1)
Wherein: i (x) is the haze image intensity observed by the imaging equipment; j (x) is the fog-free image intensity, namely the image to be restored; t (x) is transmittance and reflects the haze penetration ability of light; a is the atmospheric light intensity at infinity; the reflected light of the target corresponding to J (x) t (x) enters the part of the imaging equipment after being attenuated by atmospheric scattering, and the part is attenuated exponentially with the increase of the depth of the scene; a1-t (x) ] corresponds to the part of the atmospheric light entering the imaging device after scattering, and the part increases with the increase of the scene depth, so that scene blurring and color shift distortion can be caused;
rearranging formula (1), the haze-free image intensity is expressed as
J(x)=[I(x)-A]/t(x)+A, (2)
in the above formulas, only I(x) is known; under this model, the essence of image sharpening is to restore the clear image J(x) by estimating the transmittance t(x) and the atmospheric light intensity A;
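As a minimal NumPy illustration of the degradation model (1) and its inversion (2) (a sketch of the algebra only, not the patent's implementation):

```python
import numpy as np

def degrade(J, t, A):
    """Haze degradation model, formula (1): I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def restore(I, t, A):
    """Inversion, formula (2): J = (I - A)/t + A (t assumed known and nonzero)."""
    return (I - A) / t + A

# Round-trip check on a toy 2x2 "scene" with per-pixel transmittance
J = np.array([[0.2, 0.8], [0.5, 0.1]])
t = np.array([[0.9, 0.6], [0.3, 0.7]])
A = 0.95
I = degrade(J, t, A)
```

In practice t(x) and A are unknown, which is exactly what the subsequent estimation steps supply.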
the dark primary color prior model is the dark-channel prior rule obtained statistically in He's dark-channel-prior restoration and sharpening method; expressed on an image, it means that dark pixel points exist in every local region of a haze-free image (these are the dark primary color J^dark), and the gray values of these dark primary color points tend to 0, satisfying
J^dark(x)=min_{y∈Ω(x)}[min_{c∈{R,G,B}}J^c(y)]→0, (3)
wherein: the superscript c denotes the RGB channels of the image; y∈Ω(x) denotes a window centered on pixel x;
assuming the atmospheric light intensity A is known and the transmittance t(x) is constant within the window Ω(x), applying two minimum operations to formula (1) and rearranging gives:
min_{y∈Ω(x)}min_c[I^c(y)/A^c]=t(x)·min_{y∈Ω(x)}min_c[J^c(y)/A^c]+1-t(x), (4)
combining formulas (3) and (4), the estimated transmittance is obtained as
t̃(x)=1-min_{y∈Ω(x)}min_c[I^c(y)/A^c], (5)
considering that in real life people perceive a slight haze when viewing distant scenes, a sharpening coefficient ω (0 < ω ≤ 1) is introduced, and the above formula is corrected to
t̃(x)=1-ω·min_{y∈Ω(x)}min_c[I^c(y)/A^c], (6)
wherein: the smaller the ω value, the more haze remains in the restored image and the weaker the sharpening; as ω approaches 0, sharpening fails altogether; the closer ω is to 1, the stronger the sharpening, but the image colors become oversaturated and lose realism;
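The dark-channel transmittance estimate of formula (6) can be sketched as follows; the window size and ω value here are illustrative, and the minimum filter is written as a naive loop for clarity:

```python
import numpy as np

def dark_channel(img, win=3):
    """Per-pixel min over RGB, then a min filter over a win x win window (formula (3))."""
    mins = img.min(axis=2)                  # channel-wise minimum
    pad = win // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].min()
    return out

def transmission(img, A, omega=0.85, win=3):
    """Formula (6): t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, win)
```

On a uniform image equal to A everywhere, the dark channel of I/A is 1 and the estimate reduces to t = 1 - ω.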
the method for estimating the atmospheric light intensity A is: first obtain the brightest 0.1% of pixels in the dark primary color image, then take the maximum value of the corresponding pixels in the original image as the estimate of A;
with the estimated transmittance t(x) and atmospheric light intensity A, formula (2) yields the haze-free image; in actual computation, to avoid image distortion caused by too small a transmittance, a lower transmittance limit t_0 is usually set, and the haze-free image is
J(x)=[I(x)-A]/max[t(x),t_0]+A, (7)
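A sketch of the restoration step of formula (7) with the lower transmittance bound t_0; the final clipping to [0, 1] is an added assumption for display purposes, not stated in the source:

```python
import numpy as np

def recover(I, t, A, t0=0.1):
    """Formula (7): J = (I - A) / max(t, t0) + A, then clipped to [0, 1] (assumption)."""
    t_clamped = np.maximum(t, t0)
    J = (I - A) / t_clamped[..., None] + A   # broadcast 2-D t over the RGB axis
    return np.clip(J, 0.0, 1.0)
```

Without the t_0 clamp, near-zero transmittance would amplify (I - A) without bound and distort the output.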
The method for adaptively acquiring the dark primary color value comprises the following specific operation steps: classify the image pixels and determine the optimal segmentation threshold by maximizing the between-class distance;
first, assume image I has L gray levels and that the number of pixels at gray level i is n_i; the total number of pixels is
N=Σ_{i=0}^{L-1}n_i, (8)
and the probability of occurrence of gray level i is p_i = n_i / N;
second, for a set threshold k, divide the image into two classes A and B, where the class-A gray interval is [0, k] and the class-B gray interval is [k+1, L-1]; the gray mean μ of the whole image, the class-A interval gray mean μ_A(k), and the class-B interval gray mean μ_B(k) are:
μ=Σ_{i=0}^{L-1}i·p_i, μ_A(k)=(1/p_A)Σ_{i=0}^{k}i·p_i, μ_B(k)=(1/p_B)Σ_{i=k+1}^{L-1}i·p_i, (9)
the probability distributions of the two gray intervals satisfy:
p_A=Σ_{i=0}^{k}p_i, p_B=Σ_{i=k+1}^{L-1}p_i=1-p_A,
the between-class variance at threshold k is defined as:
σ²(k)=p_A[μ-μ_A(k)]²+p_B[μ-μ_B(k)]², (10)
according to the maximum between-class variance criterion, the threshold k is varied from 0 to L-1 to find the k that maximizes the variance above, which is the optimal segmentation threshold T; every change of k requires recomputing the above formulas, so to reduce computation time a set of candidate threshold points is first screened out according to the valley characteristics of the haze-image histogram, and only those candidates are substituted into the formulas, realizing fast OSTU threshold segmentation;
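The exhaustive maximum between-class variance search of formulas (8) through (10) can be sketched as follows (plain Otsu, without the valley pre-screening):

```python
import numpy as np

def otsu_threshold(gray, L=256):
    """Exhaustive search maximizing sigma^2(k) = pA*(mu-muA)^2 + pB*(mu-muB)^2, formula (10)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                       # p_i = n_i / N, formula (8)
    mu = (np.arange(L) * p).sum()               # whole-image gray mean
    best_k, best_var = 0, -1.0
    for k in range(L - 1):
        pA = p[:k + 1].sum()
        pB = 1.0 - pA
        if pA == 0 or pB == 0:                  # empty class: variance undefined
            continue
        muA = (np.arange(k + 1) * p[:k + 1]).sum() / pA
        muB = (np.arange(k + 1, L) * p[k + 1:]).sum() / pB
        var = pA * (mu - muA) ** 2 + pB * (mu - muB) ** 2
        if var > best_var:
            best_var, best_k = var, k
    return best_k
```

The fast variant in the patent evaluates this same objective only at pre-screened valley points instead of all L-1 thresholds.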
third, from gray level i and pixel count n_i, build L relation pairs [i, n_i]; let a point on the curve of [i, n_i] have gray level T_j and pixel count n_j; gray levels T_j satisfying the following relation are screened as threshold points:
[equation image in source: valley-screening condition on T_j in terms of n_j, the window size δ, and αN]
wherein: δ is the size of the screening window, taken as δ = 3 pixels; α is the minimum limiting ratio, taken as α = 0.1%;
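The screening relation itself is not recoverable from the source, so the sketch below assumes a valley point is a histogram bin that is minimal within a ±δ window and whose count is at least αN; the patent's actual condition may differ:

```python
import numpy as np

def valley_candidates(hist, delta=3, alpha=0.001):
    """Screen candidate thresholds at histogram valleys (ASSUMED form of the screening rule):
    keep gray level j when n_j is minimal over [j-delta, j+delta] and n_j >= alpha * N."""
    N = hist.sum()
    L = len(hist)
    cands = []
    for j in range(delta, L - delta):
        window = hist[j - delta:j + delta + 1]
        if hist[j] == window.min() and hist[j] >= alpha * N:
            cands.append(j)
    return cands
```

Only these candidates would then be fed to the between-class variance search, which is how the patent reports cutting the evaluated thresholds by more than 80%.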
fourth, threshold segmentation is performed on the haze image with the improved fast OSTU threshold method; the threshold-point screening and bright/dark-region segmentation results show that only 49 threshold points remain after screening, reducing the number of threshold points involved in the computation by more than 80%; meanwhile, the bright and dark regions are effectively separated, facilitating region-wise sharpening;
fifth, according to formula (3), the minimum value of the RGB three channels of each pixel point is the dark primary color value, namely:
I^dark(x)=min_{c∈{R,G,B}}I^c(x), (12)
if the above minimum-value method is used to obtain dark primary color values for haze images of different concentrations, color distortion appears in the bright region; the minimum gray value of the bright region is higher than that of the dark region, so directly taking the minimum makes the computed transmittance inaccurate, whereas selecting the three-channel gray mean as the dark primary color value of the bright region better reflects the actual situation; therefore, the dark primary color minimum is corrected, and the dark primary color value is obtained with the following formula:
I^dark(x) = (1/3)·Σ_c I^c(x) for x in the bright region (I^c(x) > T_c for every channel c); I^dark(x) = min_c I^c(x) for x in the dark region, (13)
wherein: t (T) c Dividing a threshold value for a c channel, c epsilon { R, G, B };
the method for adaptively estimating the atmospheric light intensity comprises the following specific operation steps: the estimate uses a bright/dark split-region mean method; when the proportion of bright-area pixels N_b/N is below a set ratio [threshold given by an equation image in the source], i.e., the bright area is small, the brightest 0.1% of points in the dark primary color image are taken first, and the gray mean of the corresponding pixels in the original image is used as the atmospheric light intensity estimate; when N_b/N is at or above that ratio, i.e., the bright area is large, the gray mean of all pixels in the bright area is used as the estimate, computed as:
A=(1/N_b)·Σ_{x∈bright area}I(x), (14)
wherein: n (N) b The number of pixel points in a bright area;
the method for adaptively calculating the sharpening coefficients comprises the following specific operation steps: through carrying out histogram statistics on haze images with different concentrations, the larger the haze concentration is, the more concentrated the histogram distribution is, and most gray values are concentrated in the size of [ m ] c ,M c ]By utilizing the property, a definition coefficient calculation method based on gray level concentration is provided, and the method meets the following conditions:
[equation image in source: formula for the sharpening coefficient ω in terms of m_c, M_c, and ω_c]
wherein: ω_c is the minimum-value adjustment coefficient, taken as ω_c = 0.15; the lower and upper limits m_c and M_c satisfy:
[equation image in source: conditions defining m_c and M_c from the histogram with adjustment coefficient α]
wherein: α is the upper and lower limit adjustment coefficient, taken as α = 1%;
substituting the above results into transmittance formula (6) yields the transmittances of the bright and dark regions of the haze image; combining with formula (7) then yields the restored image J_0.
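The formula for ω itself is not recoverable from the source, but the interval [m_c, M_c] can be sketched under the assumed reading that m_c and M_c are the gray levels at which the cumulative histogram first reaches α and 1 - α:

```python
import numpy as np

def gray_bounds(gray, alpha=0.01, L=256):
    """ASSUMED form of the m_c / M_c conditions: the gray levels where the
    cumulative histogram first reaches alpha and 1 - alpha (alpha = 1%)."""
    hist = np.bincount(gray.ravel(), minlength=L).astype(float)
    cdf = np.cumsum(hist) / hist.sum()
    m_c = int(np.searchsorted(cdf, alpha))
    M_c = int(np.searchsorted(cdf, 1.0 - alpha))
    return m_c, M_c
```

A narrow [m_c, M_c] signals a concentrated histogram, i.e., denser haze, which is the quantity the gray-level concentration method turns into the sharpening coefficient ω.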
CN201910177950.4A 2019-03-11 2019-03-11 Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image Active CN109949239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910177950.4A CN109949239B (en) 2019-03-11 2019-03-11 Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image


Publications (2)

Publication Number Publication Date
CN109949239A CN109949239A (en) 2019-06-28
CN109949239B true CN109949239B (en) 2023-06-16

Family

ID=67009534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910177950.4A Active CN109949239B (en) 2019-03-11 2019-03-11 Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image

Country Status (1)

Country Link
CN (1) CN109949239B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110544009B (en) * 2019-07-26 2022-12-09 中国人民解放军海军航空大学青岛校区 Aviation organic coating aging damage quantitative evaluation method based on digital image processing
CN112330558A (en) * 2020-11-05 2021-02-05 山东交通学院 Road image recovery early warning system and method based on foggy weather environment perception

Citations (4)

Publication number Priority date Publication date Assignee Title
CN106327439A (en) * 2016-08-16 2017-01-11 华侨大学 Rapid fog and haze image sharpening method
CN106940882A (en) * 2017-02-15 2017-07-11 国网江苏省电力公司常州供电公司 A kind of transformer substation video image clarification method for meeting human-eye visual characteristic
JP2018117211A (en) * 2017-01-17 2018-07-26 キヤノン株式会社 Image processing apparatus and image processing method, imaging apparatus, program
CN109087254A (en) * 2018-04-26 2018-12-25 长安大学 Unmanned plane image haze sky and white area adaptive processing method


Non-Patent Citations (3)

Title
A new method for defogging traffic images based on improved dark channel prior; Wang Zesheng; Control and Decision; 2018-03-30; pp. 486-490 *
Surveillance image defogging algorithm based on adaptive processing of highlight regions; Li Yunfeng; Computer Applications and Software; 2018-03-30; pp. 209-214 *
Adaptive global dark channel prior defogging for bright regions; Deng Li; Optics and Precision Engineering; 2016-04-15 (No. 04); pp. 892-900 *


Similar Documents

Publication Publication Date Title
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN107103591B (en) Single image defogging method based on image haze concentration estimation
CN107767353A (en) A kind of adapting to image defogging method based on definition evaluation
CN108389175B (en) Image defogging method integrating variation function and color attenuation prior
CN108765336B (en) Image defogging method based on dark and bright primary color prior and adaptive parameter optimization
CN110288550B (en) Single-image defogging method for generating countermeasure network based on priori knowledge guiding condition
CN111598791B (en) Image defogging method based on improved dynamic atmospheric scattering coefficient function
CN110349113B (en) Adaptive image defogging method based on dark primary color priori improvement
CN108133462B (en) Single image restoration method based on gradient field region segmentation
CN105447825B (en) Image defogging method and its system
CN110211067A (en) One kind being used for UUV Layer Near The Sea Surface visible images defogging method
CN106780390B (en) Single image to the fog method based on marginal classification Weighted Fusion
CN109118450B (en) Low-quality image enhancement method under sand weather condition
CN111598814B (en) Single image defogging method based on extreme scattering channel
CN105023246B (en) A kind of image enchancing method based on contrast and structural similarity
CN109949239B (en) Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
CN115456905A (en) Single image defogging method based on bright and dark region segmentation
CN111598800A (en) Single image defogging method based on space domain homomorphic filtering and dark channel prior
CN105913391B (en) A kind of defogging method can be changed Morphological Reconstruction based on shape
CN112419163A (en) Single image weak supervision defogging method based on priori knowledge and deep learning
Khan et al. Recent advancement in haze removal approaches
CN108765337B (en) Single color image defogging processing method based on dark channel prior and non-local MTV model
CN112907461A (en) Defogging and enhancing method for infrared degraded image in foggy day
CN110852971B (en) Video defogging method based on dark channel prior and Retinex and storage medium
CN109191405B (en) Aerial image defogging algorithm based on transmittance global estimation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant