CN105447825B - Image defogging method and its system - Google Patents

Image defogging method and its system

Info

Publication number
CN105447825B
Authority
CN
China
Prior art keywords
image
scattering function
atmospheric scattering
value
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510645510.9A
Other languages
Chinese (zh)
Other versions
CN105447825A (en)
Inventor
廖斌
訚鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University
Original Assignee
Hubei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University filed Critical Hubei University
Priority to CN201510645510.9A priority Critical patent/CN105447825B/en
Publication of CN105447825A publication Critical patent/CN105447825A/en
Application granted granted Critical
Publication of CN105447825B publication Critical patent/CN105447825B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The present invention provides an image defogging method comprising the following steps: step 1, acquiring an image containing fog; step 2, obtaining an initial atmospheric scattering function according to the boundary condition of the image; step 3, refining the initial atmospheric scattering function with recursive bilateral filtering; step 4, applying the recursive bilateral filter again to obtain the local contrast; step 5, adaptively processing fog regions of different densities based on the local contrast; step 6, quickly obtaining the defogging result according to the image degradation model, performing tone adjustment, and outputting and displaying the final result. The edge information of the defogging result of the method is well preserved and the details are clearer. No residual fog or halo artifacts appear in regions where the scene depth changes abruptly, the real scene is restored to the greatest extent, and the result matches human visual perception as a whole.

Description

Image defogging method and its system
Technical field
The invention belongs to the field of image processing and computer vision, and more particularly relates to an image defogging method and system.
Background technology
At present, research on image defogging focuses mainly on defogging a single image. Common image defogging methods include the Retinex method, the Fattal method, the He method, and the Tarel method.
The Retinex method assumes that the input degraded image is the product of an illumination image and a reflectance image. By removing or reducing the influence of the illumination image, it retains as much as possible of the reflectance image, which reflects the essence of the scene, to achieve defogging. However, when processing color information it uses a nonlinear function of the input image chromaticity to compensate for the loss of color information, and therefore cannot recover the true colors of the scene.
The Fattal method first assumes that the reflectance of a local image region is constant and that the reflectance and the transmission are mutually independent. It estimates the reflectance direction with independent component estimation and estimates the image color based on a Markov random field, obtaining fairly good defogging results. However, the method assumes that the statistics of local regions are independent of each other and requires sufficient color information, so it often fails when visibility is low or color information is insufficient.
The He method statistically analyzes a large number of outdoor high-visibility images and extracts the dark channel prior. It estimates the atmospheric light and the transmission with this prior, then further optimizes the transmission with image matting, and performs defogging using the physical model of foggy-weather imaging. The defogging effect of this method is good and no additional conditions are required, but it often fails when no dark channel exists in the scene. Moreover, when the method optimizes the transmission with matting, it essentially solves a large sparse system of linear equations, which has very high time and space complexity.
The Tarel method assumes that the atmospheric scattering function approaches its maximum value within a certain range and estimates the atmospheric scattering function with an improved median filter, so as to perform visibility-restoring defogging on a single image. Although this method has low time complexity and high execution efficiency, it cannot preserve image edges well, and the defogging result easily exhibits halo artifacts.
The Tan method uses the prior that the contrast of a low-visibility image is lower than that of a high-visibility image, and enhances the image by maximizing the contrast of the defogged image. The result is regularized with a Markov random field; it easily produces halo artifacts in regions where the scene depth changes abruptly, and the colors of the defogged image are oversaturated.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing an image defogging method and system that produce clear details, leave no residual fog or halo artifacts in regions where the scene depth changes abruptly, and can better restore the real scene.
To achieve the above object, in one aspect the present invention provides an image defogging method comprising the following steps:
Step 1: acquiring an image containing fog;
Step 2: obtaining an initial atmospheric scattering function according to the boundary condition of the image;
Step 3: refining the initial atmospheric scattering function with recursive bilateral filtering;
Step 4: applying the recursive bilateral filter again to obtain the local contrast;
Step 5: adaptively processing fog regions of different densities based on the local contrast;
Step 6: quickly obtaining the defogging result according to the image degradation model, performing tone adjustment, and then outputting and displaying the final result.
In another aspect, the present invention provides an image defogging system comprising an image acquisition module, a processing module, and an image output module, wherein:
Image acquisition module: acquires an image containing fog and transmits it to the processing module;
Processing module: obtains an initial atmospheric scattering function according to the boundary condition of the image, refines the initial atmospheric scattering function with recursive bilateral filtering, applies the recursive bilateral filter again to obtain the local contrast, adaptively processes fog regions of different densities based on the local contrast, quickly obtains the defogging result according to the image degradation model, and performs tone adjustment;
Image output module: outputs and displays the final result.
The beneficial effects of the present invention are as follows. The key of the image defogging method and system provided by the invention is the ability to obtain an accurate atmospheric scattering function. First, an initial atmospheric scattering function is obtained using the boundary condition of the given image. To effectively avoid halo artifacts, the initial atmospheric scattering function is refined with recursive bilateral filtering. The recursive bilateral filter is then applied again to obtain the local contrast. Based on the local contrast, fog regions of different densities can be processed adaptively. To obtain a more realistic effect, tone mapping is applied to the result. To accelerate the method, the invention uses an adaptive non-uniform sampling strategy: a corresponding high-dimensional feature space is constructed for the input image, and the feature space is adaptively subdivided with a Gaussian KD (k-dimensional) tree. A Gaussian metric weighs the similarity of the feature vectors corresponding to the pixels, so each subdivided subspace consists of a cluster of pixels with similar features. A sample point is computed for each leaf-node subspace, and these sample points approximately represent the input data set of the image; the number of sample points can therefore be much smaller than the input data set. Defogging is performed on these sample points, and the computed results are interpolated back onto the input data set. The method can thus defog a single image quickly and effectively.
Description of the drawings
Fig. 1 is a flow chart of the image defogging method of the present invention.
Specific embodiment
As shown in Fig. 1, the present invention provides an image defogging method comprising the following steps:
Step 1: acquiring an image containing fog;
Step 2: obtaining an initial atmospheric scattering function according to the boundary condition of the image;
Step 3: refining the initial atmospheric scattering function with recursive bilateral filtering;
Step 4: applying the recursive bilateral filter again to obtain the local contrast;
Step 5: adaptively processing fog regions of different densities based on the local contrast;
Step 6: quickly obtaining the defogging result according to the image degradation model, performing tone adjustment, and then outputting and displaying the final result.
Preferably, in step 2 the global atmospheric light A is obtained according to the image degradation model: minimum-value filtering is applied separately to the r, g, and b channels of the input image with a window ω(i) centered on pixel i, and the maximum value of the three filtered channels is then chosen as the global atmospheric light A, where j is the pixel index and I is the input image.
Further preferably, the initial atmospheric scattering function is obtained as follows. Assume that the atmospheric scattering function within the window ω(i) is constant and denote it V(i). According to the physical fact that the scene radiance is usually bounded, i.e. J ≥ B, an upper bound Wb(i) on V(i) can be obtained, where ζ is a pixel belonging to the window ω(i), a ∈ {r, g, b}, J is the scene radiance, B is the lower bound of the scene radiance, and Wb(i) is the upper bound of the atmospheric scattering function obtained by substituting the lower bound of the scene radiance.
Still further preferably, the scene transmission is allowed to vary slightly within the window, so that the initial estimate of the scene transmission is further adjusted to Wa(i), the initial estimate of the atmospheric scattering function.
Still further preferably, in step 3 a recursive bilateral filtering operator, which operates on a series of one-dimensional discrete signals, refines the initial atmospheric scattering function through multiple iterations, where Mrbf(i) is the refined atmospheric scattering function and σs1 and σr1 are the spatial-domain and range-domain parameters, respectively; this process is denoted M(i) = RBF(Wa(i), σs1, σr1).
Still more preferably, in step 4 a larger-scale bilateral filtering of the initial estimate Wa(i) of the atmospheric scattering function yields M'(i); the formulas for M'(i) and the local contrast N(i) are, respectively,
M'(i) = RBF(Wa(i), σs2, σr2), with σs2 > σs1 and σr2 > σr1,
N(i) = |M(i) − M'(i)|,
where σs2 and σr2 are the spatial-domain and range-domain parameters, respectively, and M(i) is the result of bilateral filtering the initial atmospheric scattering function.
The estimate of the atmospheric scattering function is Vb(i), the estimate based on the local contrast, where m is the mean gray value of M(i) and k is an adjustable coefficient.
Since 0 ≤ Vb(i) ≤ Wa(i), Vb(i) is constrained by Vf(i) = max(min(p·Vb(i), Wa(i)), 0), where Vf(i) is the final estimate of the atmospheric scattering function and p is a balance factor whose role is to retain a small amount of fog so that the defogged result looks more realistic.
Still more preferably, in step 5 the fog regions of different densities are adaptively processed as follows. First, a 5-dimensional feature space is constructed from the spatial position and color information of the pixels of the input image. Then the feature points in the feature space are adaptively clustered with a Gaussian KD tree: in regions where the feature points are less similar, a fine-grained subdivision is performed and the number of local clusters is larger; in regions where the feature points are more similar, a coarse-grained subdivision is performed and the number of local clusters is smaller. If the color variance of the feature points contained in a subspace is higher than a set threshold, the subdivision continues; otherwise the subdivision stops and a leaf node is obtained. The similarity between each feature point contained in a leaf-node subspace and the subspace mean is computed, and all feature points in the leaf node are weighted and averaged to obtain the centroid of the subspace, which serves as the sample point. The Gaussian KD tree thus stores m sample points that approximately represent the input image data set containing n pixels. Finally, when the results computed from the adaptive samples are interpolated back onto the input image, the computed result of each sample point is interpolated onto the input image weighted by the similarity between each pixel and its corresponding sample points.
Still more preferably, in step 6 tone adjustment is performed with a tone mapping method based on the logarithmic equation, where Ld is the output luminance, Ld,max is the maximum luminance that can be output, Lw is the input luminance, Lw,max is the maximum input luminance, and h is a bias coefficient.
Preferably, the image in step 1 comprises a single photograph or continuous video images.
In a further aspect, the present invention provides an image defogging system comprising an image acquisition module, a processing module, and an image output module, wherein:
Image acquisition module: acquires an image containing fog and transmits it to the processing module;
Processing module: obtains an initial atmospheric scattering function according to the boundary condition of the image, refines the initial atmospheric scattering function with recursive bilateral filtering, applies the recursive bilateral filter again to obtain the local contrast, adaptively processes fog regions of different densities based on the local contrast, quickly obtains the defogging result according to the image degradation model, and performs tone adjustment;
Image output module: outputs and displays the final result.
The invention is further described below with reference to specific embodiments, but the invention is not limited to the following embodiments.
A. The global atmospheric light is obtained according to the image degradation model, as follows. In foggy conditions, because of the particles suspended in the air, the light energy of the scene is attenuated before it reaches the observer; meanwhile, when atmospheric light is scattered toward the observer, its energy is added to the target image. Therefore,
I(i) = J(i)t(i) + A(1 − t(i))    (1)
where i is the pixel index, I is the input image, J is the scene radiance, and A is the global atmospheric light; t is the scene transmission, t(i) = e^(−βd(i)), where β is the atmospheric particle scattering coefficient and d is the scene depth. Since the atmospheric scattering function is V(i) = A(1 − t(i)), formula (1) can be rewritten as formula (2).
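Formula (2) is not written out above; a reconstruction that follows directly from the definitions just given (an inference, not the verbatim expression of the original) is
I(i) = J(i)(1 − V(i)/A) + V(i)    (2)
so that once V(i) and A are known, the scene radiance can be recovered as J(i) = A·(I(i) − V(i))/(A − V(i)).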
To perform visibility-restoring defogging on the input image and obtain the high-visibility real scene, J is solved from I in formula (1) together with A and t. A typical way to compute A is the He method, which selects the brightest 0.1% of the pixels in the dark channel and takes the brightest value of the corresponding input-image pixels as A. To improve the efficiency of the method of the present invention while maintaining accuracy, minimum-value filtering is applied separately to the r, g, and b channels of the input image with a window ω(i) centered on pixel i, and the maximum value of the three filtered channels is then chosen as A, where j is the pixel index.
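One formula consistent with this description (an inference, since the expression itself is not reproduced here) is A = max over a ∈ {r, g, b} and over all pixels of ( min over j ∈ ω(i) of I^a(j) ). Whether A is kept as this single scalar or as one value per channel is not fully spelled out, so the following NumPy/SciPy sketch, with a hypothetical window size, simply returns the scalar maximum.

    import numpy as np
    from scipy.ndimage import minimum_filter

    def estimate_atmospheric_light(image, window=15):
        """Global atmospheric light A as described in part A.

        `image` is an H x W x 3 RGB array in [0, 1]; `window` is the side
        length of the square window omega(i) (a hypothetical value).  Each
        channel is minimum-filtered over the window, and the largest
        filtered value over all pixels and channels is taken as A.
        """
        min_filtered = np.stack(
            [minimum_filter(image[..., c], size=window) for c in range(3)],
            axis=-1)
        return float(min_filtered.max())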
B. The initial atmospheric scattering function is obtained as follows. Assume that the atmospheric scattering function within the window ω(i) is constant and denote it V(i). Applying the minimum operation twice to both sides of formula (2), and
using the physical fact that the scene radiance is usually bounded, i.e. J ≥ B, an upper bound Wb(i) on V(i) is obtained.
If the window ω(i) is of size 1 × 1, B = 0, and the global atmospheric light A of the three color channels is identical, the bound takes a particularly simple form. In most cases, however, the bound B of the given image is not 0 and the A of the three channels is not necessarily identical; the simplified form then no longer applies, but V(i) ≤ Wb(i) still holds.
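The bound formulas themselves are not reproduced here. One derivation consistent with the stated assumptions (a reconstruction, not necessarily the exact expression of the original) is: from formula (2), I^a(ζ) = J^a(ζ)(1 − V(i)/A^a) + V(i) for each ζ ∈ ω(i) and a ∈ {r, g, b}; substituting J^a(ζ) ≥ B^a and solving for V(i) gives V(i) ≤ A^a(I^a(ζ) − B^a)/(A^a − B^a), and taking the minimum over the window and the channels yields
Wb(i) = min over ζ ∈ ω(i), a ∈ {r, g, b} of [ A^a(I^a(ζ) − B^a)/(A^a − B^a) ],    V(i) ≤ Wb(i).
When the window is 1 × 1, B = 0, and the three channels share the same A, this reduces to Wb(i) = min over a of I^a(i).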
In fact, the above computation always requires the precondition that the scene transmission is constant within the window, which usually cannot be satisfied in practice. The present invention therefore slightly relaxes this precondition and allows the scene transmission to vary slightly within the window, so that the initial estimate of the scene transmission is further adjusted.
Directly using the initial estimate Wa(i) produces halos, so Wa(i) must be refined further. Meanwhile, the atmospheric scattering function should vary smoothly with the input image and should not depend on the texture details of the image. Moreover, at edge regions where the scene depth changes abruptly, the atmospheric scattering function undergoes transitions. Therefore, the initial estimate Wa(i) obtained above must be refined in an edge-preserving way that removes the influence of texture details and prevents the defogging result from exhibiting residual fog or halo artifacts.
C. The initial atmospheric scattering function is refined with recursive bilateral filtering. Smoothing the initial estimate Wa(i) with a traditional bilateral filtering operator can meet the above requirements, but the present invention does not use the traditional bilateral filtering operator; it uses a recursive bilateral filtering operator instead. Like traditional bilateral filtering, recursive bilateral filtering considers the similarity in both the spatial domain and the range domain of the image and achieves edge-preserving local smoothing. In addition, recursive bilateral filtering operates on a series of one-dimensional discrete signals and, through multiple iterations, can accurately preserve edges of different orientations, yielding a better result than traditional bilateral filtering. The initial value is refined with recursive bilateral filtering according to formula (6), where σs1 and σr1 are the spatial-domain and range-domain parameters, respectively; formula (6) is abbreviated as M(i) = RBF(Wa(i), σs1, σr1).
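Formula (6) is not reproduced here. Purely as an illustration of the class of operator being described, the sketch below implements a first-order recursive edge-aware smoother (causal and anti-causal passes along both image axes, with a range-attenuated feedback coefficient); it should be read as a sketch in the spirit of recursive bilateral filtering, not as the exact formula (6).

    import numpy as np

    def recursive_edge_aware(img, sigma_s=8.0, sigma_r=0.1, iterations=3):
        """Edge-aware recursive smoothing of a single-channel image in [0, 1].

        A first-order recursive filter is run forward and backward along the
        vertical and horizontal directions; the feedback coefficient at each
        step is attenuated by the local intensity difference, so strong edges
        are preserved while texture is smoothed away.
        """
        lam = np.exp(-np.sqrt(2.0) / sigma_s)       # base feedback strength

        def pass_1d(signal):                         # one causal pass along axis 0
            out = signal.copy()
            for i in range(1, signal.shape[0]):
                diff = np.abs(signal[i] - signal[i - 1])
                a = lam * np.exp(-diff / sigma_r)    # range-attenuated feedback
                out[i] = (1.0 - a) * signal[i] + a * out[i - 1]
            return out

        out = img.astype(np.float64)
        for _ in range(iterations):
            for flip in (lambda x: x, lambda x: x[::-1]):   # forward, then backward pass
                out = flip(pass_1d(flip(out)))              # filter along the vertical axis
                out = flip(pass_1d(flip(out.T))).T          # filter along the horizontal axis
        return out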
D. The recursive bilateral filter is applied again: two bilateral filterings of different scales are in fact applied to the initial estimate, and the local contrast of the image is measured by the difference between the two filtering results. The refined result M(i) is dark overall and its local contrast is not high, so further adjustment is needed. Because the atmospheric scattering function reflects the portion of the atmospheric light that participates in imaging, it depends only on the atmospheric light and the scene depth. Background regions far from the observer have dense fog and appear very blurry after imaging, so the corresponding value of the atmospheric scattering function should be higher; foreground regions close to the observer have thin fog, so the corresponding value of the atmospheric scattering function should be lower. If, however, the RGB values of the given image are themselves high, the corresponding gray values in M(i) remain high. Therefore, directly approximating the atmospheric scattering function V(i) by M(i) is likely to mistake brightly colored objects for dense-fog regions, making the whole image dark after defogging.
In a foggy image, dense-fog regions are influenced more strongly by the atmospheric light than thin-fog regions and have lower contrast, so dense-fog and thin-fog regions can be roughly distinguished by the local contrast. Based on the properties of the image itself, the local contrast can be represented by the standard deviation of the gray values in a spatial neighborhood or by the gradient of the gray values. These methods share a defect, however: their values at edge pixels are all very large, usually larger than the regional average, so they easily produce edge halos. For a bilateral filter, the kernel parameters σs and σr directly determine the smoothness of the filtering result: the larger the parameter values, the more heavily the image is smoothed and the more texture details are lost. Therefore, to prevent edge halos, two recursive bilateral filterings of different scales are applied to the initial estimate Wa(i), and the local contrast of the image is measured by the difference between the two filtering results. In the previous part, one bilateral filtering of the initial estimate Wa(i) produced M(i); now the kernel parameters are increased and a larger-scale bilateral filtering of Wa(i) produces M'(i). Dense-fog regions are themselves very blurry and many details are invisible, so even when Wa(i) is filtered with larger parameters the result changes little relative to M(i); foreground objects where the fog is thin contain a certain amount of texture detail, so the larger-scale filtering of Wa(i) removes more detail and the two filtering results differ more. Formally, the formulas for M'(i) and the local contrast N(i) are, respectively,
M'(i) = RBF(Wa(i), σs2, σr2), with σs2 > σs1 and σr2 > σr1,
N(i) = |M(i) − M'(i)|.
To distinguish dense-fog regions from objects whose own brightness is high, the brightness of the pixel itself must also be considered when estimating the atmospheric scattering light. The estimate of the atmospheric scattering function is Vb(i),
where m is the mean gray value of M(i) and k is an adjustable coefficient. By weighing the relative brightness of pixel i, the aim is that for brightly colored objects that are not dense fog the reduction of the corresponding Vb(i) value is larger, while for dark objects the reduction is smaller. Since 0 ≤ Vb(i) ≤ Wa(i), Vb(i) is constrained by Vf(i) = max(min(p·Vb(i), Wa(i)), 0); the purpose of the parameter p is to retain a small amount of fog so that the defogged result looks more natural.
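Taken together, the two-scale comparison and the final clamping can be sketched as below. Here `rbf` stands for any edge-preserving smoother such as the recursive filter sketched in part C, the example parameter pairs and p = 0.95 are hypothetical values, and the exact formula for Vb(i) (which also weighs pixel brightness against the mean m) is not reproduced above, so Vb is left as an input.

    import numpy as np

    def local_contrast(Wa, rbf, small=(8.0, 0.1), large=(16.0, 0.2)):
        """Local contrast N(i) = |M(i) - M'(i)| from two filtering scales.

        `rbf(x, sigma_s, sigma_r)` is any edge-preserving smoother (for
        example the recursive filter sketched above); the two parameter pairs
        are hypothetical examples with the second strictly larger, as the
        text requires (sigma_s2 > sigma_s1, sigma_r2 > sigma_r1).
        """
        M = rbf(Wa, *small)
        M_large = rbf(Wa, *large)
        return M, np.abs(M - M_large)

    def constrain_scattering(Vb, Wa, p=0.95):
        """Final estimate Vf(i) = max(min(p * Vb(i), Wa(i)), 0).

        `Vb` is the contrast- and brightness-based estimate, whose exact
        formula is not reproduced in the text; p < 1 deliberately keeps a
        little fog so the result looks more natural.
        """
        return np.maximum(np.minimum(p * Vb, Wa), 0.0)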
E. Tone adjustment is applied to the result. Because the atmospheric light is affected by the foggy environment, the color and contrast of the defogging result still differ considerably from the real scene, typically showing low brightness and contrast and poor visibility of details. Tone adjustment of the defogging result is therefore required. The present invention uses a tone mapping method based on the logarithmic equation,
where Ld is the output luminance, Ld,max is the maximum luminance that can be output, Lw is the input luminance, Lw,max is the maximum input luminance, and h is a bias coefficient. The result after tone adjustment better matches the colors of the real scene and its details are clearer.
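The logarithmic mapping equation itself is not written out above. As an illustration only, the sketch below uses the Drago et al. adaptive logarithmic mapping, which involves exactly the quantities named in the text (Ld, Ld,max, Lw, Lw,max and a bias coefficient); treating it as the equation used here is an assumption.

    import numpy as np

    def log_tone_map(Lw, Ld_max=100.0, h=0.85):
        """Adaptive logarithmic tone mapping (Drago-style, assumed form).

        Lw     : input luminance array (positive values)
        Ld_max : maximum luminance the display can output (hypothetical default)
        h      : bias coefficient; smaller values compress highlights harder
        """
        Lw = np.asarray(Lw, dtype=np.float64)
        Lw_max = float(Lw.max())
        bias_exponent = np.log(h) / np.log(0.5)
        denom = np.log(2.0 + 8.0 * (Lw / Lw_max) ** bias_exponent)
        return (Ld_max * 0.01 / np.log10(Lw_max + 1.0)) * np.log(Lw + 1.0) / denom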
F. To accelerate the visibility-restoring defogging computation on the input image while effectively guaranteeing accuracy, the present invention neither processes the input image pixel by pixel nor analyzes the input image with a traditional uniform down-sampling preprocessing method; it uses an adaptive non-uniform sampling strategy instead. A corresponding high-dimensional feature space is constructed for the input image, and the feature space is adaptively subdivided with the Gaussian KD tree. First, a 5-dimensional feature space is constructed from the spatial position and color information of the pixels of the input image. Then the feature points in the feature space are adaptively clustered with the Gaussian KD tree: in regions where the feature points are less similar, a fine-grained subdivision is performed and the number of local clusters is larger; in regions where the feature points are more similar, a coarse-grained subdivision is performed and the number of local clusters is smaller. Mapped back onto the input image, this means that smoothly varying image regions are subdivided more coarsely and edge regions are subdivided more finely. If the color variance of the feature points contained in a subspace is higher than a set threshold, the subdivision continues; otherwise the subdivision stops and a leaf node is obtained. The similarity between each feature point contained in a leaf-node subspace and the subspace mean is computed, and all feature points in the leaf node are weighted and averaged to obtain the centroid of the subspace, which serves as the sample point. In this way, each leaf node of the Gaussian KD tree stores one sample point. Finally, the Gaussian KD tree stores m sample points that approximately represent the input image data set containing n pixels. The present invention only needs to perform defogging on these sample points and does not need to involve all pixels. When the results computed from the adaptive samples are finally interpolated back onto the input image, the sample-point data are not directly assigned to the corresponding pixels; instead, the computed result of each sample point is interpolated onto the input image weighted by the similarity between each pixel and its corresponding sample points.
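The following sketch illustrates this adaptive non-uniform sampling and the weighted interpolation back to the pixels. It builds the 5-D (x, y, r, g, b) features, recursively splits them along the widest dimension until the color variance in a cell falls below a threshold, takes each leaf's Gaussian-weighted centroid as a sample point, and spreads per-sample results back with Gaussian weights over the nearest samples. The split rule, the variance threshold, the Gaussian bandwidths, and the k-nearest-sample interpolation are stand-in choices; the text does not fully specify the Gaussian KD-tree construction.

    import numpy as np

    def build_features(image):
        """5-D features (x, y, r, g, b) for an H x W x 3 image in [0, 1]."""
        H, W, _ = image.shape
        ys, xs = np.mgrid[0:H, 0:W]
        return np.concatenate(
            [xs.reshape(-1, 1) / W, ys.reshape(-1, 1) / H, image.reshape(-1, 3)],
            axis=1)

    def _centroid(cell, sigma=0.1):
        """Similarity-weighted centroid of a leaf (Gaussian weights around the mean)."""
        mu = cell.mean(axis=0)
        w = np.exp(-((cell - mu) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
        return (w[:, None] * cell).sum(axis=0) / w.sum()

    def adaptive_samples(feats, var_thresh=1e-3, min_leaf=16):
        """KD-style recursive subdivision; one centroid sample per leaf node."""
        samples, stack = [], [np.arange(feats.shape[0])]
        while stack:
            idx = stack.pop()
            cell = feats[idx]
            if len(idx) <= min_leaf or cell[:, 2:5].var(axis=0).sum() <= var_thresh:
                samples.append(_centroid(cell))          # color variance low: leaf
                continue
            dim = np.argmax(cell.max(axis=0) - cell.min(axis=0))   # widest dimension
            median = np.median(cell[:, dim])
            left, right = idx[cell[:, dim] <= median], idx[cell[:, dim] > median]
            if len(left) == 0 or len(right) == 0:        # degenerate split: stop here
                samples.append(_centroid(cell))
                continue
            stack.extend([left, right])
        return np.array(samples)

    def interpolate_back(feats, samples, sample_values, sigma=0.1, knn=4):
        """Gaussian-weighted interpolation of per-sample results to every pixel."""
        d2 = ((feats[:, None, :] - samples[None, :, :]) ** 2).sum(axis=2)
        nearest = np.argsort(d2, axis=1)[:, :knn]
        w = np.exp(-np.take_along_axis(d2, nearest, axis=1) / (2.0 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True) + 1e-12
        return (w * sample_values[nearest]).sum(axis=1)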
The objective evaluation method in the present invention measures from the perspective of image contrast. It mainly follows the definition of atmospheric visibility proposed by the International Commission on Illumination and obtains a contrast map by detecting visible edges with the logarithmic image processing model; on this basis, the related evaluation metrics measure the contrast-enhancement ability well. Three metrics are used to objectively evaluate the defogging effects of the different methods in Fig. 7: the ratio of visible edges e, the normalized gradient mean of visible edges r̄, and the percentage σ of saturated black and white pixels. The larger e and r̄ and the smaller σ, the better the defogging effect, the better the edges are preserved, and the higher the contrast.
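These metrics correspond to the widely used blind contrast-enhancement assessment of Hautière et al.; since the formulas are not written out above, the standard definitions from that literature are stated here for reference:
e = (nr − no)/no,
r̄ = exp( (1/nr) · Σ log ri ),
σ = ns/(W·H),
where no and nr are the numbers of visible edges in the original and restored images, ri is the ratio of the gradient at visible edge point Pi after and before restoration, ns is the number of saturated (pure black or pure white) pixels, and W and H are the image width and height.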
As shown in Table 1, the σ of the method of the present invention is 0, meaning that no pixels have gray level 0 or 255, and e and r̄ are also relatively large, which objectively illustrates the effectiveness of the method of the present invention.
Table 2 gives the sampling rate at which the method of the present invention obtains the defogging result and compares the running time of the method of the present invention with that of the other methods. It can be seen that the method of the present invention has a certain advantage in execution efficiency.
Table 1. Metric comparison of the defogging results
Table 2. Running-time comparison of the defogging results
The edge information of the defogging result of the method of the present invention is well preserved and the details are clearer. No residual fog or halo artifacts appear in regions where the scene depth changes abruptly; the real scene is restored to the greatest extent, and the result matches human visual perception as a whole. The Fattal method performs poorly in dense-fog regions; moreover, when color information is insufficient or the independent components vary little, the statistics of the original image become unreliable, so the colors of the defogged result are noticeably distorted, generally oversaturated, and the distortion is serious. For the He method, the dark channel prior fails when the color of a local region of the original image differs greatly from the color of the atmospheric light; in addition, the image edges are not distinct enough, and residual fog remains in regions where the scene depth changes abruptly. The Tarel method estimates the atmospheric scattering function quickly with median filtering, which leaves some edge regions not cleanly defogged, and it also suffers from poor contrast and image distortion.

Claims (9)

1. An image defogging method, comprising the following steps:
Step 1: acquiring an image containing fog;
Step 2: obtaining an initial atmospheric scattering function according to the boundary condition of the image;
Step 3: refining the initial atmospheric scattering function with recursive bilateral filtering;
Step 4: applying the recursive bilateral filter again to obtain the local contrast;
Step 5: adaptively processing fog regions of different densities based on the local contrast;
Step 6: quickly obtaining the defogging result according to the image degradation model, performing tone adjustment, and then outputting and displaying the final result;
wherein in step 2 the global atmospheric light A is obtained according to the image degradation model: minimum-value filtering is applied separately to the r, g, and b channels of the input image with a window ω(i) centered on pixel i, and the maximum value of the three filtered channels is then chosen as the global atmospheric light A,
where j is the pixel index and I is the input image.
2. The image defogging method according to claim 1, characterized in that the initial atmospheric scattering function is obtained as follows: assuming that the atmospheric scattering function within the window ω(i) is constant and denoting it V(i), and using the physical fact that the scene radiance is usually bounded, i.e. J ≥ B, an upper bound Wb(i) on V(i) is obtained,
where V(i) is the atmospheric scattering function, ζ is a pixel belonging to the window ω(i), a ∈ {r, g, b}, J is the scene radiance, B is the lower bound of the scene radiance, and Wb(i) is the upper bound of the atmospheric scattering function obtained by substituting the lower bound of the scene radiance.
3. The image defogging method according to claim 2, characterized in that the scene transmission is allowed to vary slightly within the window, so that the initial estimate of the scene transmission is further adjusted to Wa(i), the initial estimate of the atmospheric scattering function.
4. The image defogging method according to claim 3, characterized in that in step 3 a recursive bilateral filtering operator, which operates on a series of one-dimensional discrete signals, refines the initial atmospheric scattering function through multiple iterations,
where Mrbf(i) is the refined atmospheric scattering function and σs1 and σr1 are the spatial-domain and range-domain parameters, respectively; this process is denoted M(i) = RBF(Wa(i), σs1, σr1).
5. The image defogging method according to claim 4, characterized in that in step 4 a larger-scale bilateral filtering of the initial estimate Wa(i) of the atmospheric scattering function yields M'(i), and the formulas for M'(i) and the local contrast N(i) are, respectively,
M'(i) = RBF(Wa(i), σs2, σr2), with σs2 > σs1 and σr2 > σr1,
N(i) = |M(i) − M'(i)|,
where σs2 and σr2 are the spatial-domain and range-domain parameters, respectively, and M(i) is the result of bilateral filtering the initial atmospheric scattering function;
the estimate of the atmospheric scattering function is Vb(i), the estimate based on the local contrast,
where m is the mean gray value of M(i) and k is an adjustable coefficient;
since 0 ≤ Vb(i) ≤ Wa(i), Vb(i) is constrained by Vf(i) = max(min(p·Vb(i), Wa(i)), 0), where Vf(i) is the final estimate of the atmospheric scattering function and p is a balance factor.
6. The image defogging method according to claim 4, characterized in that in step 5 the fog regions of different densities are adaptively processed as follows: first, a 5-dimensional feature space is constructed from the spatial position and color information of the pixels of the input image; then the feature points in the feature space are adaptively clustered with a Gaussian KD tree, performing fine-grained subdivision with more local clusters in regions where the feature points are less similar and coarse-grained subdivision with fewer local clusters in regions where the feature points are more similar; if the color variance of the feature points contained in a subspace is higher than a set threshold, the subdivision continues, otherwise the subdivision stops and a leaf node is obtained; the similarity between each feature point contained in a leaf-node subspace and the subspace mean is computed, and all feature points in the leaf node are weighted and averaged to obtain the centroid of the subspace, which serves as the sample point; the Gaussian KD tree thus stores m sample points that approximately represent the input image data set containing n pixels; finally, when the results computed from the adaptive samples are interpolated back onto the input image, the computed result of each sample point is interpolated onto the input image weighted by the similarity between each pixel and its corresponding sample points.
7. The image defogging method according to claim 4, characterized in that in step 6 tone adjustment is performed with a tone mapping method based on the logarithmic equation,
where Ld is the output luminance, Ld,max is the maximum luminance that can be output, Lw is the input luminance, Lw,max is the maximum input luminance, and h is a bias coefficient.
8. The image defogging method according to claim 1, characterized in that the image in step 1 comprises a single photograph or continuous video images.
9. An image defogging system, comprising an image acquisition module, a processing module, and an image output module, wherein:
the image acquisition module acquires an image containing fog and transmits it to the processing module;
the processing module obtains an initial atmospheric scattering function according to the boundary condition of the image, refines the initial atmospheric scattering function with recursive bilateral filtering, applies the recursive bilateral filter again to obtain the local contrast, adaptively processes fog regions of different densities based on the local contrast, quickly obtains the defogging result according to the image degradation model, and performs tone adjustment;
the image output module outputs and displays the final result;
wherein the processing module obtains the global atmospheric light A according to the image degradation model: minimum-value filtering is applied separately to the r, g, and b channels of the input image with a window ω(i) centered on pixel i, and the maximum value of the three filtered channels is then chosen as the global atmospheric light A,
where j is the pixel index and I is the input image.
CN201510645510.9A 2015-10-08 2015-10-08 Image defogging method and its system Expired - Fee Related CN105447825B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510645510.9A CN105447825B (en) 2015-10-08 2015-10-08 Image defogging method and its system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510645510.9A CN105447825B (en) 2015-10-08 2015-10-08 Image defogging method and its system

Publications (2)

Publication Number Publication Date
CN105447825A CN105447825A (en) 2016-03-30
CN105447825B true CN105447825B (en) 2018-06-12

Family

ID=55557959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510645510.9A Expired - Fee Related CN105447825B (en) 2015-10-08 2015-10-08 Image defogging method and its system

Country Status (1)

Country Link
CN (1) CN105447825B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780362B (en) * 2016-11-23 2019-07-02 哈尔滨工业大学 Road video defogging method based on dichromatic reflection model and bilateral filtering
CN107316279B (en) * 2017-05-23 2019-11-22 天津大学 Low light image Enhancement Method based on tone mapping and regularization model
CN107240075A (en) * 2017-05-27 2017-10-10 上海斐讯数据通信技术有限公司 A kind of haze image enhancing processing method and system
CN107392870B (en) * 2017-07-27 2020-07-21 Oppo广东移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN110189259B (en) * 2018-02-23 2022-07-08 荷兰移动驱动器公司 Image haze removing method, electronic device and computer readable storage medium
CN109934781B (en) * 2019-02-27 2020-10-23 合刃科技(深圳)有限公司 Image processing method, image processing device, terminal equipment and computer readable storage medium
CN111260582B (en) * 2020-01-17 2023-04-25 合肥登特菲医疗设备有限公司 IP image enhancement method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101908210A (en) * 2010-08-13 2010-12-08 北京工业大学 Method and system for color image defogging treatment
CN102930514A (en) * 2012-09-27 2013-02-13 西安电子科技大学 Rapid image defogging method based on atmospheric physical scattering model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Single Image Haze Removal Using Dark Channel Prior; Kaiming He et al.; Proceedings of IEEE Computer Society Conference; 2009-12-31; pp. 1956-1963 *
Fast image dehazing combined with accurate atmospheric scattering map computation; Gan Jiajia et al.; Journal of Image and Graphics; 2013-05-31; Vol. 18, No. 5; pp. 585-587 *

Also Published As

Publication number Publication date
CN105447825A (en) 2016-03-30

Similar Documents

Publication Publication Date Title
CN105447825B (en) Image defogging method and its system
Xie et al. Improved single image dehazing using dark channel prior and multi-scale retinex
Tang et al. Investigating haze-relevant features in a learning framework for image dehazing
Tripathi et al. Single image fog removal using bilateral filter
Park et al. Single image dehazing with image entropy and information fidelity
Huang et al. An advanced single-image visibility restoration algorithm for real-world hazy scenes
CN107103591B (en) Single image defogging method based on image haze concentration estimation
Gao et al. Sand-dust image restoration based on reversing the blue channel prior
Chiang et al. Underwater image enhancement: using wavelength compensation and image dehazing (WCID)
CN108765336A (en) Image defogging method based on dark bright primary colors priori with auto-adaptive parameter optimization
Park et al. Single image haze removal with WLS-based edge-preserving smoothing filter
CN111062293B (en) Unmanned aerial vehicle forest flame identification method based on deep learning
KR20140140163A (en) Appatatus for image dehazing using the user controllable radical root operation
Pei et al. Effective image haze removal using dark channel prior and post-processing
Guo et al. Image dehazing based on haziness analysis
CN105023246B (en) A kind of image enchancing method based on contrast and structural similarity
CN110349113B (en) Adaptive image defogging method based on dark primary color priori improvement
Khan et al. Recent advancement in haze removal approaches
Zhang et al. A fast video image defogging algorithm based on dark channel prior
CN116664448B (en) Medium-high visibility calculation method and system based on image defogging
Ghate et al. New approach to underwater image dehazing using dark channel prior
CN112907461A (en) Defogging and enhancing method for infrared degraded image in foggy day
CN109949239B (en) Self-adaptive sharpening method suitable for multi-concentration multi-scene haze image
Xie et al. Image defogging method combining light field depth estimation and dark channel
CN110852971A (en) Video defogging method based on dark channel prior and Retinex and computer program product

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180612

Termination date: 20181008

CF01 Termination of patent right due to non-payment of annual fee