CN113379619B - Integrated processing method for defogging imaging, visibility extraction and depth of field estimation - Google Patents

Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Info

Publication number
CN113379619B
CN113379619B (application CN202110518861.9A)
Authority
CN
China
Prior art keywords
visibility
color
vector
atmospheric
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110518861.9A
Other languages
Chinese (zh)
Other versions
CN113379619A
Inventor
蒋大钢
孔令钊
张宇
刘昕
钟港
王毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202110518861.9A
Publication of CN113379619A
Application granted
Publication of CN113379619B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/90 - Dynamic range modification of images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an integrated processing method for defogging imaging, visibility extraction and depth of field estimation, comprising the following steps: first, a classical fogging model is selected; second, the value of the atmospheric background light is obtained; third, an estimate of the transmittance is obtained from the dark channel prior; fourth, the fog-free image is restored; fifth, the fogging model is expressed in vector form in a color space, and an auxiliary line is constructed to solve the transmittance ratio of the same scene in two frames, from which the true color of the scene and the corresponding transmittance are solved; sixth, the atmospheric extinction coefficient is solved by combining the change in distance between the scene and the unmanned aerial vehicle, and the visibility is calculated with the empirical formula relating extinction coefficient and visibility; seventh, the scene depth is solved. The invention helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.

Description

Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
Technical Field
The invention relates to the technical field of image processing, in particular to an integrated processing method for defogging imaging, visibility extraction and depth of field estimation.
Background
With the development of unmanned aerial vehicle (UAV) technology, UAVs are increasingly replacing traditional manned aircraft by virtue of their safety and maneuverability, and are applied in scenarios such as patrol, reconnaissance and rescue, and surveying. Aerial image quality is easily degraded by haze weather, which impairs aerial photography tasks and restricts the working scenarios and times of UAVs, creating an important practical demand for research on image defogging algorithms. Researchers have successively proposed methods such as contrast enhancement, mean defogging and median defogging; these have some defogging effect, but the results often suffer severe color distortion. In 2009, Dr. Kaiming He proposed the dark channel prior and designed a defogging algorithm based on it; owing to its excellent effect and stability, it remains a mainstream and classical defogging method.
Based on a physical model of atmospheric fogging, the dark channel defogging algorithm explains how the imaged color is synthesized from the true scene color and the atmospheric background light via the atmospheric transmittance; restoring the fog-free true color therefore requires solving only two unknown parameters, the transmittance and the atmospheric background light, where the transmittance can be obtained once the atmospheric background light is known. The algorithm regards the sky region of the image as the bright region of the dark channel map, and so takes the brightest 1% of pixels of the dark channel map as the atmospheric background light; the transmittance is then estimated from the dark channel prior's statistical rule together with this value, and the image is defogged using the two estimates. Because the algorithm's estimation of the atmospheric background light depends by design on a sky region in the image, while aerial images often contain no sky, the atmospheric background light in aerial images is estimated inaccurately: the defogged result is dim, colors are distorted, and detail discernibility is low. There is therefore an urgent need for a defogging algorithm suited to aerial images.
Today, UAV applications trend toward diversified functions, varied scenarios and complex environments, and UAV technology must develop toward all-weather, multi-functional, wide-field operation. Further investigation shows that, beyond attempts to improve aerial defogging imaging, little work has explored whether more functions can be extracted from the physical process of imaging through atmospheric fog. According to the attenuation law of light in atmospheric transmission, the atmospheric extinction coefficient and the transmission distance directly determine the transmittance of a scene, so whether this process can be used to calculate the extinction coefficient and the scene depth is worth studying. Among environment-detection parameters, visibility (affected by haze, dust and smoke, and related to the atmospheric extinction coefficient by a known correspondence) is an important atmospheric parameter. Carrying a dedicated visibility detector consumes the UAV's limited payload and reduces flight efficiency. If visibility could instead be estimated from the aerial images, the visibility meter could be omitted, enabling environmental monitoring while saving payload and improving flight efficiency. UAVs also need scene depth measurement in applications such as surveying, mapping and target tracking; if depth of field could be estimated from the aerial images, a passive ranging means would be available without carrying binocular/multi-view cameras, radar or similar equipment. As stated above, the visibility and distance information carried by the physics of fog-penetrating imaging is worth mining: it would make UAVs more valuable in environmental monitoring and assist the development of intelligent UAV piloting.
In summary, typical defogging algorithms are problematic in aerial-image applications and need improvement, while computing visibility and scene depth from the atmospheric transmission of light has both research and application value.
Disclosure of Invention
It is an object of the present invention to provide an integrated processing method for defogging imaging, visibility extraction and depth of field estimation that overcomes some or all of the deficiencies of the prior art.
The integrated processing method of defogging imaging, visibility extraction and depth of field estimation comprises the following steps:
firstly, selecting the classical fogging model $I(x) = t(x)\,D(x) + (1 - t(x))\,A$, wherein x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the natural color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with $t(x) = e^{-\beta d(x)}$, where β is the atmospheric extinction coefficient and d(x) is the distance of light transmission;
secondly, using an image matching algorithm to find scenes that appear in both of two changing frames, and solving the value A of the atmospheric background light from the geometric relationship between the imaging colors of the same scene in the two frames and their color difference;
thirdly, obtaining the estimated value $\tilde{t}(x)$ of the transmittance t(x) according to the dark channel prior:

$$\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^{c}(y)}{A^{c}} \right)$$

wherein Ω is a square window centered at x and c is the color channel of D(x);
fourthly, completing the restoration of the fog-free image with a deformed form of the fogging model:

$$D(x) = \frac{I(x) - A}{\tilde{t}(x)} + A;$$

fifthly, expressing the fogging model in a color space in vector form, constructing an auxiliary line to solve the transmittance ratio $t_{x1}/t_{x2}$ of the same scene in two frames of images, and further solving the real color D(x) and the corresponding transmittance t(x), where t(x) comprises $t_{x1}$ and $t_{x2}$;
sixthly, writing the transmittances as a ratio:

$$\frac{t_{x1}}{t_{x2}} = e^{-\beta (d_1 - d_2)}$$

wherein x1, x2 represent the same scene in the two frames; combined with the change value $d_1 - d_2$ of the distance between the scene and the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved, and the visibility V can be calculated with the empirical formula relating extinction coefficient and visibility:

$$V = \frac{3.912}{\beta};$$

seventhly, with the transmittance t(x) and the extinction coefficient β known, $t(x) = e^{-\beta d(x)}$ yields the depth d of the scene.
Preferably, in step three, in the dark channel prior the dark channel map is defined as the grayscale image composed of the lowest channel value of every pixel:

$$D^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} D^{c}(y) \right)$$

wherein $D^{dark}(x)$ is the dark channel image; from the dark channel prior $D^{dark}(x) \to 0$, the estimate $\tilde{t}(x)$ of t(x) given above is obtained.
Preferably, in step five, two frames with an overlapping area are selected from the aerial images to estimate the atmospheric background light A; the SIFT operator is used for feature matching to find the same points in the two images, and the color vectors of the same object in the two frames are expressed as:

$$\vec{I}_i = p_i\,\vec{D} + q_i\,\vec{A}, \qquad i = 1, 2$$

wherein $\vec{I}_i$, $\vec{D}$, $\vec{A}$ are the space vectors of $I_i(x)$, $D_i(x)$, A in the RGB color space, $p_i = t_{xi}$ and $q_i = 1 - t_{xi}$; $\vec{I}_i$, $\vec{D}$ and $\vec{A}$ lie in one spatial plane, i being the frame index; the plane formed by the unattenuated color vector $\vec{D}$ of the scene and the atmospheric light vector $\vec{A}$ contains all the actual imaging color vectors of the scene synthesized under different transmittances.
In two adjacent images, the image matching algorithm yields several point pairs conforming to the above formula; each point pair generates one spatial plane containing $\vec{I}_1$, $\vec{I}_2$, $\vec{D}$ and $\vec{A}$, and each spatial plane has a normal perpendicular to it and hence perpendicular to $\vec{A}$:

$$\vec{n}_i = \vec{I}_{1i} \times \vec{I}_{2i}$$

For the several normal vectors $\vec{n}_i$ of the respective planes there is always $\vec{n}_i \perp \vec{A}$, giving the equations:

$$\vec{n}_i \cdot \vec{A} = 0, \qquad i = 1, 2, \ldots$$

Using this rule, an optimization can be written to fit the direction $\hat{A}$:

$$\hat{A} = \arg\min_{\|\vec{a}\| = 1} \sum_i \left( \vec{n}_i \cdot \vec{a} \right)^2$$

wherein $\hat{A}$ is the unit vector of the atmospheric background light $\vec{A}$.
When the scene's own color is combined with the atmospheric background light, the endpoint of the vector $\vec{I}_i$ lies on the line connecting the endpoints of $\vec{D}$ and $\vec{A}$; using this geometric relationship the equations can be written:

$$\vec{I}_1 + a\,(\vec{I}_2 - \vec{I}_1) = b\,\hat{A}$$

wherein a and b are constants; solving the above formula gives a and b, wherein $\vec{A} = b\,\hat{A}$, so the atmospheric background light $\vec{A}$ is obtained. At this time all variables on the right-hand side of the expression for $\tilde{t}(x)$ are known, and the estimated transmittance $\tilde{t}(x)$ of every point of the image is obtained.
From $p_i + q_i = 1$ the vector formula is rearranged into:

$$\vec{I}_i - \vec{A} = p_i\,(\vec{D} - \vec{A}), \qquad i = 1, 2$$

and the ratio of the two equations gives:

$$\vec{I}_1 - \vec{A} = \frac{p_1}{p_2}\,(\vec{I}_2 - \vec{A})$$

A line A′I₂ parallel to the vector $\vec{D}$ is constructed through I₂, crossing OA at A′, whence ΔOI₁A₁ ∽ ΔA′I₂A₂ and the corresponding sides of the similar triangles satisfy:

$$\frac{|OI_1|}{|A'I_2|} = \frac{|OA_1|}{|A'A_2|} = \frac{p_1}{p_2}$$

Solving |OA′| and |A′I₂| by the sine theorem, $p_1/p_2$ is obtained from the above relationship; further, the scene's own color $\vec{D}$ can be solved. At this point, for each matched point pair, $\vec{I}_i$, $\vec{D}$ and $\vec{A}$ in the plane are all known, and the corresponding transmittance t(x) at each point pair can be solved.
The invention provides an algorithm that simultaneously realizes the three functions of defogging imaging, visibility extraction and depth-of-field estimation for an unmanned aerial vehicle without adding extra hardware, so that UAV imaging can meet the more diverse requirements of traffic monitoring, resource surveying, environmental monitoring and the like. Addressing the fact that traditional defogging algorithms are unsuitable for aerial images, the invention proposes an aerial-video defogging imaging algorithm and, on this basis, realizes the estimation of atmospheric visibility and depth of field by further combining UAV flight data with the intermediate data of the defogging computation. The invention helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.
Drawings
Fig. 1 is a flowchart of an integrated processing method of defogging imaging, visibility extraction, and depth of field estimation in embodiment 1;
Fig. 2 is a schematic plane view of the derived vectors and the auxiliary lines in embodiment 1;
Fig. 3 is a schematic view, for embodiment 1, of the normal vector $\vec{n}_i$ of the plane formed by each pair $\vec{I}_{1i}$, $\vec{I}_{2i}$, perpendicular to $\vec{A}$;
Fig. 4 is a diagram of the depth estimation and visibility estimation results in embodiment 1;
Fig. 5 is a diagram of the defogging results in embodiment 1.
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples. It is to be understood that the examples are illustrative of the invention and not limiting.
Example 1
As shown in fig. 1, the present embodiment provides an integrated processing method for defogging imaging, visibility extraction and depth of field estimation, which includes:
In computer vision, the classical fogging model is selected:

$$I(x) = t(x)\,D(x) + (1 - t(x))\,A \qquad (1)$$

where x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the true color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with:

$$t(x) = e^{-\beta d(x)}, \qquad t(x) \in (0, 1) \qquad (2)$$

where β is the atmospheric extinction coefficient and d(x) is the distance of light transmission.
As formula (1) shows, the color I(x) of the scene after attenuation by atmospheric transmission is a linear superposition, weighted by t(x), of the natural scene color D(x) and the atmospheric background light A.
Using an image matching algorithm, scenes that appear in both of the two changing frames are found, and the value A of the atmospheric background light is solved from the geometric relationship between the imaging colors of the same scene in the two frames and their color difference.
The dark channel prior assumes that, statistically, for most pixels in a fog-free image at least one of the three RGB color channels has a very low, even near-zero, intensity. Based on this statistical rule, the dark channel map is defined as the grayscale image composed of the lowest channel value of every pixel:

$$D^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{r,g,b\}} D^{c}(y) \right) \qquad (3)$$

wherein Ω is a square window centered at x and c indexes the color channels of D(x); $D^{dark}(x)$ is the dark channel image, and by the dark channel prior $D^{dark}(x) \to 0$, from which the estimated value $\tilde{t}(x)$ of t(x) is obtained:

$$\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^{c}(y)}{A^{c}} \right) \qquad (4)$$
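A minimal sketch of formulas (3) and (4), assuming the square window Ω is realized with a minimum filter; the window size of 15 pixels is a common choice in dark-channel implementations and is our assumption, as the patent does not specify it.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, window=15):
    """Eq. (3): per-pixel minimum over color channels, then over a window."""
    return minimum_filter(img.min(axis=2), size=window)

def estimate_transmittance(I, A, window=15):
    """Eq. (4): t~(x) = 1 - min_window min_c I^c(y) / A^c."""
    normalized = I / np.asarray(A)[None, None, :]   # divide each channel by A^c
    return 1.0 - dark_channel(normalized, window)
```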
In this embodiment, two frames with an overlapping area are selected from the aerial images to estimate the atmospheric background light A. The SIFT operator is used for feature matching to find the same points in the two images. Because of the motion of the unmanned aerial vehicle, the optical path from the same object to the onboard camera differs between frames, so by formula (2) the transmittances differ, denoted $t_{x1}$ and $t_{x2}$ respectively; the color vectors of the same object in the two frames are then expressed as:

$$\vec{I}_i = p_i\,\vec{D} + q_i\,\vec{A}, \qquad i = 1, 2 \qquad (5)$$

wherein $\vec{I}_i$, $\vec{D}$, $\vec{A}$ are the space vectors of $I_i(x)$, $D_i(x)$, A in the RGB color space, $p_i = t_{xi}$ and $q_i = 1 - t_{xi}$; $\vec{I}_i$, $\vec{D}$ and $\vec{A}$ lie in one spatial plane, i being the frame index. The plane formed by the unattenuated color vector $\vec{D}$ of the scene and the atmospheric light vector $\vec{A}$ contains all the actual imaging color vectors of the scene synthesized under different transmittances; two possible scene imaging color vectors, the scene's own color vector and the atmospheric background light vector are shown in Fig. 2, and the vector-triangle synthesis process is shown as dashed line type 1 in Fig. 2.
In two adjacent images, the image matching algorithm yields several point pairs conforming to formula (5); each point pair generates one spatial plane containing $\vec{I}_1$, $\vec{I}_2$, $\vec{D}$ and $\vec{A}$, and each spatial plane has a normal perpendicular to it and hence perpendicular to $\vec{A}$; as shown in Fig. 3:

$$\vec{n}_i = \vec{I}_{1i} \times \vec{I}_{2i} \qquad (6)$$

For the several normal vectors $\vec{n}_i$ of the respective planes there is always $\vec{n}_i \perp \vec{A}$, giving the equations:

$$\vec{n}_i \cdot \vec{A} = 0, \qquad i = 1, 2, \ldots \qquad (7)$$

Using the rule of formula (7), an optimization can be written to fit the direction $\hat{A}$:

$$\hat{A} = \arg\min_{\|\vec{a}\| = 1} \sum_i \left( \vec{n}_i \cdot \vec{a} \right)^2 \qquad (8)$$

wherein $\hat{A}$ is the unit vector of the atmospheric background light $\vec{A}$.
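One plausible way to realize the fit of formula (8): every matched pair contributes a normal $\vec{n}_i$ that should be perpendicular to $\vec{A}$, so the unit direction minimizing $\sum_i (\vec{n}_i \cdot \vec{a})^2$ is the right singular vector of the stacked normals with the smallest singular value. The SVD solver and the sign convention below are our assumptions; the patent fixes only the perpendicularity constraint.

```python
import numpy as np

def fit_airlight_direction(pairs1, pairs2):
    """Fit the unit direction of A from matched color-vector pairs.

    pairs1, pairs2 : (N, 3) arrays of RGB vectors of the same scene points
                     observed in frame 1 and frame 2 (Eq. (5)).
    Returns a unit vector a_hat with n_i . a_hat ~ 0 for all normals (Eq. (7)).
    """
    normals = np.cross(pairs1, pairs2)          # Eq. (6): n_i = I1_i x I2_i
    # The right singular vector with the smallest singular value minimizes
    # sum_i (n_i . a)^2 over all unit vectors a, i.e. Eq. (8).
    _, _, vt = np.linalg.svd(normals)
    a_hat = vt[-1]
    # Flip sign if needed: RGB components of background light are non-negative.
    return a_hat if a_hat.sum() > 0 else -a_hat
```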
Due to the defining relation $p_i + q_i = 1$, when the scene's own color is combined with the atmospheric background light according to formula (5), the endpoint of the vector $\vec{I}_i$ lies on the line connecting the endpoints of $\vec{D}$ and $\vec{A}$, shown as dashed line 2 in Fig. 2.
The equations can then be written from the geometric relationship:

$$\vec{I}_1 + a\,(\vec{I}_2 - \vec{I}_1) = b\,\hat{A} \qquad (9)$$

wherein a and b are constants; solving formula (9) gives a and b, wherein $\vec{A} = b\,\hat{A}$, so the atmospheric background light $\vec{A}$ is obtained. At this time all variables on the right-hand side of the expression (4) for $\tilde{t}(x)$ are known, and the estimated transmittance $\tilde{t}(x)$ of every point of the image is obtained.
Deforming formula (1) yields the fog-free image recovery formula:

$$D(x) = \frac{I(x) - A}{\tilde{t}(x)} + A \qquad (10)$$

completing the defogging of the image.
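Formula (10) as a sketch; the floor t0 on the transmittance is our addition (a common safeguard against division by near-zero values), not part of the patent's formula.

```python
import numpy as np

def defog(I, A, t_est, t0=0.1):
    """Eq. (10): D(x) = (I(x) - A) / t~(x) + A, clipped to valid colors.

    t0 is a hypothetical lower bound on the transmittance to avoid
    amplifying noise where t~ is near zero (a common practical safeguard).
    """
    A = np.asarray(A)[None, None, :]
    t = np.maximum(t_est, t0)[..., None]
    return np.clip((I - A) / t + A, 0.0, 1.0)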
From $p_i + q_i = 1$, formula (5) is rearranged into:

$$\vec{I}_i - \vec{A} = p_i\,(\vec{D} - \vec{A}), \qquad i = 1, 2 \qquad (11)$$

and the ratio of the two equations in formula (11) gives:

$$\vec{I}_1 - \vec{A} = \frac{p_1}{p_2}\,(\vec{I}_2 - \vec{A}) \qquad (12)$$

A line A′I₂ parallel to the vector $\vec{D}$ is constructed through I₂, crossing OA at A′, shown as dashed line 3 in Fig. 2; then ΔOI₁A₁ ∽ ΔA′I₂A₂, and the corresponding sides of the similar triangles satisfy:

$$\frac{|OI_1|}{|A'I_2|} = \frac{|OA_1|}{|A'A_2|} = \frac{p_1}{p_2} \qquad (13)$$

Solving |OA′| and |A′I₂| by the sine theorem, $p_1/p_2$ is obtained from the relationship in formula (13); further, the scene's own color $\vec{D}$ can be solved from formula (12). At this point, for each matched point pair, $\vec{I}_i$, $\vec{D}$ and $\vec{A}$ in the plane are all known, and the corresponding transmittance t(x) at each point pair can be solved from formula (5).
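Since formula (11) makes $\vec{I}_i - \vec{A}$ parallel to $\vec{D} - \vec{A}$, the ratio $p_1/p_2$ can equivalently be read off as the ratio of the magnitudes $|\vec{I}_1 - \vec{A}| / |\vec{I}_2 - \vec{A}|$; the sketch below uses this algebraic shortcut as a stand-in for the auxiliary-line and sine-theorem construction of formula (13), to which it is equivalent in exact arithmetic.

```python
import numpy as np

def transmittance_ratio(I1, I2, A):
    """p1/p2 from Eqs. (11)-(12): I_i - A = p_i (D - A), so the two
    difference vectors are parallel and |I1 - A| / |I2 - A| = p1 / p2."""
    I1, I2, A = map(np.asarray, (I1, I2, A))
    return np.linalg.norm(I1 - A) / np.linalg.norm(I2 - A)
```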
Taking the ratio of formula (2) at the two different distances gives:

$$\frac{t_{x1}}{t_{x2}} = \frac{p_1}{p_2} = e^{-\beta (d_1 - d_2)} \qquad (14)$$

As seen from formula (14), using $p_1/p_2$ combined with the change $d_1 - d_2$ of the distance between the scene and the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved; with the empirical formula relating atmospheric extinction coefficient and visibility:

$$V = \frac{3.912}{\beta} \qquad (15)$$

the visibility V in the image at that time can be calculated.
With the transmittance t(x) and the extinction coefficient β known, the depth d of the scene is obtained from formula (2).
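Putting formulas (14), (15) and (2) together as a sketch: given the ratio $p_1/p_2$ and the distance change $d_1 - d_2$ known from flight data, β, the visibility and the per-pixel depth follow in closed form. The constant 3.912 corresponds to the 2% contrast threshold in the Koschmieder relation, which we take to be the empirical formula the patent invokes.

```python
import numpy as np

def extinction_coefficient(p_ratio, d1, d2):
    """Eq. (14): p1/p2 = exp(-beta*(d1 - d2))  =>  beta = -ln(p1/p2)/(d1 - d2)."""
    return -np.log(p_ratio) / (d1 - d2)

def visibility(beta):
    """Eq. (15): V = 3.912 / beta (2% contrast threshold, Koschmieder)."""
    return 3.912 / beta

def depth_map(t_map, beta):
    """Eq. (2) inverted: d(x) = -ln(t(x)) / beta, per pixel."""
    return -np.log(np.clip(t_map, 1e-6, 1.0)) / beta
```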
Fig. 4 shows the depth estimation result (left) and the visibility estimation result (right). Fig. 5 shows the original image (left), the defogging result of the classical algorithm (middle), and the defogging result of the present method (right). As can be seen from Figs. 4 and 5, the method helps improve aerial photography in haze and similar weather, can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection, and realizes the estimation of atmospheric visibility and depth of field.
The embodiment provides an image processing method for an unmanned aerial vehicle, but is not limited to unmanned aerial vehicle scenes.
The present invention and its embodiments have been described above schematically and without limitation, and what is shown in the drawings is only one embodiment of the invention; the actual structure is not limited thereto. Therefore, if a person skilled in the art, enlightened by this teaching and without departing from the spirit of the invention, designs structural modes and embodiments similar to this technical solution without inventive effort, they shall fall within the protection scope of the invention.

Claims (1)

1. An integrated processing method of defogging imaging, visibility extraction and depth of field estimation, characterized by comprising the following steps:
firstly, selecting the classical fogging model $I(x) = t(x)\,D(x) + (1 - t(x))\,A$, wherein x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the natural color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with $t(x) = e^{-\beta d(x)}$, where β is the atmospheric extinction coefficient and d(x) is the distance of light transmission;
secondly, using an image matching algorithm to find scenes that appear in both of two changing frames, and solving the value A of the atmospheric background light from the geometric relationship between the imaging colors of the same scene in the two frames and their color difference;
thirdly, obtaining the estimated value $\tilde{t}(x)$ of the transmittance t(x) according to the dark channel prior:

$$\tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^{c}(y)}{A^{c}} \right)$$

wherein Ω is a square window centered at x and c is the color channel of D(x);
fourthly, completing the restoration of the fog-free image with a deformed form of the fogging model:

$$D(x) = \frac{I(x) - A}{\tilde{t}(x)} + A;$$

fifthly, expressing the fogging model in a color space in vector form, constructing an auxiliary line to solve the transmittance ratio $t_{x1}/t_{x2}$ of the same scene in two frames of images, and further solving the real color D(x) and the corresponding transmittance t(x), where t(x) comprises $t_{x1}$ and $t_{x2}$;
sixthly, writing the transmittances as a ratio:

$$\frac{t_{x1}}{t_{x2}} = e^{-\beta (d_1 - d_2)}$$

wherein x1, x2 represent the same scene in the two frames; combined with the change value $d_1 - d_2$ of the distance between the scene and the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved, and the visibility V can be calculated with the empirical formula relating extinction coefficient and visibility:

$$V = \frac{3.912}{\beta};$$

seventhly, with the transmittance t(x) and the extinction coefficient β known, $t(x) = e^{-\beta d(x)}$ yields the depth d of the scene;
in the fifth step, two frames with an overlapping area are selected from the aerial images to estimate the atmospheric background light A; the SIFT operator is used for feature matching to find the same points in the two images, and the color vectors of the same object in the two frames are expressed as:

$$\vec{I}_i = p_i\,\vec{D} + q_i\,\vec{A}, \qquad i = 1, 2$$

wherein $\vec{I}_i$, $\vec{D}$, $\vec{A}$ are the space vectors of $I_i(x)$, $D_i(x)$, A in the RGB color space, $p_i = t_{xi}$ and $q_i = 1 - t_{xi}$; $\vec{I}_i$, $\vec{D}$ and $\vec{A}$ lie in one spatial plane, i being the frame index; the plane formed by the unattenuated color vector $\vec{D}$ of the scene and the atmospheric light vector $\vec{A}$ contains all the actual imaging color vectors of the scene synthesized under different transmittances;
in two adjacent images, the image matching algorithm yields several point pairs conforming to the above formula; each point pair generates one spatial plane containing $\vec{I}_1$, $\vec{I}_2$, $\vec{D}$ and $\vec{A}$, and each spatial plane has a normal perpendicular to it and hence perpendicular to $\vec{A}$:

$$\vec{n}_i = \vec{I}_{1i} \times \vec{I}_{2i}$$

for the several normal vectors $\vec{n}_i$ of the respective planes there is always $\vec{n}_i \perp \vec{A}$, giving the equations:

$$\vec{n}_i \cdot \vec{A} = 0, \qquad i = 1, 2, \ldots$$

using this rule, an optimization can be written to fit the direction $\hat{A}$:

$$\hat{A} = \arg\min_{\|\vec{a}\| = 1} \sum_i \left( \vec{n}_i \cdot \vec{a} \right)^2$$

wherein $\hat{A}$ is the unit vector of the atmospheric background light $\vec{A}$;
when the scene's own color is combined with the atmospheric background light, the endpoint of the vector $\vec{I}_i$ lies on the line connecting the endpoints of $\vec{D}$ and $\vec{A}$; using this geometric relationship the equations can be written:

$$\vec{I}_1 + a\,(\vec{I}_2 - \vec{I}_1) = b\,\hat{A}$$

wherein a and b are constants; solving the above formula gives a and b, wherein $\vec{A} = b\,\hat{A}$, so the atmospheric background light $\vec{A}$ is obtained; at this time all variables on the right-hand side of the expression for $\tilde{t}(x)$ are known, and the estimated transmittance $\tilde{t}(x)$ of every point of the image is obtained;
from $p_i + q_i = 1$ the vector formula is rearranged into:

$$\vec{I}_i - \vec{A} = p_i\,(\vec{D} - \vec{A}), \qquad i = 1, 2$$

and the ratio of the two equations gives:

$$\vec{I}_1 - \vec{A} = \frac{p_1}{p_2}\,(\vec{I}_2 - \vec{A})$$

a line A′I₂ parallel to the vector $\vec{D}$ is constructed through I₂, crossing OA at A′, whence ΔOI₁A₁ ∽ ΔA′I₂A₂ and the corresponding sides of the similar triangles satisfy:

$$\frac{|OI_1|}{|A'I_2|} = \frac{|OA_1|}{|A'A_2|} = \frac{p_1}{p_2}$$

solving |OA′| and |A′I₂| by the sine theorem, $p_1/p_2$ is obtained from the above relationship; further, the scene's own color $\vec{D}$ can be solved; at this point, for each matched point pair, $\vec{I}_i$, $\vec{D}$ and $\vec{A}$ in the plane are all known, and the corresponding transmittance t(x) at each point pair can be solved.
CN202110518861.9A 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation Active CN113379619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110518861.9A CN113379619B (en) 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110518861.9A CN113379619B (en) 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Publications (2)

Publication Number Publication Date
CN113379619A CN113379619A (en) 2021-09-10
CN113379619B (en) 2022-02-01

Family

ID=77572622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110518861.9A Active CN113379619B (en) 2021-05-12 2021-05-12 Integrated processing method for defogging imaging, visibility extraction and depth of field estimation

Country Status (1)

Country Link
CN (1) CN113379619B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114280056A (en) * 2021-12-20 2022-04-05 北京普测时空科技有限公司 Visibility measurement system
CN116664448B (en) * 2023-07-24 2023-10-03 南京邮电大学 Medium-high visibility calculation method and system based on image defogging

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931220B * 2016-04-13 2018-08-21 南京邮电大学 Traffic haze visibility detecting method based on dark channel prior and minimum image entropy
CN107301623B (en) * 2017-05-11 2020-02-14 北京理工大学珠海学院 Traffic image defogging method and system based on dark channel and image segmentation
CN107194924A (en) * 2017-05-23 2017-09-22 重庆大学 Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning
CN107680054B (en) * 2017-09-26 2021-05-18 长春理工大学 Multi-source image fusion method in haze environment
CN111161167B (en) * 2019-12-16 2024-05-07 天津大学 Single image defogging method based on middle channel compensation and self-adaptive atmospheric light estimation
CN111292258B (en) * 2020-01-15 2023-03-10 长安大学 Image defogging method based on dark channel prior and bright channel prior
CN111553862B (en) * 2020-04-29 2023-10-13 大连海事大学 Defogging and binocular stereoscopic vision positioning method for sea and sky background image

Also Published As

Publication number Publication date
CN113379619A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
Mehra et al. ReViewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions
CN106651938B (en) A kind of depth map Enhancement Method merging high-resolution colour picture
CN113379619B (en) Integrated processing method for defogging imaging, visibility extraction and depth of field estimation
CN106548461B (en) Image defogging method
CN110766024B (en) Deep learning-based visual odometer feature point extraction method and visual odometer
CN105225230A (en) A kind of method and device identifying foreground target object
Choi et al. Safenet: Self-supervised monocular depth estimation with semantic-aware feature extraction
CN111553862B (en) Defogging and binocular stereoscopic vision positioning method for sea and sky background image
CN103198459A (en) Haze image rapid haze removal method
CN104050637A (en) Quick image defogging method based on two times of guide filtration
CN111753739B (en) Object detection method, device, equipment and storage medium
Saur et al. Change detection in UAV video mosaics combining a feature based approach and extended image differencing
CN106023108A (en) Image defogging algorithm based on boundary constraint and context regularization
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
CN106657948A (en) low illumination level Bayer image enhancing method and enhancing device
CN110503609B (en) Image rain removing method based on hybrid perception model
CN107085830B (en) Single image defogging method based on propagation filtering
CN110910457B (en) Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics
Le Besnerais et al. Dense height map estimation from oblique aerial image sequences
Xiao et al. Research on uav multi-obstacle detection algorithm based on stereo vision
CN115965531A (en) Model training method, image generation method, device, equipment and storage medium
CN106846260B (en) Video defogging method in a kind of computer
CN111583131B (en) Defogging method based on binocular image
CN112598777B (en) Haze fusion method based on dark channel prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant