CN113379619B - Integrated processing method for defogging imaging, visibility extraction and depth of field estimation - Google Patents
- Publication number
- CN113379619B (application CN202110518861.9A)
- Authority
- CN
- China
- Prior art keywords
- visibility
- color
- vector
- atmospheric
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
The invention relates to the technical field of image processing, in particular to an integrated processing method for defogging imaging, visibility extraction and depth of field estimation, which comprises the following steps: firstly, selecting a classical fogging model; secondly, obtaining the value of the atmospheric background light; thirdly, obtaining an estimate of the transmittance from the dark channel prior; fourthly, recovering the fog-free image; fifthly, expressing the fogging model in vector form in a color space and constructing an auxiliary line to solve the transmittance ratio of the same scene in two frames, from which the true color of the scene and the corresponding transmittances are obtained; sixthly, solving the atmospheric extinction coefficient from the change in distance between the scene and the unmanned aerial vehicle, and calculating visibility with the empirical formula relating the extinction coefficient to visibility; and seventhly, solving the scene depth. The invention helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an integrated processing method for defogging imaging, visibility extraction and depth of field estimation.
Background
With the development of unmanned aerial vehicle (UAV) technology, UAVs increasingly replace traditional manned aircraft by virtue of their safety and maneuverability, and are applied in scenarios such as patrol, reconnaissance and rescue, and surveying. Aerial image quality, however, is easily degraded by haze, which impairs aerial photography tasks and restricts the working scenes and hours of UAVs; this creates an important practical demand for research on image defogging algorithms. Researchers have successively proposed methods such as contrast enhancement, mean defogging, and median defogging; these have some defogging effect, but the results often suffer from serious color distortion. In 2009, Dr. Kaiming He proposed the dark channel prior and designed a defogging algorithm based on it; owing to its excellent effect and stability, it remains the mainstream, classical defogging method.
Based on a physical model of atmospheric fogging, the dark channel defogging algorithm explains imaging color as a synthesis of the true scene color and the atmospheric background light mediated by the atmospheric transmittance. Restoring the true color therefore requires solving only two unknowns, the transmittance and the atmospheric background light, where the transmittance can be obtained once the atmospheric background light is known. The algorithm assumes the sky region of the image forms the bright region of the dark channel map, so it takes the brightest 1% of dark-channel pixels as the atmospheric background light; the transmittance is then estimated from the dark channel prior statistics and the atmospheric background light, and the image is defogged using these two estimates. Because the algorithm's estimate of the atmospheric background light depends on a sky region, while aerial images typically contain no sky, the atmospheric background light is estimated inaccurately for aerial images: the defogged result is dim, colors are distorted, and detail discriminability is low. There is therefore an urgent need for a defogging algorithm suited to aerial images.
Today, UAV applications trend toward diversified functions, varied scenes, and complex environments, and UAV technology must develop toward all-weather, multi-functional, wide-area operation. Further investigation shows that, beyond improving aerial defogging imaging, little work has explored whether the physical process of imaging through atmospheric fog can yield additional functions. According to the attenuation law of light in atmospheric transmission, the atmospheric extinction coefficient and the transmission distance directly determine the transmittance of a scene, so it is worth studying whether the extinction coefficient and the scene depth can be computed from this process. Among environmental detection parameters, visibility (affected by haze, dust, and smoke, and related to the atmospheric extinction coefficient by an established correspondence) is an important atmospheric-environment quantity. Carrying a dedicated visibility detector consumes the UAV's limited payload and reduces flight efficiency. If visibility can instead be estimated from aerial images, the visibility meter can be omitted, enabling environmental monitoring while saving payload and improving flight efficiency. Likewise, UAVs need scene-depth measurement in applications such as mapping, exploration, and target tracking; if depth of field can be estimated from the aerial image, a passive ranging capability is obtained without carrying binocular/multi-view cameras, radar, or similar equipment.
As stated above, the visibility and distance information carried by the physical process of imaging through fog is worth mining; this would make UAVs more valuable in environmental monitoring and would also assist the development of intelligent UAV piloting technology.
In summary, typical defogging algorithms are problematic in aerial-image applications and need improvement; meanwhile, computing visibility and scene depth from the atmospheric transmission process of light has both research and application value.
Disclosure of Invention
It is an object of the present invention to provide an integrated processing method for defogging imaging, visibility extraction and depth of field estimation that overcomes some or all of the deficiencies of the prior art.
The integrated processing method of defogging imaging, visibility extraction and depth of field estimation comprises the following steps:
firstly, selecting the classical fogging model I(x) = t(x)·D(x) + (1 − t(x))·A, wherein x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the natural color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with t(x) = e^(−βd(x)), where β is the atmospheric extinction coefficient and d(x) is the distance of light transmission;
secondly, using an image matching algorithm to find scenes that appear in both of two frames captured at different positions, and solving the value A of the atmospheric background light from the geometric relationship induced by the imaging-color differences of the same scenes in the two frames;
thirdly, obtaining the estimate t~(x) of the transmittance according to the dark channel prior:

t~(x) = 1 − min_{y∈Ω(x)} min_{c} ( I_c(y) / A_c )
Wherein: Ω is a square window centered at x, c is the color channel of D (x);
and fourthly, using the deformed form of the fogging model:

D(x) = (I(x) − A) / t~(x) + A

completing the restoration of the fog-free image;
fifthly, expressing the fogging model in vector form in a color space, constructing an auxiliary line to solve the transmittance ratio t_x1/t_x2 of the same scene in the two frames, and further solving the true color D(x) and the corresponding transmittances t_x1 and t_x2;
sixthly, writing the transmittances as a ratio:

t_x1/t_x2 = e^(−β(d_1 − d_2))

wherein x1, x2 denote the same scene in the two frames; combined with the change d_1 − d_2 of the scene's distance to the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved, and visibility V can be calculated using the empirical formula V = 3.912/β relating the extinction coefficient to visibility;
seventhly, with the transmittance t(x) and the extinction coefficient β known, obtaining the scene depth d(x) = −ln t(x)/β from t(x) = e^(−βd(x)).
Preferably, in step three, the dark channel map is defined as the grayscale image composed of the lowest channel value of every pixel:

D_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} D_c(y)

wherein D_dark(x) is the dark channel image; by the dark channel prior D_dark(x) → 0, from which the estimate of t(x) is given by t~(x) = 1 − min_{y∈Ω(x)} min_{c} (I_c(y)/A_c).
Preferably, in step five, two frames with overlapping areas are selected from the aerial images to estimate the atmospheric background light A; feature matching with the SIFT operator finds the same points in the two images, and the color vectors of the same object in the two frames are expressed as:

I_i(x) = p_i·D_i(x) + q_i·A,  p_i = t_xi,  q_i = 1 − t_xi,  i = 1, 2

wherein I_i(x), D_i(x) and A are expressed as space vectors in the RGB color space; I_1 and I_2 lie in one spatial plane, namely the plane formed by the unattenuated scene color vector D and the atmospheric light vector A, which contains all the actual imaging color vectors of the scene synthesized under different transmittances; i is the frame index;
in two adjacent images, an image matching algorithm yields several point pairs satisfying the above formula; each pair generates a spatial plane, and for each spatial plane there is a normal vector n_i = I_1 × I_2 perpendicular to the plane and therefore perpendicular to A; since this holds for the normal vector of every such plane, the system of equations is obtained:

n_i · A = 0,  i = 1, 2, …
by utilizing the rule of the above formula, the optimized formula and the fitting can be writtenThe direction of (a) is as follows:
when the scene's own color combines with the atmospheric background light, the endpoint of the vector I_i lies on the line connecting the endpoints of D and A; using this geometric relationship, equations in two constants a and b can be written and solved, from which the magnitude of the atmospheric background light is obtained, giving A = |A|·Â; at this point all the variables on the right-hand side of the expression for t~(x) are known, and the transmittance estimate t~(x) is obtained for every point of the image;
rearranging the model using p_i + q_i = 1 gives:

I_i − A = p_i·(D − A)
taking the ratio of the two equations above gives:

(I_1 − A) = (p_1/p_2)·(I_2 − A)
solving |OA′| and |A′I_2| by the law of sines, the value of p_1/p_2 is obtained from the relationship above; further, the scene's own color D can be solved;
in the plane of each matched pair, D and A are then both known, and the corresponding transmittance t(x) at each point pair can be solved.
The invention provides an algorithm that simultaneously realizes the three functions of UAV defogging imaging, visibility extraction and depth of field estimation without adding extra hardware, so that UAV imaging can meet more diverse demands such as traffic monitoring, resource surveying, and environmental monitoring. Addressing the unsuitability of traditional defogging algorithms for aerial images, the invention provides an aerial-video defogging imaging algorithm and, on that basis, further combines UAV flight data with intermediate data of the defogging computation to estimate atmospheric visibility and depth of field. The invention helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection.
Drawings
Fig. 1 is a flowchart of an integrated processing method of defogging imaging, visibility extraction, and depth of field estimation in embodiment 1;
FIG. 2 is a schematic plan view of the derived vector and the auxiliary lines in embodiment 1;
FIG. 3 is a schematic view showing, for each matched pair in embodiment 1, the normal vector of the plane formed by the pair and its perpendicularity to A;
FIG. 4 is a diagram illustrating the depth estimation and visibility estimation results in embodiment 1;
FIG. 5 is a graph showing the defogging results in example 1.
Detailed Description
For a further understanding of the invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings and examples. It is to be understood that the examples are illustrative of the invention and not limiting.
Example 1
As shown in fig. 1, the present embodiment provides an integrated processing method for defogging imaging, visibility extraction and depth of field estimation, which includes:
in computer vision, selecting a classic fogging model:
I(x) = t(x)·D(x) + (1 − t(x))·A  (1)

where x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the true color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with:

t(x) = e^(−βd(x)),  t(x) ∈ (0, 1)  (2)

where β is the atmospheric extinction coefficient and d(x) is the distance of light transmission;
as shown in the formula (1), the color I (x) of the scene after being attenuated by atmospheric transmission is a linear superposition of the scene natural color D (x) and the atmospheric background light A about t (x).
And searching scenes which commonly appear in the two frames of changed images by using an image matching algorithm, and solving the value A of the atmospheric background light by using the geometric relationship between the difference of imaging colors of the same scenes in the two frames of images and chromatic aberration.
The dark channel prior assumes that, statistically, most pixels of a fog-free image have at least one of the three RGB channels with a very low, even near-zero, intensity. Based on this statistical rule, the dark channel map is defined as the grayscale image composed of the lowest channel value of every pixel:

D_dark(x) = min_{y∈Ω(x)} min_{c∈{r,g,b}} D_c(y)  (3)

wherein Ω is a square window centered at x, c indexes the color channels of D(x), and D_dark(x) is the dark channel image. By the dark channel prior D_dark(x) → 0, from which the estimate of t(x) is:

t~(x) = 1 − min_{y∈Ω(x)} min_{c} ( I_c(y) / A_c )  (4)
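Formulas (3)-(4) can be sketched with a plain sliding-minimum; the window size and the retention factor `omega` are assumptions on my part (`omega = 0.95` is the constant used in the classical dark-channel method; set it to 1 to match formula (4) literally):

```python
import numpy as np

def dark_channel(img, window=15):
    """Formula (3): per-pixel minimum over color channels, then over a square window."""
    per_pixel = img.min(axis=2)
    pad = window // 2
    padded = np.pad(per_pixel, pad, mode="edge")
    out = np.empty_like(per_pixel)
    for i in range(per_pixel.shape[0]):
        for j in range(per_pixel.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].min()
    return out

def estimate_transmittance(I, A, window=15, omega=0.95):
    """Formula (4): t~(x) = 1 - omega * dark_channel(I / A)."""
    normalized = I / A[None, None, :]     # normalize each channel by the airlight
    return 1.0 - omega * dark_channel(normalized, window)
```

A pixel equal to the airlight gets a transmittance near 0 (pure haze), while a pixel with one near-zero channel gets a transmittance near 1, as the prior predicts.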
in the embodiment, two frames of images with overlapped areas in the aerial image are selected for estimating the atmospheric background light A, SIFT operators are used for carrying out feature matching, the same points in the two images are found, the optical path difference from the same object to the airborne camera is different due to the movement of the unmanned aerial vehicle, and t (x) is known to be different by the formula (2), and is respectively counted as tx1And tx2At this time, the color vector of the same object on the two images is represented as:
wherein, I isi(x)、Di(x) A is expressed as a space vector in the RGB color space aspi=txi,qi=(1-txi) (ii) a WhereinLie in a spatial plane; i is a reference number, i is a symbol,an unattenuated color vector representing a sceneVector of ambient lightIn the formed plane, all the synthesized different scene actual imaging color vectors caused by different transmittances; here, two possible scene imaging color vectors, the scene self color vector and the atmospheric background light vector are selected and respectively represented as shown in FIG. 2
The vector triangle synthesis process is shown as dashed line type 1 in fig. 2.
In two adjacent images, an image matching algorithm yields several point pairs satisfying formula (5). Each pair generates a spatial plane; for each spatial plane there is a normal vector perpendicular to the plane and therefore perpendicular to A, as shown in fig. 3; that is:

n_i = I_1 × I_2,  n_i ⊥ A  (6)

Since this holds for the normal vector of every such plane, the system of equations is obtained:

n_i · A = 0,  i = 1, 2, …  (7)
by using the rule of formula (7), the optimized formula can be written and fittedThe direction of (a) is as follows:
due to the definition of the relation p1+q21, so that when the color of the scene itself is combined with the atmospheric background light according to equation (5), the vectorThe vector ends in the vectorThe line of termination is shown above the dashed line 2 in FIG. 2.
From this geometric relationship, equations in two constants a and b can be written (9); solving formula (9) yields a and b, and hence the magnitude of the atmospheric background light, giving A = |A|·Â. At this point all the variables on the right-hand side of the expression for t~(x) are known, and the transmittance estimate t~(x) is obtained for every point of the image. Deforming formula (1) then gives the fog-free image restoration formula:

D(x) = (I(x) − A) / t~(x) + A  (10)

which completes the defogging of the image.
Rearranging formula (5) using p_i + q_i = 1 gives:

I_i − A = p_i·(D − A)  (11)
the ratio is obtained by using two formulas in formula (11):
make a vectorParallel line A' I of2Crossing OA at A', shown as the dotted line 3 in FIG. 1, indicates Δ OI1A1∽ΔA’I2A2(ii) a Then there are:
solving OA' I and non-woven through sine theoremA’I2L, p is obtained from the relationship in the formula (13)1/p2A value of (d); further, the self color of the scene can be solved by the formula (12)
At this point, D and A in the plane of each matched pair are both known, and the corresponding transmittance t(x) at each point pair can be solved from formula (5).
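Once A is known, the ratio p_1/p_2 produced by the auxiliary-line construction has a simple algebraic equivalent: by formula (11) the difference vectors I_i − A are parallel, with lengths in the ratio p_1 : p_2. The shortcut below is my restatement of that fact, not the patent's geometric derivation:

```python
import numpy as np

def transmittance_ratio(I1, I2, A):
    """p1/p2 = |I1 - A| / |I2 - A|, since I_i - A = p_i (D - A)  (formula 11)."""
    return np.linalg.norm(I1 - A) / np.linalg.norm(I2 - A)
```

For a synthetic pair with transmittances 0.6 and 0.3 the ratio comes out to exactly 2, independent of the scene color D.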
Comparing formula (2) at the two different distances gives:

t_x1 / t_x2 = p_1 / p_2 = e^(−β(d_1 − d_2))  (14)

As seen from formula (14), using p_1/p_2 together with the change d_1 − d_2 of the scene's distance to the UAV, the atmospheric extinction coefficient β can be solved; then, using the empirical formula relating the atmospheric extinction coefficient to visibility:

V = 3.912 / β  (15)
the visibility V in the image at that time can be calculated.
With the transmittance t(x) and the extinction coefficient β known, the depth d of the scene is obtained from formula (2) as d(x) = −ln t(x)/β.
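Steps six and seven reduce to three one-line formulas. The constant 3.912 in formula (15) is the standard Koschmieder value (5% contrast threshold); I am assuming that is the empirical formula the patent intends:

```python
import numpy as np

def extinction_coefficient(t_ratio, delta_d):
    """Formula (14): t_x1 / t_x2 = exp(-beta * (d1 - d2))  =>  solve for beta."""
    return -np.log(t_ratio) / delta_d

def visibility(beta):
    """Formula (15): V = 3.912 / beta (Koschmieder empirical relation)."""
    return 3.912 / beta

def depth(t, beta):
    """Invert formula (2): d(x) = -ln t(x) / beta."""
    return -np.log(t) / beta
```

For example, a transmittance ratio of e^(−3) over a 60 m change in distance gives β = 0.05 per meter, a visibility of about 78 m, and a consistent depth readout.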
Fig. 4 shows the depth estimation result (left) and the visibility estimation result (right). In fig. 5, the original image is on the left, the defogging result of the classical algorithm is in the middle, and the defogging result of the present method is on the right. As can be seen from figs. 4 and 5, the method helps improve aerial photography in haze and similar weather, and can also be used for visibility detection and reconnaissance ranging in traffic and environmental protection, realizing the estimation of atmospheric visibility and depth of field.
The embodiment provides an image processing method for an unmanned aerial vehicle, but is not limited to unmanned aerial vehicle scenes.
The invention and its embodiments have been described above schematically, and the description is not limiting; what is shown in the drawings is only one embodiment of the invention, and the actual structure is not limited thereto. Therefore, similar structures and embodiments devised by those skilled in the art in light of this teaching, without inventive effort and without departing from the spirit of the invention, shall fall within the scope of protection of the invention.
Claims (1)
1. The integrated processing method of defogging imaging, visibility extraction and depth of field estimation is characterized by comprising the following steps of: the method comprises the following steps:
firstly, selecting the classical fogging model I(x) = t(x)·D(x) + (1 − t(x))·A, wherein x is the pixel coordinate, I(x) is the observed foggy image, D(x) is the natural color of the scene in the unattenuated state, A is the atmospheric background light, and t(x) is the transmittance, with t(x) = e^(−βd(x)), where β is the atmospheric extinction coefficient and d(x) is the distance of light transmission;
secondly, using an image matching algorithm to find scenes that appear in both of two frames captured at different positions, and solving the value A of the atmospheric background light from the geometric relationship induced by the imaging-color differences of the same scenes in the two frames;
thirdly, obtaining the estimate t~(x) of the transmittance according to the dark channel prior:

t~(x) = 1 − min_{y∈Ω(x)} min_{c} ( I_c(y) / A_c )
Wherein: Ω is a square window centered at x, c is the color channel of D (x);
and fourthly, using the deformed form of the fogging model:

D(x) = (I(x) − A) / t~(x) + A

completing the restoration of the fog-free image;
fifthly, expressing the fogging model in vector form in a color space, constructing an auxiliary line to solve the transmittance ratio t_x1/t_x2 of the same scene in the two frames, and further solving the true color D(x) and the corresponding transmittances t_x1 and t_x2;
sixthly, writing the transmittances as a ratio:

t_x1/t_x2 = e^(−β(d_1 − d_2))

wherein x1, x2 denote the same scene in the two frames; combined with the change d_1 − d_2 of the scene's distance to the unmanned aerial vehicle, the atmospheric extinction coefficient β can be solved, and visibility V can be calculated using the empirical formula V = 3.912/β relating the extinction coefficient to visibility;
seventhly, with the transmittance t(x) and the extinction coefficient β known, obtaining the scene depth d(x) = −ln t(x)/β from t(x) = e^(−βd(x));
in the fifth step, two frames with overlapping areas are selected from the aerial images to estimate the atmospheric background light A; feature matching with the SIFT operator finds the same points in the two images, and the color vectors of the same object in the two frames are expressed as:

I_i(x) = p_i·D_i(x) + q_i·A,  p_i = t_xi,  q_i = 1 − t_xi,  i = 1, 2

wherein I_i(x), D_i(x) and A are expressed as space vectors in the RGB color space; I_1 and I_2 lie in one spatial plane, namely the plane formed by the unattenuated scene color vector D and the atmospheric light vector A, which contains all the actual imaging color vectors of the scene synthesized under different transmittances; i is the frame index;
in two adjacent images, an image matching algorithm yields several point pairs satisfying the above formula; each pair generates a spatial plane, and for each spatial plane there is a normal vector n_i = I_1 × I_2 perpendicular to the plane and therefore perpendicular to A; since this holds for the normal vector of every such plane, the system of equations is obtained:

n_i · A = 0,  i = 1, 2, …
Using this rule, an optimization can be written to fit the direction Â of A:

Â = argmin_{|a|=1} Σ_i (n_i · a)²
when the scene's own color combines with the atmospheric background light, the endpoint of the vector I_i lies on the line connecting the endpoints of D and A; using this geometric relationship, equations in two constants a and b can be written and solved, from which the magnitude of the atmospheric background light is obtained, giving A = |A|·Â; at this point all the variables on the right-hand side of the expression for t~(x) are known, and the transmittance estimate t~(x) is obtained for every point of the image;
rearranging the model using p_i + q_i = 1 gives:

I_i − A = p_i·(D − A)
taking the ratio of the two equations above gives:

(I_1 − A) = (p_1/p_2)·(I_2 − A)
solving |OA′| and |A′I_2| by the law of sines, the value of p_1/p_2 is obtained from the relationship above; further, the scene's own color D can be solved.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110518861.9A CN113379619B (en) | 2021-05-12 | 2021-05-12 | Integrated processing method for defogging imaging, visibility extraction and depth of field estimation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110518861.9A CN113379619B (en) | 2021-05-12 | 2021-05-12 | Integrated processing method for defogging imaging, visibility extraction and depth of field estimation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113379619A CN113379619A (en) | 2021-09-10 |
CN113379619B true CN113379619B (en) | 2022-02-01 |
Family
ID=77572622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110518861.9A Active CN113379619B (en) | 2021-05-12 | 2021-05-12 | Integrated processing method for defogging imaging, visibility extraction and depth of field estimation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113379619B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114280056A (en) * | 2021-12-20 | 2022-04-05 | 北京普测时空科技有限公司 | Visibility measurement system |
CN116664448B (en) * | 2023-07-24 | 2023-10-03 | 南京邮电大学 | Medium-high visibility calculation method and system based on image defogging |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105931220B (en) * | 2016-04-13 | 2018-08-21 | 南京邮电大学 | Traffic haze visibility detecting method based on dark channel prior Yu minimum image entropy |
CN107301623B (en) * | 2017-05-11 | 2020-02-14 | 北京理工大学珠海学院 | Traffic image defogging method and system based on dark channel and image segmentation |
CN107194924A (en) * | 2017-05-23 | 2017-09-22 | 重庆大学 | Expressway foggy-dog visibility detecting method based on dark channel prior and deep learning |
CN107680054B (en) * | 2017-09-26 | 2021-05-18 | 长春理工大学 | Multi-source image fusion method in haze environment |
CN111161167B (en) * | 2019-12-16 | 2024-05-07 | 天津大学 | Single image defogging method based on middle channel compensation and self-adaptive atmospheric light estimation |
CN111292258B (en) * | 2020-01-15 | 2023-03-10 | 长安大学 | Image defogging method based on dark channel prior and bright channel prior |
CN111553862B (en) * | 2020-04-29 | 2023-10-13 | 大连海事大学 | Defogging and binocular stereoscopic vision positioning method for sea and sky background image |
- 2021-05-12: CN application CN202110518861.9A granted as patent CN113379619B (active)
Also Published As
Publication number | Publication date |
---|---|
CN113379619A (en) | 2021-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Mehra et al. | ReViewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions | |
CN106651938B (en) | A kind of depth map Enhancement Method merging high-resolution colour picture | |
CN113379619B (en) | Integrated processing method for defogging imaging, visibility extraction and depth of field estimation | |
CN106548461B (en) | Image defogging method | |
CN110766024B (en) | Deep learning-based visual odometer feature point extraction method and visual odometer | |
CN105225230A (en) | A kind of method and device identifying foreground target object | |
Choi et al. | Safenet: Self-supervised monocular depth estimation with semantic-aware feature extraction | |
CN111553862B (en) | Defogging and binocular stereoscopic vision positioning method for sea and sky background image | |
CN103198459A (en) | Haze image rapid haze removal method | |
CN104050637A (en) | Quick image defogging method based on two times of guide filtration | |
CN111753739B (en) | Object detection method, device, equipment and storage medium | |
Saur et al. | Change detection in UAV video mosaics combining a feature based approach and extended image differencing | |
CN106023108A (en) | Image defogging algorithm based on boundary constraint and context regularization | |
CN103914820A (en) | Image haze removal method and system based on image layer enhancement | |
CN112561996A (en) | Target detection method in autonomous underwater robot recovery docking | |
CN106657948A (en) | low illumination level Bayer image enhancing method and enhancing device | |
CN110503609B (en) | Image rain removing method based on hybrid perception model | |
CN107085830B (en) | Single image defogging method based on propagation filtering | |
CN110910457B (en) | Multispectral three-dimensional camera external parameter calculation method based on angular point characteristics | |
Le Besnerais et al. | Dense height map estimation from oblique aerial image sequences | |
Xiao et al. | Research on uav multi-obstacle detection algorithm based on stereo vision | |
CN115965531A (en) | Model training method, image generation method, device, equipment and storage medium | |
CN106846260B (en) | Video defogging method in a kind of computer | |
CN111583131B (en) | Defogging method based on binocular image | |
CN112598777B (en) | Haze fusion method based on dark channel prior |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||