CN109214993B - Visual enhancement method for intelligent vehicle in haze weather - Google Patents


Info

Publication number
CN109214993B
CN109214993B (application CN201810906690.5A)
Authority
CN
China
Prior art keywords: image, pixel, haze, haze concentration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810906690.5A
Other languages
Chinese (zh)
Other versions
CN109214993A (en)
Inventor
刘斌 (Liu Bin)
陈勇 (Chen Yong)
王东强 (Wang Dongqiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Academy Of Big Data Co ltd
Original Assignee
Chongqing Academy Of Big Data Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Academy Of Big Data Co ltd filed Critical Chongqing Academy Of Big Data Co ltd
Priority to CN201810906690.5A priority Critical patent/CN109214993B/en
Publication of CN109214993A publication Critical patent/CN109214993A/en
Application granted granted Critical
Publication of CN109214993B publication Critical patent/CN109214993B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/73: Deblurring; Sharpening
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a visual enhancement method for an intelligent vehicle in haze weather, comprising the following steps: 1) converting the image from the RGB color space to the HSV color space; 2) evaluating the haze concentration; 3) calculating the atmospheric coverage parameter; 4) obtaining a foreground image I_vis(x); 5) acquiring the current haze concentration as a background image using a guided filtering algorithm on the brightness channel, removing image noise with a Gaussian filter, and increasing the influence of the atmospheric coverage parameter on the foreground image; 6) outputting the visual enhancement result of step 5). The invention effectively improves the contrast and definition of the video with high computational efficiency, improving the image quality and sensitivity of the video signal and thereby ensuring the safety and comfort of unmanned driving control.

Description

Visual enhancement method for intelligent vehicle in haze weather
Technical Field
The invention relates to the field of visual enhancement, in particular to a visual enhancement method for an intelligent vehicle in haze weather.
Background
Current on-road computer vision systems are highly sensitive to weather, and haze is among the most visually degrading of the various weather conditions. In haze, visibility is greatly reduced and the road environment is poorly visible; captured images are severely degraded, becoming blurred and low in contrast, and may exhibit serious color shift and distortion. Many features contained in the image are obscured, the application value of the image drops sharply, and vision systems cannot work normally.
In recent years, the sharpening of hazy images has become a research hotspot in computer vision and image processing, attracting many researchers at home and abroad, and numerous methods have been proposed. However, owing to the complexity and randomness of weather conditions, the algorithms proposed so far have certain limitations, and existing results and methods are still under active development and in need of further improvement. The prior art offers no sufficiently effective enhancement method for the degradation of images collected by road vision systems in haze weather, i.e., one that effectively enhances the image in haze, improves the image quality of the video signal, and increases sensitivity.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is to provide an intelligent-vehicle machine vision enhancement algorithm based on guided filtering. The method can effectively improve the image quality of the video signal and improve sensitivity, thereby ensuring the safety and comfort of unmanned driving control.
To this end, the invention provides a visual enhancement method for an intelligent vehicle in haze weather, characterized by comprising the following steps:
1) converting an image from an RGB color space to an HSV color space;
2) evaluating haze concentration;
3) calculating atmospheric coverage parameters;
4) obtaining a foreground image I_vis(x);
5) acquiring the current haze concentration as a background image using a guided filtering algorithm on the brightness channel, removing image noise with a Gaussian filter, and increasing the influence of the atmospheric coverage parameter on the foreground image;
6) according to step 5), the visual enhancement result is output.
Further, in the step 2), the haze concentration is calculated according to the following steps:
21) the evaluation factor of the haze concentration is defined according to the following formula:
D_dark = (1 / (u × v)) Σ_{x=1..u} Σ_{y=1..v} v_dark(x, y)    (10)
wherein D_dark is the dark channel prior;
u × v is the size (width × height) of the image over which the pixels (x, y) are averaged;
v_dark(x, y) is the lowest pixel value among the at least three RGB color channels of pixel (x, y);
22) the haze concentration is calculated according to the following formula:
f_β = k × D_dark
where k is a linear scaling factor and f_β is the haze concentration;
to ensure an undistorted image, f_β is limited to the range [2, 8].
Further, in the step 3), a current atmosphere coverage parameter is evaluated by using a guided filtering algorithm.
Further, calculating the atmosphere coverage parameter according to the following steps:
321) the kernel of the linear filter is built according to the following formula:
q_i = a_k I_i + b_k,  ∀ i ∈ w_k    (11)
wherein w_k is the k-th kernel window; (a_k, b_k) are the linear transformation coefficients, assumed constant within the given window; i is a pixel index within the window; q_i is the output image;
322) the difference between the input image p and the output image q is minimized according to the following cost function:
E(a_k, b_k) = Σ_{i ∈ w_k} ((a_k I_i + b_k − p_i)² + ε a_k²)    (12)
wherein p_i is the input image; q_i is the output image;
323) the solution of equation (12) is calculated according to the following equations:
a_k = cov_k(I, p) / (var_k(I) + ε)    (13)
b_k = p̄_k − a_k μ_k    (14)
wherein the luminance guide image I is taken equal to the input image p;
at this time cov_k(I, p) = var_k(I) and p̄_k = μ_k, and we further obtain:
a_k = var_k(I) / (var_k(I) + ε),  b_k = (1 − a_k) μ_k    (15)
where ε is the regularizing smoothing factor; cov_k(I, p) is the covariance of the guide map I and the input image p; var_k(I) is the variance of the guide map I; p̄_k is the mean of the input image; μ_k is the mean of the guide map I.
324) Applying the guided filtering to the whole image region while retaining the hierarchical structure of the original image, the atmospheric coverage parameter I_∞ is calculated according to the following formula:
I_∞ = (1/|w|) Σ_{m : i ∈ w_m} (a_m I_i + b_m)    (17)
where |w| is the number of pixels in the kernel window; m is the index of the pixel; p_m is a pixel of the input image; w_m is the kernel window centered on p_m; the term (a_m I_i + b_m) refers to the processing of each pixel.
Further, in the step 4), the foreground image I_vis(x) is obtained according to the following formula:
I_vis(x) = E − I_∞    (18)
where E is the atmospheric light at infinity.
The invention has the following beneficial effects: to remedy the low visibility and poor contrast of vehicle-mounted video, the invention provides a haze video enhancement algorithm based on guided filtering. First, the atmospheric attenuation model is simplified. Then, the haze concentration is evaluated based on the dark channel prior. Next, the current haze concentration is acquired as a background image using a guided filtering algorithm on the brightness channel, and the influence of the atmospheric coverage parameter on the foreground image is increased. The method effectively improves the contrast and definition of the video with high computational efficiency, improving the image quality and sensitivity of the video signal and thereby ensuring safe and comfortable driving control in haze weather.
Drawings
FIG. 1 is a flow chart illustrating the steps of one embodiment of the present invention.
Detailed Description
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
as shown in fig. 1, the method for enhancing the vision of the intelligent vehicle in the haze weather comprises the following steps:
1) converting an image from an RGB color space to an HSV color space;
2) evaluating haze concentration;
3) calculating atmospheric coverage parameters;
4) obtaining a foreground image I_vis(x);
5) acquiring the current haze concentration as a background image using a guided filtering algorithm on the brightness channel, removing image noise with a Gaussian filter, and increasing the influence of the atmospheric coverage parameter on the foreground image.
6) According to step 5), the visual enhancement result is output.
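Step 1) is a standard color-space conversion. A minimal NumPy sketch is given below, assuming RGB values as floats in [0, 1]; in practice a library routine (e.g., OpenCV's color conversion) would normally be used:

```python
import numpy as np

def rgb_to_hsv(img):
    """Step 1): convert an RGB image (floats in [0, 1]) to HSV."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=2)                      # value = max channel
    c = v - img.min(axis=2)                  # chroma
    s = np.where(v > 0, c / np.where(v > 0, v, 1), 0.0)
    # Hue in [0, 1): piecewise by which channel is the maximum.
    h = np.zeros_like(v)
    nz = c > 0
    rmax = nz & (v == r)
    gmax = nz & (v == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    h[rmax] = ((g - b)[rmax] / c[rmax]) % 6
    h[gmax] = (b - r)[gmax] / c[gmax] + 2
    h[bmax] = (r - g)[bmax] / c[bmax] + 4
    return np.stack([h / 6.0, s, v], axis=-1)
```

The V (brightness) channel of the result is the one the later guided-filtering steps operate on.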
According to He's dark channel prior theory, in a sharp (haze-free) image at least one of the three RGB color channels has a low pixel value at most pixels, which can be expressed as:
J_dark(x) = min_{y ∈ Ω(x)} ( min_{c ∈ {r,g,b}} J_c(y) )    (9)
where J_c is a color channel of the RGB image J and Ω(x) is the statistical region centered at x.
Statistical analysis of a large number of clear images shows that J_dark is always small, even close to zero, except in sky regions. Comparing dark-channel images under different haze concentrations shows that the average gray level of the darker parts of the dark-channel image can be used to evaluate the haze concentration of the current environment.
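The dark channel of equation (9) can be computed directly: a per-pixel minimum over the color channels followed by a minimum filter over the patch Ω(x). A sketch follows; the patch size of 15 is a common choice in the dark-channel literature, not a value specified in this patent:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Eq. (9): J_dark(x) = min over y in Omega(x) of min over c of J_c(y)."""
    min_rgb = img.min(axis=2)                    # inner min over {r, g, b}
    return minimum_filter(min_rgb, size=patch)   # outer min over Omega(x)
```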
Therefore, in particular, in the step 2), the haze concentration is calculated according to the following steps:
21) the evaluation factor of the haze concentration is defined according to the following formula:
D_dark = (1 / (u × v)) Σ_{x=1..u} Σ_{y=1..v} v_dark(x, y)    (10)
wherein D_dark is the dark channel prior; u × v is the size (width × height) of the image; v_dark(x, y) is the lowest pixel value among the at least three RGB color channels of pixel (x, y);
22) the haze concentration is calculated according to the following formula:
f_β = k × D_dark
where k is a linear scaling factor and f_β is the haze concentration;
to ensure an undistorted image, f_β is limited to the range [2, 8].
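Steps 21) and 22) then reduce to averaging the dark channel and scaling. A sketch; the value of the scaling factor k is not given in this excerpt, so the default here is an illustrative assumption:

```python
import numpy as np

def haze_concentration(v_dark, k=8.0):
    """f_beta = k * D_dark, where D_dark is the mean of the dark channel
    over the u x v image; clamped to [2, 8] to avoid a distorted result.
    The default k is an assumption, not a value from the patent."""
    d_dark = float(v_dark.mean())           # (1/(u*v)) * sum of v_dark(x, y)
    return float(np.clip(k * d_dark, 2.0, 8.0))
```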
In particular, in the step 3), the current atmosphere coverage parameter is evaluated by using a guided filtering algorithm.
In particular, the atmospheric coverage parameter is calculated according to the following steps:
321) the kernel of the linear filter is built according to the following formula:
q_i = a_k I_i + b_k,  ∀ i ∈ w_k    (11)
wherein w_k is the k-th kernel window; (a_k, b_k) are the linear transformation coefficients, assumed constant within the given window; i is a pixel index within the window; q_i is the output image;
322) the difference between the input image p and the output image q is minimized according to the following cost function:
E(a_k, b_k) = Σ_{i ∈ w_k} ((a_k I_i + b_k − p_i)² + ε a_k²)    (12)
wherein p_i is the input image; q_i is the output image;
323) the solution of equation (12) is calculated according to the following equations:
a_k = cov_k(I, p) / (var_k(I) + ε)    (13)
b_k = p̄_k − a_k μ_k    (14)
wherein the luminance guide image I is taken equal to the input image p;
at this time cov_k(I, p) = var_k(I) and p̄_k = μ_k, and we further obtain:
a_k = var_k(I) / (var_k(I) + ε),  b_k = (1 − a_k) μ_k    (15)
where ε is the regularizing smoothing factor; cov_k(I, p) is the covariance of the guide map I and the input image p; var_k(I) is the variance of the guide map I; p̄_k is the mean of the input image; μ_k is the mean of the guide map I.
324) Applying the guided filtering to the whole image region while retaining the hierarchical structure of the original image, the atmospheric coverage parameter I_∞ is calculated according to the following formula:
I_∞ = (1/|w|) Σ_{m : i ∈ w_m} (a_m I_i + b_m)    (17)
where |w| is the number of pixels in the kernel window; m is the index of the pixel; p_m is a pixel of the input image; w_m is the kernel window centered on p_m; the term (a_m I_i + b_m) refers to the processing of each pixel. In this embodiment the smoothing factor ε is set to 0.3 (in other embodiments it can be set differently according to requirements, with the same technical effect), and the width r of the kernel window is set to 30 pixels.
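With the guide taken equal to the input, as in step 323), the veil estimation of step 324) plus the Gaussian denoising of step 5) can be sketched as follows on the HSV brightness channel, with box means standing in for the window averages. The values r = 30 and ε = 0.3 are from this embodiment; the Gaussian σ is an assumption:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def atmospheric_veil(v, r=30, eps=0.3):
    """Self-guided filtering of the brightness channel v (guide = input):
    a_k = var/(var + eps), b_k = (1 - a_k) * mean, coefficients then
    averaged over all overlapping windows; finally Gaussian-denoised."""
    size = 2 * r + 1
    mean_v = uniform_filter(v, size)
    var_v = uniform_filter(v * v, size) - mean_v ** 2
    a = var_v / (var_v + eps)                    # self-guided coefficients
    b = (1.0 - a) * mean_v
    # Average the per-window coefficients over all windows covering each pixel.
    q = uniform_filter(a, size) * v + uniform_filter(b, size)
    return gaussian_filter(q, sigma=1.0)         # step 5): suppress residual noise
```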
Specifically, in the step 4), the foreground image I_vis(x) is obtained according to the following formula:
I_vis(x) = E − I_∞    (18)
where E is the atmospheric light at infinity.
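Equation (18) is an elementwise subtraction; E may be a scalar or a per-channel value (how E is estimated is not specified in this excerpt):

```python
import numpy as np

def foreground(E, I_inf):
    """Eq. (18): I_vis(x) = E - I_inf(x), with E the atmospheric light at
    infinity and I_inf the atmospheric coverage parameter (the veil)."""
    return np.asarray(E, dtype=float) - np.asarray(I_inf, dtype=float)
```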
Regarding the choice of filter: most image filtering in computer vision and computer graphics today suppresses or extracts image content. Simple linear translation-invariant (LTI) filters with explicit kernels, such as the mean, Gaussian, Laplacian, and Sobel filters, are widely used in image restoration, blurring/sharpening, edge detection, feature extraction, and so on. LTI filtering can also be performed implicitly by solving a Poisson equation, as in high-dynamic-range (HDR) compression, image stitching, and gradient-domain operations, where the filter kernel is implicitly defined by the inverse of a homogeneous Laplacian matrix.
The kernels of LTI filters are spatially invariant and independent of image content, but it is often desirable to incorporate additional information from a guidance image. Anisotropic diffusion, a pioneering work, uses the gradient of the image being filtered to guide the diffusion process, preventing smoothing across edges. The weighted least squares filter uses the filtered input itself as guidance and optimizes a quadratic function, which is equivalent to anisotropic diffusion run to a non-trivial steady state. In other applications, the guide image can also be an image different from the input. The output of the guided filter is locally a linear transform of the guide image. On the one hand, the guided filter has a good edge-preserving smoothing property, like the bilateral filter, but it does not suffer from gradient-reversal artifacts. On the other hand, guided filtering goes beyond smoothing: with the aid of the guide map, the filter output can be made more structured and less smooth than the input.
In the present technical scheme, the selected guided filtering is mainly realized as follows:
We first define a general linear translation-variant filtering process involving a guide image I, a filtered input image p, and an output image q. Both I and p are given in advance according to the application, and they may be identical. The filtering output at a pixel i is expressed as a weighted average:
q_i = Σ_j W_ij(I) p_j    (1)
where i and j are pixel indices. The filter kernel W_ij is a function of the guide image I and is independent of p, so this filter is linear with respect to p.
One example of such a filter is the joint bilateral filter, whose kernel W^bf is given by:
W_ij^bf(I) = (1/K_i) exp(−‖x_i − x_j‖² / σ_s²) exp(−‖I_i − I_j‖² / σ_r²)    (2)
where x denotes pixel coordinates; K_i is a normalizing parameter ensuring that Σ_j W_ij^bf = 1; the parameters σ_s and σ_r adjust the sensitivity to spatial similarity and to range (color/brightness) similarity, respectively. When I and p are identical, the joint bilateral filter degrades into the original bilateral filter.
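The bilateral kernel above can be sketched as a direct computation at one center pixel. A full bilateral filter would evaluate this at every pixel, usually over a truncated window rather than the whole image:

```python
import numpy as np

def bilateral_kernel(I, center, sigma_s, sigma_r):
    """Joint bilateral kernel centred at pixel `center` = (row, col):
    a spatial Gaussian times a range (intensity) Gaussian on the guide
    image I, normalised so the weights sum to 1."""
    rows, cols = np.mgrid[0:I.shape[0], 0:I.shape[1]]
    spatial = ((rows - center[0]) ** 2 + (cols - center[1]) ** 2) / sigma_s ** 2
    rng = (I - I[center]) ** 2 / sigma_r ** 2
    w = np.exp(-spatial) * np.exp(-rng)
    return w / w.sum()     # 1/K_i normalisation: the weights sum to 1
```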
Another type of weighted-average filter is defined implicitly: it optimizes a quadratic function and solves a linear system of the form:
Aq = p    (3)
where q and p are N×1 column vectors {q_i} and {p_i}, and A is an N×N matrix that depends only on I. The solution q = A⁻¹p has the same form as (1), with W_ij = (A⁻¹)_ij.
We now define the guided filter. Its key assumption is a local linear model between the guide image I and the filter output q: we assume that q is a linear transform of I in a window w_k centered at pixel k:
q_i = a_k I_i + b_k,  ∀ i ∈ w_k    (4)
where (a_k, b_k) are linear coefficients assumed constant in w_k, a square window of radius r. This local linear model guarantees that q has an edge only where I has one, because ∇q = a_k ∇I.
To determine the linear coefficients (a_k, b_k), we need constraints from the filter input p. We model the output q as the input p minus some unwanted components n, such as noise or texture:
q_i = p_i − n_i    (5)
We seek a solution that minimizes the difference between q and p while maintaining the linear model (4). Specifically, we minimize the following cost function in the window w_k:
E(a_k, b_k) = Σ_{i ∈ w_k} ((a_k I_i + b_k − p_i)² + ε a_k²)    (6)
where ε is a regularization parameter penalizing large a_k.
Equation (6) is a linear ridge regression model, and its solution is given by:
a_k = ( (1/|w|) Σ_{i ∈ w_k} I_i p_i − μ_k p̄_k ) / (σ_k² + ε)    (7)
b_k = p̄_k − a_k μ_k
where μ_k and σ_k² are the mean and variance of the guide image I in w_k; |w| is the number of pixels in w_k; and p̄_k = (1/|w|) Σ_{i ∈ w_k} p_i is the mean of p in w_k. Having obtained the linear coefficients (a_k, b_k), we can compute the filter output q_i.
However, a pixel i is involved in all the overlapping windows w_k that cover i, so the value of q_i in equation (4) differs when computed in different windows. A simple strategy is to average all the possible values of q_i: after computing (a_k, b_k) for all windows w_k in the image, we compute the filter output as:
q_i = (1/|w|) Σ_{k : i ∈ w_k} (a_k I_i + b_k)    (8)
Noting that Σ_{k : i ∈ w_k} a_k = Σ_{k ∈ w_i} a_k because the box window is symmetric, we can rewrite equation (8) as:
q_i = ā_i I_i + b̄_i
where ā_i = (1/|w|) Σ_{k ∈ w_i} a_k and b̄_i = (1/|w|) Σ_{k ∈ w_i} b_k are the average coefficients of all windows overlapping i.
This overlapping-window averaging strategy is popular in image denoising and is very successful.
Equations (6), (7), and (8) define the guided filter.
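Equations (6)-(8) translate into a few lines when box filters implement the window means. A sketch for a grayscale guide follows; `uniform_filter` stands in for the box mean, and the function names are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter, Eqs. (6)-(8): per-window ridge-regression
    coefficients (a_k, b_k), then averaged over overlapping windows."""
    size = 2 * r + 1                         # square window w_k of radius r
    mean_I = uniform_filter(I, size)         # mu_k
    mean_p = uniform_filter(p, size)         # mean of p in w_k
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2   # sigma_k^2
    a = cov_Ip / (var_I + eps)               # Eq. (7)
    b = mean_p - a * mean_I
    a_bar = uniform_filter(a, size)          # Eq. (8): average the coefficients
    b_bar = uniform_filter(b, size)
    return a_bar * I + b_bar
```

With ε = 0.3 and r = 30 as in the embodiment, calling `guided_filter(v, v, 30, 0.3)` on the brightness channel reproduces the self-guided case of step 323), where the guide equals the input.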
By adopting guided filtering, the present technical scheme simplifies the atmospheric attenuation model. The haze concentration is evaluated based on the dark channel prior; the current haze concentration is then obtained as a background image using a guided filtering algorithm on the brightness channel, and the influence of the atmospheric coverage parameter on the foreground image is increased. The proposed method effectively improves the contrast and definition of the video and has high computational efficiency.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (3)

1. A haze weather intelligent vehicle vision enhancement method, characterized by comprising the following steps:
1) converting an image from an RGB color space to an HSV color space;
2) evaluating haze concentration;
3) calculating an atmospheric coverage parameter I_∞;
4) obtaining a foreground image I_vis(x), I_vis(x) = E − I_∞, where E is the atmospheric light at infinity;
5) acquiring the current haze concentration as a background image using a guided filtering algorithm on the brightness channel, removing image noise with a Gaussian filter, and increasing the influence of the atmospheric coverage parameter on the foreground image;
in the step 2), the haze concentration is calculated according to the following steps:
21) the evaluation factor of the haze concentration is defined according to the following formula:
D_dark = (1 / (u × v)) Σ_{x=1..u} Σ_{y=1..v} v_dark(x, y)    (10)
wherein D_dark is the dark channel prior; u × v is the size (width × height) of the image; v_dark(x, y) is the lowest pixel value among the at least three RGB color channels of pixel (x, y);
22) the haze concentration is calculated according to the following formula:
f_β = k × D_dark
where k is a linear scaling factor and f_β is the haze concentration;
to ensure an undistorted image, f_β is limited to the range [2, 8].
2. The haze weather intelligent vehicle vision enhancement method as claimed in claim 1, wherein: in the step 3), the current atmosphere coverage parameter is evaluated by using a guided filtering algorithm.
3. The haze weather intelligent vehicle vision enhancement method as claimed in claim 2, wherein:
in the step 3), calculating the atmosphere coverage parameter according to the following steps:
321) the kernel of the linear filter is built according to the following formula:
q_i = a_k I_i + b_k,  ∀ i ∈ w_k    (11)
wherein w_k is the k-th kernel window; (a_k, b_k) are the linear transformation coefficients within the given window; i is a pixel index within the window; q_i is the output image; I_i is the intensity of the observed image;
322) the difference between the input image p and the output image q is minimized according to the following cost function:
E(a_k, b_k) = Σ_{i ∈ w_k} ((a_k I_i + b_k − p_i)² + ε a_k²)    (12)
wherein p_i is the input image; q_i is the output image;
323) the solution of equation (12) is calculated according to the following equations:
a_k = cov_k(I, p) / (var_k(I) + ε)    (13)
b_k = p̄_k − a_k μ_k    (14)
wherein the luminance guide image I is taken equal to the input image p;
at this time cov_k(I, p) = var_k(I) and p̄_k = μ_k, and we further obtain:
a_k = var_k(I) / (var_k(I) + ε),  b_k = (1 − a_k) μ_k    (15)
where ε is the regularizing smoothing factor; cov_k(I, p) is the covariance of the guide map I and the input image p; var_k(I) is the variance of the guide map I; p̄_k is the mean of the input image; μ_k is the mean of the guide map I;
324) applying the guided filtering to the whole image region while retaining the hierarchical structure of the original image, the atmospheric coverage parameter I_∞ is calculated according to the following formula:
I_∞ = (1/|w|) Σ_{m : i ∈ w_m} (a_m I_i + b_m)    (17)
where |w| is the number of pixels in the kernel window; m is the index of the pixel; p_m is a pixel of the input image; w_m is the kernel window centered on p_m; the term (a_m I_i + b_m) refers to the processing of each pixel.
CN201810906690.5A 2018-08-10 2018-08-10 Visual enhancement method for intelligent vehicle in haze weather Active CN109214993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810906690.5A CN109214993B (en) 2018-08-10 2018-08-10 Visual enhancement method for intelligent vehicle in haze weather


Publications (2)

Publication Number Publication Date
CN109214993A CN109214993A (en) 2019-01-15
CN109214993B true CN109214993B (en) 2021-07-16

Family

ID=64989118


Country Status (1)

Country Link
CN (1) CN109214993B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188680B (en) * 2019-05-29 2021-08-24 南京林业大学 Tea tree tender shoot intelligent identification method based on factor iteration
CN111353953B (en) * 2020-02-07 2022-07-05 天津大学 Image moire removing method based on direction total variation minimization and guiding filtering
CN112659190B (en) * 2020-12-17 2022-08-26 天津默纳克电气有限公司 Industrial robot safety protection system
CN113419257A (en) * 2021-06-29 2021-09-21 深圳市路卓科技有限公司 Positioning calibration method, device, terminal equipment, storage medium and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683767A (en) * 2015-02-10 2015-06-03 浙江宇视科技有限公司 Fog penetrating image generation method and device
US9349170B1 (en) * 2014-09-04 2016-05-24 The United States Of America As Represented By The Secretary Of The Navy Single image contrast enhancement method using the adaptive wiener filter
CN105827976A (en) * 2016-04-26 2016-08-03 北京博瑞空间科技发展有限公司 GPU (graphics processing unit)-based video acquisition and processing device and system
CN106127706A (en) * 2016-06-20 2016-11-16 华南理工大学 A kind of single image defogging method based on non-linear cluster
CN107133927A (en) * 2017-04-21 2017-09-05 汪云飞 Single image to the fog method based on average mean square deviation dark under super-pixel framework


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xia Yuqi et al., "Image defogging algorithm for foggy images based on a transmission estimation model and guided filtering" (基于透射率估计模型和引导滤波的雾气图像去雾算法), Journal of Xihua University (Natural Science Edition), May 2017, vol. 36, no. 3, pp. 1-7. *

Also Published As

Publication number Publication date
CN109214993A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN109214993B (en) Visual enhancement method for intelligent vehicle in haze weather
CN107527332B (en) Low-illumination image color retention enhancement method based on improved Retinex
CN108765325B (en) Small unmanned aerial vehicle blurred image restoration method
CN109636766B (en) Edge information enhancement-based polarization difference and light intensity image multi-scale fusion method
CN109919859B (en) Outdoor scene image defogging enhancement method, computing device and storage medium thereof
CN104574293A (en) Multiscale Retinex image sharpening algorithm based on bounded operation
CN112116536A (en) Low-illumination image enhancement method and system
CN112200746A (en) Defogging method and device for traffic scene image in foggy day
CN114897753A (en) Low-illumination image enhancement method
CN107451986B (en) Single infrared image enhancement method based on fusion technology
CN113793278A (en) Improved remote sensing image denoising method with minimized weighted nuclear norm and selectively enhanced Laplace operator
Mu et al. Low and non-uniform illumination color image enhancement using weighted guided image filtering
Liu et al. Single image haze removal via depth-based contrast stretching transform
CN109345479B (en) Real-time preprocessing method and storage medium for video monitoring data
Pandey et al. A fast and effective vision enhancement method for single foggy image
CN116823686B (en) Night infrared and visible light image fusion method based on image enhancement
CN116862809A (en) Image enhancement method under low exposure condition
CN116579953A (en) Self-supervision water surface image enhancement method and related equipment
CN115797205A (en) Unsupervised single image enhancement method and system based on Retinex fractional order variation network
CN110647843B (en) Face image processing method
Kehar et al. Efficient single image dehazing model using metaheuristics-based brightness channel prior
CN109255804A (en) A kind of haze concentration sealing method
Banerjee et al. Fuzzy logic based vision enhancement using sigmoid function
CN113487496B (en) Image denoising method, system and device based on pixel type inference
Khmag Image dehazing and defogging based on second-generation wavelets and estimation of transmission map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant