CN116309146A - Retinex-based edge-preserving color low-illumination image enhancement method - Google Patents


Info

Publication number
CN116309146A
Authority
CN
China
Prior art keywords
image
component
retinex
edge
channel
Prior art date
Legal status
Pending
Application number
CN202310231193.0A
Other languages
Chinese (zh)
Inventor
张闻文 (Zhang Wenwen)
刘昊洲 (Liu Haozhou)
何伟基 (He Weiji)
陈钱 (Chen Qian)
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202310231193.0A
Publication of CN116309146A

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T5/00 Image enhancement or restoration
            • G06T5/40 Image enhancement or restoration using histogram techniques
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10024 Color image
            • G06T2207/20 Special algorithmic details
              • G06T2207/20004 Adaptive image processing
              • G06T2207/20024 Filtering details
                • G06T2207/20028 Bilateral filtering
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
      • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
        • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
          • Y02T10/00 Road transport of goods or passengers
            • Y02T10/10 Internal combustion engine [ICE] based vehicles
              • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a Retinex-based edge-preserving color low-illumination image enhancement method, belonging to the field of digital image processing. The RGB input image is converted to the HSV color space for processing, and a bilateral filter replaces the Gaussian filter of the traditional multi-scale Retinex algorithm when processing the V channel, so as to preserve edge information. An edge-preserving layer consisting of adaptive histogram equalization and guided filtering is introduced to further refine the edges. The processed V channel is then stretched, and the S channel is adaptively adjusted along with it so that the image matches human visual perception. Finally, the image is converted back to the RGB color space to obtain the enhanced result. The invention improves brightness while retaining the edge details of the image.

Description

Retinex-based edge-preserving color low-illumination image enhancement method
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a Retinex-based edge preservation color low-illumination image enhancement method.
Background
In modern warfare, the visual system is one of the important means of acquiring resources and strategic information on the battlefield. Since the outbreak of the Second World War, many wars, battles and military operations have been launched, and some completed, at night. In the field of military operations, unmanned aerial vehicles equipped with video acquisition devices can scout enemy positions from a distance to gather intelligence, which is of great strategic significance in modern intelligence warfare.
At night or under insufficient illumination, acquired images often suffer from low color contrast, poor resolution, heavy noise, a small dynamic range and poor visual quality. Such low-quality images cannot meet military requirements for critical battlefield tasks such as target reconnaissance and topographic survey.
Image enhancement methods based on Retinex theory are among the most important approaches in the field of low-illumination enhancement. In 1964, Edwin Land et al. proposed the Retinex theory, also known as the retina-cortex theory. The theory holds that an image can be decomposed into an illumination component and a reflection component, where the reflection component is independent of the illumination and is an inherent property of the object. Inferring the reflection component by estimating the illumination component is therefore the conventional procedure of Retinex algorithms. Initially, Land et al. estimated the illumination component with a random-path approach that compares pairs of adjacent pixels. Later, Land proposed estimating the luminance with a center/surround method, i.e. the center/surround Retinex algorithm, which determines pixel weights by selecting a suitable center/surround function. Jobson et al. proposed the Single-Scale Retinex algorithm (SSR) on this basis. The algorithm was then further developed into the Multi-Scale Retinex algorithm (MSR) and the Multi-Scale Retinex with Color Restoration (MSRCR). The MSR algorithm convolves the image with three Gaussian kernels of different standard deviations; filtering at three scales improves the overall quality of the image. The MSRCR algorithm addresses the color distortion of MSR by introducing a color restoration factor that adjusts the proportional relationship among the three color channels.
However, low-illumination enhancement based on the Retinex method often suffers from edge loss and uneven enhancement of the three RGB color channels. Aiming at these problems, the invention provides a Retinex-based edge-preserving color low-illumination image enhancement method that effectively resolves edge blurring.
Disclosure of Invention
The invention aims to provide a Retinex-based edge-preserving color low-illumination image enhancement method, so as to solve the problems that existing low-illumination image enhancement methods blur edges and do not match the natural visual perception of the human eye.
The technical scheme for realizing the purpose of the invention is as follows: the method for enhancing the edge-preserving color low-illumination image based on Retinex comprises the following specific steps:
step 1: converting an RGB color space of an input low-illumination image representing three primary colors of red, green and blue into an HSV color space representing hue, saturation and brightness;
step 2: for the obtained original V component, respectively carrying out improved Retinex enhancement processing and edge preservation layer processing consisting of self-adaptive histogram equalization and guided filtering, and carrying out weighted average on the enhanced results of the two modes;
step 3: performing dynamic range expansion on the processed V channel;
step 4: according to the new enhanced V component, carrying out self-adaptive adjustment on the saturation component S;
step 5: the processed image is reconverted from the HSV color space to the RGB color space.
Preferably, the specific method for performing the modified Retinex enhancement processing on the obtained original V component is as follows:
based on the Retinex theory, the V-channel image is expressed as the product of the illumination component and the reflection component as:
I(x,y)=L(x,y)·R(x,y)
wherein, I (x, y) is expressed as an acquired V-channel original image; l (x, y) is expressed as illumination in the current environment, i.e., an illumination component; r (x, y) represents the reflectance inherent to the object, i.e., the reflection component;
the original image is the initial V-channel image, and the illumination component is obtained by bilateral filtering of the V channel; the bilateral filter weights the neighborhood pixels of each pixel of the input image with the product of a spatial-domain Gaussian kernel and a range-domain Gaussian kernel, and a smooth output image with clear edges is obtained by controlling the kernel parameters; the specific formula of the bilateral filtering weight is as follows:
W(i,j,k,l) = W_s(i,j,k,l)·W_r(i,j,k,l)
wherein W_s is the spatial proximity factor and W_r is the gray-level similarity factor, (i,j) denotes the pixel at the current position, and (k,l) denotes the center pixel.
Performing logarithmic transformation on the above formula, and performing difference on the logarithm of the input image I (x, y) and the logarithm of the illumination component L (x, y);
log(R(x,y))=log(I(x,y))-log(L(x,y))
the obtained result is converted back to the real number domain by the exponential operation to obtain the V-channel enhancement result obtained by the improved Retinex algorithm.
Preferably, the specific method for performing edge preservation processing on the obtained original V component is as follows:
traversing each pixel point of the original image, calculating histogram transformation by using windows around the pixel points, mapping the pixel points, and obtaining a self-adaptive histogram equalization result;
conducting a guided filtering operation on the result enhanced by the adaptive histogram equalization method, with that result serving as the guide image and the window size set to 3×3, to obtain the V component after edge-preserving-layer processing;
the specific formula of the guided filtering is as follows:
O_i = a_k·G_i + b_k, for all i ∈ ω_k

wherein O_i is the output image, G_i is the guide image, and a_k and b_k are the linear coefficients of the local window ω_k;
the specific linear coefficients are:

a_k = ( (1/|ω|)·Σ_{i∈ω_k} G_i·I_i − μ_k·Ī_k ) / (σ_k² + ε)
b_k = Ī_k − a_k·μ_k

wherein ε is a regularization coefficient that prevents the linear coefficient a_k from becoming too large, μ_k and σ_k² denote the mean and variance of G_i in the local window ω_k, and Ī_k denotes the mean of the original image in ω_k.
Preferably, the dynamic range expansion is performed on the V channel, and the specific method is as follows:
the piecewise function is adopted to expand the dynamic range of gray scale, specifically:
V'(x,y) = { Q(V_amp(x,y)), V_amp(x,y) < 0.5 ; V_amp(x,y), V_amp(x,y) ≥ 0.5 }
[the quadratic stretching function Q applied below 0.5 is rendered only as an image in the source]
wherein V_amp represents the amplified V component from the previous operation, and V'(x,y) represents the dynamic-range-extended V component.
Preferably, the saturation S component is adaptively adjusted, and the specific method is as follows:
step 4.1: calculating the difference VS between the enhanced V component and the original S component;
step 4.2: and correcting the saturation by combining the obtained difference between the two components through bilateral gamma transformation.
Preferably, the specific method for calculating the difference VS between the enhanced V component and the original S component is:
the average value of the V component and the average value of the S component are calculated as follows:

V'_mean = (1/(M×N))·Σ_i i·V'(i)
S_mean = (1/(M×N))·Σ_i i·S(i)

where i represents the gray level, V'(i) is the number of pixels of the enhanced V channel at gray level i, M and N represent the length and width of the processed image, respectively, and S(i) represents the number of pixels of the S channel at gray level i;
the difference between the mean of the V component and the mean of the S component is taken as the difference VS between the enhanced V component and the original S component.
Preferably, the specific method for correcting the saturation with the bilateral gamma transformation, combined with the obtained difference between the two components, is as follows:
[the formula selecting the gamma-correction parameter γ from the difference VS is rendered only as an image in the source; γ ∈ (0,1) increases the saturation and γ > 1 decreases it]
S_1(x,y) = S(x,y)^γ
S_2(x,y) = 1 − (1 − S(x,y))^γ
S'(x,y) = α·S_1(x,y) + (1 − α)·S_2(x,y)
wherein γ is the parameter of the gamma correction, S(x,y) is the value of the original saturation component, and α is an adaptive parameter whose calculation formula is likewise rendered only as an image in the source;
S'(x,y) is the adaptively adjusted saturation.
Preferably, step 5 converts the HSV color space to an RGB color space, in the following way:
(R, G, B) =
(V, t, p), if h_i = 0; (q, V, p), if h_i = 1; (p, V, t), if h_i = 2;
(p, q, V), if h_i = 3; (t, p, V), if h_i = 4; (V, p, q), if h_i = 5

wherein h_i = ⌊H/60⌋ mod 6, f = H/60 − ⌊H/60⌋, and
p = V×(1−S), q = V×(1−f×S), t = V×(1−(1−f)×S).
Compared with the prior art, the invention has the following remarkable advantages. The invention processes the image in the HSV color space and operates only on the brightness component V; compared with enhancing the red, green and blue channels separately in the RGB color space, this saves processing time and avoids the color-cast phenomenon caused by uneven enhancement of the three channels. The invention simultaneously introduces an improved Retinex method and an edge-preserving-layer algorithm, solving the edge-blurring effect of the traditional Retinex method; the dynamic range expansion of the V channel avoids over-enhancement of the image, and the saturation component S is adaptively adjusted along with the processed V component so that the image better matches natural human visual perception. The invention retains the edge information of the low-illumination image while enhancing it, effectively resolving the low brightness, edge loss, unnatural appearance and other problems of low-illumination images.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the segmentation result of the edge preserving layer.
Fig. 3 is a comparison chart of the visualization effect of the bookcase scene after different processing methods.
Fig. 4 is a graph comparing the visual effects of kitchen scenes after different processing methods.
Detailed Description
Embodiments of the invention are further described below with reference to the accompanying drawings.
The method for enhancing the edge-preserving color low-illumination image based on Retinex comprises the following specific steps as shown in fig. 1:
step 1: the RGB color space of the input low-illumination image representing three primary colors of red, green and blue is converted into HSV color space representing tone, saturation and brightness, and the conversion formula is as follows:
V = T_max
S = (T_max − T_min)/T_max (with S = 0 when T_max = 0)
H = 60×(G − B)/(T_max − T_min), if T_max = R
H = 60×(B − R)/(T_max − T_min) + 120, if T_max = G
H = 60×(R − G)/(T_max − T_min) + 240, if T_max = B
(H is taken modulo 360)
wherein T_max and T_min are respectively the maximum and the minimum among the red, green and blue channels; H takes values in [0,360] and V takes values in [0,1].
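As an illustration (not part of the patent), the conversion above can be sketched in pure NumPy; the function name and the convention for gray pixels (S = 0 and H = 0 when T_max = T_min) are assumptions:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Convert an RGB image with floats in [0, 1] to HSV.

    H is returned in degrees [0, 360), S and V in [0, 1],
    matching the ranges stated in the text.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    t_max = rgb.max(axis=-1)                      # T_max
    t_min = rgb.min(axis=-1)                      # T_min
    delta = t_max - t_min

    v = t_max                                     # brightness channel V
    s = np.where(t_max > 0, delta / np.maximum(t_max, 1e-12), 0.0)

    # Hue: piecewise on which channel attains the maximum.
    h = np.zeros_like(v)
    nz = delta > 0
    rmax = nz & (t_max == r)
    gmax = nz & (t_max == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    h[rmax] = (60.0 * (g - b)[rmax] / delta[rmax]) % 360.0
    h[gmax] = 60.0 * (b - r)[gmax] / delta[gmax] + 120.0
    h[bmax] = 60.0 * (r - g)[bmax] / delta[bmax] + 240.0
    return np.stack([h, s, v], axis=-1)
```

For example, pure red, green and blue map to H = 0, 120 and 240 with S = V = 1.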
Step 2: for the obtained original V component, respectively performing an improved Retinex enhancement method and edge preservation layer processing consisting of adaptive histogram equalization and guided filtering, and then performing weighted average on the enhanced results of the two modes, wherein the specific process is as follows:
(1) Based on the Retinex theory, the V-channel image is expressed as the product of the illumination component and the reflection component as:
I(x,y)=L(x,y)·R(x,y)
wherein, I (x, y) is expressed as an acquired V-channel original image; l (x, y) is expressed as illumination in the current environment, i.e., an illumination component; r (x, y) represents the reflectance inherent to the object, i.e., the reflection component;
the original image is the initial V-channel image, and the illumination component is obtained by bilateral filtering of the V channel; the bilateral filter weights the neighborhood pixels of each pixel of the input image with the product of a spatial-domain Gaussian kernel and a range-domain Gaussian kernel, and a smooth output image with clear edges is obtained by controlling the kernel parameters; the specific formula of the bilateral filtering weight is as follows:
W(i,j,k,l) = W_s(i,j,k,l)·W_r(i,j,k,l)
wherein W_s is the spatial proximity factor and W_r is the gray-level similarity factor, (i,j) denotes the pixel at the current position, and (k,l) denotes the center pixel; the two factors are calculated as follows:

W_s(i,j,k,l) = exp( −((i−k)² + (j−l)²) / (2δ_s²) )
W_r(i,j,k,l) = exp( −(I(i,j) − I(k,l))² / (2δ_r²) )

wherein δ_s is the standard deviation of the spatial domain and δ_r is the standard deviation of the range domain;
performing logarithmic transformation on the above formula, and performing difference on the logarithm of the input image I (x, y) and the logarithm of the illumination component L (x, y);
log(R(x,y))=log(I(x,y))-log(L(x,y))
the obtained result is converted back to the real number domain by the exponential operation to obtain the V-channel enhancement result obtained by the improved Retinex algorithm.
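The improved Retinex step can be sketched in pure NumPy as follows. This is an illustrative stand-in, not the patent's implementation: the naive shift-and-accumulate bilateral filter and the default parameters (radius, δ_s, δ_r) are assumptions.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter of a 2-D float image (naive shift-and-accumulate form)."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            shifted = pad[radius + di:radius + di + h, radius + dj:radius + dj + w]
            w_s = np.exp(-(di * di + dj * dj) / (2.0 * sigma_s ** 2))     # spatial factor W_s
            w_r = np.exp(-((shifted - img) ** 2) / (2.0 * sigma_r ** 2))  # range factor W_r
            weight = w_s * w_r
            out += weight * shifted
            norm += weight
    return out / norm

def retinex_v(v, eps=1e-6):
    """Improved single-scale Retinex on the V channel: estimate the illumination
    with a bilateral filter, subtract in the log domain, return exp(log R)."""
    illumination = bilateral_filter(v)
    log_r = np.log(v + eps) - np.log(illumination + eps)
    return np.exp(log_r)
```

On a uniform patch the estimated illumination equals the input, so the recovered reflectance is 1 everywhere, as the log-difference formula predicts.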
(2) Traversing each pixel point of the original image, computing the histogram transformation over a window around each pixel, and then mapping the pixel to obtain the adaptive histogram equalization result. A guided filtering operation is conducted on the result enhanced by the adaptive histogram equalization method, with that result serving as the guide image, to obtain the V component after edge-preserving-layer processing. The expression for the guided filtering is as follows:
O_i = a_k·G_i + b_k, for all i ∈ ω_k
wherein O is i Is an output image, G i Is a guide image, a k And b k Is a local window omega k Linear coefficients of (c). The specific method for determining the linear coefficient is as follows:
the expression of the cost function is as follows:

E(a_k, b_k) = Σ_{i∈ω_k} ( (a_k·G_i + b_k − I_i)² + ε·a_k² )

Minimizing the value of this cost function by the least-squares method gives the expressions of the linear coefficients:

a_k = ( (1/|ω|)·Σ_{i∈ω_k} G_i·I_i − μ_k·Ī_k ) / (σ_k² + ε)
b_k = Ī_k − a_k·μ_k

wherein ε is a regularization coefficient that prevents the linear coefficient a_k from becoming too large; the larger ε is, the more significant the smoothing effect when the input image itself is taken as the guide image. μ_k and σ_k² denote the mean and variance of G_i in the local window ω_k, and Ī_k denotes the mean of the original image in ω_k.
Since each pixel of the output image is covered by the linear coefficients of several windows, the final output averages them:

O_i = ā_i·G_i + b̄_i

wherein ā_i and b̄_i are the averages of a_k and b_k over all windows ω_k that contain pixel i.
The processing result and details of the edge-preserving layer are shown in Fig. 2; the edge of the V-component image processed by the edge-preserving layer is clear, and the noise level is low.
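A pure-NumPy sketch of the edge-preserving layer follows. The guided filter implements the least-squares coefficient formulas above with 3×3 box means; the global histogram equalization is a simplified stand-in for the windowed adaptive equalization the text describes, and all names and defaults are illustrative assumptions.

```python
import numpy as np

def box_mean(img, r=1):
    """Mean over a (2r+1)x(2r+1) window with reflected borders."""
    h, w = img.shape
    pad = np.pad(img, r, mode='reflect')
    acc = np.zeros_like(img)
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            acc += pad[r + di:r + di + h, r + dj:r + dj + w]
    return acc / (2 * r + 1) ** 2

def guided_filter(guide, src, r=1, eps=1e-3):
    """Guided filter: O_i = mean(a_k)*G_i + mean(b_k) with least-squares a_k, b_k."""
    mu = box_mean(guide, r)                         # mu_k: window mean of the guide
    mean_src = box_mean(src, r)                     # window mean of the input
    cov = box_mean(guide * src, r) - mu * mean_src  # covariance of guide and input
    var = box_mean(guide * guide, r) - mu * mu      # sigma_k^2: variance of the guide
    a = cov / (var + eps)                           # linear coefficient a_k
    b = mean_src - a * mu                           # linear coefficient b_k
    return box_mean(a, r) * guide + box_mean(b, r)  # average coefficients over windows

def edge_preserve_layer(v):
    """Histogram-equalize the V channel, then guided-filter it with itself
    as the guide image over a 3x3 window, as in the text."""
    hist, bins = np.histogram(v.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / v.size                    # normalized cumulative histogram
    eq = np.interp(v.ravel(), bins[:-1], cdf).reshape(v.shape)
    return guided_filter(eq, eq, r=1, eps=1e-3)
```

With the image as its own guide, a flat region gives a ≈ 0 and b ≈ the local mean, so constant inputs pass through unchanged while edges (high local variance) are preserved.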
(4) The V-channel result obtained by the improved Retinex method and the V-channel result obtained by the edge-preserving layer are weighted and averaged to obtain the enhanced V component, with the formula:
V_amp = α·V_1 + (1 − α)·V_2
where V_1 represents the V component obtained by the improved SSR method and V_2 represents the V component obtained by the edge-preserving layer based on guided filtering.
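The fusion step is a plain weighted average; the default weight α = 0.5 below is an assumption, since the source does not state a value:

```python
import numpy as np

def fuse_v(v_retinex, v_edge, alpha=0.5):
    """Weighted average of the Retinex result V_1 and the edge-layer result V_2."""
    return alpha * np.asarray(v_retinex, dtype=float) \
        + (1.0 - alpha) * np.asarray(v_edge, dtype=float)
```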
Step 3: Dynamic range expansion is carried out on the processed V channel to avoid over-enhancement caused by gray values concentrating heavily in the higher gray range; a piecewise function is adopted to expand the gray dynamic range. The idea is as follows: components that are greater than 0.5 after enhancement are left unprocessed, while components below 0.5 are stretched by a quadratic function, with the corresponding expression:
V'(x,y) = { Q(V_amp(x,y)), V_amp(x,y) < 0.5 ; V_amp(x,y), V_amp(x,y) ≥ 0.5 }
[the quadratic stretching function Q applied below 0.5 is rendered only as an image in the source]
wherein V_amp represents the amplified V component from the previous operation, and V'(x,y) represents the dynamic-range-extended V component.
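Since the exact quadratic appears only as an image in the source, the sketch below uses an assumed branch 2v − 2v², which fixes 0 and 0.5 and doubles the slope near 0; only the piecewise structure (stretch below 0.5, identity at or above 0.5) comes from the text.

```python
import numpy as np

def expand_dynamic_range(v_amp):
    """Piecewise dynamic-range expansion of the enhanced V channel.

    The quadratic branch is an assumption; the source shows it only as an image.
    """
    v_amp = np.asarray(v_amp, dtype=float)
    stretched = 2.0 * v_amp - 2.0 * v_amp ** 2   # assumed quadratic branch
    return np.where(v_amp < 0.5, stretched, v_amp)
```

Under this assumption a dark value of 0.25 is lifted to 0.375, while values at or above 0.5 pass through unchanged.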
Step 4: according to the new enhanced V component, the saturation component S is adaptively adjusted, and the specific method is as follows:
(1) Calculating the difference VS between the enhanced V component and the original S component:
VS = V'_mean − S_mean
where VS represents the average difference between the two components, V'_mean is the average of the enhanced V component, and S_mean is the average of the S component.
The expressions for calculating V'_mean and S_mean are as follows:

V'_mean = (1/(M×N))·Σ_i i·V'(i)
S_mean = (1/(M×N))·Σ_i i·S(i)

where i is the gray level, V'(i) and S(i) are the numbers of pixels of the enhanced V channel and of the S channel at gray level i, and M and N are the length and width of the image.
(2) A bilateral gamma transformation, combined with the obtained difference between the two components, is used to correct the saturation. To increase the value of the S component, the gamma parameter must lie in the range (0, 1); conversely, to reduce the value of the S component, the parameter must be greater than 1. This is accomplished with a piecewise function, whose expression is as follows:
[the formula selecting the gamma-correction parameter γ from the difference VS is rendered only as an image in the source; as stated above, γ ∈ (0,1) increases the saturation and γ > 1 decreases it]
S_1(x,y) = S(x,y)^γ
S_2(x,y) = 1 − (1 − S(x,y))^γ
S'(x,y) = α·S_1(x,y) + (1 − α)·S_2(x,y)
where γ is the parameter of the gamma correction, S(x,y) is the value of the original saturation component, and α is an adaptive parameter whose calculation formula is likewise rendered only as an image in the source.
S'(x,y) is the adaptively adjusted saturation; the S channel is thus adjusted with a piecewise function.
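The bilateral gamma correction can be sketched as follows. The piecewise S_1/S_2 structure comes from the text, but the concrete choices of γ (shrinking toward (0,1) when the V mean exceeds the S mean) and of the weight α (taken as the original S value) are illustrative assumptions, since both formulas appear only as images in the source.

```python
import numpy as np

def adjust_saturation(s, v_enh):
    """Bilateral-gamma saturation correction driven by the V/S mean difference."""
    s = np.asarray(s, dtype=float)
    vs = float(np.mean(v_enh) - np.mean(s))   # VS: difference of the channel means
    if vs > 0:
        gamma = max(0.1, 1.0 - vs)            # assumed: gamma in (0, 1) raises saturation
    else:
        gamma = 1.0 - vs                      # assumed: gamma >= 1 lowers saturation
    s1 = s ** gamma                           # S1(x, y) = S^gamma
    s2 = 1.0 - (1.0 - s) ** gamma             # S2(x, y) = 1 - (1 - S)^gamma
    alpha = s                                 # assumed adaptive weight alpha(x, y)
    return alpha * s1 + (1.0 - alpha) * s2
```

Because both gamma branches map [0, 1] into [0, 1], the blended output always remains a valid saturation value.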
Step 5: the processed image is reconverted from the HSV color space to the RGB color space. The specific method comprises the following steps:
(R, G, B) =
(V, t, p), if h_i = 0; (q, V, p), if h_i = 1; (p, V, t), if h_i = 2;
(p, q, V), if h_i = 3; (t, p, V), if h_i = 4; (V, p, q), if h_i = 5

wherein h_i = ⌊H/60⌋ mod 6, f = H/60 − ⌊H/60⌋, and
p = V×(1−S), q = V×(1−f×S), t = V×(1−(1−f)×S).
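The sector decomposition above translates directly to NumPy; the function name is illustrative:

```python
import numpy as np

def hsv_to_rgb(h, s, v):
    """Convert HSV (H in degrees, S and V in [0, 1]) back to RGB using the
    sector decomposition with p, q, t given in the text."""
    h = np.asarray(h, dtype=float) % 360.0
    s = np.asarray(s, dtype=float)
    v = np.asarray(v, dtype=float)
    hi = np.floor(h / 60.0).astype(int) % 6    # sector index h_i
    f = h / 60.0 - np.floor(h / 60.0)          # fractional position f in the sector
    p = v * (1.0 - s)
    q = v * (1.0 - f * s)
    t = v * (1.0 - (1.0 - f) * s)
    # One (R, G, B) triple per sector h_i = 0..5.
    table = [(v, t, p), (q, v, p), (p, v, t), (p, q, v), (t, p, v), (v, p, q)]
    conds = [hi == k for k in range(6)]
    r = np.select(conds, [c[0] for c in table])
    g = np.select(conds, [c[1] for c in table])
    b = np.select(conds, [c[2] for c in table])
    return np.stack([r, g, b], axis=-1)
```

Round-tripping the primary hues 0, 120 and 240 at full saturation and value recovers pure red, green and blue.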
examples
The effect of the invention can be further illustrated by the following simulation experiments:
1. Experimental conditions:
Hardware platform: AMD Ryzen 7 4800H with Radeon Graphics, 2.90 GHz processor
Software simulation platform: Matlab R2018a
2. Simulation comparison experiments:
to verify the effectiveness of the present invention, a comparison test was performed with the BIMEF method, the Dong method, the LIME method, the MF method, the NPE method and the SRIE method. Fig. 3 and 4 are a bookcase scene effect map and a kitchen scene effect map, respectively. (a) is a low-illuminance image, (b) is a reference image of normal light, (c) is a result of a BIMEF method, (d) is a result of a Dong method, (e) is a result of a LIME method, (f) is a result of an MF method, (g) is a result of an NPE method, (h) is a result of an SRIE method, and (i) is a result of the method of the present invention. As shown in the figure, the result treated by the method of the invention looks more natural visually, avoids the conditions of excessive enhancement and insufficient enhancement, effectively improves the color saturation and contrast of the low-illumination image, and has clear visual effect. The low-illumination images processed by the BIMEF algorithm, the MF algorithm and the SRIE algorithm are still darker in color, the effect on image enhancement is weaker, and details such as books in a bookcase and electric rice cookers in a kitchen can not be recovered well in the darker environment. The results processed by the LIME algorithm look natural as a whole, but there are cases of overenhancement in part of the scene, such as in a bookcase. The Dong algorithm amplifies noise while performing image enhancement, and obvious 'dark spots' appear in the scene of the kitchen, which are inconsistent with the reference image. The image processed by the NPE algorithm has the color difference phenomenon, and compared with a window in a kitchen scene, the image color has deviation with the reference image color.
Objective evaluation used the peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and the Natural Image Quality Evaluator (NIQE); the experimental results are shown in Table 1. PSNR is the most commonly used index for evaluating image quality and measures the noise suppression of the processed image; the higher the PSNR, the better. SSIM evaluates whether the structure of the image is distorted; the larger the SSIM value, the higher the similarity between the processed image and the reference image. NIQE is a no-reference index that evaluates how natural an image appears; the lower the NIQE value, the closer the image is to a natural image. As Table 1 shows, the PSNR values of the BIMEF and SRIE algorithms are low, indicating weak noise immunity, and the SSIM of the SRIE-processed image is the lowest, indicating mediocre structure preservation and some distortion. Compared with the other classical traditional image enhancement algorithms, the proposed method achieves the best PSNR, SSIM and NIQE. The comparison shows that the invention enhances the image clearly and naturally, retains the detail information of the image without changing its original structure, and matches the human visual model.
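Of the three metrics, PSNR is simple enough to compute directly; SSIM and NIQE usually come from toolboxes (e.g. MATLAB's ssim and niqe) and are not reproduced here. The helper below assumes images with values in [0, 1]:

```python
import numpy as np

def psnr(reference, processed, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a processed image."""
    reference = np.asarray(reference, dtype=float)
    processed = np.asarray(processed, dtype=float)
    mse = np.mean((reference - processed) ** 2)
    if mse == 0.0:
        return float('inf')                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For instance, a uniform error of 0.1 on a unit-peak image gives an MSE of 0.01 and hence a PSNR of 20 dB.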
Table 1 quantitative comparison of different algorithms on test charts
[Table 1 is rendered only as an image in the source.]

Claims (8)

1. The method for enhancing the edge-preserving color low-illumination image based on Retinex is characterized by comprising the following specific steps of:
step 1: converting an RGB color space of an input low-illumination image representing three primary colors of red, green and blue into an HSV color space representing hue, saturation and brightness;
step 2: for the obtained original V component, respectively carrying out improved Retinex enhancement processing and edge preservation layer processing consisting of self-adaptive histogram equalization and guided filtering, and carrying out weighted average on the enhanced results of the two modes;
step 3: performing dynamic range expansion on the processed V channel;
step 4: according to the new enhanced V component, carrying out self-adaptive adjustment on the saturation component S;
step 5: the processed image is reconverted from the HSV color space to the RGB color space.
2. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 1, wherein the specific method for performing improved Retinex enhancement processing on the obtained original V component is:
based on the Retinex theory, the V-channel image is expressed as the product of the illumination component and the reflection component as:
I(x,y)=L(x,y)·R(x,y)
wherein, I (x, y) is expressed as an acquired V-channel original image; l (x, y) is expressed as illumination in the current environment, i.e., an illumination component; r (x, y) represents the reflectance inherent to the object, i.e., the reflection component;
the original image is the initial V-channel image, and the illumination component is obtained by bilateral filtering of the V channel; the bilateral filter weights the neighborhood pixels of each pixel of the input image with the product of a spatial-domain Gaussian kernel and a range-domain Gaussian kernel, and a smooth output image with clear edges is obtained by controlling the kernel parameters; the specific formula of the bilateral filtering weight is as follows:
W(i,j,k,l) = W_s(i,j,k,l)·W_r(i,j,k,l)
wherein W_s is the spatial proximity factor and W_r is the gray-level similarity factor, (i,j) denotes the pixel at the current position, and (k,l) denotes the center pixel.
Performing logarithmic transformation on the above formula, and performing difference on the logarithm of the input image I (x, y) and the logarithm of the illumination component L (x, y);
log(R(x,y))=log(I(x,y))-log(L(x,y))
the obtained result is converted back to the real number domain by the exponential operation to obtain the V-channel enhancement result obtained by the improved Retinex algorithm.
3. The Retinex-based edge preserving color low-illumination image enhancement method according to claim 1, wherein the specific method for performing edge preserving layer processing on the obtained original V component is as follows:
traversing each pixel point of the original image, calculating histogram transformation by using windows around the pixel points, mapping the pixel points, and obtaining a self-adaptive histogram equalization result;
conducting a guided filtering operation on the result enhanced by the adaptive histogram equalization method, with that result serving as the guide image and the window size set to 3×3, to obtain the V component after edge-preserving-layer processing;
the specific formula of the guided filtering is as follows:
O_i = a_k·G_i + b_k, for all i ∈ ω_k

wherein O_i is the output image, G_i is the guide image, and a_k and b_k are the linear coefficients of the local window ω_k;
the specific linear coefficients are:

a_k = ( (1/|ω|)·Σ_{i∈ω_k} G_i·I_i − μ_k·Ī_k ) / (σ_k² + ε)
b_k = Ī_k − a_k·μ_k

wherein ε is a regularization coefficient that prevents the linear coefficient a_k from becoming too large, μ_k and σ_k² denote the mean and variance of G_i in the local window ω_k, and Ī_k denotes the mean of the original image in ω_k.
4. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 1, wherein the dynamic range expansion is performed for the V-channel, specifically:
the piecewise function is adopted to expand the dynamic range of gray scale, specifically:
V'(x,y) = { Q(V_amp(x,y)), V_amp(x,y) < 0.5 ; V_amp(x,y), V_amp(x,y) ≥ 0.5 }
[the quadratic stretching function Q applied below 0.5 is rendered only as an image in the source]
wherein V_amp represents the amplified V component from the previous operation, and V'(x,y) represents the dynamic-range-extended V component.
5. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 1, wherein the saturation S component is adaptively adjusted, specifically by:
step 4.1: calculating the difference VS between the enhanced V component and the original S component;
step 4.2: and correcting the saturation by combining the obtained difference between the two components through bilateral gamma transformation.
6. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 5, wherein the specific method for calculating the difference VS between the enhanced V-flux and the original S-flux is:
the average value of the V component and the average value of the S component are calculated, and the specific method is as follows:
$$\bar{V} = \frac{\sum_i i \cdot V'(i)}{M \times N}$$
$$\bar{S} = \frac{\sum_i i \cdot S(i)}{M \times N}$$
where i represents the gray level, V' (i) is the number of pixels of the enhanced V-channel at gray level i, M and N represent the length and width of the processed image, respectively, and S (i) represents the number of pixels of the S-channel at gray level i;
the difference between the mean of the V component and the mean of the S component is taken as the difference VS between the enhanced V component and the original S component.
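This histogram-based mean difference can be sketched as follows, assuming 8-bit V and S channels (the function names are mine, not from the patent):

```python
import numpy as np

def channel_mean(channel, levels=256):
    # Mean gray level via the histogram: sum_i i * n(i) / (M * N),
    # where n(i) is the number of pixels at gray level i.
    M, N = channel.shape
    hist = np.bincount(channel.ravel(), minlength=levels)
    return float((np.arange(levels) * hist).sum()) / (M * N)

def vs_difference(V_enh, S):
    # VS = mean of enhanced V component minus mean of original S component.
    return channel_mean(V_enh) - channel_mean(S)
```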
7. The Retinex-based edge-preserving color low-intensity image enhancement method according to claim 5, wherein the specific method for correcting the saturation through the bilateral gamma transformation, using the obtained difference between the two components, is:
Figure FDA0004120568990000033
$$S_1(x, y) = S(x, y)^{\gamma}$$
$$S_2(x, y) = 1 - (1 - S(x, y))^{\gamma}$$
$$S'(x, y) = \alpha S_1(x, y) + (1 - \alpha) S_2(x, y)$$
wherein $\gamma$ is the gamma-correction parameter, $S(x, y)$ is the value of the original saturation component, and $\alpha$ is an adaptive parameter whose calculation formula is:
Figure FDA0004120568990000034
$S'(x, y)$ is the adaptively adjusted saturation.
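The bilateral gamma transformation of claim 7 can be sketched as below. Since the formulas for $\gamma$ and $\alpha$ sit in the unrendered figures, both are taken here as plain parameters; in the patent they are derived adaptively from the VS difference:

```python
import numpy as np

def bilateral_gamma(S, gamma, alpha):
    # S is the saturation channel in [0, 1].
    S = np.asarray(S, dtype=float)
    S1 = S ** gamma                 # gamma curve applied to S directly
    S2 = 1.0 - (1.0 - S) ** gamma   # mirrored curve applied to 1 - S
    return alpha * S1 + (1.0 - alpha) * S2   # blended result S'
```

Note that with gamma = 1 both branches reduce to S itself, so the transform is the identity regardless of alpha.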
8. The Retinex-based edge-preserving color low-luminance image enhancement method of claim 1, wherein step 5 converts the HSV color space to an RGB color space, specifically by:
$$(R, G, B) = \begin{cases} (V, t, p), & h_i = 0 \\ (q, V, p), & h_i = 1 \\ (p, V, t), & h_i = 2 \\ (p, q, V), & h_i = 3 \\ (t, p, V), & h_i = 4 \\ (V, p, q), & h_i = 5 \end{cases}$$
wherein
$$h_i = \left\lfloor \frac{H}{60} \right\rfloor \bmod 6, \quad f = \frac{H}{60} - \left\lfloor \frac{H}{60} \right\rfloor$$
$$p = V \times (1 - S), \quad q = V \times (1 - f \times S), \quad t = V \times (1 - (1 - f) \times S)$$
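The conversion in claim 8 is the standard sector-based HSV-to-RGB mapping; a sketch, assuming H in degrees [0, 360) and S, V in [0, 1]:

```python
def hsv_to_rgb(H, S, V):
    # Sector index h_i in {0..5} and fractional position f within the sector.
    h_i = int(H // 60) % 6
    f = H / 60.0 - int(H // 60)
    # The three auxiliary terms named in the claim.
    p = V * (1 - S)
    q = V * (1 - f * S)
    t = V * (1 - (1 - f) * S)
    # One (R, G, B) ordering per sector.
    return [(V, t, p), (q, V, p), (p, V, t),
            (p, q, V), (t, p, V), (V, p, q)][h_i]
```

For example, pure red (H = 0, S = 1, V = 1) falls in sector 0 and maps to (1, 0, 0), while any pixel with S = 0 maps to the gray (V, V, V).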
CN202310231193.0A 2023-03-13 2023-03-13 Retinex-based edge-preserving color low-illumination image enhancement method Pending CN116309146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310231193.0A CN116309146A (en) 2023-03-13 2023-03-13 Retinex-based edge-preserving color low-illumination image enhancement method


Publications (1)

Publication Number Publication Date
CN116309146A true CN116309146A (en) 2023-06-23

Family

ID=86816301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310231193.0A Pending CN116309146A (en) 2023-03-13 2023-03-13 Retinex-based edge-preserving color low-illumination image enhancement method

Country Status (1)

Country Link
CN (1) CN116309146A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173642A (en) * 2023-11-03 2023-12-05 昊金海建设管理有限公司 Building construction video real-time monitoring and early warning method based on big data
CN117173642B (en) * 2023-11-03 2024-02-02 昊金海建设管理有限公司 Building construction video real-time monitoring and early warning method based on big data

Similar Documents

Publication Publication Date Title
CN107527332B (en) Low-illumination image color retention enhancement method based on improved Retinex
CN104156921B (en) Self-adaptive low-illuminance or non-uniform-brightness image enhancement method
CN106846282B (en) A kind of enhancement method of low-illumination image using adaptively correcting
CN101783012B (en) Automatic image defogging method based on dark primary colour
KR100771158B1 (en) Method AND System For Enhancement Color Image Quality
CN107730475A (en) Image enchancing method and system
CN111968065B (en) Self-adaptive enhancement method for image with uneven brightness
CN109087254B (en) Unmanned aerial vehicle aerial image haze sky and white area self-adaptive processing method
CN110473152B (en) Image enhancement method based on improved Retinex algorithm
CN109886885B (en) Image enhancement method and system based on Lab color space and Retinex
CN109816608B (en) Low-illumination image self-adaptive brightness enhancement method based on noise suppression
CN111968041A (en) Self-adaptive image enhancement method
CN109325918B (en) Image processing method and device and computer storage medium
CN105205794A (en) Synchronous enhancement de-noising method of low-illumination image
CN104318529A (en) Method for processing low-illumination images shot in severe environment
Kim et al. Single image haze removal using hazy particle maps
CN116309146A (en) Retinex-based edge-preserving color low-illumination image enhancement method
CN116681606A (en) Underwater uneven illumination image enhancement method, system, equipment and medium
CN110580690B (en) Image enhancement method for identifying peak value transformation nonlinear curve
Tohl et al. Contrast enhancement by multi-level histogram shape segmentation with adaptive detail enhancement for noise suppression
CN116188339A (en) Retinex and image fusion-based scotopic vision image enhancement method
CN114972102A (en) Underwater image enhancement method based on global variable contrast enhancement and local correction
CN114037641A (en) Low-illumination image enhancement method, device, equipment and medium
Wen et al. A survey of image dehazing algorithm based on retinex theory
CN109859138A (en) A kind of infrared image enhancing method based on human-eye visual characteristic

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination