CN116309146A - Retinex-based edge-preserving color low-illumination image enhancement method - Google Patents
Info
- Publication number
- CN116309146A (application number CN202310231193.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- component
- retinex
- edge
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a Retinex-based edge-preserving color low-illumination image enhancement method, belonging to the field of digital image processing. According to the invention, the RGB image is converted into the HSV color space for processing, and a bilateral filter replaces the Gaussian filtering of the traditional multi-scale Retinex algorithm to process the V channel and acquire edge information; meanwhile, an edge-preservation layer consisting of adaptive histogram equalization and guided filtering is introduced to further optimize the edges; the processed V channel is stretched, and the S channel is adaptively adjusted along with the V channel so that the image accords with human visual perception; finally, the image is converted back to the RGB color space to obtain the enhanced result. The invention improves brightness while preserving the edge details of the image.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a Retinex-based edge-preserving color low-illumination image enhancement method.
Background
In modern military warfare, the visual system is one of the important channels through which resources and strategic information are acquired on the battlefield. Since the outbreak of the Second World War, many wars, battles, and military operations have been launched, and some completed, at night. In the field of military operations, unmanned aerial vehicles equipped with video acquisition devices can scout enemy positions from a distance to gather intelligence, which is of great strategic significance in modern intelligence warfare.
At night or under insufficient illumination, acquired images often suffer from low color contrast, poor resolution, heavy noise, a small dynamic range, and poor visual effect. Such low-quality images cannot meet military requirements for mission-critical tasks such as target reconnaissance and topographic survey of the battlefield.
Image enhancement methods based on Retinex theory are among the most important methods in the field of low-illumination enhancement. In 1964, Edwin Land et al. proposed the Retinex theory, also known as the retinal-cortex theory. The theory holds that an image can be decomposed into an illumination component and a reflection component, where the reflection component is independent of the illumination and is an inherent property of the object. Inferring the reflection component by estimating the illumination component is therefore the conventional step of Retinex algorithms. Initially, Land et al. estimated the illumination component with a random-path approach that compares pairs of adjacent pixels. Later, Land proposed estimating the brightness with a center/surround method, i.e., the center/surround Retinex algorithm, which determines pixel weights by selecting a suitable center/surround function. Jobson et al. proposed the single-scale Retinex (SSR) algorithm on this basis. The algorithm was then further developed by other researchers into the multi-scale Retinex (MSR) algorithm and the multi-scale Retinex with color restoration (MSRCR) algorithm. The MSR algorithm convolves the image with three Gaussian kernels of different standard deviations; filtering at the three different scales improves the overall quality of the image. The MSRCR algorithm addresses the color distortion of the MSR algorithm by introducing a color restoration factor that adjusts the proportional relationship among the three color channels.
However, low-illumination enhancement methods based on Retinex often suffer from edge loss and uneven enhancement of the three RGB color channels. Aiming at these problems, the invention provides a Retinex-based edge-preserving color low-illumination image enhancement method that effectively resolves edge blurring.
Disclosure of Invention
The invention aims to provide a Retinex-based edge-preserving color low-illumination image enhancement method, so as to solve the problems that existing low-illumination image enhancement methods blur edges and do not accord with natural human visual perception.
The technical scheme for realizing the purpose of the invention is as follows: the method for enhancing the edge-preserving color low-illumination image based on Retinex comprises the following specific steps:
step 1: converting an RGB color space of an input low-illumination image representing three primary colors of red, green and blue into an HSV color space representing hue, saturation and brightness;
step 2: for the obtained original V component, respectively carrying out improved Retinex enhancement processing and edge preservation layer processing consisting of self-adaptive histogram equalization and guided filtering, and carrying out weighted average on the enhanced results of the two modes;
step 3: performing dynamic range expansion on the processed V channel;
step 4: according to the new enhanced V component, carrying out self-adaptive adjustment on the saturation component S;
step 5: the processed image is reconverted from the HSV color space to the RGB color space.
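The five steps above can be sketched end to end as follows. This is a minimal illustration, not the patent's actual enhancement: the Retinex and edge-preservation processing of steps 2-3 is reduced to a placeholder gamma lift of V, and the adaptive saturation adjustment of step 4 to a fixed bilateral-gamma blend; only the HSV round-trip structure matches the method.

```python
import colorsys
import numpy as np

def enhance_pipeline(rgb):
    """Minimal end-to-end sketch of steps 1-5 for an RGB image in [0, 1].

    The per-channel processing is reduced to placeholders: a gamma lift
    stands in for the Retinex + edge-preservation enhancement of V, and a
    fixed bilateral-gamma blend stands in for the adaptive S adjustment.
    """
    out = np.empty_like(rgb)
    for idx in np.ndindex(rgb.shape[:2]):
        h, s, v = colorsys.rgb_to_hsv(*rgb[idx])         # step 1: RGB -> HSV
        v = v ** 0.6                                      # steps 2-3 placeholder: brighten V
        s = 0.5 * s ** 0.8 + 0.5 * (1 - (1 - s) ** 0.8)  # step 4 placeholder: adjust S
        out[idx] = colorsys.hsv_to_rgb(h, s, v)           # step 5: HSV -> RGB
    return out
```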
Preferably, the specific method for performing the modified Retinex enhancement processing on the obtained original V component is as follows:
based on the Retinex theory, the V-channel image is expressed as the product of the illumination component and the reflection component as:
I(x,y) = L(x,y)·R(x,y)
wherein, I (x, y) is expressed as an acquired V-channel original image; l (x, y) is expressed as illumination in the current environment, i.e., an illumination component; r (x, y) represents the reflectance inherent to the object, i.e., the reflection component;
the original image is the initial V-channel image, and the illumination component is obtained by bilateral filtering of the V channel: Gaussian kernels weight the neighborhood of each pixel of the input image, the neighborhood pixels are weighted and averaged, and by controlling the kernel parameters a smooth output image with clear edges is obtained; the specific formula of the bilateral filtering weight is as follows:
W(i,j,k,l) = W_s(i,j)·W_r(i,j)
wherein W_s(i,j) is the spatial proximity factor and W_r(i,j) is the gray-scale similarity factor; (i,j) denotes the pixel at the current position and (k,l) denotes the center pixel.
Logarithmic transformation is performed on the above formula, and the logarithm of the illumination component L(x,y) is subtracted from the logarithm of the input image I(x,y):
log(R(x,y))=log(I(x,y))-log(L(x,y))
the obtained result is converted back to the real number domain by the exponential operation to obtain the V-channel enhancement result obtained by the improved Retinex algorithm.
Preferably, the specific method for performing edge preservation processing on the obtained original V component is as follows:
traversing each pixel point of the original image, calculating histogram transformation by using windows around the pixel points, mapping the pixel points, and obtaining a self-adaptive histogram equalization result;
a guided filtering operation is performed on the result enhanced by the adaptive histogram equalization method, with that result serving as the guide image and the window size set to 3×3, obtaining the V component processed by the edge-preservation layer;
the specific formula of the guided filtering is as follows:
O_i = a_k·G_i + b_k, ∀ i ∈ ω_k

wherein O_i is the output image, G_i is the guide image, and a_k and b_k are the linear coefficients of the local window ω_k;

the specific linear coefficients are:

a_k = ((1/|ω|)·Σ_{i∈ω_k} G_i·I_i − μ_k·Ī_k)/(σ_k² + ε)

b_k = Ī_k − a_k·μ_k

wherein ε is a regularization coefficient that prevents the linear coefficient a_k from becoming too large, μ_k and σ_k² denote the mean and variance of G_i in the local window ω_k, and Ī_k denotes the mean of the original image within ω_k.
Preferably, the dynamic range expansion is performed on the V channel, and the specific method is as follows:
the piecewise function is adopted to expand the dynamic range of gray scale, specifically:
wherein V_amp represents the amplified V component obtained from the previous operation, and V'(x,y) represents the V component after dynamic range expansion.
Preferably, the saturation S component is adaptively adjusted, and the specific method is as follows:
step 4.1: calculating the difference VS between the enhanced V component and the original S component;
step 4.2: and correcting the saturation by combining the obtained difference between the two components through bilateral gamma transformation.
Preferably, the specific method for calculating the difference VS between the enhanced V component and the original S component is:
the average value of the V component and the average value of the S component are calculated, and the specific method is as follows:
V'mean = (1/(M·N))·Σ_i i·V'(i), Smean = (1/(M·N))·Σ_i i·S(i)

where i represents the gray level, V'(i) is the number of pixels of the enhanced V channel at gray level i, S(i) represents the number of pixels of the S channel at gray level i, and M and N represent the length and width of the processed image, respectively;
the difference between the mean of the V component and the mean of the S component is taken as the difference VS between the enhanced V component and the original S component.
Preferably, the specific method for correcting the saturation by the bilateral gamma transformation combined with the acquired difference between the two components is as follows:
S_1(x,y) = S(x,y)^γ

S_2(x,y) = 1 − (1 − S(x,y))^γ

S'(x,y) = α·S_1(x,y) + (1 − α)·S_2(x,y)
wherein γ is the parameter of the gamma correction, S(x,y) is the value of the original saturation component, α is an adaptive parameter computed from the difference VS, and S'(x,y) is the adaptively adjusted saturation.
Preferably, step 5 converts the HSV color space to an RGB color space, in the following way:
compared with the prior art, the invention has the remarkable advantages that: the invention processes the image in HSV color space, only focusing on brightness component V, compared with the method of respectively strengthening red, green and blue three channels of the image in RGB color space, the invention saves processing time and avoids color difference phenomenon caused by uneven strengthening of three channels; the invention simultaneously introduces an improved Retinex method and an edge preservation layer algorithm, solves the problem of edge blurring effect brought by the traditional Retinex method, and the dynamic range expansion of the V channel avoids excessive enhancement of the image, and the saturation component S is adaptively adjusted along with the processed V component so that the image is more in line with the natural sense of human eyes; the invention maintains the edge information of the low-illumination image while enhancing, and effectively solves the problems of low brightness, edge loss, natural sensory difference and the like of the low-illumination image.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the processing result of the edge-preserving layer.
Fig. 3 is a comparison chart of the visualization effect of the bookcase scene after different processing methods.
Fig. 4 is a graph comparing the visual effects of kitchen scenes after different processing methods.
Detailed Description
Examples of the present invention are further described below with reference to the accompanying drawings.
The method for enhancing the edge-preserving color low-illumination image based on Retinex comprises the following specific steps as shown in fig. 1:
step 1: the RGB color space of the input low-illumination image, representing the three primary colors red, green, and blue, is converted into the HSV color space, representing hue, saturation, and brightness. The conversion formulas are:

V = T_max

S = (T_max − T_min)/T_max (S = 0 when T_max = 0)

H = 60·(G − B)/(T_max − T_min), if T_max = R
H = 60·(B − R)/(T_max − T_min) + 120, if T_max = G
H = 60·(R − G)/(T_max − T_min) + 240, if T_max = B

(with H taken modulo 360 when negative), wherein T_max and T_min are respectively the maximum value and the minimum value among the red, green, and blue channels; H takes values in [0, 360] and V takes values in [0, 1].
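Assuming image values normalized to [0, 1], the conversion of step 1 can be sketched with NumPy as follows; the function name and the small epsilon guards are illustrative, while the formulas follow the standard RGB-to-HSV mapping described above.

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorized RGB -> HSV for an array of shape (..., 3) with values in [0, 1].

    Returns H in [0, 360), S and V in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    t_max = rgb.max(axis=-1)                 # V component: max of R, G, B
    t_min = rgb.min(axis=-1)
    delta = t_max - t_min
    # Saturation: normalized chroma, zero for pure grays and for black.
    s = np.where(t_max > 0, delta / np.maximum(t_max, 1e-12), 0.0)
    # Hue: piecewise by which channel attains the maximum.
    d = np.maximum(delta, 1e-12)             # guard against division by zero
    h = np.zeros_like(t_max)
    h = np.where(t_max == r, (60.0 * (g - b) / d) % 360.0, h)
    h = np.where(t_max == g, 60.0 * (b - r) / d + 120.0, h)
    h = np.where(t_max == b, 60.0 * (r - g) / d + 240.0, h)
    h = np.where(delta == 0, 0.0, h)         # hue undefined for grays; use 0
    return np.stack([h, s, t_max], axis=-1)
```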
Step 2: for the obtained original V component, respectively performing an improved Retinex enhancement method and edge preservation layer processing consisting of adaptive histogram equalization and guided filtering, and then performing weighted average on the enhanced results of the two modes, wherein the specific process is as follows:
(1) Based on the Retinex theory, the V-channel image is expressed as the product of the illumination component and the reflection component as:
I(x,y) = L(x,y)·R(x,y)
wherein, I (x, y) is expressed as an acquired V-channel original image; l (x, y) is expressed as illumination in the current environment, i.e., an illumination component; r (x, y) represents the reflectance inherent to the object, i.e., the reflection component;
the original image is the initial V-channel image, and the illumination component is obtained by bilateral filtering of the V channel: Gaussian kernels weight the neighborhood of each pixel of the input image, the neighborhood pixels are weighted and averaged, and by controlling the kernel parameters a smooth output image with clear edges is obtained; the specific formula of the bilateral filtering weight is as follows:
W(i,j,k,l) = W_s(i,j)·W_r(i,j)
wherein W_s(i,j) is the spatial proximity factor and W_r(i,j) is the gray-scale similarity factor; (i,j) denotes the pixel at the current position and (k,l) denotes the center pixel. The two factors are calculated as follows:

W_s(i,j) = exp(−((i − k)² + (j − l)²)/(2δ_s²))

W_r(i,j) = exp(−(I(i,j) − I(k,l))²/(2δ_r²))

wherein δ_s is the standard deviation in the spatial domain and δ_r is the standard deviation in the range domain;
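A direct, unoptimized sketch of the bilateral filtering described above, combining the spatial-proximity factor W_s and the gray-scale similarity factor W_r; the window radius and the standard deviations are illustrative defaults, not values from the patent.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Naive bilateral filter for a single-channel image in [0, 1].

    The weight of each neighborhood pixel is W_s * W_r: a spatial Gaussian
    times a gray-level-similarity Gaussian.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # The spatial kernel depends only on the offset, so precompute it once.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(ys ** 2 + xs ** 2) / (2.0 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_r = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            weights = w_s * w_r
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

With a small sigma_r, pixels across a strong edge receive near-zero weight, which is exactly why the filter smooths while keeping edges.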
Logarithmic transformation is performed on the above formula, and the logarithm of the illumination component L(x,y) is subtracted from the logarithm of the input image I(x,y):
log(R(x,y))=log(I(x,y))-log(L(x,y))
the obtained result is converted back to the real number domain by the exponential operation to obtain the V-channel enhancement result obtained by the improved Retinex algorithm.
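The log-domain division and exponentiation above can be sketched as follows; the small epsilon guard against log(0) is an implementation assumption, not part of the patent text.

```python
import numpy as np

def retinex_reflectance(v, illumination, eps=1e-6):
    """Recover the reflection component: log R = log I - log L, then exp back."""
    log_r = np.log(v + eps) - np.log(illumination + eps)
    return np.exp(log_r)
```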
(2) Each pixel of the original image is traversed, the histogram transformation is calculated using a window around the pixel, and the pixel is then mapped, yielding the adaptive histogram equalization result. A guided filtering operation is performed on the result enhanced by the adaptive histogram equalization method, with that result serving as the guide image, obtaining the V component processed by the edge-preservation layer. The expression for the guided filtering is as follows:
O_i = a_k·G_i + b_k, ∀ i ∈ ω_k

wherein O_i is the output image, G_i is the guide image, and a_k and b_k are the linear coefficients of the local window ω_k. The specific method for determining the linear coefficients is as follows:

the expression of the cost function is

E(a_k, b_k) = Σ_{i∈ω_k} ((a_k·G_i + b_k − I_i)² + ε·a_k²)

The value of the cost function can be minimized by the least squares method, giving the expressions of the linear coefficients:

a_k = ((1/|ω|)·Σ_{i∈ω_k} G_i·I_i − μ_k·Ī_k)/(σ_k² + ε)

b_k = Ī_k − a_k·μ_k

wherein ε is a regularization coefficient that prevents the linear coefficient a_k from becoming too large; when the input image is taken as the guide image, the larger ε is, the more significant the smoothing effect. μ_k and σ_k² denote the mean and variance of G_i in the local window ω_k, and Ī_k denotes the mean of the original image within ω_k.

Since a pixel of the output image is covered by the linear coefficients of multiple windows, the coefficients are averaged over all windows containing pixel i, giving the following expression:

O_i = ā_i·G_i + b̄_i

wherein ā_i and b̄_i are the averages of a_k and b_k over all windows ω_k that contain pixel i.
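A compact sketch of the guided filter described above, using the standard box-filter formulation; the `box` helper and the parameter defaults are illustrative (the text specifies a 3×3 window, i.e. radius r = 1).

```python
import numpy as np

def box(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via 2-D cumulative sums."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(guide, src, r=1, eps=1e-3):
    """Guided filter: O_i = a_k G_i + b_k, coefficients averaged over windows.

    eps regularizes a_k so it cannot grow unboundedly in flat regions.
    """
    mu_g = box(guide, r)
    mu_s = box(src, r)
    cov_gs = box(guide * src, r) - mu_g * mu_s
    var_g = box(guide * guide, r) - mu_g * mu_g
    a = cov_gs / (var_g + eps)
    b = mu_s - a * mu_g
    # Each pixel is covered by several windows; average a_k and b_k over them.
    return box(a, r) * guide + box(b, r)
```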
the processing result and details of the edge preservation layer are shown in fig. 2, the edge of the V-component image processed by the edge preservation layer is clear, and the noise level is low.
(4) The V-channel result obtained by the improved Retinex method and the V-channel result obtained by the edge-preservation layer are weighted and averaged to obtain the enhanced V component, with the formula:
V_amp = α·V_1 + (1 − α)·V_2
where V_1 represents the V component obtained by the improved Retinex (SSR) method, V_2 represents the V component obtained by the guided-filtering-based edge-preservation layer, and α is the weighting coefficient.
Step 3: dynamic range expansion is performed on the processed V channel to avoid over-enhancement caused by gray values concentrating heavily in the high-gray region; a piecewise function is adopted to expand the gray dynamic range. The idea of the expansion is: components greater than 0.5 after enhancement are left unprocessed, while components less than 0.5 are expanded by a quadratic function, with the corresponding expression as follows:
wherein V_amp represents the amplified V component obtained from the previous operation, and V'(x,y) represents the V component after dynamic range expansion.
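The piecewise expansion can be sketched as below. The text does not reproduce the exact quadratic, so the mapping 1.5·V − V² used here is an assumed stand-in: it fixes 0 → 0 and 0.5 → 0.5, is monotone increasing on [0, 0.5), and brightens the dark half.

```python
import numpy as np

def expand_dynamic_range(v_amp):
    """Piecewise expansion of the fused V component.

    Values >= 0.5 pass through unchanged; values < 0.5 are lifted by a
    quadratic. The quadratic 1.5*V - V**2 is an assumed form (the exact
    coefficients are not reproduced in the text).
    """
    return np.where(v_amp >= 0.5, v_amp, 1.5 * v_amp - v_amp ** 2)
```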
Step 4: according to the new enhanced V component, the saturation component S is adaptively adjusted, and the specific method is as follows:
(1) The difference VS between the enhanced V component and the original S component is calculated:
VS = V'mean − Smean

where VS represents the average difference between the two components, V'mean is the average of the enhanced V component, and Smean is the average of the S component. The expressions for calculating V'mean and Smean are as follows:

V'mean = (1/(M·N))·Σ_i i·V'(i)

Smean = (1/(M·N))·Σ_i i·S(i)

where i represents the gray level, V'(i) and S(i) are the numbers of pixels at gray level i in the enhanced V channel and the S channel respectively, and M and N represent the length and width of the processed image.
(2) A bilateral gamma transformation combined with the obtained difference between the two components is used to correct the saturation. To increase the value of the S component, the gamma parameter must lie in the range (0, 1); conversely, to reduce the value of the S component, the parameter must be greater than 1. This is accomplished using a piecewise function, whose expression is as follows:
S_1(x,y) = S(x,y)^γ

S_2(x,y) = 1 − (1 − S(x,y))^γ

S'(x,y) = α·S_1(x,y) + (1 − α)·S_2(x,y)
where γ is the parameter of the gamma correction, S(x,y) is the value of the original saturation component, α is an adaptive parameter computed from the difference VS, and S'(x,y) is the adaptively adjusted saturation. The S channel is thus adaptively adjusted using the piecewise function.
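The saturation adjustment of step 4 can be sketched as follows. The patent's exact formulas for γ and α are not reproduced in this text, so the VS-to-γ mapping and the fixed α = 0.5 blend below are assumptions for illustration only.

```python
import numpy as np

def adjust_saturation(s, v_enhanced, gamma=None):
    """Bilateral gamma correction of S, steered by VS = mean(V') - mean(S).

    The VS-to-gamma mapping and the fixed alpha = 0.5 blend are
    illustrative assumptions, not the patent's formulas.
    """
    vs = v_enhanced.mean() - s.mean()     # average difference of the components
    if gamma is None:
        gamma = 0.8 if vs > 0 else 1.25   # gamma < 1 raises S, gamma > 1 lowers it
    s1 = s ** gamma                        # first branch of the bilateral transform
    s2 = 1.0 - (1.0 - s) ** gamma          # complementary branch
    alpha = 0.5                            # assumed blending weight
    return alpha * s1 + (1.0 - alpha) * s2
```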
Step 5: the processed image is reconverted from the HSV color space to the RGB color space. The specific method comprises the following steps:
examples
The effect of the invention can be further illustrated by the following simulation experiments:
1. Experimental conditions:
hardware platform: AMD Ryzen 74800H with Radeon Graphics 2.90GHz processor
Software simulation platform: matlab R2018a
2. Simulation contrast experiment:
to verify the effectiveness of the present invention, a comparison test was performed with the BIMEF method, the Dong method, the LIME method, the MF method, the NPE method and the SRIE method. Fig. 3 and 4 are a bookcase scene effect map and a kitchen scene effect map, respectively. (a) is a low-illuminance image, (b) is a reference image of normal light, (c) is a result of a BIMEF method, (d) is a result of a Dong method, (e) is a result of a LIME method, (f) is a result of an MF method, (g) is a result of an NPE method, (h) is a result of an SRIE method, and (i) is a result of the method of the present invention. As shown in the figure, the result treated by the method of the invention looks more natural visually, avoids the conditions of excessive enhancement and insufficient enhancement, effectively improves the color saturation and contrast of the low-illumination image, and has clear visual effect. The low-illumination images processed by the BIMEF algorithm, the MF algorithm and the SRIE algorithm are still darker in color, the effect on image enhancement is weaker, and details such as books in a bookcase and electric rice cookers in a kitchen can not be recovered well in the darker environment. The results processed by the LIME algorithm look natural as a whole, but there are cases of overenhancement in part of the scene, such as in a bookcase. The Dong algorithm amplifies noise while performing image enhancement, and obvious 'dark spots' appear in the scene of the kitchen, which are inconsistent with the reference image. The image processed by the NPE algorithm has the color difference phenomenon, and compared with a window in a kitchen scene, the image color has deviation with the reference image color.
Objective evaluations were performed using the peak signal-to-noise ratio (PSNR), the structural similarity (SSIM), and the natural image quality evaluator (NIQE); the experimental results are shown in Table 1. PSNR is the most commonly used index for evaluating image quality and measures the noise resistance of a processed image: the higher the PSNR, the better. SSIM evaluates whether the structure of an image is distorted: the larger the SSIM, the higher the similarity between the processed image and the reference image. NIQE is a no-reference index evaluating whether an image accords with natural appearance: the lower the NIQE, the closer the image is to a natural image. As can be seen from Table 1, the PSNR values of the BIMEF and SRIE algorithms are low, indicating weak noise resistance, and the SSIM of the image processed by SRIE is the lowest, so its structure preservation is mediocre and a certain distortion exists. Compared with the other classical conventional image enhancement algorithms, the method herein achieves the best PSNR, SSIM, and NIQE. The comparison shows that the invention can enhance images clearly and naturally, retain detail information without changing the original structure of the image, and accord with the human visual model.
Table 1 quantitative comparison of different algorithms on test charts
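Of the three metrics above, PSNR can be computed directly; this sketch assumes images normalized to a peak value of 1.0 (SSIM and NIQE are more involved and omitted here).

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a processed image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    if mse == 0:
        return float('inf')               # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```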
Claims (8)
1. The method for enhancing the edge-preserving color low-illumination image based on Retinex is characterized by comprising the following specific steps of:
step 1: converting an RGB color space of an input low-illumination image representing three primary colors of red, green and blue into an HSV color space representing hue, saturation and brightness;
step 2: for the obtained original V component, respectively carrying out improved Retinex enhancement processing and edge preservation layer processing consisting of self-adaptive histogram equalization and guided filtering, and carrying out weighted average on the enhanced results of the two modes;
step 3: performing dynamic range expansion on the processed V channel;
step 4: according to the new enhanced V component, carrying out self-adaptive adjustment on the saturation component S;
step 5: the processed image is reconverted from the HSV color space to the RGB color space.
2. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 1, wherein the specific method for performing improved Retinex enhancement processing on the obtained original V component is:
based on the Retinex theory, the V-channel image is expressed as the product of the illumination component and the reflection component as:
I(x,y) = L(x,y)·R(x,y)
wherein, I (x, y) is expressed as an acquired V-channel original image; l (x, y) is expressed as illumination in the current environment, i.e., an illumination component; r (x, y) represents the reflectance inherent to the object, i.e., the reflection component;
the original image is the initial V-channel image, and the illumination component is obtained by bilateral filtering of the V channel: Gaussian kernels weight the neighborhood of each pixel of the input image, the neighborhood pixels are weighted and averaged, and by controlling the kernel parameters a smooth output image with clear edges is obtained; the specific formula of the bilateral filtering weight is as follows:
W(i,j,k,l) = W_s(i,j)·W_r(i,j)
wherein W_s(i,j) is the spatial proximity factor and W_r(i,j) is the gray-scale similarity factor; (i,j) denotes the pixel at the current position and (k,l) denotes the center pixel.
Logarithmic transformation is performed on the above formula, and the logarithm of the illumination component L(x,y) is subtracted from the logarithm of the input image I(x,y):
log(R(x,y))=log(I(x,y))-log(L(x,y))
the obtained result is converted back to the real number domain by the exponential operation to obtain the V-channel enhancement result obtained by the improved Retinex algorithm.
3. The Retinex-based edge preserving color low-illumination image enhancement method according to claim 1, wherein the specific method for performing edge preserving layer processing on the obtained original V component is as follows:
traversing each pixel point of the original image, calculating histogram transformation by using windows around the pixel points, mapping the pixel points, and obtaining a self-adaptive histogram equalization result;
a guided filtering operation is performed on the result enhanced by the adaptive histogram equalization method, with that result serving as the guide image and the window size set to 3×3, obtaining the V component processed by the edge-preservation layer;
the specific formula of the guided filtering is as follows:
O_i = a_k·G_i + b_k, ∀ i ∈ ω_k

wherein O_i is the output image, G_i is the guide image, and a_k and b_k are the linear coefficients of the local window ω_k;

the specific linear coefficients are:

a_k = ((1/|ω|)·Σ_{i∈ω_k} G_i·I_i − μ_k·Ī_k)/(σ_k² + ε)

b_k = Ī_k − a_k·μ_k

wherein ε is a regularization coefficient, μ_k and σ_k² denote the mean and variance of G_i in the local window ω_k, and Ī_k denotes the mean of the original image within ω_k.
4. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 1, wherein the dynamic range expansion is performed for the V-channel, specifically:
the piecewise function is adopted to expand the dynamic range of gray scale, specifically:
wherein V_amp represents the amplified V component obtained from the previous operation, and V'(x,y) represents the V component after dynamic range expansion.
5. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 1, wherein the saturation S component is adaptively adjusted, specifically by:
step 4.1: calculating the difference VS between the enhanced V component and the original S component;
step 4.2: and correcting the saturation by combining the obtained difference between the two components through bilateral gamma transformation.
6. The Retinex-based edge-preserving color low-luminance image enhancement method according to claim 5, wherein the specific method for calculating the difference VS between the enhanced V component and the original S component is:
the average value of the V component and the average value of the S component are calculated, and the specific method is as follows:
where i represents the gray level, V' (i) is the number of pixels of the enhanced V-channel at gray level i, M and N represent the length and width of the processed image, respectively, and S (i) represents the number of pixels of the S-channel at gray level i;
the difference between the mean of the V component and the mean of the S component is taken as the difference VS between the enhanced V flux and the original S flux.
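The histogram-based means of claim 6 can be sketched directly; the helper names are illustrative, and 256 gray levels (0–255) are assumed.

```python
import numpy as np

def component_mean_from_hist(channel, levels=256):
    """Mean gray level computed from the histogram, i.e.
    (1/(M*N)) * sum_i i * hist[i], as described in the claim.
    `channel` is an integer image with values in [0, levels-1]."""
    hist = np.bincount(channel.ravel(), minlength=levels)
    i = np.arange(levels)
    return (i * hist).sum() / channel.size

def vs_difference(v_enhanced, s_original):
    """Difference VS between the enhanced V component's mean
    and the original S component's mean."""
    return (component_mean_from_hist(v_enhanced)
            - component_mean_from_hist(s_original))
```

Note that the histogram-weighted mean is mathematically identical to the plain pixel mean; the histogram form simply mirrors the claim's notation.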
7. The Retinex-based edge-preserving color low-illumination image enhancement method according to claim 5, wherein the specific method for correcting the saturation by the bilateral gamma transformation, in combination with the obtained difference between the two components, is as follows:

S_1(x, y) = S(x, y)^γ
S_2(x, y) = 1 − (1 − S(x, y))^γ
S′(x, y) = α · S_1(x, y) + (1 − α) · S_2(x, y)

wherein S(x, y) is the original saturation component, γ is the gamma correction coefficient, and α is the weighting coefficient fusing the two gamma-transformed results.
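A direct reading of the three formulas of claim 7, with γ and α left as free parameters; claim 5 implies α is derived adaptively from the VS difference, so the fixed defaults here are illustrative assumptions.

```python
import numpy as np

def bilateral_gamma(S, gamma=0.8, alpha=0.5):
    """Bilateral gamma transformation on a saturation channel S in [0, 1]:
    S1 lifts low saturation, S2 tempers high saturation, and the output
    is their alpha-weighted blend, per the three claimed formulas."""
    S = np.asarray(S, dtype=np.float64)
    s1 = S ** gamma                    # S_1(x,y) = S^gamma
    s2 = 1.0 - (1.0 - S) ** gamma      # S_2(x,y) = 1 - (1 - S)^gamma
    return alpha * s1 + (1.0 - alpha) * s2
```

With γ = 1 both branches reduce to the identity, so the blend returns S unchanged; values of γ below 1 pull the two branches apart symmetrically about the diagonal.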
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310231193.0A CN116309146A (en) | 2023-03-13 | 2023-03-13 | Retinex-based edge-preserving color low-illumination image enhancement method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116309146A true CN116309146A (en) | 2023-06-23 |
Family
ID=86816301
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310231193.0A Pending CN116309146A (en) | 2023-03-13 | 2023-03-13 | Retinex-based edge-preserving color low-illumination image enhancement method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116309146A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117173642A (en) * | 2023-11-03 | 2023-12-05 | 昊金海建设管理有限公司 | Building construction video real-time monitoring and early warning method based on big data |
CN117173642B (en) * | 2023-11-03 | 2024-02-02 | 昊金海建设管理有限公司 | Building construction video real-time monitoring and early warning method based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||