CN113989145A - Image enhancement method, system and storage medium - Google Patents

Info

Publication number
CN113989145A
CN113989145A
Authority
CN
China
Prior art keywords
image
contrast
identified
enhancement
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111248324.3A
Other languages
Chinese (zh)
Inventor
徐超
朱雨哲
李正平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202111248324.3A priority Critical patent/CN113989145A/en
Publication of CN113989145A publication Critical patent/CN113989145A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/73 Deblurring; Sharpening
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10068 Endoscopic image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image enhancement method, system and storage medium, and relates to the field of image processing. The method comprises the following steps: inputting an image to be identified; acquiring an edge contour enhancement map of the image to be identified and a contrast-limited histogram equalization map of the image to be identified; calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram equalization map; calculating a weight fusion function from the contrast difference through an inverse trigonometric function; and obtaining the final picture through the weight fusion function. The invention can generate a high-quality fused image that retains most of the details of the original input images while keeping hue, saturation and brightness natural.

Description

Image enhancement method, system and storage medium
Technical Field
The invention relates to the technical field of image processing, in particular to an image enhancement method, an image enhancement system and a storage medium.
Background
Current endoscope image enhancement methods mainly comprise: methods based on histogram enhancement; methods based on machine learning; and methods based on homomorphic filtering. Each has its own advantages and disadvantages, and none can completely replace the others. Achieving a good enhancement effect on all endoscope images remains a very difficult problem for endoscope image enhancement algorithms.
1. Histogram enhancement based methods;
the gray histogram of the image represents the number of pixels with each gray level in the gray image, reflects the frequency of occurrence of each gray level in the image, and is one of the basic statistical characteristics of the image. The histogram equalization method is the most common method for enhancing image contrast because of its effectiveness and simplicity, and its basic idea is to determine its corresponding output gray value according to the gray probability distribution of the input image, and achieve the purpose of improving image contrast by expanding the dynamic range of the image. The histogram equalization is a method for automatically regulating and adjusting the contrast quality of an image by utilizing gray scale transformation, and the basic idea is to obtain a gray scale transformation function through a probability density function of gray scale, which is a histogram correction method based on an accumulative distribution function transformation method: firstly, a histogram of a given image to be processed is solved; then, transforming the statistical histogram of the original image by using an accumulative distribution function to obtain new image gray; and finally, carrying out approximation processing, replacing the old gray with the new gray, and simultaneously combining all gray histograms with equal or similar gray values together. The enhancement of the grayscale image by the histogram has two disadvantages: firstly, the gray level of the processed image is reduced to some extent, so that some details disappear; secondly, some images, such as histograms with peaks, etc., are easy to produce unnatural excessive enhancement of contrast after being processed. For example, some endoscopic images are too concentrated in gradation distribution, and when histogram equalization processing is performed on such images, the result is often too bright or too dark, and the purpose of enhancing the visual effect is not achieved. 
In addition, for the limited gray level of the image, the quantization error also often causes information loss, so that some sensitive edges disappear due to combination with adjacent pixel points, which is an unavoidable problem of histogram correction enhancement.
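As a minimal illustration of the CDF-based correction described above (a generic NumPy sketch, not code from the patent):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization via the cumulative distribution function (CDF).

    img: 2-D uint8 array. Returns a uint8 array whose gray levels are
    remapped so the output histogram is approximately uniform.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first non-zero CDF value
    n_pixels = img.size
    # Classic equalization transform: scale the CDF to the [0, 255] range.
    lut = np.round((cdf - cdf_min) / (n_pixels - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]                    # apply the lookup table per pixel
```

A low-contrast input whose gray values span only a narrow band is stretched to the full [0, 255] range, which is exactly the over-stretching behavior the paragraph above warns about for peaked histograms.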
2. A machine learning based approach;
In recent years, with the rise of artificial intelligence, image enhancement with deep learning has gradually become mainstream; such methods restore images well and run in real time. However, for endoscopic images it is difficult to collect enough paired images, with and without lesions and blood-vessel contours, to meet the training requirements of deep learning. The approach is theoretically feasible, but too difficult to implement in practice.
3. Homomorphic filtering based method
Homomorphic filtering is a special filtering technique that can compress the dynamic range of image gray levels and enhance contrast. The approach suits the human visual system, which responds to image brightness with a non-linear characteristic similar to a logarithm. An image can be represented as the product of an illumination component and a reflectance component. The illumination component is related to the light source, typically varies slowly, and determines the dynamic range a pixel can reach in the image. The reflectance component is determined by the characteristics of the object itself and represents sharp changes in gray level, such as object edges. After a Fourier transform, the illumination component is associated with the low-frequency components and the reflectance component with the high-frequency components. Because the Fourier transform of a product of two functions cannot be separated into the product of their transforms, the image cannot be Fourier-transformed directly: the logarithm of the image is taken first, the Fourier transform of the result is multiplied by the transfer function of the designed filter, the inverse Fourier transform is applied, and finally the result is exponentiated to obtain the processed image. The method is fast, but it has some defects: first, a halo phenomenon easily occurs in transition regions between strong light and shadow; second, it handles brighter images poorly; finally, its color retention is weak. It is therefore not suitable for endoscopic surgery.
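The log, FFT, filter, inverse-FFT, exp pipeline described above can be sketched as follows. The Gaussian high-emphasis transfer function and the parameters gamma_l, gamma_h and d0 are illustrative choices, not values from the patent.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, d0=30.0):
    """Homomorphic filtering: log -> FFT -> high-emphasis transfer
    function -> inverse FFT -> exp."""
    rows, cols = img.shape
    log_img = np.log1p(img.astype(np.float64))         # move to the log domain
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))   # centered spectrum
    # Gaussian high-emphasis filter: attenuates illumination (low frequencies)
    # toward gamma_l and boosts reflectance (high frequencies) toward gamma_h.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    h = (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * d0 ** 2))) + gamma_l
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real
    return np.expm1(filtered)                          # back to intensities
```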
How to prevent an image from becoming too bright or too dark during processing, while keeping its color tone natural, is a technical problem that urgently needs to be solved in this field.
Disclosure of Invention
In view of the respective characteristics of existing endoscope image processing algorithms, a local-contrast weight-fusion image enhancement method that fuses two different image processing algorithms is provided.
In order to achieve the purpose, the invention adopts the following technical scheme:
an image enhancement method comprising the steps of:
inputting an image to be identified;
acquiring an edge contour enhancement map of the image to be identified and a contrast-limited histogram of the image to be identified;
calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram of the image to be identified;
calculating a weight fusion function of the contrast difference through an inverse trigonometric function;
and obtaining a final picture through a weight fusion function.
Optionally, the specific steps of obtaining the edge contour enhancement map of the image to be identified are as follows:
carrying out n times of iterative mean filtering operation on the image to be identified, and outputting a mean-filtered low-frequency image after each time of mean filtering operation;
subtracting each low-frequency image from the image to be identified to obtain n high-frequency images;
selecting an optimal high-frequency image from the n high-frequency images, and multiplying the optimal high-frequency image by a sharpening factor to obtain an enhanced high-frequency image;
and adding the low-frequency image output after the nth mean filtering operation and the enhanced high-frequency image to obtain an edge contour enhancement image.
Optionally, the formula for obtaining the edge contour enhancement map of the image to be identified is as follows:
f_sharpen(x, y) = d_n(x, y) + λ*g_n(x, y);
wherein d_n(x, y) is the nth low-frequency image, g_n(x, y) is the nth high-frequency image, and λ is a sharpening factor.
Optionally, the specific steps of obtaining the edge contour enhancement map of the image to be identified are as follows:
acquiring a point in the image, denoted x(i, j); with (i, j) as the center, the window size is (2n+1)×(2n+1);
obtaining the local mean m_x(i, j) and local variance σ_x²(i, j) of the low-frequency image d_i(x, y) (i = 1, 2, 3, ..., n) as:
m_x(i, j) = (1/(2n+1)²) * Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} d_i(k, l);
σ_x²(i, j) = (1/(2n+1)²) * Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [d_i(k, l) - m_x(i, j)]²;
f_sharpen(x, y) = d_n(x, y) + (D/σ_x(i, j)) * g_n(x, y);
where D is a constant, d_i(x, y) (i = 1, 2, 3, ..., n) is the low-frequency image, and (k, l) is the position within the sliding window.
Optionally, a specific method for calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram of the image to be identified is as follows:
Let I_i (i = 1, 2) be the edge contour enhancement map and the contrast-limited histogram equalization map, respectively. For each pixel (x, y) in I_i, the local contrast image is defined as:
C_i(x, y) = max(N_i(x, y)) - min(N_i(x, y));
N_i(x, y) represents the 3×3 local neighborhood of I_i centered on (x, y); max(·) and min(·) represent the maximum and minimum pixel values of the local image, respectively;
Let the difference between C_1 and C_2 be E = C_2 - C_1, i.e., E(x, y) = C_2(x, y) - C_1(x, y) represents the local contrast difference of each pixel (x, y). C_1 and C_2 refer to the local contrast images of the first and second images, respectively.
Optionally, the weight fusion function of the contrast difference, calculated through an inverse trigonometric function, is as follows:
w(E) = 1/2 + (1/π)*arctan(E);
where E represents the local contrast difference of each pixel.
Optionally, the final picture obtained through the weight fusion function is calculated as:
R = w·I_2 + (1 - w)·I_1;
wherein w is the fusion weight function, and I_1, I_2 are the edge contour enhancement map and the contrast-limited histogram equalization map, respectively.
An image enhancement system comprises an image processing module, an image local contrast difference calculation module, a weight fusion function calculation module and an image fusion module. The image processing module is used for acquiring the edge contour enhancement map of the image to be identified and the contrast-limited histogram of the image to be identified; the image local contrast difference calculation module is used for calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram of the image to be identified; the weight fusion function calculation module is used for calculating the weight fusion function of the contrast difference through an inverse trigonometric function; and the image fusion module is used for obtaining the final picture through the weight fusion function.
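The four modules above can be sketched end to end as follows. The class and method names are illustrative; a single 3x3 mean-filter unsharp step stands in for the iterative edge enhancement, and a rank-based equalization stands in for the contrast-limited variant. Both are simplifications, not the patent's exact operators.

```python
import numpy as np

class ImageEnhancementSystem:
    """Sketch of the four modules: image processing, local contrast
    difference, weight fusion function, and image fusion."""

    def __init__(self, lam=1.5):
        self.lam = lam                      # sharpening factor (illustrative)

    @staticmethod
    def _shifted(img):
        # Nine shifted views of an edge-padded image, one per 3x3 offset.
        p = np.pad(img, 1, mode='edge')
        h, w = img.shape
        return np.stack([p[dy:dy + h, dx:dx + w]
                         for dy in range(3) for dx in range(3)])

    def edge_enhance(self, img):            # image processing module, part 1
        low = self._shifted(img).mean(axis=0)       # 3x3 mean filter
        return low + self.lam * (img - low)         # unsharp masking

    def equalize(self, img):                # image processing module, part 2
        ranks = np.argsort(np.argsort(img.ravel()))  # rank-based equalization
        return (ranks / (img.size - 1) * 255.0).reshape(img.shape)

    def local_contrast(self, img):          # local contrast difference module
        s = self._shifted(img)
        return s.max(axis=0) - s.min(axis=0)

    def fuse(self, img):                    # weight fusion + image fusion
        i1 = self.edge_enhance(img.astype(np.float64))
        i2 = self.equalize(img.astype(np.float64))
        e = self.local_contrast(i2) - self.local_contrast(i1)
        w = 0.5 + np.arctan(e) / np.pi      # inverse trigonometric weight
        return w * i2 + (1.0 - w) * i1
```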
A computer storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the image enhancement method.
Compared with the prior art, the image enhancement method, system and storage medium provided here innovatively introduce an inverse-trigonometric-function weight calculation method. Combined with the image fusion function, the fusion weight of each pixel is computed, and a high-quality fused image is finally generated that retains most of the details of the original input images while keeping hue, saturation and brightness natural. In addition, the invention innovatively uses multiple iterations over the low-frequency and high-frequency images to enhance details and contours without color distortion, even when the input picture is dark or bright overall.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the invention discloses a local-contrast fusion image enhancement algorithm based on two different image algorithms, comprising the following steps:
inputting an image to be identified;
acquiring an edge contour enhancement map of the image to be identified and a contrast-limited histogram of the image to be identified;
calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram of the image to be identified;
calculating a weight fusion function of the contrast difference through an inverse trigonometric function;
and obtaining the final picture through the weight fusion function.
In the present embodiment, the basic idea is as shown in FIG. 1. Taking the spatial continuity of pixel values into account, a sliding window of size m×m (m odd) is adopted; the image being operated on is padded with (m-1)/2 rings of boundary pixels, and the whole image is traversed by moving the window one pixel at a time. Within this window, the method innovatively computes details from the low-frequency image and contours from the high-frequency image over multiple iterations, together with a local-contrast-difference weighting algorithm over the two images produced by the different algorithms. Finally, an inverse trigonometric function of the resulting weights is used to fuse them into the final image.
It is to be understood that a picture is always made up of a low-frequency part and a high-frequency part: the low-frequency part can be obtained by low-pass filtering the image, and the high-frequency part by subtracting the low-frequency part from the original image.
It is an object of embodiments of the invention to enhance the high frequency part representing the contour, i.e. to multiply the high frequency part by a coefficient and then recombine to obtain an enhanced image.
Here, the use of an improved sharpening factor λ can enhance the edge and pit patterns of mucosal structures, tissue and vascular features on each organ. Edges and other high-frequency components can be enhanced by subtracting an unsharp, or blurred, version of the image from the original grayscale image.
The original image f(x, y) is mean-filtered with a 15×15 window to obtain the first low-frequency image d_1(x, y). Subtracting d_1(x, y) from the original image f(x, y) yields the first-order high-frequency component g_1(x, y), containing edges and other high frequencies.
g_1(x, y) = f(x, y) - d_1(x, y)    (1)
The first low-frequency image d_1(x, y) is mean-filtered with a 15×15 window to obtain the second low-frequency image d_2(x, y). Subtracting d_2(x, y) from the original image f(x, y) yields the second-order high-frequency component g_2(x, y).
g_2(x, y) = f(x, y) - d_2(x, y)    (2)
After n iterations, the (n-1)th low-frequency image d_{n-1}(x, y) is mean-filtered with a 15×15 window to obtain the nth low-frequency image d_n(x, y). Subtracting d_n(x, y) from the original image f(x, y) yields the nth-order high-frequency component g_n(x, y).
g_n(x, y) = f(x, y) - d_n(x, y)    (3)
After n iterations, the nth low-frequency image is more blurred than the first, and the edges in the nth high-frequency image are cleaner and clearer than in the first. A sharpening factor λ is applied to the nth-order high-frequency image g_n(x, y) to enhance these high-frequency components. Finally, it is added to the nth-order low-frequency image d_n(x, y) to generate the sharpened image f_sharpen(x, y) using the following formula.
f_sharpen(x, y) = d_n(x, y) + λ*g_n(x, y)    (4)
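Equations (1)-(4) can be sketched as follows. Only the 15×15 window comes from the text; the iteration count n and the sharpening factor lambda are illustrative values.

```python
import numpy as np

def mean_filter(img, size=15):
    """Box (mean) filter with edge padding, matching the 15x15 window above."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def iterative_sharpen(img, n=3, lam=1.5, size=15):
    """Equations (1)-(4): n rounds of mean filtering produce d_n, the
    high-frequency image g_n = f - d_n is scaled by lambda and re-added."""
    f = img.astype(np.float64)
    d = f
    for _ in range(n):
        d = mean_filter(d, size)       # d_i = mean filter of d_{i-1}
    g = f - d                          # g_n = f - d_n, equation (3)
    return d + lam * g                 # f_sharpen = d_n + lambda*g_n, eq. (4)
```

With lam = 1 the decomposition reassembles the original image exactly, which is a convenient sanity check on the split into low- and high-frequency parts.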
In general λ is larger than 1, so that the high-frequency part is amplified. There are two ways to take the value of λ. One is to take a constant directly (λ > 1); in this case all high-frequency parts of the image are amplified equally, and some high-frequency parts may be over-enhanced.
Further, the other method is to use a different gain at each position. Suppose a point in the image is denoted x(i, j); with (i, j) as the center, the window size is (2n+1)×(2n+1). The local mean m_x(i, j) and local variance σ_x²(i, j) of the low-frequency image d_i(x, y) (i = 1, 2, 3, ..., n) are:
m_x(i, j) = (1/(2n+1)²) * Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} d_i(k, l)    (5)
σ_x²(i, j) = (1/(2n+1)²) * Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [d_i(k, l) - m_x(i, j)]²    (6)
Substituting (5) and (6) into (4) gives:
f_sharpen(x, y) = d_n(x, y) + (D/σ_x(i, j)) * g_n(x, y)    (7)
where D is a constant and d_i(x, y) (i = 1, 2, 3, ..., n) is the low-frequency image. The coefficient λ = D/σ_x(i, j) is thus spatially adaptive and inversely proportional to the local standard deviation. At edges of the image or other regions of severe change the local standard deviation is large, so λ is small and no ringing effect is produced. In smooth regions, however, the local standard deviation is small, so λ is large, which amplifies noise.
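A sketch of the spatially adaptive gain of equation (7); the window size, the constant D (here called gain) and the small eps regularizer are illustrative choices.

```python
import numpy as np

def adaptive_lambda(d, window=7, gain=20.0, eps=1e-6):
    """Spatially adaptive sharpening factor lambda(i, j) = gain / sigma(i, j),
    inversely proportional to the local standard deviation of the
    low-frequency image d, per equations (5)-(7)."""
    pad = window // 2
    padded = np.pad(d.astype(np.float64), pad, mode='edge')
    h, w = d.shape
    mean = np.zeros((h, w))
    sq = np.zeros((h, w))
    for dy in range(window):
        for dx in range(window):
            patch = padded[dy:dy + h, dx:dx + w]
            mean += patch
            sq += patch ** 2
    k = window * window
    mean /= k                                    # local mean, equation (5)
    var = np.maximum(sq / k - mean ** 2, 0.0)    # local variance, equation (6)
    sigma = np.sqrt(var)
    return gain / (sigma + eps)    # large sigma (edges) -> small lambda
```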
After contrast enhancement, the edge contours of lesions and blood vessels are more obvious, but noise is present and the overall appearance is unpleasant, so the image needs to be denoised. The contrast-limited histogram equalization is applied to the original image rather than to the edge-enhanced image, because applying it directly to an image whose edge contours have already been enhanced reduces the contrast and sharpness of the image.
In order to further optimize the above technical solution, the image fusion based on local contrast is specifically as follows:
let IiAnd i is 1, and 2 is an edge contour enhancement map and a limiting contrast histogram equalization map respectively. For each pixel (x, y) at IiThe local contrast image is defined as:
Ci(x,y)=max(Ni(x,y))-min(Ni(x,y)) (8)
Ni(x, y) represents I centered on (x, y)i3X3 local pixels; max (.) and min (.) represent the maximum and minimum pixel values of the local image, respectively. Therefore, the local contrast image is composed of pixel differences for each 3X3 local image.
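Equation (8) can be computed for a whole image at once from shifted views of an edge-padded array; this is a generic sketch, not code from the patent.

```python
import numpy as np

def local_contrast(img):
    """Equation (8): C(x, y) = max(N(x, y)) - min(N(x, y)) over the 3x3
    neighbourhood N centred on each pixel (edge-padded at the borders)."""
    padded = np.pad(img.astype(np.float64), 1, mode='edge')
    h, w = img.shape
    # One shifted view per 3x3 offset; stacking them gives, for each pixel,
    # all nine neighbourhood values along axis 0.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return stack.max(axis=0) - stack.min(axis=0)
```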
Based on the above local contrast images, the following image fusion method is proposed. Let the difference between C_1 and C_2 be E = C_2 - C_1, i.e., E(x, y) = C_2(x, y) - C_1(x, y) represents the local contrast difference of each pixel (x, y). The fusion weight function is then defined through the arctangent function:
w(E) = 1/2 + (1/π)*arctan(E)    (9)
The arctangent function w is used as the fusion weight function because, compared with a linear function, it better preserves the good properties of both images. The fused image is obtained by:
R = w·I_2 + (1 - w)·I_1    (10)
where w is the normalized fusion weight function.
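Assuming the normalization w = 1/2 + (1/π)*arctan(E), which maps E into (0, 1), equations (9) and (10) can be sketched as:

```python
import numpy as np

def local_contrast(img):
    """Equation (8): max - min over the 3x3 neighbourhood of each pixel."""
    padded = np.pad(img.astype(np.float64), 1, mode='edge')
    h, w = img.shape
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return stack.max(axis=0) - stack.min(axis=0)

def fuse(i1, i2):
    """Equations (9)-(10): the weight favours whichever image has higher
    local contrast at each pixel. The exact normalisation of the arctangent
    is an assumed reconstruction, not taken verbatim from the patent."""
    e = local_contrast(i2) - local_contrast(i1)   # E(x, y) = C2 - C1
    w = 0.5 + np.arctan(e) / np.pi                # fusion weight, eq. (9)
    return w * i2 + (1.0 - w) * i1                # fused image R, eq. (10)
```

When the two inputs are identical, E is zero everywhere, w is 1/2, and the fusion returns the input unchanged, as expected of a normalized blend.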
The embodiment of the invention achieves an obvious improvement with respect to noise, halo and haze.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An image enhancement method, characterized by comprising the steps of:
inputting an image to be recognized;
acquiring an edge contour enhancement map of the image to be identified and a contrast-limited histogram of the image to be identified;
calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram;
calculating a weight fusion function of the contrast difference through an inverse trigonometric function;
and obtaining a final picture through a weight fusion function.
2. The image enhancement method according to claim 1, wherein the specific steps of obtaining the edge contour enhancement map of the image to be identified are as follows:
carrying out n times of iterative mean filtering operation on the image to be identified, and outputting a mean-filtered low-frequency image after each time of mean filtering operation;
subtracting each low-frequency image from the image to be identified to obtain n high-frequency images;
selecting an optimal high-frequency image from the n high-frequency images, and multiplying the optimal high-frequency image by a sharpening factor to obtain an enhanced high-frequency image;
and adding the low-frequency image output after the nth mean filtering operation and the enhanced high-frequency image to obtain an edge contour enhancement image.
3. The image enhancement method according to claim 1, wherein the specific steps of obtaining the edge contour enhancement map of the image to be identified are as follows:
f_sharpen(x, y) = d_n(x, y) + λ*g_n(x, y);
wherein d_n(x, y) is the nth low-frequency image, g_n(x, y) is the nth high-frequency image, and λ is a sharpening factor.
4. The image enhancement method according to claim 1, wherein the specific steps of obtaining the edge contour enhancement map of the image to be identified are as follows:
acquiring a point in the image, denoted x(i, j); with (i, j) as the center, the window size is (2n+1)×(2n+1);
computing the local mean m_x(i, j) and local variance σ_x²(i, j) of the low-frequency image d_i(x, y) (i = 1, 2, 3, ..., n) as:
m_x(i, j) = (1/(2n+1)²) * Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} d_i(k, l);
σ_x²(i, j) = (1/(2n+1)²) * Σ_{k=i-n}^{i+n} Σ_{l=j-n}^{j+n} [d_i(k, l) - m_x(i, j)]²;
f_sharpen(x, y) = d_n(x, y) + (D/σ_x(i, j)) * g_n(x, y);
where D is a constant, d_i(x, y) (i = 1, 2, 3, ..., n) is the low-frequency image, and (k, l) is the position within the sliding window.
5. The image enhancement method according to claim 1, wherein the specific method for calculating the local contrast difference between the edge contour enhancement map and the limited contrast histogram of the image to be identified is as follows:
Let I_i (i = 1, 2) be the edge contour enhancement map and the contrast-limited histogram equalization map, respectively. For each pixel (x, y) in I_i, the local contrast image is defined as:
C_i(x, y) = max(N_i(x, y)) - min(N_i(x, y));
N_i(x, y) represents the 3×3 local neighborhood of I_i centered on (x, y); max(·) and min(·) represent the maximum and minimum pixel values of the local image, respectively;
let the difference between C_1 and C_2 be E = C_2 - C_1, i.e., E(x, y) = C_2(x, y) - C_1(x, y) represents the local contrast difference of each pixel (x, y), where C_1 and C_2 refer to the local contrast images of the first and second images, respectively.
6. An image enhancement method according to claim 1, wherein the weight fusion function formula for calculating the contrast difference by the inverse trigonometric function is as follows:
w(E) = 1/2 + (1/π)*arctan(E);
where E represents the local contrast difference of each pixel.
7. An image enhancement method according to claim 1 or 6, wherein the final picture calculation formula obtained by the weight fusion function is as follows:
R = w·I_2 + (1 - w)·I_1;
wherein w is the fusion weight function, and I_1, I_2 are the edge contour enhancement map and the contrast-limited histogram equalization map, respectively.
8. An image enhancement system, characterized by comprising an image processing module, an image local contrast difference calculation module, a weight fusion function calculation module and an image fusion module; the image processing module is used for acquiring an edge contour enhancement map of the image to be identified and a contrast-limited histogram of the image to be identified; the image local contrast difference calculation module is used for calculating the local contrast difference between the edge contour enhancement map and the contrast-limited histogram of the image to be identified; the weight fusion function calculation module is used for calculating a weight fusion function of the contrast difference through an inverse trigonometric function; and the image fusion module is used for obtaining a final picture through the weight fusion function.
9. A computer storage medium, characterized in that the computer storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the image enhancement method according to any one of claims 1-7.
CN202111248324.3A 2021-10-26 2021-10-26 Image enhancement method, system and storage medium Pending CN113989145A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111248324.3A CN113989145A (en) 2021-10-26 2021-10-26 Image enhancement method, system and storage medium


Publications (1)

Publication Number Publication Date
CN113989145A true CN113989145A (en) 2022-01-28

Family

ID=79741667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111248324.3A Pending CN113989145A (en) 2021-10-26 2021-10-26 Image enhancement method, system and storage medium

Country Status (1)

Country Link
CN (1) CN113989145A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114565597A (en) * 2022-03-04 2022-05-31 昆明理工大学 Nighttime road pedestrian detection method based on YOLOv3-tiny-DB and transfer learning
CN114565597B (en) * 2022-03-04 2024-05-14 昆明理工大学 Night road pedestrian detection method based on YOLO v3-tiny-DB and transfer learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination