CN112419210B - Underwater image enhancement method based on color correction and three-interval histogram stretching - Google Patents

Underwater image enhancement method based on color correction and three-interval histogram stretching

Info

Publication number
CN112419210B
CN112419210B (application number CN202011444565.0A)
Authority
CN
China
Prior art keywords
image
channel
value
pixel
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011444565.0A
Other languages
Chinese (zh)
Other versions
CN112419210A (en)
Inventor
张维石
周景春
庞磊
Current Assignee
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202011444565.0A priority Critical patent/CN112419210B/en
Publication of CN112419210A publication Critical patent/CN112419210A/en
Application granted granted Critical
Publication of CN112419210B publication Critical patent/CN112419210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/40 — Image enhancement or restoration using histogram techniques
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/00 — Image analysis
    • G06T7/90 — Determination of colour characteristics
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20212 — Image combination
    • G06T2207/20221 — Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application provides an underwater image enhancement method based on color correction and three-interval histogram stretching. The method comprises the following steps: color correction processing is performed on the source image; a three-interval histogram equalization method is adopted to process the R, G, B channels of the source image separately, stretching the pixel values of each single channel, performing threshold selection to separate three subintervals, and completing the three-interval equalization operation to obtain a three-interval histogram-equalized enhanced image; then, the image after subinterval linear transformation and the three-interval histogram-equalized image are fused by linear weighting to reconstruct the final defogged image. By dividing the single-channel histogram of the source image more accurately with a multi-interval histogram equalization method, performing histogram equalization on each interval, and fusing the result with the color-corrected image, the application better reveals details in the dark regions of the source image, reduces noise, and achieves defogging of the image.

Description

Underwater image enhancement method based on color correction and three-interval histogram stretching
Technical Field
The application relates to the technical field of image processing, in particular to an underwater image enhancement method based on color correction and three-interval histogram stretching.
Background
The development and utilization of marine resources relies on underwater images, which are typically captured by underwater cameras and underwater robots. Owing to the absorption and scattering of light, underwater images suffer from problems such as low contrast and color cast, which cause image degradation and make the images difficult to analyze. Common factors affecting the attenuation rate are the water itself, its salinity, and the type and amount of suspended particles in the water. Severe degradation makes restoring image information difficult, and finding an effective solution that restores underwater image color and contrast is a very challenging task.
Underwater enhancement techniques have been developed to address these issues. Underwater enhancement is a simple and fast approach, yet it can greatly improve the quality of underwater images: the red, green and blue channel intensity values are processed according to specific rules to improve image quality.
Disclosure of Invention
In view of the above technical problems, the application provides an underwater image enhancement method based on color correction and three-interval histogram stretching.
The application adopts the following technical means. The underwater image enhancement method based on color correction and three-interval histogram stretching comprises the following steps:
step S01: acquiring an original RGB dense fog image; performing color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain a color corrected enhanced image;
step S02: decomposing the original RGB foggy image into R, G, B channel images, and processing the pixel values of the R, G, B channel images in the following steps;
step S03: respectively stretching pixel values of the R, G, B channel image to be in a range of 0-255 to obtain a stretched single-channel image;
step S04: respectively calculating the average pixel value of each R, G, B channel image; for each pixel, subtracting the single-channel average pixel value from the pixel value to obtain an error, and squaring it to obtain the squared error; selecting the pixel point with the maximum squared error, i.e. the point whose pixel value differs most from the average, and, taking that point as the centre, moving three times the standard deviation of the pixel values to either side of it to determine the two thresholds required for three-interval division, thereby dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of each R, G, B channel to obtain an image after single-channel equalization;
step S06: carrying out linear weighted fusion of the R, G, B channel images and the equalized R, G, B channel images to obtain the final defogged image.
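As a rough illustration of how steps S01–S06 fit together, the following Python sketch runs a simplified version of the pipeline on an RGB image. The percentile values, the plain per-channel equalization standing in for the three-interval method, and the fixed fusion weight w are illustrative assumptions, not the patent's exact formulas.

```python
import numpy as np

def enhance(img, w=0.5):
    """End-to-end sketch of steps S01-S06 on an RGB uint8 image.

    Hypothetical simplifications: color correction is reduced to a
    1%/99% percentile clip-and-stretch, the three-interval equalization
    is replaced by plain per-channel equalization, and the final fusion
    is a fixed linear blend with weight w.
    """
    img = img.astype(np.float64)
    corrected = np.empty_like(img)
    equalized = np.empty_like(img)
    for c in range(3):                              # S02: per-channel
        ch = img[..., c]
        lo, hi = np.percentile(ch, [1, 99])         # S01: clip + stretch
        corrected[..., c] = np.clip((ch - lo) / max(hi - lo, 1e-6), 0, 1) * 255
        # S03-S05 stand-in: histogram equalization via the empirical CDF
        hist, bins = np.histogram(ch, bins=256, range=(0, 256))
        cdf = hist.cumsum() / ch.size
        equalized[..., c] = np.interp(ch, bins[:-1], cdf * 255)
    fused = w * corrected + (1 - w) * equalized     # S06: linear fusion
    return np.clip(fused, 0, 255).astype(np.uint8)
```

The blend weight w trades color fidelity (corrected image) against contrast (equalized image); the patent instead derives the weights per pixel from the fusion weight maps of step S07.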
Further, in the color correction method based on subinterval linear transformation, the total pixel value of each R, G, B channel is calculated as follows:
Sum_c = Σ_{i=1..M} Σ_{j=1..N} I_c(i,j),  c ∈ {R, G, B}
wherein M and N represent the number of rows and columns, respectively, of the input image, and I_R(i,j), I_G(i,j), I_B(i,j) represent the pixel values of the R, G, B three-channel image at position (i,j), respectively;
meanwhile, the ratio of each of the red, green and blue channels is calculated as:
P_c = Sum_c / Max(Sum_R, Sum_G, Sum_B)
wherein Max represents the function taking the maximum value, through which the maximum of the three channel totals is obtained; P_R, P_G, P_B respectively represent the ratio of each channel's total pixel value to the maximum channel total. To divide each channel into three intervals, two cut-off ratios s_c^1 and s_c^2 are defined in terms of the constants α_1, α_2 and the ratio P_c:
wherein c represents any one of the channels R, G, B; α_1 and α_2 are both constants between 0 and 1; and P_c represents the ratio of the channel's total pixel value to the maximum channel total. Then, the cut-off thresholds T_c^1 and T_c^2 corresponding to the two cut-off ratios s_c^1 and s_c^2 are determined by formulas (9) and (10): each threshold is obtained by evaluating the lower quantile function F of the channel's pixel values I_c(x) at the corresponding cut-off ratio s_c^1 or s_c^2;
in order to effectively suppress shadows and highlights, the following clipping is performed for each color channel:
I_c'(x) = T_c^1 if I_c(x) < T_c^1;  I_c'(x) = T_c^2 if I_c(x) > T_c^2;  I_c'(x) = I_c(x) otherwise
wherein I_c'(x) represents the pixel value at a point of any one of the channels R, G, B after processing, T_c^1 and T_c^2 represent the cut-off thresholds, and I_c(x) represents the pixel value at a point of any one of the channels R, G, B;
finally, the following linear operation is performed on the pixel values of the intermediate region:
I_c''(x) = (I_c'(x) − T_c^1) / (T_c^2 − T_c^1) × 255
wherein I_c''(x) represents the color-corrected image, and I_c'(x) represents the pixel value after clipping at a point of any one of the channels R, G, B.
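The subinterval linear transformation described above (channel totals, cut-off ratios, quantile thresholds, clipping and linear re-stretch) can be sketched in Python as follows. Since the exact forms of formulas (7)–(10) are not reproduced in this text, the mapping from α_1, α_2 to the quantile levels is an assumption, and the α values themselves are illustrative.

```python
import numpy as np

def color_correct(img, alpha1=0.005, alpha2=0.005):
    """Quantile-style clipping and linear re-stretch per channel.

    alpha1/alpha2 stand in for the patent's alpha_1, alpha_2 in (0, 1);
    the values used here are illustrative assumptions.
    """
    out = np.empty_like(img, dtype=np.float64)
    sums = img.reshape(-1, 3).sum(axis=0).astype(np.float64)   # Sum_c
    for c in range(3):
        p_c = sums[c] / sums.max()            # channel ratio P_c
        s1, s2 = alpha1 * p_c, alpha2 * p_c   # assumed cut-off ratios
        ch = img[..., c].astype(np.float64)
        # cut-off thresholds from the lower quantile function F
        t1 = np.quantile(ch, s1)
        t2 = np.quantile(ch, 1.0 - s2)
        ch = np.clip(ch, t1, t2)              # suppress shadows/highlights
        out[..., c] = (ch - t1) / max(t2 - t1, 1e-6) * 255.0
    return out.astype(np.uint8)
```

After the clip, the surviving intermediate range is stretched back to [0, 255], which is what evens out the per-channel histogram distributions.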
Further, the linear stretching operation is performed on the single-channel images in step S03, ensuring that every gray value lies within [0, 255]; the expression of the linear stretch is therefore defined as:
P_c(i,j) = (I_c(i,j) − min_c) / (max_c − min_c) × 255,  c ∈ {R, G, B}
wherein P_c(i,j) represents the corrected gray value of any one of the channels R, G, B at position (i,j); I_c(i,j) represents the gray value of that channel at position (i,j); min_c represents the minimum pixel value of the channel; and max_c represents the maximum pixel value of the channel.
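A minimal sketch of the linear stretch formula for one channel:

```python
import numpy as np

def stretch_channel(ch):
    """Linear stretch of one channel to [0, 255]: (I - min)/(max - min) * 255."""
    ch = ch.astype(np.float64)
    lo, hi = ch.min(), ch.max()
    if hi == lo:                      # flat channel: nothing to stretch
        return np.zeros_like(ch, dtype=np.uint8)
    return ((ch - lo) / (hi - lo) * 255.0).astype(np.uint8)
```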
Further, in the threshold selection and three-interval division of step S04, first, the pixel average of each single channel is calculated as follows:
Mean_c = (1 / (M × N)) Σ_{i=1..M} Σ_{j=1..N} I_c(i,j),  c ∈ {R, G, B}
wherein Mean_R, Mean_G, Mean_B are the average pixel values of the three channels R, G, B; M and N are the numbers of rows and columns of the input image; I_R(i,j), I_G(i,j), I_B(i,j) respectively represent the pixel values of the R, G, B three-channel image at position (i,j); and M × N represents the total pixel count of a single channel;
the error between the pixel value at any point of one of the three channels and the average pixel value of that channel is then calculated and squared to obtain the squared error, with the calculation formula:
e_c(i,j) = I_c(i,j) − Mean_c,  e_c^2(i,j) = (I_c(i,j) − Mean_c)^2
wherein e_c(i,j) represents the error between the pixel value at any point of one of the three channels R, G, B and the average pixel value of that channel, I_c(i,j) represents the pixel value of the channel at position (i,j), Mean_c represents the average pixel value of the channel, and e_c^2(i,j) represents the squared error;
the point with the maximum squared error is selected as the centre point through the Max function, and, according to the 3σ criterion, three times the standard deviation of the pixel values is subtracted from and added to it to obtain the left and right thresholds, completing the three-interval division:
t_1 = Maxm_c − 3σ (20)
t_2 = Maxm_c + 3σ (21)
wherein Max_c represents the maximum squared-error value of one of the three channels R, G, B; Maxm_c represents the pixel value at the point of that maximum squared error; t_1 and t_2 represent the two selected thresholds; and σ represents the standard deviation of the pixel values of the channel.
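The threshold selection of step S04 (channel mean, squared error, max-error centre, ±3σ) can be sketched as follows; the clamping of t_1 and t_2 to [0, 255] is an added safeguard, not something stated in the text.

```python
import numpy as np

def three_interval_thresholds(ch):
    """Thresholds t1, t2 from the max-squared-error pixel +/- 3*sigma.

    Following formulas (18)-(21): centre on the pixel value whose squared
    deviation from the channel mean is largest, then move three standard
    deviations to either side.
    """
    ch = ch.astype(np.float64)
    mean = ch.mean()
    err2 = (ch - mean) ** 2                       # squared error per pixel
    centre = ch.flat[np.argmax(err2)]             # pixel value at max error
    sigma = ch.std()
    t1 = max(0.0, centre - 3.0 * sigma)           # clamp is an added safeguard
    t2 = min(255.0, centre + 3.0 * sigma)
    return t1, t2
```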
Further, the histogram equalization processing procedure for the subintervals of each channel in step S05 is as follows:
firstly, dividing gray scale ranges of three subintervals according to a threshold value:
[0, 255] = [0, t_1] ∪ (t_1, t_2] ∪ (t_2, 255] (22)
the three sub-images are then defined by these intervals: X_1 = {I(i,j) | 0 ≤ I(i,j) ≤ t_1}, X_2 = {I(i,j) | t_1 < I(i,j) ≤ t_2}, X_3 = {I(i,j) | t_2 < I(i,j) ≤ 255}, where I represents the original image, I(i,j) represents the gray value of the pixel located at the i-th row and j-th column of the image, and X_1, X_2, X_3 represent the first, second and last sub-images, respectively;
next, the frequency of each pixel value of the whole image is calculated, the frequencies of the three sub-histograms are calculated to obtain the normalized pixel frequency of each sub-histogram, and finally the cumulative normalized frequencies of the three sub-histograms are calculated;
letting x denote a gray value of the image, three value ranges of x are obtained from the interval division: when x ∈ X_1, the cumulative gray-level frequency of the first sub-image's histogram from 0 to x is calculated and denoted CDF_1(x); when x ∈ X_2, the cumulative gray-level frequency of the second sub-image's histogram from t_1 to x is calculated and denoted CDF_2(x); when x ∈ X_3, the cumulative gray-level frequency of the last sub-image's histogram from t_2 to x is calculated and denoted CDF_3(x);
Then, according to the histogram normalized pixel frequency of each sub-image, calculating the transformation gray value after the histogram equalization of the three sub-images; referring to the gray level transformation function of the traditional histogram equalization, obtaining a function of sub-histogram equalization; the gray scale transformation function of conventional histogram equalization is described as:
f(x) = a + (b − a)·CDF(x) (26)
wherein a represents the minimum value of the output gray value, b represents the maximum value of the output gray value, x represents the input gray value, and CDF (x) represents the cumulative density function with respect to x;
the sub-histogram equalization formula is described as follows:
y(x) = t_1 · CDF_1(x), x ∈ X_1
y(x) = t_1 + (t_2 − t_1) · CDF_2(x), x ∈ X_2
y(x) = t_2 + (255 − t_2) · CDF_3(x), x ∈ X_3
wherein y represents the gray-value transformation function of the three-interval equalization, according to which the processed result is obtained; t_1 and t_2 respectively represent the two thresholds dividing the sub-histograms; x represents the input gray value; and CDF_1(x), CDF_2(x), CDF_3(x) represent the cumulative gray-level frequencies of the first, second and third sub-histograms, respectively.
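A sketch of the three-interval equalization: each subinterval is equalized independently by applying the gray-level transformation f(x) = a + (b − a)·CDF(x) from formula (26) over that subinterval's own range. The integer rounding of the thresholds is an implementation choice.

```python
import numpy as np

def three_interval_equalize(ch, t1, t2):
    """Equalize [0,t1], (t1,t2], (t2,255] independently on one channel."""
    ch = ch.astype(np.int64)
    out = np.zeros_like(ch, dtype=np.float64)
    bounds = [(0, int(t1)), (int(t1) + 1, int(t2)), (int(t2) + 1, 255)]
    for a, b in bounds:
        mask = (ch >= a) & (ch <= b)
        if not mask.any() or b <= a:          # empty or degenerate interval
            out[mask] = ch[mask]
            continue
        hist, _ = np.histogram(ch[mask], bins=b - a + 1, range=(a, b + 1))
        cdf = hist.cumsum() / mask.sum()      # normalized cumulative frequency
        out[mask] = a + (b - a) * cdf[ch[mask] - a]   # f(x) = a + (b-a)CDF(x)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because each subinterval keeps its own output range, the equalization stretches contrast locally without letting bright regions swallow the dark ones.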
Still further, the multi-scale fusion comprises the steps of:
step S071: defining an aggregation weight map and acquiring fusion of an input image and the aggregation weight map; the aggregate weight map is determined from three measurement weights, including: contrast weight, saturation weight, and exposure weight map;
the contrast weight is given by a contrast weight map: the global contrast weight W_La is estimated by applying the Laplacian operator to the gray-scale version of the input image and taking the absolute value, which preserves the edge and detail texture information of the image;
W_La = |La ∗ F| (29)
where La represents the Laplacian operator, ∗ represents convolution, and F represents the input image;
the saturation weight takes, at each pixel, the standard deviation of the values of the R, G, B channels in the RGB color space as the saturation weight:
W_sa(x,y) = sqrt( (1/3) [ (R(x,y) − m(x,y))^2 + (G(x,y) − m(x,y))^2 + (B(x,y) − m(x,y))^2 ] )
wherein R(x,y), G(x,y), B(x,y) respectively represent the R, G, B channels of the input image, m(x,y) represents the mean of the R, G, B channels at position (x,y), and W_sa(x,y) represents the saturation weight at position (x,y);
the exposure weight map needs to ensure that pixel values approach 0.5, i.e. the midpoint of the normalized range; the exposure weight of each pixel is therefore represented by a Gaussian curve with expected value 0.5:
W_E(x,y) = exp( −(I(x,y) − 0.5)^2 / (2σ^2) )
wherein I(x,y) is the normalized intensity at position (x,y) and σ controls the width of the Gaussian;
the aggregation weight map is obtained by multiplying the three characteristic weight maps in the multi-scale fusion: the contrast weight map W_La, the saturation weight map W_sa and the exposure weight map W_E are multiplied point by point for each input image:
W_z = W_La,z · W_sa,z · W_E,z
wherein z denotes the z-th input image and W_z represents the resulting two-dimensional weight map; to ensure consistency between the images, the normalized weight map \bar{W}_z is introduced:
\bar{W}_z = W_z / Σ_k W_k
wherein \bar{W}_z represents the aggregate weight map;
step S072: the input images and the aggregate weight maps are fused; each input image I_z is decomposed by a Laplacian pyramid, defined as L^l{I_z}, and each aggregate weight map \bar{W}_z is decomposed by a Gaussian pyramid, defined as G^l{\bar{W}_z}, wherein the superscript l denotes the l-th pyramid level; the Laplacian pyramid L^l{I_z} and the Gaussian pyramid G^l{\bar{W}_z} are fused pixel by pixel according to the following formula:
L^l{F}(x,y) = Σ_z G^l{\bar{W}_z}(x,y) · L^l{I_z}(x,y)
where L^l{F} represents the Laplacian pyramid of the fused image, which is reconstructed to obtain the final fused image.
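The three weight maps and their aggregation can be sketched as below. To stay short, the sketch fuses at a single scale instead of blending through Laplacian/Gaussian pyramids as the patent does, and σ = 0.25 for the exposedness Gaussian is an assumed value.

```python
import numpy as np

def laplacian(gray):
    """4-neighbour Laplacian via periodic shifts (stand-in for the La operator)."""
    return (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
            + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)

def weight_map(img, sigma=0.25):
    """Aggregate weight W = W_La * W_sa * W_E for one image in [0, 1].

    sigma = 0.25 for the exposedness Gaussian is an assumed value.
    """
    gray = img.mean(axis=2)
    w_la = np.abs(laplacian(gray))                         # contrast |La * F|
    m = img.mean(axis=2, keepdims=True)
    w_sa = np.sqrt(((img - m) ** 2).mean(axis=2))          # per-pixel RGB std
    w_e = np.exp(-((gray - 0.5) ** 2) / (2 * sigma ** 2))  # exposedness
    return w_la * w_sa * w_e + 1e-12                       # eps avoids 0/0

def fuse(images):
    """Single-scale weighted fusion; the patent blends the same weights
    through Laplacian/Gaussian pyramids, one level keeps the sketch short."""
    ws = np.stack([weight_map(im) for im in images])
    ws /= ws.sum(axis=0, keepdims=True)                    # normalized weights
    return sum(w[..., None] * im for w, im in zip(ws, images))
```

Pyramid blending would decompose each image and weight map, apply the same per-level weighted sum, and collapse the result; single-scale blending can leave seams that the pyramid version avoids.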
Compared with the prior art, the application has the following advantages:
1. The color correction method based on subinterval linear transformation proposed by the application better improves visibility, achieves a good color correction effect, makes the histogram distributions of the red, green and blue channels more uniform, better resolves the color cast problem of underwater images, and improves details in the dark regions of the underwater image.
2. The application utilizes the three-interval histogram equalization method, effectively improves the contrast of the image, achieves good effect in enhancing the bright part details of the image, and completes the effective stretching of the image histogram.
3. According to the application, through multi-scale linear fusion, the image with improved contrast and bright part detail through a three-interval histogram equalization method is fused with the image with improved color cast and dark part detail, so that effective enhancement of the underwater image is realized.
For the above reasons, the method of the application can be widely applied in image processing and related fields.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show merely some embodiments of the present application; other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic flow chart of the present application.
Fig. 2 compares the enhancement effect of the application with other algorithms on underwater scene images. Figs. 2-1-1, 2-2-1, 2-3-1 and 2-4-1 are the results after HEEF processing; Figs. 2-1-2, 2-2-2, 2-3-2 and 2-4-2 are the results after BBHE processing; Figs. 2-1-3, 2-2-3, 2-3-3 and 2-4-3 are the results after DOTHE processing; Figs. 2-1-4, 2-2-4, 2-3-4 and 2-4-4 are the results after processing by the algorithm herein.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1-2, the application provides an underwater image enhancement method based on color correction and three-interval histogram stretching, which comprises the following steps:
step S01: acquiring an original RGB dense fog image; performing color correction on the original RGB dense fog image by a color correction method based on subinterval linear transformation to obtain a color corrected enhanced image;
step S02: decomposing the original RGB foggy image into R, G, B channel images, and processing the pixel values of the R, G, B channel images in the following steps;
step S03: respectively stretching pixel values of the R, G, B channel image to be in a range of 0-255 to obtain a stretched single-channel image;
step S04: respectively calculating the average pixel value of each R, G, B channel image; for each pixel, subtracting the single-channel average pixel value from the pixel value to obtain an error, and squaring it to obtain the squared error; selecting the pixel point with the maximum squared error, i.e. the point whose pixel value differs most from the average, and, taking that point as the centre, moving three times the standard deviation of the pixel values to either side of it to determine the two thresholds required for three-interval division, thereby dividing the whole single-channel histogram into three intervals;
step S05: equalizing the subintervals of each R, G, B channel to obtain an image after single-channel equalization;
step S06: carrying out linear weighted fusion of the R, G, B channel images and the equalized R, G, B channel images to obtain the final defogged image.
As a preferred embodiment, in the color correction method based on subinterval linear transformation, the total pixel value of each R, G, B channel is calculated as follows:
Sum_c = Σ_{i=1..M} Σ_{j=1..N} I_c(i,j),  c ∈ {R, G, B}
wherein M and N represent the number of rows and columns, respectively, of the input image, and I_R(i,j), I_G(i,j), I_B(i,j) represent the pixel values of the R, G, B three-channel image at position (i,j), respectively;
meanwhile, the ratio of each of the red, green and blue channels is calculated as:
P_c = Sum_c / Max(Sum_R, Sum_G, Sum_B)
wherein Max represents the function taking the maximum value, through which the maximum of the three channel totals is obtained; P_R, P_G, P_B respectively represent the ratio of each channel's total pixel value to the maximum channel total. To divide each channel into three intervals, two cut-off ratios s_c^1 and s_c^2 are defined in terms of the constants α_1, α_2 and the ratio P_c:
wherein c represents any one of the channels R, G, B; α_1 and α_2 are both constants between 0 and 1; and P_c represents the ratio of the channel's total pixel value to the maximum channel total. Then, the cut-off thresholds T_c^1 and T_c^2 corresponding to the two cut-off ratios s_c^1 and s_c^2 are determined by formulas (9) and (10): each threshold is obtained by evaluating the lower quantile function F of the channel's pixel values I_c(x) at the corresponding cut-off ratio s_c^1 or s_c^2;
in order to effectively suppress shadows and highlights, the following clipping is performed for each color channel:
I_c'(x) = T_c^1 if I_c(x) < T_c^1;  I_c'(x) = T_c^2 if I_c(x) > T_c^2;  I_c'(x) = I_c(x) otherwise
wherein I_c'(x) represents the pixel value at a point of any one of the channels R, G, B after processing, T_c^1 and T_c^2 represent the cut-off thresholds, and I_c(x) represents the pixel value at a point of any one of the channels R, G, B;
finally, the following linear operation is performed on the pixel values of the intermediate region:
I_c''(x) = (I_c'(x) − T_c^1) / (T_c^2 − T_c^1) × 255
wherein I_c''(x) represents the color-corrected image, and I_c'(x) represents the pixel value after clipping at a point of any one of the channels R, G, B.
As a preferred embodiment, in the present application, the single-channel images are subjected to the linear stretching operation in step S03, ensuring that every gray value lies within [0, 255]; the expression of the linear stretch is therefore defined as:
P_c(i,j) = (I_c(i,j) − min_c) / (max_c − min_c) × 255,  c ∈ {R, G, B}
wherein P_c(i,j) represents the corrected gray value of any one of the channels R, G, B at position (i,j); I_c(i,j) represents the gray value of that channel at position (i,j); min_c represents the minimum pixel value of the channel; and max_c represents the maximum pixel value of the channel.
Further, in the threshold selection and three-interval division of step S04, first, the pixel average of each single channel is calculated as follows:
Mean_c = (1 / (M × N)) Σ_{i=1..M} Σ_{j=1..N} I_c(i,j),  c ∈ {R, G, B}
wherein Mean_R, Mean_G, Mean_B are the average pixel values of the three channels R, G, B; M and N are the numbers of rows and columns of the input image; I_R(i,j), I_G(i,j), I_B(i,j) respectively represent the pixel values of the R, G, B three-channel image at position (i,j); and M × N represents the total pixel count of a single channel;
the error between the pixel value at any point of one of the three channels and the average pixel value of that channel is then calculated and squared to obtain the squared error, with the calculation formula:
e_c(i,j) = I_c(i,j) − Mean_c,  e_c^2(i,j) = (I_c(i,j) − Mean_c)^2
wherein e_c(i,j) represents the error between the pixel value at any point of one of the three channels R, G, B and the average pixel value of that channel, I_c(i,j) represents the pixel value of the channel at position (i,j), Mean_c represents the average pixel value of the channel, and e_c^2(i,j) represents the squared error;
the point with the maximum squared error is selected as the centre point through the Max function, and, according to the 3σ criterion, three times the standard deviation of the pixel values is subtracted from and added to it to obtain the left and right thresholds, completing the three-interval division:
t_1 = Maxm_c − 3σ (20)
t_2 = Maxm_c + 3σ (21)
wherein Max_c represents the maximum squared-error value of one of the three channels R, G, B; Maxm_c represents the pixel value at the point of that maximum squared error; t_1 and t_2 represent the two selected thresholds; and σ represents the standard deviation of the pixel values of the channel.
Further, the histogram equalization processing procedure for the subintervals of each channel in step S05 is as follows:
firstly, dividing gray scale ranges of three subintervals according to a threshold value:
[0, 255] = [0, t_1] ∪ (t_1, t_2] ∪ (t_2, 255] (22)
the three sub-images are then defined by these intervals: X_1 = {I(i,j) | 0 ≤ I(i,j) ≤ t_1}, X_2 = {I(i,j) | t_1 < I(i,j) ≤ t_2}, X_3 = {I(i,j) | t_2 < I(i,j) ≤ 255}, where I represents the original image, I(i,j) represents the gray value of the pixel located at the i-th row and j-th column of the image, and X_1, X_2, X_3 represent the first, second and last sub-images, respectively;
next, the frequency of each pixel value of the whole image is calculated, the frequencies of the three sub-histograms are calculated to obtain the normalized pixel frequency of each sub-histogram, and finally the cumulative normalized frequencies of the three sub-histograms are calculated;
letting x denote a gray value of the image, three value ranges of x are obtained from the interval division: when x ∈ X_1, the cumulative gray-level frequency of the first sub-image's histogram from 0 to x is calculated and denoted CDF_1(x); when x ∈ X_2, the cumulative gray-level frequency of the second sub-image's histogram from t_1 to x is calculated and denoted CDF_2(x); when x ∈ X_3, the cumulative gray-level frequency of the last sub-image's histogram from t_2 to x is calculated and denoted CDF_3(x);
Then, according to the histogram normalized pixel frequency of each sub-image, calculating the transformation gray value after the histogram equalization of the three sub-images; referring to the gray level transformation function of the traditional histogram equalization, obtaining a function of sub-histogram equalization; the gray scale transformation function of conventional histogram equalization is described as:
f(x) = a + (b − a)·CDF(x) (26)
wherein a represents the minimum value of the output gray value, b represents the maximum value of the output gray value, x represents the input gray value, and CDF (x) represents the cumulative density function with respect to x;
the sub-histogram equalization formula is described as follows:
y(x) = t_1 · CDF_1(x), x ∈ X_1
y(x) = t_1 + (t_2 − t_1) · CDF_2(x), x ∈ X_2
y(x) = t_2 + (255 − t_2) · CDF_3(x), x ∈ X_3
wherein y represents the gray-value transformation function of the three-interval equalization, according to which the processed result is obtained; t_1 and t_2 respectively represent the two thresholds dividing the sub-histograms; x represents the input gray value; and CDF_1(x), CDF_2(x), CDF_3(x) represent the cumulative gray-level frequencies of the first, second and third sub-histograms, respectively.
Still further, the multi-scale fusion comprises the steps of:
step S071: defining an aggregation weight map and acquiring fusion of an input image and the aggregation weight map; the aggregate weight map is determined from three measurement weights, including: contrast weight, saturation weight, and exposure weight map;
the contrast weight is given by a contrast weight map: the global contrast weight W_La is estimated by applying the Laplacian operator to the gray-scale version of the input image and taking the absolute value, which preserves the edge and detail texture information of the image;
W_La = |La ∗ F| (29)
where La represents the Laplacian operator, ∗ represents convolution, and F represents the input image;
the saturation weight takes, at each pixel, the standard deviation of the values of the R, G, B channels in the RGB color space as the saturation weight:
W_sa(x,y) = sqrt( (1/3) [ (R(x,y) − m(x,y))^2 + (G(x,y) − m(x,y))^2 + (B(x,y) − m(x,y))^2 ] )
wherein R(x,y), G(x,y), B(x,y) respectively represent the R, G, B channels of the input image, m(x,y) represents the mean of the R, G, B channels at position (x,y), and W_sa(x,y) represents the saturation weight at position (x,y);
the exposure weight map needs to ensure that pixel values approach 0.5, i.e. the midpoint of the normalized range; the exposure weight of each pixel is therefore represented by a Gaussian curve with expected value 0.5:
W_E(x,y) = exp( −(I(x,y) − 0.5)^2 / (2σ^2) )
wherein I(x,y) is the normalized intensity at position (x,y) and σ controls the width of the Gaussian;
the aggregation weight map is obtained by multiplying the three characteristic weight maps in the multi-scale fusion; the contrast weight map W_La, the saturation weight map W_sa and the exposure weight map W_E are multiplied pixel by pixel for each input image:

W_z = W_La,z × W_sa,z × W_E,z   (32)

wherein z represents the z-th input image;
to ensure consistency of the images, a normalized weight map W̄_z is introduced:

W̄_z = W_z / Σ_{i=1}^{Z} W_i   (33)

wherein W_i represents the two-dimensional weight map of the i-th image, and W̄_z represents the aggregation weight map;
step S072: fusing the input image and the aggregation weight map; the input image I is decomposed by a Laplacian pyramid, defined as L^l{I}; the aggregation weight map W̄ is decomposed by a Gaussian pyramid, defined as G^l{W̄}, wherein the superscript l represents the l-th pyramid level; the Laplacian pyramid L^l{I} and the Gaussian pyramid G^l{W̄} are fused pixel by pixel according to the following formula:

L^l{F}(x, y) = Σ_{z=1}^{Z} G^l{W̄_z}(x, y) · L^l{I_z}(x, y)   (34)

wherein L{F} represents the Laplacian pyramid of the fused image, which is reconstructed to obtain the final fused image.
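The pyramid fusion of step S072 can be sketched as follows, with simplified Gaussian/Laplacian pyramids built from scipy.ndimage primitives; the filter width, number of levels and bilinear upsampling scheme are assumptions, not taken from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def gaussian_pyramid(img, levels):
    """Blur-and-decimate Gaussian pyramid (sigma=1.0 is an assumed filter width)."""
    pyr = [img]
    for _ in range(levels - 1):
        img = gaussian_filter(img, sigma=1.0)[::2, ::2]
        pyr.append(img)
    return pyr

def laplacian_pyramid(img, levels):
    """Band-pass levels: Gaussian level minus upsampled next level."""
    gp = gaussian_pyramid(img, levels)
    lp = []
    for l in range(levels - 1):
        up = zoom(gp[l + 1], 2, order=1)[:gp[l].shape[0], :gp[l].shape[1]]
        lp.append(gp[l] - up)
    lp.append(gp[-1])  # coarsest level kept as-is
    return lp

def fuse(inputs, weights, levels=3):
    """Multi-scale fusion: fused level l = sum_z G^l{W_z} * L^l{I_z}, then collapse."""
    fused = [np.zeros_like(lev) for lev in laplacian_pyramid(inputs[0], levels)]
    for img, w in zip(inputs, weights):
        lp = laplacian_pyramid(img, levels)
        gp = gaussian_pyramid(w, levels)
        for l in range(levels):
            fused[l] += gp[l] * lp[l]
    # collapse from coarsest to finest: upsample and add
    out = fused[-1]
    for l in range(levels - 2, -1, -1):
        out = fused[l] + zoom(out, 2, order=1)[:fused[l].shape[0], :fused[l].shape[1]]
    return out
```

With a single input image and an all-ones weight map, the collapse step inverts the Laplacian decomposition exactly, which is a convenient sanity check for the pyramid pair.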
Example 1
As shown in fig. 2, the first greenish image is processed by the several algorithms. Fig. 2-1-1 shows the result of the HEEF algorithm: the result is greenish overall and the details are slightly blurred, so the expected effect is not achieved. Fig. 2-1-2 shows the result after BBHE processing: the result remains greenish, the details are unclear, and the effect is poor. Fig. 2-1-3 shows the result after the DOTHE algorithm: the green cast is reduced and the details are partially improved, but the image is overexposed, so the brightness is too high, details are lost, and the expected effect is not achieved. Fig. 2-1-4 shows the result after processing by the proposed algorithm: the color cast is removed, the details are obviously improved, the contrast is enhanced, and the image enhancement is successful.
Next, the second greenish image is processed. Fig. 2-2-1 shows the result after the HEEF algorithm: the overall greenish cast is not resolved, the details are slightly blurred, and the effect is poor. Fig. 2-2-2 shows the result after BBHE processing: the result remains greenish, the details are unclear, and the expected effect is not achieved. Fig. 2-2-3 shows the result after the DOTHE algorithm: the color cast and the details are partially improved, but the image is overexposed, so the brightness is too high, details are lost, and the expected effect is not achieved. Fig. 2-2-4 shows the result after processing by the proposed algorithm: the color cast is removed, the details are obviously improved, the contrast is enhanced, and the image enhancement is successful.
Then the third, bluish image is processed. Fig. 2-3-1 shows the result after the HEEF algorithm: the overall bluish cast is not noticeably improved, part of the details are blurred, and the effect is poor. Fig. 2-3-2 shows the result after BBHE processing: the result remains bluish, the details are unclear, and the effect is poor. Fig. 2-3-3 shows the result after the DOTHE algorithm: the color cast and the details are partially improved, but the image is overexposed, so the brightness is too high, details are lost, and the expected effect is not achieved. Fig. 2-3-4 shows the result after processing by the proposed algorithm: the color cast is removed, the details are greatly improved, the contrast is enhanced, and the image enhancement is successful.
Finally, the fourth bluish image is processed. Fig. 2-4-1 shows the result after the HEEF algorithm: the overall bluish cast is not noticeably improved, part of the details are blurred, and the effect is poor. Fig. 2-4-2 shows the result after BBHE processing: the result remains bluish, the details are unclear, and the effect is poor. Fig. 2-4-3 shows the result after the DOTHE algorithm: the color cast and the details are partially improved, but the image is overexposed, so the brightness is too high, details are lost, and the expected effect is not achieved. Fig. 2-4-4 shows the result after processing by the proposed algorithm: the color cast is removed, the details are greatly improved, the contrast is enhanced, and the image enhancement is successful.
It can be observed that the first three algorithms all have defects in color correction, contrast and detail, while the algorithm proposed herein corrects the color, improves the contrast and highlights the details of underwater images; the color of the degraded images is well corrected.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. In the foregoing embodiments of the present application, the descriptions of the embodiments are emphasized, and for a portion of this disclosure that is not described in detail in this embodiment, reference is made to the related descriptions of other embodiments. In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.

Claims (5)

1. The underwater image enhancement method based on color correction and three-interval histogram stretching is characterized by comprising the following steps of:
step S01: acquiring an original RGB foggy image; performing color correction on the original RGB foggy image by a color correction method based on subinterval linear transformation to obtain a color-corrected enhanced image;
step S02: decomposing the original RGB foggy image into R, G, B channel images, and performing the following steps on the pixel values of the R, G, B channel images;
step S03: respectively stretching the pixel values of the R, G, B channel images to the range 0-255 to obtain stretched single-channel images;
step S04: respectively calculating the average pixel value of each R, G, B channel image; subtracting the average pixel value of the single channel from the pixel value of each pixel point to obtain an error, and squaring the error; selecting, according to the squared error between each pixel value and the average pixel value, the pixel point with the maximum squared error, which is the point whose pixel value differs most from the average pixel value; taking this point as the center and offsetting by three times the standard deviation of the pixel values, the two thresholds required for three-interval division are determined, and the whole single-channel histogram is divided into three intervals;
the step of threshold selection and three-interval division in the step S04 first calculates the pixel average value of each single channel as follows:

Mean_c = (1/(M×N)) Σ_{i=1}^{M} Σ_{j=1}^{N} I_c(i, j),  c ∈ {R, G, B}   (17)

wherein Mean_R, Mean_G, Mean_B represent the average pixel values of the three channels R, G, B, M and N are respectively the number of rows and columns of the input image, I_R(i, j), I_G(i, j), I_B(i, j) respectively represent the pixel values of the R, G, B three-channel images at the (i, j) position, and M×N represents the total number of pixels of a single channel;
calculating the error between the pixel value of any point of one of the three channels R, G, B and the average pixel value of the corresponding channel, and squaring the error, wherein the calculation formulas are as follows:

e_c(i, j) = I_c(i, j) - Mean_c   (18)
e_c²(i, j) = (I_c(i, j) - Mean_c)²   (19)

wherein e_c(i, j) represents the error between the pixel value of any point of one of the three channels R, G, B and the average pixel value of the corresponding channel, I_c(i, j) represents the pixel value of any one of the R, G, B channels at the (i, j) position, Mean_c represents the average pixel value of that channel, and e_c²(i, j) represents the square of the error;
selecting the point with the maximum squared error as the center point through the Max function, and adding and subtracting three times the standard deviation of the pixel values to the left and right according to the 3σ criterion to obtain the left and right thresholds, so as to complete the three-interval division:

t1 = Maxm_c - 3σ   (20)
t2 = Maxm_c + 3σ   (21)

wherein Max_c represents the maximum squared error of one of the three R, G, B channels, Maxm_c represents the pixel value at the point with the maximum squared error of that channel, t1 and t2 respectively represent the two selected thresholds, and σ represents the standard deviation of the pixel values of one of the three R, G, B channels;
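Formulas (20) and (21) can be sketched as follows; reading Maxm_c as the pixel value at the maximum-squared-error point, and σ as the standard deviation of the channel, are interpretations of the claim text rather than quotations from it:

```python
import numpy as np

def three_sigma_thresholds(channel):
    """Pick t1, t2 for one channel: centre on the pixel value farthest from
    the channel mean (largest squared error), offset by 3 standard deviations."""
    c = channel.astype(np.float64)
    err_sq = (c - c.mean()) ** 2             # squared error per pixel
    centre = c.flat[np.argmax(err_sq)]       # pixel value with the largest squared error
    sigma = c.std()
    t1 = int(np.clip(centre - 3 * sigma, 0, 255))
    t2 = int(np.clip(centre + 3 * sigma, 0, 255))
    return t1, t2
```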
step S05: equalizing the subintervals of each R, G, B channel to obtain an image after single-channel equalization;
step S06: carrying out linear weighted fusion on the R, G, B channel images and the equalized R, G, B channel images to obtain the final defogged image.
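The linear weighted fusion of step S06 can be sketched as follows; the weight beta is an assumed parameter, since the claim does not specify the fusion weights:

```python
import numpy as np

def weighted_fuse(original, equalized, beta=0.5):
    """Linear weighted fusion of a channel and its equalized version (step S06).

    beta is an assumed blending weight in [0, 1]; beta=0.5 gives equal weight.
    """
    fused = (beta * original.astype(np.float64)
             + (1.0 - beta) * equalized.astype(np.float64))
    return np.clip(fused, 0, 255).astype(np.uint8)
```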
2. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the total pixel values of the R, G, B channels in the color correction method based on subinterval linear transformation are calculated as follows:

Sum_R = Σ_{i=1}^{M} Σ_{j=1}^{N} I_R(i, j)   (1)
Sum_G = Σ_{i=1}^{M} Σ_{j=1}^{N} I_G(i, j)   (2)
Sum_B = Σ_{i=1}^{M} Σ_{j=1}^{N} I_B(i, j)   (3)

wherein M and N respectively represent the number of rows and columns of the input image, and I_R(i, j), I_G(i, j), I_B(i, j) respectively represent the pixel values of the R, G, B three-channel images at the (i, j) position;
meanwhile, the ratios of the red, green and blue channels are calculated as:

P_R = Sum_R / Max(Sum_R, Sum_G, Sum_B)   (4)
P_G = Sum_G / Max(Sum_R, Sum_G, Sum_B)   (5)
P_B = Sum_B / Max(Sum_R, Sum_G, Sum_B)   (6)

wherein Max represents the function taking the maximum value, and the maximum of the total pixel values of the R, G, B channels is obtained through the Max function; P_R, P_G, P_B respectively represent the ratio of the total pixel value of each of the R, G, B channels to the maximum total pixel value; to divide each channel into three intervals, two cut-off ratios s_c^1 and s_c^2 are defined, expressed as follows:

s_c^1 = α1·p_c   (7)
s_c^2 = 1 - α2·p_c   (8)

wherein c represents any one of the channels R, G, B, α1 and α2 are both constants between 0 and 1, and p_c represents the ratio of the total pixel value of the channel to the maximum total pixel value; then, the cut-off thresholds I_c^low and I_c^high corresponding to the two cut-off ratios s_c^1 and s_c^2 are determined by the following functions as formulas (9) and (10):

I_c^low = F(I_c(x), s_c^1)   (9)
I_c^high = F(I_c(x), s_c^2)   (10)

wherein I_c^low and I_c^high represent the cut-off thresholds, F is the lower quantile function, I_c(x) represents the pixel value of one of the three R, G, B channels, and s_c^1, s_c^2 are the cut-off ratios;
in order to effectively suppress shadows and highlights, the following processing is performed for each color channel:

I'_c(x) = I_c^low, if I_c(x) < I_c^low;  I'_c(x) = I_c^high, if I_c(x) > I_c^high;  I'_c(x) = I_c(x), otherwise   (11)

wherein I'_c(x) represents the processed pixel value of any one of the R, G, B channels at a certain point, I_c^low and I_c^high represent the cut-off thresholds, and I_c(x) represents the pixel value of a point of any one of the channels R, G, B;
finally, the following linear operation is performed on the pixel values of the intermediate region:

Î_c(x) = 255 × (I'_c(x) - I_c^low) / (I_c^high - I_c^low)   (12)

wherein Î_c(x) represents the color-corrected image, and I'_c(x) represents the processed pixel value at a point of any one of the R, G, B channels.
3. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the single-channel images are subjected to a linear stretching operation in step S03 and each gray value is ensured to lie in [0, 255]; therefore, the linear stretching is defined as:

P_c(i, j) = 255 × (I_c(i, j) - min_c) / (max_c - min_c),  c ∈ {R, G, B}   (13)

wherein P_c(i, j) represents the corrected gray value of any one of the R, G, B channels at the (i, j) position; I_c(i, j) represents the gray value of any one of the R, G, B channels at the (i, j) position; min_c represents the pixel minimum of any one of the R, G, B channels; and max_c represents the pixel maximum of any one of the R, G, B channels.
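The linear stretching of this claim can be sketched as follows (NumPy assumed; the flat-channel guard is an added safety check, not part of the claim):

```python
import numpy as np

def stretch_channel(channel):
    """Linear stretch of one channel to the full [0, 255] range."""
    c = channel.astype(np.float64)
    mn, mx = c.min(), c.max()
    if mx == mn:                  # flat channel: nothing to stretch
        return channel.copy()
    return ((c - mn) / (mx - mn) * 255.0).astype(np.uint8)
```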
4. The underwater image enhancement method based on color correction and three-interval histogram stretching according to claim 1, wherein the histogram equalization processing procedure for the sub-intervals of each channel in step S05 is as follows:
firstly, the gray scale range is divided into three subintervals according to the thresholds:

[0, 255] = [0, t1] ∪ (t1, t2] ∪ (t2, 255]   (22)

and the image is correspondingly divided into three sub-images:

X1 = {I(i, j) | 0 ≤ I(i, j) ≤ t1}   (23)
X2 = {I(i, j) | t1 < I(i, j) ≤ t2}   (24)
X3 = {I(i, j) | t2 < I(i, j) ≤ 255}   (25)

wherein I represents the original image, I(i, j) represents the gray value of the pixel located in the i-th row and j-th column of the image, and X1, X2, X3 respectively represent the first, second and last sub-images;
firstly, the frequency of each pixel of the whole image is calculated, the frequencies of the three sub-histograms are calculated to obtain the normalized pixel frequency of each sub-histogram, and finally the cumulative normalized frequencies of the three sub-histograms are calculated;
when x represents a gray value of the image, three value ranges of x are obtained according to the interval division: when x ∈ X1, the cumulative gray-level frequency of the histogram of the first sub-image from 0 to x is calculated and expressed as CDF1(x); when x ∈ X2, the cumulative gray-level frequency of the histogram of the second sub-image from t1 to x is calculated and expressed as CDF2(x); when x ∈ X3, the cumulative gray-level frequency of the histogram of the last sub-image from t2 to x is calculated and expressed as CDF3(x);
then, according to the histogram normalized pixel frequency of each sub-image, the transformed gray values after histogram equalization of the three sub-images are calculated; referring to the gray-level transformation function of traditional histogram equalization, the function of sub-histogram equalization is obtained; the gray-level transformation function of traditional histogram equalization is described as:

f(x) = a + (b - a)·CDF(x)   (26)

wherein a represents the minimum value of the output gray value, b represents the maximum value of the output gray value, x represents the input gray value, and CDF(x) represents the cumulative distribution function with respect to x;
the sub-histogram equalization formula is described as follows:

y(x) = t1·CDF1(x), for 0 ≤ x ≤ t1;
y(x) = t1 + (t2 - t1)·CDF2(x), for t1 < x ≤ t2;
y(x) = t2 + (255 - t2)·CDF3(x), for t2 < x ≤ 255   (27)

wherein y represents the gray value transformation function of the three-interval equalization processing, and the processed result is obtained according to the function y; t1 and t2 respectively represent the two thresholds dividing the sub-histograms, x represents the input gray value, and CDF1(x), CDF2(x), CDF3(x) respectively represent the cumulative gray-level frequencies of the first, second and third sub-histograms.
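The cumulative normalized frequencies CDF1, CDF2, CDF3 of the three sub-histograms can be sketched as follows (NumPy assumed; indexing each CDF by gray value minus interval start is an assumed convention):

```python
import numpy as np

def interval_cdfs(channel, t1, t2):
    """Cumulative normalized gray-level frequencies of the three sub-histograms."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    cdfs = []
    for a, b in [(0, t1), (t1 + 1, t2), (t2 + 1, 255)]:
        sub = hist[a:b + 1]                 # histogram restricted to this interval
        total = sub.sum()
        cdfs.append(np.cumsum(sub) / total if total > 0 else np.zeros_like(sub))
    return cdfs  # each CDF is indexed by (gray value - interval start)
```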
5. The underwater image enhancement method based on color correction and three-bin histogram stretching as claimed in claim 1, wherein,
the fusion comprises the following steps:
step S071: defining an aggregation weight map and acquiring the fusion of the input image and the aggregation weight map; the aggregation weight map is determined from three measurement weights: a contrast weight map, a saturation weight map, and an exposure weight map;
the contrast weight map W_La is estimated by applying the Laplacian filter to the gray-scale version of the input image and taking the absolute value of the response, so as to preserve the edge and detail texture information of the image:
W_La = |La * F|   (29)
wherein La represents the Laplacian operator, * represents convolution, and F represents the gray-scale input image;
the saturation weight map is obtained by taking, at each pixel position, the standard deviation of the R, G, B channel values in the RGB color space:

W_sa(x, y) = sqrt( ((R(x, y) - m(x, y))² + (G(x, y) - m(x, y))² + (B(x, y) - m(x, y))²) / 3 )   (30)

wherein R(x, y), G(x, y), B(x, y) respectively represent the R, G, B channels of the input image, m(x, y) represents the average value of the R, G, B channels at the (x, y) position, and W_sa(x, y) represents the saturation weight at the (x, y) position;
the exposure weight map needs to ensure that pixel values approach 0.5, namely the midpoint; the exposure weight of each pixel point is represented by a Gaussian curve with an expected value of 0.5:

W_E(x, y) = exp( -(I(x, y) - 0.5)² / (2σ²) )   (31)

wherein I(x, y) represents the normalized intensity at the (x, y) position and σ represents the standard deviation of the Gaussian curve;
the aggregation weight map is obtained by multiplying the three characteristic weight maps in the multi-scale fusion; the contrast weight map W_La, the saturation weight map W_sa and the exposure weight map W_E are multiplied pixel by pixel for each input image:

W_z = W_La,z × W_sa,z × W_E,z   (32)

wherein z represents the z-th input image;
to ensure consistency of the images, a normalized weight map W̄_z is introduced:

W̄_z = W_z / Σ_{i=1}^{Z} W_i   (33)

wherein W_i represents the two-dimensional weight map of the i-th image, and W̄_z represents the aggregation weight map;
step S072: fusing the input image and the aggregation weight map; the input image I is decomposed by a Laplacian pyramid, defined as L^l{I}; the aggregation weight map W̄ is decomposed by a Gaussian pyramid, defined as G^l{W̄}, wherein the superscript l represents the l-th pyramid level; the Laplacian pyramid L^l{I} and the Gaussian pyramid G^l{W̄} are fused pixel by pixel according to the following formula:

L^l{F}(x, y) = Σ_{z=1}^{Z} G^l{W̄_z}(x, y) · L^l{I_z}(x, y)   (34)

wherein L{F} represents the Laplacian pyramid of the fused image, which is reconstructed to obtain the final fused image.
CN202011444565.0A 2020-12-08 2020-12-08 Underwater image enhancement method based on color correction and three-interval histogram stretching Active CN112419210B (en)

Publications (2)

Publication Number Publication Date
CN112419210A CN112419210A (en) 2021-02-26
CN112419210B true CN112419210B (en) 2023-09-22

Family
ID=74775554

Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN114022397B (en) * 2022-01-06 2022-04-19 广东欧谱曼迪科技有限公司 Endoscope image defogging method and device, electronic equipment and storage medium
CN114445300A (en) * 2022-01-29 2022-05-06 赵恒� Nonlinear underwater image gain algorithm for hyperbolic tangent deformation function transformation
CN114494084B (en) * 2022-04-14 2022-07-26 广东欧谱曼迪科技有限公司 Image color homogenizing method and device, electronic equipment and storage medium
CN117078561B (en) * 2023-10-13 2024-01-19 深圳市东视电子有限公司 RGB-based self-adaptive color correction and contrast enhancement method and device

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111127359A (en) * 2019-12-19 2020-05-08 大连海事大学 Underwater image enhancement method based on selective compensation color and three-interval balance

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8411938B2 (en) * 2007-11-29 2013-04-02 Sri International Multi-scale multi-camera adaptive fusion with contrast normalization

Non-Patent Citations (1)

Title
基于自适应动态限幅的水下图像增强算法改进 (Improvement of an underwater image enhancement algorithm based on adaptive dynamic clipping); 于君霞 (Yu Junxia); 西部皮革 (West Leather), Issue 22; full text


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant