CN111768355B - Method for enhancing image of refrigeration type infrared sensor


Info

Publication number
CN111768355B
CN111768355B
Authority
CN
China
Prior art keywords
image
pix
template
skipping
executing
Prior art date
Legal status
Active
Application number
CN202010506218.XA
Other languages
Chinese (zh)
Other versions
CN111768355A (en)
Inventor
李博
王勇吉
张准
刘赏
王浩
王晓峰
Current Assignee
Xi'an Realect Electronic Development Co., Ltd.
Original Assignee
Xi'an Realect Electronic Development Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Xi'an Realect Electronic Development Co., Ltd.
Priority to CN202010506218.XA
Publication of CN111768355A
Application granted
Publication of CN111768355B

Classifications

    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/20048 Transform domain processing
    • G06T2207/20056 Discrete and fast Fourier transform, [DFT, FFT]


Abstract

The invention discloses a method for enhancing the image of a refrigeration type infrared sensor. The histogram of the original image is first corrected according to the block-wise spatial distribution of pixel values, and then corrected again with an adaptive plateau histogram. A cumulative probability function is computed from the corrected histogram and used to equalize the original image. The equalized image is then filtered and layered with a layered filtering template generated from a Gaussian spatial confidence template and a low-pass spatial-domain template, separating a base-layer image from a detail-layer image; the detail layer is further amplified to obtain the final enhanced image. The invention preserves the overall characteristics of the image while amplifying local features and retaining detail; it suits a wide variety of scenes and shows excellent detail rendition in each of them; and it is efficient, low in complexity, easy to implement in software and hardware, and particularly suitable for resource-constrained embedded devices.

Description

Method for enhancing image of refrigeration type infrared sensor
Technical Field
The invention belongs to the technical field of infrared image detail enhancement, and particularly relates to a method for enhancing an image of a refrigeration type infrared sensor.
Background
The images collected by refrigeration type (cooled) infrared detectors are characterized by low contrast, weak detail and strong noise, and infrared image enhancement techniques have been developed to address these characteristics. Over the years such techniques have matured and diversified. Histogram-based algorithms include HE (histogram equalization), PHE (plateau histogram equalization), APHE (adaptive plateau histogram equalization) and CLAHE (contrast-limited adaptive histogram equalization). These algorithms redistribute contrast according to the probability characteristics of the histogram; later refinements limit the contrast to prevent over-enhancement and detail loss, and CLAHE applies local contrast enhancement to counter detail loss, but although it recovers many details it sacrifices the overall light-and-shade character of the image. The histogram equalization adopted by the invention (adaptive plateau histogram equalization with spatial block-distribution correction) takes the spatial distribution into account, so it can suppress a uniform strong background while still preserving local details. It adapts well to many scenes, whether the background is monotonous (such as sky texture) or rich (such as ground texture): the overall contrast character is preserved and the detail features are amplified as much as possible, so the comprehensive visual impression is markedly improved. Owing to the basic nature of histogram enhancement, however, even when contrast is enhanced the ability to amplify edge and contour detail remains limited, so image detail enhancement algorithms in the frequency and spatial domains have been proposed. Spatial-domain methods include improved unsharp masking, Retinex, HF (homomorphic filtering) and mathematical morphology; frequency-domain methods include the Fourier transform, wavelet transform and contourlet transform. The biggest obstacles to applying these methods in small devices are their computational and space complexity: wavelet and contourlet transforms have great advantages for contour detail enhancement, but their complexity severely limits their use in such equipment. Spatial-domain methods based on filter layering have modest computational and space complexity, but their performance is limited.
Disclosure of Invention
The invention aims to provide a method for enhancing the image of a refrigeration type infrared sensor that solves the problems of contrast over-enhancement and detail loss found in prior infrared image enhancement techniques.
The technical scheme adopted by the invention is that a method for enhancing the image of a refrigeration type infrared sensor is implemented according to the following steps:
step 1, counting a histogram of the whole image;
step 2, correcting a block distribution histogram;
step 3, adaptive plateau histogram correction;
step 4, image equalization;
step 5, enhancing image details.
The present invention is also characterized in that,
the step 1 is as follows:
step 1.1, define x as the abscissa variable of the image, ranging from 1 to N, where N is the maximum abscissa of the image; define y as the ordinate variable, ranging from 1 to M, where M is the maximum ordinate; define image as the original image, with image(x, y) denoting the pixel value at abscissa x and ordinate y, taking values from 0 to 2^PBW - 1, where PBW is the pixel bit width; define pix as a pixel-value variable ranging from 0 to 2^PBW - 1; define F(pix) as the number of occurrences of pixel value pix in the original image;
step 1.2, let x = 1 and y = 1, and initialize F(pix) = 0 for every pix in the range 0 to 2^PBW - 1;
step 1.3, execute F(image(x, y)) = F(image(x, y)) + 1;
step 1.4, judge whether y equals M; if so, set y = 1 and jump to step 1.5; otherwise set y = y + 1 and jump to step 1.3;
step 1.5, judge whether x equals N; if so, jump to step 1.6; otherwise set x = x + 1 and jump to step 1.3;
step 1.6, the F(pix) statistics are complete; proceed to the next operation.
The step 2 is as follows:
step 2.1, partition the image:
divide the original image into equal blocks, where Wblock is the number of divisions along the ordinate and Hblock the number of divisions along the abscissa; the original image is divided into Hblock × Wblock blocks, each of ordinate size m = M/Wblock and abscissa size n = N/Hblock;
step 2.2, count the block distribution:
step 2.2.1, define B(pix) as a block statistic giving the number of blocks in which pixel value pix appears, with pix ranging from 0 to 2^PBW - 1; initialize all B(pix) to 0; B(pix) takes values from 0 to Hblock × Wblock; define hblock as the abscissa block index, ranging from 1 to Hblock, and wblock as the ordinate block index, ranging from 1 to Wblock; define xblock as the pixel abscissa within a block, ranging from 1 to n, and yblock as the pixel ordinate within a block, ranging from 1 to m; jump to step 2.2.2;
step 2.2.2, let pix = 0, hblock = 1, wblock = 1, xblock = 1, yblock = 1; jump to step 2.2.3;
step 2.2.3, consult the F(pix) counted in step 1; if F(pix) > 0, jump to step 2.2.5; if F(pix) = 0, jump to step 2.2.4;
step 2.2.4, judge whether pix equals 2^PBW - 1; if so, jump to step 2.2.10; otherwise set pix = pix + 1 and jump to step 2.2.3;
step 2.2.5, take the block in row hblock and column wblock and read the pixel value image((hblock - 1) × n + xblock, (wblock - 1) × m + yblock) at abscissa xblock and ordinate yblock within that block; if this value equals pix, execute B(pix) = B(pix) + 1, xblock = 1, yblock = 1 and jump to step 2.2.8; otherwise jump to step 2.2.6;
step 2.2.6, judge whether yblock equals m; if so, set yblock = 1 and jump to step 2.2.7; otherwise set yblock = yblock + 1 and jump to step 2.2.5;
step 2.2.7, judge whether xblock equals n; if so, set xblock = 1 and jump to step 2.2.8; otherwise set xblock = xblock + 1 and jump to step 2.2.5;
step 2.2.8, judge whether wblock equals Wblock; if so, set wblock = 1 and jump to step 2.2.9; otherwise set wblock = wblock + 1 and jump to step 2.2.5;
step 2.2.9, judge whether hblock equals Hblock; if so, set hblock = 1 and jump to step 2.2.4; otherwise set hblock = hblock + 1 and jump to step 2.2.5;
step 2.2.10, block counting is complete; B(pix) is the counted number of blocks per pixel value;
step 2.3, correct the histogram of step 1:
generate a new histogram by the formula N(pix) = F(pix)/B(pix), where N(pix) is the corrected histogram, traversing all valid pixel values, i.e. pix ∈ [0, 2^PBW - 1] with B(pix) > 0.
The step 3 is specifically as follows:
step 3.1, calculate the number of distinct pixel values appearing in the original image:
step 3.1.1, define nz_pix as the number of pixel values with non-zero count, with initial value nz_pix = 0; let pix = 0;
step 3.1.2, judge whether F(pix) is greater than 0; if so, execute nz_pix = nz_pix + 1; jump to step 3.1.3;
step 3.1.3, judge whether pix equals 2^PBW - 1; if so, jump to step 3.1.4; otherwise set pix = pix + 1 and jump to step 3.1.2;
step 3.1.4, nz_pix is now the number of distinct pixel values appearing in the original image;
step 3.2, calculate the plateau value
L = (1/nz_pix) · Σ_{pix=0}^{2^PBW - 1} N(pix)
where L will be used in the histogram correction below;
step 3.3, correct the N(pix) generated in step 2.3 by topping the histogram with the plateau value L obtained in step 3.2, i.e. P(pix) = L if N(pix) > L, and P(pix) = N(pix) if N(pix) <= L, where P(pix) is the plateau-corrected histogram;
step 3.4, calculate the total count of the histogram, with SP denoting the sum of P(pix):
SP = Σ_{pix=0}^{2^PBW - 1} P(pix)
step 3.5, using the P(pix) generated in step 3.3 and the SP generated in step 3.4, calculate the cumulative probability function
CDF(pix) = (1/SP) · Σ_{k=0}^{pix} P(k)
The step 4 is as follows:
step 4.1, define image_eq as the equalized image and image_eq(x, y) as its pixel value at abscissa x and ordinate y; define ROUND(·) as the rounding operation; let x = 1 and y = 1;
step 4.2, calculate image_eq(x, y) = ROUND(CDF(image(x, y)) × (2^PBW - 1));
step 4.3, judge whether y equals M; if so, set y = 1 and jump to step 4.4; otherwise set y = y + 1 and jump to step 4.2;
step 4.4, judge whether x equals N; if so, jump to step 4.5; otherwise set x = x + 1 and jump to step 4.2;
step 4.5, the calculation is complete; the newly generated image_eq is the equalized image.
The step 5 is as follows:
step 5.1, first generate the Gaussian spatial confidence template:
the window size of the Gaussian spatial confidence template is W × W, where W is the maximum of its horizontal and vertical coordinates; the template is generated by the formula
Gs(x, y) = exp(-((x - x')² + (y - y')²) / (2σ_S²))
where (x', y') is the pixel coordinate of the template centre, with x' = y' = (W + 1)/2; (x, y) is a coordinate position around (x', y'), with x and y each ranging from 1 to W; σ_S is the distance standard deviation, generally taken as σ_S = W/2; and Gs(x, y) is the template coefficient at abscissa x and ordinate y;
step 5.2, generate the low-pass spatial-domain template:
first set the window size of the low-pass frequency-domain template to W × W, where W is the maximum of its horizontal and vertical coordinates, and let FLpass(x, y) be the coefficient of the low-pass frequency-domain template at abscissa x and ordinate y. The template takes the values FLpass(x, y) = 1 when x, y ∈ {1, 2, W - 1, W}, and FLpass(x, y) = 0 otherwise, so that only the direct-current and low-frequency components are retained; an inverse Fourier transform of this template then yields the low-pass spatial-domain template KLpass(x, y), where KLpass(x, y) is the coefficient at abscissa x and ordinate y;
step 5.3, point-multiply the Gaussian spatial confidence template Gs(x, y) generated in step 5.1 with the low-pass spatial-domain template KLpass(x, y) generated in step 5.2 to obtain the layered filtering template GL: GL(x, y) = Gs(x, y) · KLpass(x, y), where GL(x, y) is the coefficient at abscissa x and ordinate y;
step 5.4, filter and layer the equalized image_eq generated in step 4; the filtering is
image_base(x, y) = Σ_{i=1}^{W} Σ_{j=1}^{W} GL(i, j) · image_eq(x + i - x', y + j - y') / Σ_{i=1}^{W} Σ_{j=1}^{W} GL(i, j)
where image_eq(x, y) is the pixel value at abscissa x and ordinate y of the equalized image generated in step 4, image_base is the filtered base-layer image with pixel values image_base(x, y), and image_detail is the detail image, calculated as image_detail(x, y) = image_eq(x, y) - image_base(x, y);
step 5.5, amplify the detail image image_detail to generate the final enhanced image image_enhance(x, y):
image_enhance(x, y) = image_base(x, y) + K × image_detail(x, y)
where K is the detail gain coefficient, a user-adjustable variable that controls the detail-enhancement amplitude.
The method first enhances the original image by histogram correction and histogram equalization; it then filters and layers the enhanced image with a new layered filtering template generated from a Gaussian spatial confidence template and a Fourier low-pass spatial-domain template, separating the background and the details of the image, and finally amplifies the separated details to obtain the enhanced image. The method combines desirable properties of spatial-domain and frequency-domain processing, improving detail and contour enhancement without increasing computational or space complexity. It adapts well to different scenes, delivers good contrast with finer and clearer detail in each of them, and its low algorithmic complexity makes it particularly suitable for resource-constrained embedded devices.
Drawings
Fig. 1 (a) is the original, fig. 1 (b) is HE enhancement, fig. 1 (c) is APHE enhancement, fig. 1 (d) is CLAHE enhancement, fig. 1 (e) is APHE + EM, fig. 1 (f) is the algorithm of the present invention;
fig. 2 (a) is an original, fig. 2 (b) is HE enhancement, fig. 2 (c) is APHE enhancement, fig. 2 (d) is CLAHE enhancement, fig. 2 (e) is APHE + EM, and fig. 2 (f) is the algorithm of the present invention;
fig. 3 (a) is an original, fig. 3 (b) is HE enhancement, fig. 3 (c) is APHE enhancement, fig. 3 (d) is CLAHE enhancement, fig. 3 (e) is APHE + EM, and fig. 3 (f) is the algorithm of the present invention;
fig. 4 (a) is an original, fig. 4 (b) is HE enhancement, fig. 4 (c) is APHE enhancement, fig. 4 (d) is CLAHE enhancement, fig. 4 (e) is APHE + EM, and fig. 4 (f) is the algorithm of the present invention;
fig. 5 (a) shows original, fig. 5 (b) shows HE enhancement, fig. 5 (c) shows APHE enhancement, fig. 5 (d) shows CLAHE enhancement, fig. 5 (e) shows APHE + EM, and fig. 5 (f) shows the algorithm of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a method for enhancing an image of a refrigeration type infrared sensor, which is implemented according to the following steps:
step 1, counting a histogram of the whole image;
step 2, correcting a block distribution histogram;
step 3, adaptive plateau histogram correction;
step 4, image equalization;
step 5, enhancing image details.
Wherein, each step is implemented as follows:
the step 1 is as follows:
step 1.1, define x as the abscissa variable of the image, ranging from 1 to N, where N is the maximum abscissa of the image; define y as the ordinate variable, ranging from 1 to M, where M is the maximum ordinate; define image as the original image, with image(x, y) denoting the pixel value at abscissa x and ordinate y, taking values from 0 to 2^PBW - 1, where PBW is the pixel bit width; define pix as a pixel-value variable ranging from 0 to 2^PBW - 1; define F(pix) as the number of occurrences of pixel value pix in the original image;
step 1.2, let x = 1 and y = 1, and initialize F(pix) = 0 for every pix in the range 0 to 2^PBW - 1;
step 1.3, execute F(image(x, y)) = F(image(x, y)) + 1;
step 1.4, judge whether y equals M; if so, set y = 1 and jump to step 1.5; otherwise set y = y + 1 and jump to step 1.3;
step 1.5, judge whether x equals N; if so, jump to step 1.6; otherwise set x = x + 1 and jump to step 1.3;
step 1.6, the F(pix) statistics are complete; proceed to the next operation.
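As an illustration only (not part of the patent text), steps 1.1 to 1.6 amount to the following Python/NumPy sketch; the function name and the zero-based indexing are choices of this annotation:

```python
import numpy as np

def count_histogram(image: np.ndarray, pbw: int) -> np.ndarray:
    """Step 1: count F(pix), the number of occurrences of every pixel
    value 0 .. 2**pbw - 1 (zero-based loops replace the patent's
    1-based x, y coordinates)."""
    F = np.zeros(2 ** pbw, dtype=np.int64)
    for x in range(image.shape[0]):        # patent loop over x = 1..N
        for y in range(image.shape[1]):    # patent loop over y = 1..M
            F[image[x, y]] += 1            # step 1.3
    return F
```

In practice `np.bincount(image.ravel(), minlength=2 ** pbw)` computes the same F in a single call.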
The step 2 is as follows:
step 2.1, partition the image:
divide the original image into equal blocks, where Wblock is the number of divisions along the ordinate and Hblock the number of divisions along the abscissa; the original image is divided into Hblock × Wblock blocks, each of ordinate size m = M/Wblock and abscissa size n = N/Hblock;
step 2.2, count the block distribution:
step 2.2.1, define B(pix) as a block statistic giving the number of blocks in which pixel value pix appears, with pix ranging from 0 to 2^PBW - 1; initialize all B(pix) to 0; B(pix) takes values from 0 to Hblock × Wblock; define hblock as the abscissa block index, ranging from 1 to Hblock, and wblock as the ordinate block index, ranging from 1 to Wblock; define xblock as the pixel abscissa within a block, ranging from 1 to n, and yblock as the pixel ordinate within a block, ranging from 1 to m; jump to step 2.2.2;
step 2.2.2, let pix = 0, hblock = 1, wblock = 1, xblock = 1, yblock = 1; jump to step 2.2.3;
step 2.2.3, consult the F(pix) counted in step 1; if F(pix) > 0, jump to step 2.2.5; if F(pix) = 0, jump to step 2.2.4;
step 2.2.4, judge whether pix equals 2^PBW - 1; if so, jump to step 2.2.10; otherwise set pix = pix + 1 and jump to step 2.2.3;
step 2.2.5, take the block in row hblock and column wblock and read the pixel value image((hblock - 1) × n + xblock, (wblock - 1) × m + yblock) at abscissa xblock and ordinate yblock within that block; if this value equals pix, execute B(pix) = B(pix) + 1, xblock = 1, yblock = 1 and jump to step 2.2.8; otherwise jump to step 2.2.6;
step 2.2.6, judge whether yblock equals m; if so, set yblock = 1 and jump to step 2.2.7; otherwise set yblock = yblock + 1 and jump to step 2.2.5;
step 2.2.7, judge whether xblock equals n; if so, set xblock = 1 and jump to step 2.2.8; otherwise set xblock = xblock + 1 and jump to step 2.2.5;
step 2.2.8, judge whether wblock equals Wblock; if so, set wblock = 1 and jump to step 2.2.9; otherwise set wblock = wblock + 1 and jump to step 2.2.5;
step 2.2.9, judge whether hblock equals Hblock; if so, set hblock = 1 and jump to step 2.2.4; otherwise set hblock = hblock + 1 and jump to step 2.2.5;
step 2.2.10, block counting is complete; B(pix) is the counted number of blocks per pixel value;
step 2.3, correct the histogram of step 1:
generate a new histogram by the formula N(pix) = F(pix)/B(pix), where N(pix) is the corrected histogram, traversing all valid pixel values, i.e. pix ∈ [0, 2^PBW - 1] with B(pix) > 0.
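A hedged sketch of steps 2.1 to 2.3 follows; the patent's per-pixel scan with early exit (steps 2.2.2 to 2.2.9) is replaced here by `np.unique` over each block, which yields the same B(pix):

```python
import numpy as np

def block_corrected_histogram(image: np.ndarray, F: np.ndarray,
                              Hblock: int, Wblock: int) -> np.ndarray:
    """Steps 2.1-2.3: B(pix) counts the blocks in which pixel value pix
    appears at least once; the corrected histogram is N = F / B."""
    n = image.shape[0] // Hblock           # block size along the abscissa
    m = image.shape[1] // Wblock           # block size along the ordinate
    B = np.zeros_like(F)
    for hb in range(Hblock):
        for wb in range(Wblock):
            block = image[hb * n:(hb + 1) * n, wb * m:(wb + 1) * m]
            B[np.unique(block)] += 1       # each value counts once per block
    N = np.zeros(F.shape, dtype=np.float64)
    valid = (F > 0) & (B > 0)
    N[valid] = F[valid] / B[valid]         # step 2.3
    return N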
The step 3 is as follows:
step 3.1, calculate the number of distinct pixel values appearing in the original image:
step 3.1.1, define nz_pix as the number of pixel values with non-zero count, with initial value nz_pix = 0; let pix = 0;
step 3.1.2, judge whether F(pix) is greater than 0; if so, execute nz_pix = nz_pix + 1; jump to step 3.1.3;
step 3.1.3, judge whether pix equals 2^PBW - 1; if so, jump to step 3.1.4; otherwise set pix = pix + 1 and jump to step 3.1.2;
step 3.1.4, nz_pix is now the number of distinct pixel values appearing in the original image;
step 3.2, calculate the plateau value
L = (1/nz_pix) · Σ_{pix=0}^{2^PBW - 1} N(pix)
where L will be used in the histogram correction below;
step 3.3, correct the N(pix) generated in step 2.3 by topping the histogram with the plateau value L obtained in step 3.2, i.e. P(pix) = L if N(pix) > L, and P(pix) = N(pix) if N(pix) <= L, where P(pix) is the plateau-corrected histogram;
step 3.4, calculate the total count of the histogram, with SP denoting the sum of P(pix):
SP = Σ_{pix=0}^{2^PBW - 1} P(pix)
step 3.5, using the P(pix) generated in step 3.3 and the SP generated in step 3.4, calculate the cumulative probability function
CDF(pix) = (1/SP) · Σ_{k=0}^{pix} P(k)
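A sketch of step 3 under one stated assumption: the plateau formula of step 3.2 appears only as an image in the source, so the mean of N(pix) over the nz_pix occupied bins, a common choice in adaptive plateau equalization, is assumed here:

```python
import numpy as np

def plateau_cdf(N: np.ndarray, F: np.ndarray) -> np.ndarray:
    """Steps 3.1-3.5: plateau-correct the histogram and build the
    cumulative probability function CDF(pix)."""
    nz_pix = int(np.count_nonzero(F))  # step 3.1: occupied bins
    L = N.sum() / nz_pix               # step 3.2 (assumed form of L)
    P = np.minimum(N, L)               # step 3.3: plateau topping
    SP = P.sum()                       # step 3.4: total count
    return np.cumsum(P) / SP           # step 3.5: CDF(pix)
```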
The step 4 is as follows:
step 4.1, define image_eq as the equalized image and image_eq(x, y) as its pixel value at abscissa x and ordinate y; define ROUND(·) as the rounding operation; let x = 1 and y = 1;
step 4.2, calculate image_eq(x, y) = ROUND(CDF(image(x, y)) × (2^PBW - 1));
step 4.3, judge whether y equals M; if so, set y = 1 and jump to step 4.4; otherwise set y = y + 1 and jump to step 4.2;
step 4.4, judge whether x equals N; if so, jump to step 4.5; otherwise set x = x + 1 and jump to step 4.2;
step 4.5, the calculation is complete; the newly generated image_eq is the equalized image.
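Step 4 reduces to a single lookup through the CDF; a minimal sketch in which NumPy fancy indexing replaces the explicit x/y loop:

```python
import numpy as np

def equalize(image: np.ndarray, CDF: np.ndarray, pbw: int) -> np.ndarray:
    """Steps 4.1-4.5: image_eq(x, y) = ROUND(CDF(image(x, y)) * (2**pbw - 1))."""
    return np.round(CDF[image] * (2 ** pbw - 1)).astype(np.int64)
```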
The step 5 is as follows:
step 5.1, first generate the Gaussian spatial confidence template:
the window size of the Gaussian spatial confidence template is W × W, where W is the maximum of its horizontal and vertical coordinates; the template is generated by the formula
Gs(x, y) = exp(-((x - x')² + (y - y')²) / (2σ_S²))
where (x', y') is the pixel coordinate of the template centre, with x' = y' = (W + 1)/2; (x, y) is a coordinate position around (x', y'), with x and y each ranging from 1 to W; σ_S is the distance standard deviation, generally taken as σ_S = W/2; and Gs(x, y) is the template coefficient at abscissa x and ordinate y;
step 5.2, generate the low-pass spatial-domain template:
first set the window size of the low-pass frequency-domain template to W × W, where W is the maximum of its horizontal and vertical coordinates, and let FLpass(x, y) be the coefficient of the low-pass frequency-domain template at abscissa x and ordinate y. The template takes the values FLpass(x, y) = 1 when x, y ∈ {1, 2, W - 1, W}, and FLpass(x, y) = 0 otherwise, so that only the direct-current and low-frequency components are retained; an inverse Fourier transform of this template then yields the low-pass spatial-domain template KLpass(x, y), where KLpass(x, y) is the coefficient at abscissa x and ordinate y;
step 5.3, point-multiply the Gaussian spatial confidence template Gs(x, y) generated in step 5.1 with the low-pass spatial-domain template KLpass(x, y) generated in step 5.2 to obtain the layered filtering template GL: GL(x, y) = Gs(x, y) · KLpass(x, y), where GL(x, y) is the coefficient at abscissa x and ordinate y;
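A sketch of steps 5.1 to 5.3. The passband set {1, 2, W-1, W} is an assumed reading of the garbled range in the source text, and the zero-based centre (W-1)/2 corresponds to the patent's 1-based (W+1)/2:

```python
import numpy as np

def layered_filter_template(W: int, sigma_s: float | None = None) -> np.ndarray:
    """Steps 5.1-5.3: Gaussian spatial confidence template Gs, low-pass
    spatial template KLpass (inverse FFT of a frequency mask), and their
    pointwise product GL."""
    if sigma_s is None:
        sigma_s = W / 2.0                          # patent default sigma_S = W/2
    c = (W - 1) / 2.0                              # template centre, zero-based
    x, y = np.meshgrid(np.arange(W), np.arange(W), indexing="ij")
    Gs = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma_s ** 2))

    low = np.zeros(W, dtype=bool)
    low[[0, 1, W - 2, W - 1]] = True               # DC and lowest frequencies
    FLpass = np.outer(low, low).astype(float)      # FLpass = 1 on the passband
    KLpass = np.fft.ifft2(FLpass).real             # step 5.2: inverse FFT

    return Gs * KLpass                             # step 5.3: GL = Gs . KLpass
```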
step 5.4, filter and layer the equalized image_eq generated in step 4; the filtering is
image_base(x, y) = Σ_{i=1}^{W} Σ_{j=1}^{W} GL(i, j) · image_eq(x + i - x', y + j - y') / Σ_{i=1}^{W} Σ_{j=1}^{W} GL(i, j)
where image_eq(x, y) is the pixel value at abscissa x and ordinate y of the equalized image generated in step 4, image_base is the filtered base-layer image with pixel values image_base(x, y), and image_detail is the detail image, calculated as image_detail(x, y) = image_eq(x, y) - image_base(x, y);
step 5.5, amplify the detail image image_detail to generate the final enhanced image image_enhance(x, y):
image_enhance(x, y) = image_base(x, y) + K × image_detail(x, y)
where K is the detail gain coefficient, a user-adjustable variable that controls the detail-enhancement amplitude.
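A sketch of steps 5.4 and 5.5. The exact filtering formula is rendered only as an image in the source; a sliding-window correlation with the GL template, normalized by the sum of its coefficients, is assumed here:

```python
import numpy as np

def enhance(image_eq: np.ndarray, GL: np.ndarray, K: float) -> np.ndarray:
    """Steps 5.4-5.5: split the equalized image into base and detail
    layers with the GL template, then amplify the detail layer by K."""
    W = GL.shape[0]
    pad = W // 2
    padded = np.pad(image_eq.astype(float), pad, mode="edge")
    base = np.zeros(image_eq.shape, dtype=float)
    norm = GL.sum()                                  # normalization weight
    for x in range(image_eq.shape[0]):
        for y in range(image_eq.shape[1]):
            window = padded[x:x + W, y:y + W]
            base[x, y] = (GL * window).sum() / norm  # step 5.4: image_base
    detail = image_eq - base                         # image_detail
    return base + K * detail                         # step 5.5: image_enhance
```

In a deployed implementation the double loop would normally be replaced by a single FFT-based or library convolution; the loop form is kept here to mirror the patent's per-pixel description.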
The method of the invention combines a pre-computed spatial Gaussian distribution template and a pre-computed inverse-Fourier-transform filtering characteristic template into a single filtering template, and separates the image background from the details by a filtering-based layering method.
To test the performance of the method, test image samples of 5 scenes were collected from a refrigeration type image sensor and from common picture material, and the method was compared with the methods above on all 5 scenes, both by intuitive visual impression and by quantitative analysis. The quantitative comparison measures the enhancement effect with two evaluation indexes, the enhancement measure (EME) and the average gradient (AVG).
EME is a common evaluation index for enhancement. It estimates the approximate contrast of an image by a blocking method; the larger the value, the more the image accords visually with Weber's law. It is defined as

EME = (1/(K1·K2)) · Σ_{k=1}^{K1} Σ_{l=1}^{K2} 20·ln( I_max(k, l) / I_min(k, l) )

where, to measure image quality, the image is divided into K1 × K2 blocks, I_max(k, l) and I_min(k, l) are the maximum and minimum gray values of block (k, l), and the logarithmic terms are summed and averaged over the blocks to obtain the measure.
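A compact Python sketch of this measure; the epsilon guard against zero-valued blocks is an addition of this note:

```python
import numpy as np

def eme(image: np.ndarray, K1: int, K2: int, eps: float = 1e-6) -> float:
    """EME: divide the image into K1 x K2 blocks and average
    20*ln(max/min) over the blocks."""
    bh = image.shape[0] // K1
    bw = image.shape[1] // K2
    total = 0.0
    for k in range(K1):
        for l in range(K2):
            block = image[k * bh:(k + 1) * bh, l * bw:(l + 1) * bw].astype(float)
            total += 20.0 * np.log((block.max() + eps) / (block.min() + eps))
    return total / (K1 * K2)
```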
AVG reflects the contrast and texture variation of the fine details in an image, and hence its sharpness; the larger the value, the clearer the image. It is calculated as

AVG = (1/((M - 1)(N - 1))) · Σ_{x=1}^{M-1} Σ_{y=1}^{N-1} sqrt( ((∂f/∂x)² + (∂f/∂y)²) / 2 )

where ∂f/∂x and ∂f/∂y denote the gradients of a pixel in the horizontal and vertical directions, and M and N are the numbers of rows and columns of the image.
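A matching sketch of the average gradient; the forward-difference approximation of the two gradients is an assumption of this note:

```python
import numpy as np

def avg_gradient(image: np.ndarray) -> float:
    """Average gradient: RMS of the horizontal and vertical first
    differences, averaged over the (M-1) x (N-1) interior."""
    f = image.astype(float)
    dx = f[1:, :-1] - f[:-1, :-1]   # gradient along the horizontal direction
    dy = f[:-1, 1:] - f[:-1, :-1]   # gradient along the vertical direction
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```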
1. Comparison of enhancement effects on a smoke-cloud image with a monotonous background is shown in fig. 1 (a) to 1 (f), in which fig. 1 (a) is the original image, fig. 1 (b) is HE enhancement, fig. 1 (c) is APHE enhancement, fig. 1 (d) is CLAHE enhancement, fig. 1 (e) is APHE + EM, and fig. 1 (f) is the algorithm of the present invention.
TABLE 1. Smoke-cloud image EME and AVG comparison

        ORIGINAL   HE       APHE     CLAHE    APHE+EM   Invention
EME     0.515      14.67    11.27    44.91    42.93     38.22
AVG     17.28      2221.5   505.00   7502.5   1551.5    2041.8
1. First, compare the original image, fig. 1 (a), with the algorithm of the invention, fig. 1 (f):
by the EME and AVG indexes, contrast and detail are enhanced by dozens of times. Visually, the original image is very blurry with poor contrast, while the image processed by the algorithm of the invention has rich, clear detail and markedly enhanced contrast.
2. Compare HE enhancement, fig. 1 (b), with the algorithm of the invention, fig. 1 (f):
the EME index shows clearly enhanced contrast, but visual comparison reveals that the HE algorithm drowns details and over-enhances the background: image detail is almost completely lost and a large amount of background noise is amplified, so the quality of the enhanced image is poor. This is also why its AVG index is high (the "details" it measures are noise we do not want to see). The algorithm of the invention amplifies a large amount of smoke-cloud detail without amplifying background noise, and the image is clear and fine.
3. Compare APHE enhancement, fig. 1 (c), with the algorithm of the invention, fig. 1 (f):
by the EME and AVG indexes, the contrast and detail rendition of the algorithm of the invention are clearly several times higher than APHE's, and visually the APHE result has mediocre contrast and blurred details.
4. Compare CLAHE enhancement, fig. 1 (d), with the algorithm of the invention, fig. 1 (f):
comparing the two images visually, the CLAHE algorithm over-enhances the background: a large amount of background noise is amplified and the overall brightness relations of the image are lost, so it can no longer be seen that the smoke cloud is clearly hotter than the background sky, and the enhancement effect is poor.
5. Compare APHE + EM enhancement, fig. 1 (e), with the algorithm of the invention, fig. 1 (f):
the AVG index shows that the detail information of the algorithm of the invention is clearly higher than that of APHE + EM, and visually the image is clear with fine detail; the overall impression is distinctly better than APHE + EM.
2. Comparison of enhancement effects on a background-rich indoor image is shown in fig. 2 (a) to 2 (f), where fig. 2 (a) is the original, fig. 2 (b) is HE enhancement, fig. 2 (c) is APHE enhancement, fig. 2 (d) is CLAHE enhancement, fig. 2 (e) is APHE + EM, and fig. 2 (f) is the algorithm of the present invention.
TABLE 2. Indoor image EME and AVG comparison

        ORIGINAL   HE       APHE     CLAHE    APHE+EM   Invention
EME     0.65       11.02    8.41     39.58    14.34     47.29
AVG     49.48      1282.8   849.51   4318.8   1530      1928.5
1. Compare the original image, fig. 2 (a), with the algorithm of the invention, fig. 2 (f):
by the EME and AVG indexes, contrast and detail rise by dozens of times over the original. Visually, the original image has extremely poor contrast and is blurry, and it is difficult to distinguish the individual objects in the indoor scene. After enhancement by the algorithm of the invention the contrast is markedly stronger: every indoor object can be clearly distinguished, down to the detail lines of the objects and even small screws; the image has strong overall contrast and clear detail.
2. Compare the HE enhancement, fig. 2 (b), with the algorithm of the invention, fig. 2 (f):
by the EME index, the contrast of the algorithm of the invention is 4 times that of HE, and its AVG index is also clearly higher. Visually, HE plainly submerges many image details. For example, at the upper left of the image is the hot-air outlet of an air conditioner fitted with an air-guide cloth; the cloth sways in the wind, so the hot air heats it unevenly and forms hot and cold stripes, yet HE submerges all detail of the air-guide cloth, which can hardly be made out at all. Small items in the scene, such as small parts and screws, also cannot be distinguished.
3. Compare the APHE enhancement, fig. 2 (c), with the algorithm of the invention, fig. 2 (f):
by the EME and AVG indexes, the contrast and detail rendition are clearly several times better than APHE's. Visually, APHE discards a large number of details: the air-guide cloth loses its light-and-shade variation and objects in the scene look somewhat blurred. With the algorithm of the invention the light-and-shade variation of the air-guide cloth is distinguishable, objects in the scene are clear and fine, and the overall impression is distinctly superior.
4. Compare the CLAHE enhancement, fig. 2 (d), with the algorithm of the invention, fig. 2 (f):
visual comparison shows that CLAHE completely loses the overall brightness information of the image: although it displays many details, the fact that the hot-air outlet is clearly hotter than the other objects in the scene disappears, which hinders the detection of high-temperature anomalies in infrared images. The algorithm of the invention amplifies many image details without losing the overall brightness information, and by the EME index the whole-image contrast is also higher than CLAHE's.
5. Compare the APHE + EM enhancement, fig. 2 (e), with the algorithm of the invention, fig. 2 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior to APHE + EM. Visually, APHE + EM loses the light-and-shade information of the air-guide cloth; its detail rendition of the other objects in the scene is better, but the difference from the algorithm of the invention is still evident, whose details are finer and clearer.
3. Comparison of enhancement effects on a background-rich outdoor image is shown in fig. 3 (a) to 3 (f), where fig. 3 (a) is the original, fig. 3 (b) is HE enhancement, fig. 3 (c) is APHE enhancement, fig. 3 (d) is CLAHE enhancement, fig. 3 (e) is APHE + EM, and fig. 3 (f) is the algorithm of the present invention.
TABLE 3. Background-rich outdoor image EME and AVG comparison
(the table values are rendered only as an image in the source and are not recoverable as text)
1. Compare the original image, fig. 3 (a), with the algorithm of the invention, fig. 3 (f):
by the EME and AVG indexes, contrast and detail rendition increase by dozens of times after enhancement. Visually, the original image has poor contrast, is blurry, and the details of the buildings cannot be made out; after enhancement by the method of the invention the image has strong overall contrast and clear, fine detail.
2. Compare the HE enhancement, fig. 3 (b), with the algorithm of the invention, fig. 3 (f):
by the EME and AVG indexes, the contrast and detail rendition of the algorithm of the invention are several times higher than HE's. Visually, the HE-enhanced image over-stretches the difference between the buildings and the background, details are largely submerged, and the image quality is poor.
3. Compare the APHE enhancement, fig. 3 (c), with the algorithm of the invention, fig. 3 (f):
by the EME and AVG indexes, the algorithm of the invention performs several times better in contrast and detail rendition. Visually, the APHE-enhanced image has blurred edges and excessively harsh building contrast, whereas the algorithm of the invention delivers strong contrast without losing information, with clear and fine detail.
4. Compare the CLAHE enhancement, fig. 3 (d), with the algorithm of the invention, fig. 3 (f):
by the EME index, the overall image contrast is clearly better than CLAHE's. Visually, CLAHE loses the overall information of the image, so the temperature difference between the buildings and the sky can no longer be distinguished; the algorithm of the invention amplifies image detail while retaining the overall light-and-shade information, giving a strong, clear and fine result.
5. Compare the APHE + EM enhancement, fig. 3 (e), with the algorithm of the invention, fig. 3 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior to APHE + EM. Visually, APHE + EM renders detail fairly well, but the difference in detail rendition compared with the algorithm of the invention is quite evident.
4. Comparison of enhancement effects on an outdoor image with a plain background is shown in fig. 4 (a) to 4 (f), where fig. 4 (a) is the original, fig. 4 (b) is HE enhancement, fig. 4 (c) is APHE enhancement, fig. 4 (d) is CLAHE enhancement, fig. 4 (e) is APHE + EM, and fig. 4 (f) is the algorithm of the present invention.
TABLE 4. Plain-background outdoor image EME and AVG comparison

        ORIGINAL   HE       APHE     CLAHE    APHE+EM   Invention
EME     0.89       6.16     6.67     27.93    11.87     35.42
AVG     185.5      477.58   513.65   5503.1   1165.2    2519.2
1. Compare the original image, fig. 4 (a), with the algorithm of the invention, fig. 4 (f):
by the EME and AVG indexes, contrast and detail rendition increase by dozens of times after enhancement. The original looks dim, with a fog-like veil; visually its details are indistinct, and the telegraph poles, trees and buildings in the scene are almost indistinguishable. After enhancement by the algorithm of the invention the foggy appearance disappears, distant telegraph poles, trees and buildings can be clearly distinguished, and the details are fine and soft.
2. Compare the HE enhancement, fig. 4 (b), with the algorithm of the invention, fig. 4 (f):
by the EME and AVG indexes, the algorithm of the invention performs several times better on both. Visually, the HE result is still hazy, with halo artifacts, harsh transitions between hot and cold, and an over-enhanced background.
3. Compare the APHE enhancement, fig. 4 (c), with the algorithm of the invention, fig. 4 (f):
by the EME and AVG indexes, the algorithm of the invention performs several times better on both. Visually, APHE defogs well but still shows an obvious halo, and although the scenery can be distinguished the image is slightly blurred; the algorithm of the invention shows no halo, and the image is clear and bright with distinct detail.
4. Compare the CLAHE enhancement, fig. 4 (d), with the algorithm of the invention, fig. 4 (f):
by the EME index, the algorithm of the invention is clearly superior in whole-image contrast. Visually, CLAHE loses the overall information of the image: for the sun, sky, trees and buildings one cannot tell which is hotter and which colder, so CLAHE is nearly unusable wherever thermal analysis is sensitive.
5. Compare the APHE + EM enhancement, fig. 4 (e), with the algorithm of the invention, fig. 4 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior on both. Visually, the APHE + EM result is quite good: soft transitions, clear details, almost no halo, and good defogging. The algorithm of the invention nevertheless shows an even better processing ability, with clearer and finer details.
5. Comparison of enhancement effects on an artificially created image with a strong background and weak details is shown in fig. 5 (a) to 5 (f); fig. 5 (a) is the original, fig. 5 (b) is HE enhancement, fig. 5 (c) is APHE enhancement, fig. 5 (d) is CLAHE enhancement, fig. 5 (e) is APHE + EM, and fig. 5 (f) is the algorithm of the present invention.
TABLE 5. Artificial image EME and AVG comparison

        ORIGINAL   HE       APHE     CLAHE    APHE+EM   Invention
EME     4.24       4.61     4.71     6.71     4.82      7.54
AVG     207.58     173.11   362.96   521.61   429.27    1141.3
To further verify the scene adaptability of the algorithm of the invention, an image was artificially created whose background is very large (background pixels occupy almost 94% of the image) and whose details are very weak, the detail pixels spanning only 32 gray levels.
1. Compare the original image, fig. 5 (a), with the algorithm of the invention, fig. 5 (f):
by the EME and AVG indexes, contrast and detail rendition are clearly enhanced after processing. Visually, the upper half of the original image can hardly be seen at all, while after processing by the algorithm of the invention the picture hidden in the black background becomes visible.
2. Compare the HE enhancement, fig. 5 (b), with the algorithm of the invention, fig. 5 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior to HE. Visually, the HE-enhanced image is blurred and its details cannot be made out, whereas after processing by the algorithm of the invention the distant mountain, the nearby road, the trees in the scenery and the foreground figures are all clearly visible.
3. Compare the APHE enhancement, fig. 5 (c), with the algorithm of the invention, fig. 5 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior to APHE. Visually, although APHE recovers the picture hidden in the black background, the overall contrast is poor and the details are blurred; the algorithm of the invention gives clearly better overall contrast with distinct details.
4. Compare the CLAHE enhancement, fig. 5 (d), with the algorithm of the invention, fig. 5 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior to CLAHE. Visually, the CLAHE-processed image shows almost no brightness difference between the upper and lower halves, whereas in reality the upper half is hidden in a black background and should appear darker; CLAHE thus inverts the expected relation, rendering the lower half darker and the upper half brighter.
5. Compare the APHE + EM enhancement, fig. 5 (e), with the algorithm of the invention, fig. 5 (f):
by the EME and AVG indexes, the algorithm of the invention is clearly superior to APHE + EM. Visually, the APHE + EM result has noticeably weak contrast; its details are visible but still slightly blurred. The image processed by the algorithm of the invention has strong contrast and fine, clear details: the distant mountain, the nearby road, the trees in the scenery and the foreground figures can all be seen clearly and recognized easily.
Taking the five scenes together: the HE algorithm loses a large amount of image detail in almost every scene; the APHE algorithm retains many details but improves contrast only weakly and performs well only in the background-rich outdoor scene; the CLAHE algorithm extracts details strongly but loses the overall light-and-shade information in almost every scene, and its detail-extraction ability drops off sharply on the artificial image; the APHE + EM algorithm improves details considerably but cannot overcome APHE's poor contrast. Each of these algorithms has advantages in certain scenes, but their performance fluctuates widely when scenes change and their visual impression suffers. The algorithm of the invention performs excellently in every scene, fluctuates little across scene changes, adapts strongly to different scenes, renders details clearly, and is simple enough to be implemented easily on resource-constrained embedded devices.

Claims (1)

1. A method for enhancing an image of a refrigeration type infrared sensor is characterized by comprising the following steps:
step 1, counting a histogram of the whole image;
the step 1 is specifically as follows:
step 1.1, define x as the abscissa variable of the image, ranging from 1 to N, where N is the maximum abscissa of the image; define y as the ordinate variable, ranging from 1 to M, where M is the maximum ordinate; define image as the original image, with image(x, y) denoting the pixel value at abscissa x and ordinate y, taking values from 0 to 2^PBW - 1, where PBW is the pixel bit width; define pix as a pixel-value variable ranging from 0 to 2^PBW - 1; define F(pix) as the number of occurrences of pixel value pix in the original image;
step 1.2, let x = 1 and y = 1, and initialize F(pix) = 0 for every pix in the range 0 to 2^PBW - 1;
step 1.3, execute F(image(x, y)) = F(image(x, y)) + 1;
step 1.4, judge whether y equals M; if so, set y = 1 and jump to step 1.5; otherwise set y = y + 1 and jump to step 1.3;
step 1.5, judge whether x equals N; if so, jump to step 1.6; otherwise set x = x + 1 and jump to step 1.3;
step 1.6, the F(pix) statistics are complete; proceed to the next operation;
step 2, correcting a block distribution histogram;
the step 2 is specifically as follows:
step 2.1, partition the image:
divide the original image into equal blocks, where Wblock is the number of divisions along the ordinate and Hblock the number of divisions along the abscissa; the original image is divided into Hblock × Wblock blocks, each of ordinate size m = M/Wblock and abscissa size n = N/Hblock;
step 2.2, count the block distribution:
step 2.2.1, define B(pix) as a block statistic giving the number of blocks in which pixel value pix appears, with pix ranging from 0 to 2^PBW - 1; initialize all B(pix) to 0; B(pix) takes values from 0 to Hblock × Wblock; define hblock as the abscissa block index, ranging from 1 to Hblock, and wblock as the ordinate block index, ranging from 1 to Wblock; define xblock as the pixel abscissa within a block, ranging from 1 to n, and yblock as the pixel ordinate within a block, ranging from 1 to m; jump to step 2.2.2;
step 2.2.2, let pix = 0, hblock = 1, wblock = 1, xblock = 1, yblock = 1; jump to step 2.2.3;
step 2.2.3, consult the F(pix) counted in step 1; if F(pix) > 0, jump to step 2.2.5; if F(pix) = 0, jump to step 2.2.4;
step 2.2.4, judge whether pix equals 2^PBW - 1; if so, jump to step 2.2.10; otherwise set pix = pix + 1 and jump to step 2.2.3;
step 2.2.5, take the block in row hblock and column wblock and read the pixel value image((hblock - 1) × n + xblock, (wblock - 1) × m + yblock) at abscissa xblock and ordinate yblock within that block; if this value equals pix, execute B(pix) = B(pix) + 1, xblock = 1, yblock = 1 and jump to step 2.2.8; otherwise jump to step 2.2.6;
step 2.2.6, judge whether yblock equals m; if so, set yblock = 1 and jump to step 2.2.7; otherwise set yblock = yblock + 1 and jump to step 2.2.5;
step 2.2.7, judge whether xblock equals n; if so, set xblock = 1 and jump to step 2.2.8; otherwise set xblock = xblock + 1 and jump to step 2.2.5;
step 2.2.8, judge whether wblock equals Wblock; if so, set wblock = 1 and jump to step 2.2.9; otherwise set wblock = wblock + 1 and jump to step 2.2.5;
step 2.2.9, judge whether hblock equals Hblock; if so, set hblock = 1 and jump to step 2.2.4; otherwise set hblock = hblock + 1 and jump to step 2.2.5;
step 2.2.10, block counting is complete; B(pix) is the counted number of blocks per pixel value;
step 2.3, correct the histogram of step 1:
generate a new histogram by the formula N(pix) = F(pix)/B(pix), where N(pix) is the corrected histogram, traversing all valid pixel values, i.e. pix ∈ [0, 2^PBW - 1] with B(pix) > 0;
Step 3, adaptive plateau histogram correction;
the step 3 is specifically as follows:
step 3.1, calculate the number of distinct pixel values appearing in the original image:
step 3.1.1, define nz_pix as the number of pixel values with non-zero count, with initial value nz_pix = 0; let pix = 0;
step 3.1.2, judge whether F(pix) is greater than 0; if so, execute nz_pix = nz_pix + 1; jump to step 3.1.3;
step 3.1.3, judge whether pix equals 2^PBW - 1; if so, jump to step 3.1.4; otherwise set pix = pix + 1 and jump to step 3.1.2;
step 3.1.4, nz_pix is now the number of distinct pixel values appearing in the original image;
step 3.2, calculate the plateau value
L = (1/nz_pix) · Σ_{pix=0}^{2^PBW - 1} N(pix)
where L will be used in the histogram correction below;
step 3.3, correct the N(pix) generated in step 2.3 by topping the histogram with the plateau value L obtained in step 3.2, i.e. P(pix) = L if N(pix) > L, and P(pix) = N(pix) if N(pix) <= L, where P(pix) is the plateau-corrected histogram;
step 3.4, calculate the total count of the histogram, with SP denoting the sum of P(pix):
SP = Σ_{pix=0}^{2^PBW - 1} P(pix)
step 3.5, using the P(pix) generated in step 3.3 and the SP generated in step 3.4, calculate the cumulative probability function
CDF(pix) = (1/SP) · Σ_{k=0}^{pix} P(k);
Step 4, image equalization;
the step 4 is specifically as follows:
step 4.1, defining image _ eq as an equalized image, image _ eq (x, y) as a pixel value of the equalized image with x as an abscissa and y as an ordinate, and defining ROUND (pix) as rounding operation on the pix value, wherein x =1, y =1;
step 4.2, calculate image _ eq (x, y) = ROUND (CDF (pix (x, y)) (2) PBW -1));
4.3, judging whether y is equal to M, if so, executing y =1, and skipping to the step 4.4; otherwise, executing y = y +1, and skipping to the step 4.2;
4.4, judging whether x is equal to N, and if x is equal to N, skipping to the step 4.5; otherwise, executing x = x +1, and skipping to the step 4.2;
step 4.5, completing calculation, wherein the newly generated image _ eq is the balanced image;
step 5, enhancing image details;
the step 5 is specifically as follows:
step 5.1, firstly generating a Gaussian space confidence coefficient template:
the window size of the Gaussian space confidence coefficient template is selected to be W x W, W is the maximum value of the horizontal and vertical coordinates of the Gaussian space confidence coefficient template, and the Gaussian space confidence coefficient template is generated according to a formula
Figure FDA0003990670210000044
Wherein (x ', y') is the pixel coordinate of the central point of the confidence coefficient template in Gaussian space, the value should be x '= y' = (W + 1)/2, and (x, y) is the coordinate position adjacent to (x ', y'), the value range is 1-W, sigma S Is the distance standard deviation, takes the value of sigma S = W/2, gs (x, y) is a coefficient with x as the abscissa and y as the ordinate in the Gaussian space confidence template;
step 5.2, generating a low-pass spatial-domain template:
firstly, setting the window size of the low-pass frequency-domain template as W × W, wherein W is the maximum value of the horizontal and vertical coordinates of the low-pass frequency-domain template, and FLpass(x, y) is the coefficient of the low-pass frequency-domain template with x as the abscissa and y as the ordinate; the template takes the values FLpass(x, y) = 1 when both x and y belong to {1, 2, W − 1, W}, and FLpass(x, y) = 0 otherwise, so that only the direct-current and low-frequency components are retained; inverse Fourier transform is then performed on the template to obtain the low-pass spatial-domain template KLpass(x, y), wherein KLpass(x, y) is the coefficient with x as the abscissa and y as the ordinate in the low-pass spatial-domain template;
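A sketch of step 5.2; the kept frequency index set {1, 2, W − 1, W} (1-based) is our reading of the garbled range in the source, so treat it as an assumption:

```python
def lowpass_spatial_template(W):
    """Sketch of step 5.2: build the frequency-domain mask FLpass that keeps
    only the DC / low-frequency bins, then inverse-FFT it to a spatial template."""
    FLpass = np.zeros((W, W))
    keep = np.array([1, 2, W - 1, W]) - 1        # 1-based {1, 2, W-1, W} -> 0-based
    FLpass[np.ix_(keep, keep)] = 1.0             # FLpass(x, y) = 1 on the kept set
    return np.real(np.fft.ifft2(FLpass))         # KLpass(x, y)
```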
step 5.3, multiplying the Gaussian spatial confidence template Gs(x, y) generated in step 5.1 element-wise with the low-pass spatial-domain template KLpass(x, y) generated in step 5.2 to generate the layered filtering template GL: GL(x, y) = Gs(x, y) · KLpass(x, y), wherein GL(x, y) is the coefficient with x as the abscissa and y as the ordinate in the layered filtering template;
step 5.4, filtering and layering the equalized image image_eq generated in step 4: the base-layer image is obtained by filtering image_eq with the layered filtering template GL (the filter formula appears only as an image in the source and is not reproduced here), wherein image_eq(x, y) is the pixel value with x as the abscissa and y as the ordinate in the equalized image generated in step 4, image_base is defined as the filtered base-layer image, and image_base(x, y) is its pixel value with x as the abscissa and y as the ordinate; image_detail is the detail image, and image_detail(x, y) is its pixel value with x as the abscissa and y as the ordinate, with the calculation formula image_detail(x, y) = image_eq(x, y) − image_base(x, y);
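A sketch of steps 5.3–5.4. Because the patent's filter formula is shown only as an image, the plain normalized 2-D correlation below (and the normalization of GL) is an assumption:

```python
from scipy.ndimage import correlate

def filter_and_layer(image_eq, Gs, KLpass):
    """Sketch of steps 5.3-5.4: form GL = Gs . KLpass, filter image_eq with
    it to get the base layer, and subtract to get the detail layer."""
    GL = Gs * KLpass                 # step 5.3: element-wise product
    GL = GL / GL.sum()               # assumed normalization of the template
    img = image_eq.astype(np.float64)
    image_base = correlate(img, GL, mode='reflect')
    image_detail = img - image_base  # step 5.4: detail layer
    return image_base, image_detail
```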
step 5.5, amplifying the detail image image_detail to generate the final enhanced image image_enhance(x, y):
image_enhance(x,y)=image_base(x,y)+K*image_detail(x,y);
wherein K is the detail gain coefficient, a variable manually adjusted by the user to control the amplitude of the detail enhancement.
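Putting the pieces together, a hypothetical end-to-end run of the sketches above; the parameter values W = 7, K = 2.0, PBW = 14 and the 32 × 32 block grid are arbitrary illustrative choices, and raw and L are assumed inputs:

```python
# 'raw' is the original infrared frame; 'L' comes from the unreproduced step 3.2 formula.
N = block_corrected_histogram(raw, PBW=14, block_h=32, block_w=32)
CDF = plateau_cdf(N, L)
image_eq = equalize(raw, CDF, PBW=14)
Gs = gaussian_confidence_template(7)
KLpass = lowpass_spatial_template(7)
image_base, image_detail = filter_and_layer(image_eq, Gs, KLpass)
image_enhance = image_base + 2.0 * image_detail   # step 5.5 with K = 2.0
```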
CN202010506218.XA 2020-06-05 2020-06-05 Method for enhancing image of refrigeration type infrared sensor Active CN111768355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010506218.XA CN111768355B (en) 2020-06-05 2020-06-05 Method for enhancing image of refrigeration type infrared sensor


Publications (2)

Publication Number Publication Date
CN111768355A CN111768355A (en) 2020-10-13
CN111768355B true CN111768355B (en) 2023-02-10

Family

ID=72720086

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010506218.XA Active CN111768355B (en) 2020-06-05 2020-06-05 Method for enhancing image of refrigeration type infrared sensor

Country Status (1)

Country Link
CN (1) CN111768355B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381736A (en) * 2020-11-17 2021-02-19 深圳市歌华智能科技有限公司 Image enhancement method based on scene block
CN114283156B (en) * 2021-12-02 2024-03-05 珠海移科智能科技有限公司 Method and device for removing document image color and handwriting

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980282A (en) * 2010-10-21 2011-02-23 电子科技大学 Infrared image dynamic detail enhancement method
CN103177429A (en) * 2013-04-16 2013-06-26 南京理工大学 FPGA (field programmable gate array)-based infrared image detail enhancing system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654438A (en) * 2015-12-27 2016-06-08 西南技术物理研究所 Gray scale image fitting enhancement method based on local histogram equalization




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant