CN115063331A - No-ghost multi-exposure image fusion algorithm based on multi-scale block LBP operator - Google Patents
- Publication number: CN115063331A
- Application number: CN202210666439.2A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/70 — Denoising; Smoothing
- G06T5/92 — Dynamic range modification of images or parts thereof based on global image properties
- G06T7/13 — Edge detection
- G06T7/136 — Segmentation; Edge detection involving thresholding
- G06V10/30 — Noise filtering
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; Connectivity analysis
- G06V10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
- G06T2207/10024 — Color image
- G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20048 — Transform domain processing
- G06T2207/20221 — Image fusion; Image merging
Abstract
The invention provides a ghost-free multi-exposure image fusion algorithm based on a multi-scale block LBP operator, in the technical field of image processing. For a multi-exposure image sequence captured in a dynamic scene, the multi-scale block LBP operator extracts the local texture of bright and dark areas and removes the ghosting caused by moving objects. On this basis, a new brightness-adaptive method is further proposed, giving the fused image better visibility. After the weight maps are constructed, the initial weight map, which contains discontinuities and noise, is refined with a fast guided filter, and the final fusion uses pyramid decomposition and reconstruction.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to a ghost-free multi-exposure image fusion algorithm based on a multi-scale block LBP operator.
Background
At present, multi-exposure image fusion methods fall into two categories: hardware-based and software-based. Hardware-based methods use dedicated high dynamic range devices to capture and display real scenes, but these devices are expensive and not universally available. Software-based methods, by comparison, are easy to implement, inexpensive, and suitable for ordinary cameras. Existing software-based solutions are mainly divided into two classes: HDR imaging and multi-exposure image fusion (MEF). HDR imaging estimates the camera response function (CRF) from a sequence of low-dynamic-range exposures to produce a high-dynamic-range image; the high-dynamic-range image is then compressed by tone mapping into a low-dynamic-range image that can be shown on ordinary display equipment. However, HDR imaging is complex, time-consuming, and ill-suited to ordinary cameras. Multi-exposure fusion needs no HDR image: it extracts the pixels with the most information, the best exposure, and the highest quality from the input low-dynamic-range images, fuses them, and directly yields an image displayable on ordinary equipment without further processing.
Compared with HDR imaging, multi-exposure image fusion has lower computational complexity and higher speed, making it the first choice for ordinary cameras. Existing multi-exposure fusion techniques, however, have several defects: the information of the spatial neighborhood of a pixel is not fully considered, so texture details, especially in bright and dark areas, are poorly preserved and halos appear at image edges; the fused image fails to retain the characteristic information of the source image sequence, distorting its colors; and fused images of dynamic scenes suffer from ghosting artifacts.
Disclosure of Invention
Technical problem to be solved
To address the defects of the prior art, the invention provides a ghost-free multi-exposure image fusion algorithm based on a multi-scale block LBP operator. It solves three problems of current algorithms: the information of the spatial neighborhood of a pixel is not fully considered, so texture details, especially in bright and dark areas, are poorly preserved and halos appear at image edges; the fused image fails to retain the characteristic information of the source image sequence, distorting its colors; and fused images of dynamic scenes suffer from ghosting artifacts.
(II) technical scheme
To achieve this purpose, the invention adopts the following technical scheme. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator comprises the following steps:
Weight map estimation
The different weight maps are combined by pixel-wise multiplication:

W̃_i(x,y) = W^c_i(x,y) × W^l_i(x,y) × W^s_i(x,y)

where W^c_i is the contrast weight map, W^l_i the luminance weight map, W^s_i the spatial consistency weight map, and W̃_i the combined initial weight map;

after the initial weight map is generated, it is normalized so that the sum of the weights at each pixel (x,y) equals 1:

W̃_i(x,y) ← W̃_i(x,y) / Σ_{j=1}^{K} W̃_j(x,y)
weight map refinement
The initial weight map W̃_i serves simultaneously as the guide image and the input image, and is refined with a fast guided filter:

Ŵ_i(x,y) = FGF_{r,ep,∈}(W̃_i, W̃_i)

where Ŵ_i denotes the refined weight map and FGF_{r,ep,∈}(I,W) the fast guided filtering operation; r, ep and ∈ are the parameters of the filter: r is the window radius, ep the regularization parameter, and ∈ the sub-sampling rate; I and W denote the guide image and the image to be filtered, respectively;

the refined weight map Ŵ_i is then normalized:

W_i(x,y) = (Ŵ_i(x,y) + ε) / Σ_{j=1}^{K} (Ŵ_j(x,y) + ε)

where ε is a positive number, W_i(x,y) is the normalized weight map, and K is the number of input images;
image fusion
Each source image is decomposed into a Laplacian pyramid and the final weight map into a Gaussian pyramid, and the two pyramids are fused level by level:

L{F}_l(x,y) = Σ_{i=1}^{K} G{W_i(x,y)}_l × L{I_i(x,y)}_l

where G{W_i(x,y)}_l denotes the Gaussian pyramid of the weight map, L{I_i(x,y)}_l the Laplacian pyramid of the input image, L{F}_l the fused Laplacian pyramid, and l the pyramid level; finally, L{F}_l is reconstructed to obtain the final fused image.
The normalized average brightness L(x,y) of pixel (x,y) over the multi-exposure sequence is computed as

L(x,y) = (1/K) Σ_{i=1}^{K} L_i(x,y)

where L_i(x,y) is the luminance value of the pixel at position (x,y) of the i-th input image and K is the number of input images;

the average brightness at each pixel (x,y), compared against the luminance threshold α, divides each grayscale image of the source sequence into a bright region B_i(x,y), a dark region D_i(x,y) and a normally exposed region N_i(x,y);
for the normally exposed region of a source image, the Scharr operator extracts texture and edges, and the local contrast at each pixel (x,y) is computed from the horizontal and vertical convolutions as

C_i(x,y) = sqrt(G_x² + G_y²)

where G_x and G_y represent the texture changes in the horizontal and vertical directions and N_i(x,y) is the normally exposed region of the i-th input image;

the texture-change weight of the normally exposed region is then computed from this convolution result, giving W^{c,N}_i(x,y), the texture-change weight map at pixel (x,y) of the exposure-normal region of the i-th image;
a multi-scale fast LBP operator extracts texture and edges in the bright and dark regions:

S_i(x,y) = MBLBP(IN_i(x,y))

where IN_i(x,y) denotes the bright and dark areas of the input image, MBLBP(·) is the multi-scale block LBP operator, and S_i(x,y) is the coded value at pixel (x,y), i.e. the LBP feature value, which reflects the texture information of the central pixel (x,y) and its neighborhood;
a fast Laplacian filter enhances the texture detail information in S_i(x,y) while retaining the edge information, yielding W^{c,IN}_i(x,y), the texture-change weight map of the bright and dark regions after enhancement;

the two weights are combined into the final contrast weight map:

W^c_i(x,y) = W^{c,N}_i(x,y) + W^{c,IN}_i(x,y)

where W^c_i is the contrast weight map, W^{c,N}_i the texture-change weight map at pixels of the exposure-normal region of the i-th image in the input sequence, and W^{c,IN}_i the texture-change weight map obtained by enhancing the bright and dark areas with the fast Laplacian filter.
A curve combining a Gaussian curve and a Cauchy curve constructs the brightness weight values of the red, green and blue channels: pixels of well-exposed areas are assigned a higher brightness weight, and pixels of bright and dark areas a lower one, giving R_l, G_l and B_l, the brightness weight values of the red, green and blue channels respectively; from these, the luminance weight map W^l_i(x,y) is extracted.
Preferably, the adaptive function η(r_l, R) is computed from l_{r,i}(x,y), the luminance value of the pixel in the red channel at position (x,y) of the i-th input image.
Preferably, the adaptive function η(g_l, G) is computed from l_{g,i}(x,y), the luminance value of the pixel in the green channel at position (x,y) of the i-th input image.
Preferably, the adaptive function η(b_l, B) is computed from l_{b,i}(x,y), the luminance value of the pixel in the blue channel at position (x,y) of the i-th input image.
first, the LBP feature is computed for every pixel of the source image sequence: T^r_i(x,y), T^g_i(x,y) and T^b_i(x,y) denote the features of the i-th input image in the R, G and B channels at pixel (x,y);

for any two different images I_i(x,y), I_j(x,y) (i ≠ j) of the sequence, the Euclidean distance between T_i(x,y) and T_j(x,y) at pixel (x,y) measures their local similarity in each of the R, G and B channels:

d^c_{i,j}(x,y) = ‖T^c_i(x,y) − T^c_j(x,y)‖,  c ∈ {r, g, b}
the local similarity between the i and j images is calculated as follows:
D_{i,j}(x,y) = d^r_{i,j}(x,y)² × d^g_{i,j}(x,y)² × d^b_{i,j}(x,y)²
then, a spatial consistency weight term of the image in the motion scene is constructed in the following way, specifically calculated as follows:
where the standard deviation σ_d controls the influence of the local similarity D_{i,j}(x,y) on the weight W^s_i(x,y);

finally, the weight map is refined by a morphological operator to remove the influence of noise:

W^s_i(x,y) ← (W^s_i ⊕ s_1) ⊖ s_2

where s_1 and s_2 are the structuring elements for dilation and erosion, ⊕ denotes the dilation operation, and ⊖ the erosion operation.
(III) advantageous effects
Compared with existing algorithms, the ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator has the following advantages: it fully retains the texture details of the image and enhances the detail of bright and dark areas; it preserves the characteristics of the source image sequence to the greatest extent without losing color information; and it can process image sequences shot in dynamic scenes without ghosting artifacts in the fused image. The algorithm contributes a region-division method based on the multi-scale block LBP for extracting image texture information; a new luminance-adaptive algorithm that gives the fused image better visibility; and, for image sequences in dynamic scenes, a multi-scale block LBP based spatial consistency weight term that effectively removes ghosting artifacts from the fused image.
Drawings
FIG. 1 is a diagram of the algorithmic process of the present invention;
FIG. 2 shows the input multi-exposure image sequences: (a) an image sequence in a static scene; (b) an image sequence in a dynamic scene;
FIG. 3 is a result of processing an image sequence by a prior art algorithm;
FIG. 4 is the result of the processing of the image sequence by the present algorithm;
FIG. 5 is a result of fusing images with a prior art algorithm;
FIG. 6 shows the result of the fusion of images by the present algorithm.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
1. Texture extraction
When fusing images of a static scene without moving objects, two image features should be considered: contrast and brightness. Local contrast preserves important details such as texture and edges. A multi-resolution fusion of the multi-exposure images retains sufficient detail in normally exposed areas; however, because the texture detail of bright and dark regions is affected by brightness, those regions lose part of their detail. To solve this problem, the invention proposes a region-divided texture-detail extraction method based on the multi-scale block LBP. The normalized average brightness is

L(x,y) = (1/K) Σ_{i=1}^{K} L_i(x,y)

where L(x,y) is the normalized average brightness of pixel (x,y) over the multi-exposure sequence and L_i(x,y) is the luminance value of the pixel at position (x,y) of the i-th input image. By computing the average luminance at each pixel (x,y), each grayscale image of the source sequence is divided into a bright region B_i(x,y), a dark region D_i(x,y) and a normally exposed region N_i(x,y), where α is the luminance threshold and K is the number of input images.
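A minimal NumPy sketch of this region division. The exact thresholding rule (average brightness below α → dark, above 1 − α → bright) and the default value of α are assumptions for illustration; the source states only that a luminance threshold α is used:

```python
import numpy as np

def divide_regions(images, alpha=0.15):
    """images: list of K grayscale images normalized to [0, 1].
    Returns boolean masks (dark, normal, bright) derived from the
    normalized average brightness L(x, y) = (1/K) * sum_i L_i(x, y)."""
    stack = np.asarray(images, dtype=np.float64)
    L = stack.mean(axis=0)        # normalized average brightness per pixel
    dark = L < alpha              # assumed rule: below alpha -> dark region
    bright = L > 1.0 - alpha      # assumed rule: above 1 - alpha -> bright
    normal = ~(dark | bright)     # remainder is the normally exposed region
    return dark, normal, bright
```

The masks can then select which texture extractor (Scharr or MB-LBP) runs on each pixel.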
In the normally exposed area of each image, the Scharr operator, which offers higher accuracy, extracts texture and edges; the local contrast of each pixel is computed from the horizontal and vertical convolution results G_x and G_y as

C_i(x,y) = sqrt(G_x² + G_y²)

where G_x and G_y represent the texture change in the horizontal and vertical directions. The texture-change weight of the normal region, i.e. the weight W^{c,N}_i(x,y) at pixel (x,y) of the exposure-normal region of the i-th image, is then computed from this convolution result.
In bright and dark areas in an image, a multi-scale fast LBP operator is adopted for texture and edge extraction, the operator has rotation invariance and gray invariance and strong robustness to illumination, and the texture detail information of the area can be well extracted, and the calculation is as follows:
S_i(x,y) = MBLBP(IN_i(x,y))
IN_i(x,y) denotes the bright and dark areas of the input image, and S_i(x,y) is the coded value of the pixel at (x,y), i.e. the LBP feature value, which reflects the texture information of the central pixel (x,y) and its neighborhood. A fast Laplacian filter then enhances the texture detail information in S_i(x,y) while retaining the information of the edge portions. Finally, the final contrast weight map is obtained by combining the two texture-change weights of the normally exposed region and of the bright and dark regions.
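A hedged sketch of a multi-scale block LBP code at one pixel: the 3×3 grid of s×s blocks around (x, y) is averaged, and each of the 8 neighboring block means is compared with the central block mean. The bit ordering and the ≥ comparison are conventional MB-LBP choices, not details taken from the source:

```python
import numpy as np

def mb_lbp(img, x, y, s=3):
    """MB-LBP code of pixel (x, y) with block size s (grayscale in [0, 1])."""
    def block_mean(cx, cy):
        h, w = img.shape
        r0, r1 = max(cx - s // 2, 0), min(cx + s // 2 + 1, h)
        c0, c1 = max(cy - s // 2, 0), min(cy + s // 2 + 1, w)
        return img[r0:r1, c0:c1].mean()

    center = block_mean(x, y)
    # 8 neighboring blocks, one block step away, clockwise from top-left
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s),
               (s, s), (s, 0), (s, -s), (0, -s)]
    code = 0
    for bit, (dx, dy) in enumerate(offsets):
        if block_mean(x + dx, y + dy) >= center:
            code |= 1 << bit
    return code
```

Running the operator at several block sizes s gives the multi-scale behavior; a flat patch encodes to all-ones, while a vertical gradient sets only the bits of the brighter neighbors.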
2. luminance extraction
When a picture is taken with an ordinary camera, some areas appear dark (underexposed) and some appear bright (overexposed). Both underexposure and overexposure cause serious loss of image information and degrade the visual quality of the image, so the following brightness extraction method is proposed:
The algorithm uses a curve combining a Gaussian curve and a Cauchy curve to construct the brightness weight values of the red, green and blue channels: pixels of well-exposed areas receive a higher brightness weight, and pixels of bright and dark areas a lower one.

Some pixels of the input sequence, however, lie in naturally bright or dark areas rather than being too bright or too dark through over- or underexposure. A brightness-adaptive function therefore adjusts the weight values of the pixels in bright and dark regions for the three RGB channels. Taking the red channel as an example, its adaptive function is η(r_l, R), where l_{r,i}(x,y) is the luminance value of the pixel in the red channel at position (x,y) of the i-th input image; the adaptive functions η(g_l, G) and η(b_l, B) of the green and blue channels are computed in the same way.
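A sketch of a brightness weight built from a Gaussian curve combined with a Cauchy (Lorentzian) curve, as the text describes. The 0.5 center, the spreads sigma and gamma, and the equal-mix combination are illustrative assumptions; the source does not print the exact curve parameters:

```python
import numpy as np

def luminance_weight(v, mu=0.5, sigma=0.2, gamma=0.2):
    """v: channel intensity in [0, 1]. Well-exposed values (near mu)
    receive a weight close to 1; very bright or dark values score low."""
    gauss = np.exp(-((v - mu) ** 2) / (2.0 * sigma ** 2))   # Gaussian curve
    cauchy = 1.0 / (1.0 + ((v - mu) / gamma) ** 2)          # Cauchy curve
    return 0.5 * (gauss + cauchy)  # assumed combination: simple average
```

Applied per channel, this realizes the stated behavior: a higher weight for well-exposed pixels and a lower weight for pixels in bright and dark areas.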
3. Moving object detection
When the input image sequence is captured in a dynamic scene, the effect of moving objects on the fused image must be taken into account, otherwise the final fused image will exhibit ghosting artifacts. To solve this problem, a method of constructing spatial consistency weights based on the multi-scale block LBP (MB-LBP) is proposed. First, the LBP feature is computed for every pixel of the source image sequence: T^r_i(x,y), T^g_i(x,y) and T^b_i(x,y) denote the features of the i-th input image in the R, G and B channels at pixel (x,y). For any two different images I_i(x,y), I_j(x,y) (i ≠ j) of the sequence, the Euclidean distance between T_i(x,y) and T_j(x,y) at pixel (x,y) measures their local similarity in each of the R, G and B channels:
the local similarity between the i and j images is calculated as follows:
D_{i,j}(x,y) = d^r_{i,j}(x,y)² × d^g_{i,j}(x,y)² × d^b_{i,j}(x,y)²
then, a spatial consistency weight term of the image in the motion scene is constructed in the following way, specifically calculated as follows:
where the standard deviation σ_d controls the influence of the local similarity D_{i,j}(x,y) on the weight and is set here to 0.05. The design idea is as follows: if pixel (x,y) of image I_i belongs to a motion region, the local similarity D_{i,j}(x,y) between I_i and every I_j (i ≠ j) at (x,y) increases, so the spatial consistency weight W^s_i(x,y) at that pixel decreases, and with it the weight value of image I_i at pixel (x,y).
Finally, the weight map is refined by a morphological operator to remove the influence of noise: s_1 and s_2 are the structuring elements for dilation and erosion, ⊕ denotes the dilation operation, and ⊖ the erosion operation.
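The step above can be sketched as follows. The Gaussian-shaped mapping from similarity to weight is an assumption consistent with the stated design (larger D_ij suppresses the weight, σ_d controls how strongly); the morphological cleanup is shown as a grayscale closing (dilation then erosion) with flat square structuring elements:

```python
import numpy as np

def spatial_consistency(D, sigma_d=0.05):
    """D: array of shape (K, K, H, W), D[i, j] = local similarity between
    images i and j (larger where a moving object causes disagreement).
    Assumed mapping: W_i = exp(-sum_{j != i} D_ij / (2 * sigma_d^2))."""
    K = D.shape[0]
    W = np.empty(D.shape[:1] + D.shape[2:])
    for i in range(K):
        others = [D[i, j] for j in range(K) if j != i]
        W[i] = np.exp(-np.sum(others, axis=0) / (2.0 * sigma_d ** 2))
    return W

def close_gray(w, size=3):
    """Morphological closing (dilation then erosion) with a flat square
    structuring element, suppressing isolated noise in the weight map."""
    pad = size // 2
    def dilate(a):
        p = np.pad(a, pad, mode="edge")
        return np.max(np.stack([p[r:r + a.shape[0], c:c + a.shape[1]]
                                for r in range(size) for c in range(size)]), axis=0)
    def erode(a):
        p = np.pad(a, pad, mode="edge")
        return np.min(np.stack([p[r:r + a.shape[0], c:c + a.shape[1]]
                                for r in range(size) for c in range(size)]), axis=0)
    return erode(dilate(w))
```

Where all images agree (D = 0) every weight stays at 1; a single-pixel drop in the weight map is filled back in by the closing.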
4. Weight map estimation
From the previous calculations, three image features are obtained: local contrast, luminance and spatial consistency. These weight terms are now combined into the initial weight map. So that the proposed method can extract the highest-quality regions, the different weight maps are combined by pixel-wise multiplication:

W̃_i(x,y) = W^c_i(x,y) × W^l_i(x,y) × W^s_i(x,y)

After the initial weight map is generated, it is normalized so that the sum of the weights at each pixel (x,y) equals 1, where ε is a small positive number that avoids a zero denominator:

W_i(x,y) = (W̃_i(x,y) + ε) / Σ_{j=1}^{K} (W̃_j(x,y) + ε)
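The combination and normalization sketched directly in NumPy: the three per-image weight maps are multiplied pixel-wise, then normalized so the K weights at every pixel sum to one; eps guards the zero-denominator case mentioned above (its exact placement is a common convention, assumed here):

```python
import numpy as np

def combine_and_normalize(contrast, luminance, spatial, eps=1e-12):
    """Each argument: array of shape (K, H, W).
    Returns weights of the same shape that sum to 1 at each pixel."""
    w = contrast * luminance * spatial        # pixel-wise multiplication
    w = w + eps                               # avoid an all-zero denominator
    return w / w.sum(axis=0, keepdims=True)   # per-pixel normalization
```
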
5. Weight map refinement
The initial weight map generally contains noise and discontinuities, so it must be refined before the final fusion; the algorithm refines it with a fast guided filter:

Ŵ_i(x,y) = FGF_{r,ep,∈}(W̃_i, W̃_i)

where Ŵ_i denotes the refined weight map and FGF_{r,ep,∈}(I,W) the fast guided filtering operation. r, ep and ∈ are the filter parameters: r is the window radius, ep the regularization parameter controlling the degree of smoothing, and ∈ the sub-sampling rate; I and W denote the guide image and the image to be filtered, and in this algorithm the weight map serves as both. Finally, the weight map is normalized to obtain the final weight map.
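A minimal self-guided filter sketch (guide = input), following the classic guided-filter equations with a cumulative-sum box filter. The subsampling step that makes FGF "fast" is omitted for clarity, so this illustrates only the refinement idea, not the exact filter of the source:

```python
import numpy as np

def box(a, r):
    """Mean filter of radius r via padded cumulative sums (O(1) per pixel)."""
    size = 2 * r + 1
    p = np.pad(a, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # prefix sums with zero border
    s = (c[size:, size:] - c[:-size, size:]
         - c[size:, :-size] + c[:-size, :-size])
    return s / (size * size)

def guided_refine(w, r=4, eps=1e-3):
    """Edge-preserving smoothing of weight map w, guided by itself."""
    mean = box(w, r)
    var = box(w * w, r) - mean ** 2
    a = var / (var + eps)          # near strong edges var >> eps, so a ~ 1
    b = (1.0 - a) * mean           # in flat/noisy areas a ~ 0 -> output ~ mean
    return box(a, r) * w + box(b, r)
```

On a flat weight map the filter is the identity; small-amplitude noise (variance well below eps) is smoothed away while larger structures survive.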
6. image fusion
In the algorithm, each source image is decomposed into a Laplacian pyramid and its weight map into a Gaussian pyramid, and the two pyramids are fused level by level:

L{F}_l(x,y) = Σ_{i=1}^{K} G{W_i(x,y)}_l × L{I_i(x,y)}_l

where G{W_i(x,y)}_l denotes the Gaussian pyramid of the weight map, L{I_i(x,y)}_l the Laplacian pyramid of the input image, L{F}_l the fused Laplacian pyramid, and l the pyramid level. Finally, L{F}_l is reconstructed to obtain the final fused image.
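A sketch of this pyramid fusion step. Simple 2×2 average-pool downsampling and pixel-duplication upsampling stand in for the usual 5-tap smoothing kernel (an assumption that keeps the sketch short and makes the collapse exactly invertible); the per-level weighted sum and the collapse follow the equation above:

```python
import numpy as np

def down(a):
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    return a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(a, shape):
    return np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def lap_pyramid(img, levels):
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        small = down(cur)
        pyr.append(cur - up(small, cur.shape))   # detail layer
        cur = small
    pyr.append(cur)                              # coarse residual
    return pyr

def gauss_pyramid(img, levels):
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(down(pyr[-1]))
    return pyr

def fuse(images, weights, levels=3):
    """images, weights: lists of K same-shape 2D arrays, weights sum to 1."""
    lps = [lap_pyramid(i, levels) for i in images]
    gps = [gauss_pyramid(w, levels) for w in weights]
    fused = [sum(gps[i][l] * lps[i][l] for i in range(len(images)))
             for l in range(levels)]             # weighted sum per level
    out = fused[-1]                              # collapse from the coarsest
    for l in range(levels - 2, -1, -1):
        out = up(out, fused[l].shape) + fused[l]
    return out
```

With a single image and an all-ones weight map, the pyramid collapse reproduces the input exactly, which is a quick correctness check for the decomposition and reconstruction.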
And (3) quality evaluation:
Q^{AB/F}: an objective quality evaluation index for fused images. It reflects how much visual information is transferred from the input images to the fusion, i.e. the degree to which edge detail is preserved; a higher value means the fused image retains more of the edge detail of the source sequence.
MEF-SSIM: measures the structural similarity between the input multi-exposure image sequence and the fused image. Its value ranges from 0 to 1; a higher value means higher structural similarity between the result and the source sequence, i.e. better image quality. The MEF-SSIM used here is a full-reference evaluation index.
TABLE 1 MEF-SSIM test results
TABLE 2 Q^{AB/F} test results
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
Claims (7)
1. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator is characterized by comprising the following steps of:
Weight map estimation
Different weight maps are combined by pixel multiplication, and the specific calculation is as follows:
W̃_i(x,y) = W^c_i(x,y) × W^l_i(x,y) × W^s_i(x,y)

where W^c_i is the contrast weight map, W^l_i the luminance weight map, W^s_i the spatial consistency weight map, and W̃_i the combined initial weight map;

after the initial weight map is generated, it is normalized so that the sum of the weights at each pixel (x,y) equals 1:

W̃_i(x,y) ← W̃_i(x,y) / Σ_{j=1}^{K} W̃_j(x,y)
weight map refinement
The initial weight map W_i^0(x,y) is used simultaneously as the guide image and the input image, and a fast guided filter is applied to refine it: W̃_i(x,y) = FGF_{r,ep,s}(W_i^0(x,y), W_i^0(x,y)),
where W̃_i(x,y) is the refined weight map, FGF_{r,ep,s}(I,W) denotes the fast guided filtering operation, r is the window radius of the filter, ep is its regularization parameter, s is the subsampling rate, and I and W denote the guide image and the image to be filtered, respectively;
the refined weight map W̃_i(x,y) is then normalized to obtain the normalized weight map, W_i(x,y) = (W̃_i(x,y) + ε) / Σ_{j=1}^{K} (W̃_j(x,y) + ε),
where ε is a small positive number, W_i(x,y) is the normalized weight map, K is the number of input images, and W̃_i(x,y) is the refined weight map;
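The fast guided filter FGF_{r,ep,s} is described in the literature as a guided filter whose linear coefficients are computed on an s-times subsampled grid. The sketch below (NumPy box filter via summed-area tables, nearest-neighbour upsampling of the coefficients) follows that recipe under those assumptions and is not the patent's exact implementation:

```python
import numpy as np

def _box(img, r):
    """Mean filter of window radius r via cumulative sums (edge-padded)."""
    p = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    w = 2 * r + 1
    s = c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]
    return s / (w * w)

def guided_filter(I, p, r=8, eps=1e-3, s=1):
    """Guided filter; the 'fast' variant computes the linear coefficients
    a, b on an s-times subsampled grid. s must divide the image size;
    s=1 gives the exact (non-subsampled) guided filter."""
    Is, ps = I[::s, ::s], p[::s, ::s]
    rs = max(r // s, 1)
    mI, mp = _box(Is, rs), _box(ps, rs)
    corrIp, corrII = _box(Is * ps, rs), _box(Is * Is, rs)
    a = (corrIp - mI * mp) / (corrII - mI * mI + eps)
    b = mp - a * mI
    ma = np.kron(_box(a, rs), np.ones((s, s)))  # nearest-neighbour upsample
    mb = np.kron(_box(b, rs), np.ones((s, s)))
    return ma * I + mb
```

Passing the same map as both guide and input, as the claim does, makes the filter edge-preserving with respect to the weight map's own structure.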
image fusion
Decomposing the source image into a Laplacian pyramid, decomposing the final weight map into a Gaussian pyramid, and fusing the Laplacian pyramid of the source image and the Gaussian pyramid of the corresponding weight map at each level respectively as follows:
L{F}_l = Σ_{i=1}^{K} G{W_i(x,y)}_l · L{I_i(x,y)}_l, where G{W_i(x,y)}_l denotes the l-th level of the Gaussian pyramid of the weight map, L{I_i(x,y)}_l the l-th level of the Laplacian pyramid of the input image, L{F}_l is the new fused Laplacian pyramid, and l is the pyramid level; finally, L{F}_l is reconstructed to obtain the final fused image.
2. The multi-scale block LBP operator ghost-free multi-exposure image fusion algorithm of claim 1, wherein extracting the contrast weight map W_i^C(x,y) comprises the following steps:
The normalized average luminance L(x,y) of pixel (x,y) over the multi-exposure sequence is computed as L(x,y) = (1/K) Σ_{i=1}^{K} L_i(x,y),
where L_i(x,y) is the luminance value of the pixel at position (x,y) in the i-th image of the input sequence and K is the number of input images;
the normally exposed, bright, and dark regions of each image are then divided according to the average luminance at each pixel (x,y), specifically calculated as follows:
where L(x,y) is the normalized average luminance at pixel (x,y); the average luminance at each pixel determines the bright region B_i(x,y), dark region D_i(x,y), and normally exposed region N_i(x,y) of each image in the source sequence; the comparison is performed on the grayscale image, α is the luminance threshold, and K is the number of input images;
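The threshold rule itself appears only as an equation image that did not survive extraction; the sketch below assumes the common convention dark: L < α, bright: L > 1 − α, normal otherwise:

```python
import numpy as np

def segment_regions(images, alpha=0.15):
    """Split the scene into bright/dark/normal masks from the normalized
    mean luminance of the sequence. The exact threshold rule is not given
    in the text; assumed here: dark where L < alpha, bright where
    L > 1 - alpha, normally exposed otherwise (alpha is the luminance
    threshold). `images` is a list of grayscale arrays in [0, 1]."""
    L = np.mean(np.stack(images), axis=0)  # normalized average luminance
    dark = L < alpha
    bright = L > 1.0 - alpha
    normal = ~(dark | bright)
    return bright, dark, normal
```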
for a normally exposed region in a source image, extracting the texture and the edge of the normally exposed region by adopting a Scharr operator, and calculating the local contrast of each pixel point (x, y) as follows:
where G_x and G_y represent the texture variation in the horizontal and vertical directions, respectively, and N_i(x,y) represents the normally exposed region of the i-th image in the input sequence;
then, calculating the texture change weight of the exposure normal area according to the convolution calculation result, and calculating as follows:
where W_i^N(x,y) is the texture change weight map at pixel (x,y) of the normally exposed region of the i-th image in the input sequence, and G_x, G_y represent the texture variation in the horizontal and vertical directions, respectively;
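The Scharr step can be sketched as follows; combining G_x and G_y into a single weight via the gradient magnitude √(G_x² + G_y²) is an assumption, since the corresponding equation image is missing:

```python
import numpy as np

SCHARR_X = np.array([[3, 0, -3], [10, 0, -10], [3, 0, -3]], dtype=np.float64)
SCHARR_Y = SCHARR_X.T

def _conv3(img, k):
    """3x3 correlation with edge padding (kernel flips do not affect the
    magnitude since the Scharr kernels are antisymmetric)."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def scharr_contrast(region):
    """Texture/edge response of the normally exposed region. The reduction
    of Gx, Gy to one weight is assumed to be the gradient magnitude."""
    gx = _conv3(region, SCHARR_X)
    gy = _conv3(region, SCHARR_Y)
    return np.sqrt(gx ** 2 + gy ** 2)
```

On a unit horizontal ramp the interior response is 32 (the Scharr column weights 3+10+3 times the central difference of 2), a handy check of the kernel orientation.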
textures and edges of the bright and dark regions are extracted with the multi-scale block LBP operator, calculated as follows:
S_i(x,y) = MBLBP(IN_i(x,y))
where IN_i(x,y) denotes the bright and dark regions of the input image, MBLBP(·) is the multi-scale block LBP operator, and S_i(x,y) is the encoded value at pixel (x,y);
a fast Laplacian filter is applied to S_i(x,y) to enhance its texture detail information while retaining the edge information, calculated as follows:
where W_i^{BD}(x,y) is the texture change weight map of the bright and dark regions after the texture detail information in S_i(x,y) has been enhanced by the fast Laplacian filter;
the two weights are combined to obtain the final contrast weight map, calculated as follows:
where W_i^C(x,y) is the contrast weight map, W_i^N(x,y) is the texture change weight map at pixel (x,y) of the normally exposed region of the i-th image in the input sequence, and W_i^{BD}(x,y) is the texture change weight map obtained by enhancing the bright and dark regions of the input sequence with the fast Laplacian filter.
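The MBLBP(·) encoding of S_i(x,y) = MBLBP(IN_i(x,y)) can be sketched as below with a single 3×3-block scale; the block sizes and scale combination used by the patent are not given in the text, and the subsequent fast-Laplacian enhancement is omitted:

```python
import numpy as np

def _block_mean(img, b):
    """Mean over a b x b window centred on each pixel (edge-padded)."""
    r = b // 2
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(b):
        for dx in range(b):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (b * b)

def mb_lbp(img, b=3):
    """Multi-scale Block LBP at one scale: the means of the 8 blocks of
    size b x b surrounding the centre block are thresholded against the
    centre block mean and packed into an 8-bit code per pixel."""
    m = _block_mean(img.astype(np.float64), b)
    mp = np.pad(m, b, mode="edge")
    H, W = img.shape
    code = np.zeros((H, W), dtype=np.uint16)
    offsets = [(-b, -b), (-b, 0), (-b, b), (0, b),
               (b, b), (b, 0), (b, -b), (0, -b)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = mp[b + dy:b + dy + H, b + dx:b + dx + W]
        code |= (neigh >= m).astype(np.uint16) << bit
    return code
```

Running the operator at several block sizes b and combining the codes would give the multi-scale behaviour the claim names.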
3. The multi-scale block LBP operator ghost-free multi-exposure image fusion algorithm of claim 1, wherein extracting the luminance weight map W_i^L(x,y) comprises the following steps:
A combination of a Gaussian curve and a Cauchy curve is used to construct the luminance weight values of the red, green, and blue channels: pixels in well-exposed areas are assigned a higher luminance weight, while pixels in the bright and dark areas of the image are assigned a lower one, calculated as follows:
where R_l is the luminance weight value of the red channel, G_l that of the green channel, and B_l that of the blue channel;
a luminance weight map is extracted, which is calculated as follows:
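The Gaussian and Cauchy curves and their combination appear only as images in the original, so the sketch below assumes both bells are centred on mid-grey and averaged, with the per-channel weights combined multiplicatively; the parameters sigma and gamma are illustrative, not the patent's values:

```python
import numpy as np

def luminance_weight(channel, sigma=0.2, gamma=0.2):
    """Well-exposedness weight for one colour channel with values in [0, 1].
    Assumption: average of a Gaussian and a Cauchy (Lorentzian) bell, both
    peaking at mid-grey 0.5, so well-exposed pixels score near 1 and
    under/over-exposed pixels score near 0."""
    gauss = np.exp(-((channel - 0.5) ** 2) / (2.0 * sigma ** 2))
    cauchy = 1.0 / (1.0 + ((channel - 0.5) / gamma) ** 2)
    return 0.5 * (gauss + cauchy)

def luminance_weight_map(r, g, b):
    # One weight per channel, combined multiplicatively (assumption)
    return luminance_weight(r) * luminance_weight(g) * luminance_weight(b)
```

A Cauchy term decays more slowly than a Gaussian in the tails, which keeps moderately mis-exposed pixels from being zeroed out entirely; that is a plausible motivation for combining the two curves.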
7. The multi-scale block LBP operator ghost-free multi-exposure image fusion algorithm of claim 1, wherein extracting the spatial consistency weight map W_i^S(x,y) comprises the following steps:
firstly, calculating LBP characteristics for each pixel in a source image sequence, wherein the specific algorithm is as follows:
where I_i^r(x,y), I_i^g(x,y), and I_i^b(x,y) are the pixel values of the i-th input image in the R, G, and B channels at pixel (x,y), respectively;
for any two different images I_i(x,y) and I_j(x,y) (i ≠ j) in the sequence, the Euclidean distance between T_i(x,y) and T_j(x,y) at pixel (x,y) is computed to measure the local similarity in each of the R, G, and B channels, specifically as follows:
the local similarity between images i and j is then calculated as:
D_{i,j}(x,y) = d^r_{i,j}(x,y)² × d^g_{i,j}(x,y)² × d^b_{i,j}(x,y)²
then, a spatial consistency weight term of the image in the motion scene is constructed in the following way, specifically calculated as follows:
where the standard deviation σ_d controls the influence of the local similarity D_{i,j}(x,y) on the weight;
and finally, the weight map is refined by a morphological operator (erosion and dilation) to remove the influence of noise.
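The spatial-consistency step can be sketched as below; using the absolute difference of per-pixel 3×3 LBP codes as the per-channel distance d^c_{i,j}, and a 3×3 morphological opening for the final refinement, are simplifications of the patent's procedure:

```python
import numpy as np

def lbp(channel):
    """Plain 3x3 LBP code per pixel (edge-padded)."""
    p = np.pad(channel, 1, mode="edge")
    H, W = channel.shape
    code = np.zeros((H, W), dtype=np.float64)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offs):
        code += (p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W] >= channel) * (1 << bit)
    return code

def spatial_consistency_weight(img_i, img_j, sigma_d=50.0):
    """Motion likelihood between two exposures (H, W, 3 arrays):
    D_{i,j} = (d_r)^2 * (d_g)^2 * (d_b)^2 over per-channel LBP codes,
    weight = exp(-D / (2 * sigma_d^2)); identical structure -> weight 1."""
    D = np.ones(img_i.shape[:2])
    for c in range(3):
        d = np.abs(lbp(img_i[..., c]) - lbp(img_j[..., c]))
        D *= d ** 2
    return np.exp(-D / (2.0 * sigma_d ** 2))

def morph_open(w, r=1):
    """Morphological opening (erosion then dilation) with a (2r+1)^2 window
    to suppress isolated noise in the weight map."""
    def filt(a, f):
        p = np.pad(a, r, mode="edge")
        stack = [p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                 for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
        return f(np.stack(stack), axis=0)
    return filt(filt(w, np.min), np.max)
```

Because LBP encodes only the sign of local intensity differences, the comparison is robust to the exposure change between frames, which is why it suits ghost detection in dynamic scenes.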
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210666439.2A CN115063331B (en) | 2022-06-14 | 2022-06-14 | Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115063331A true CN115063331A (en) | 2022-09-16 |
CN115063331B CN115063331B (en) | 2024-04-12 |