CN115063331A - Ghost-free multi-exposure image fusion algorithm based on multi-scale block LBP operator - Google Patents

Ghost-free multi-exposure image fusion algorithm based on multi-scale block LBP operator

Info

Publication number
CN115063331A
Authority
CN
China
Prior art keywords: image, follows, weight map, weight, pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210666439.2A
Other languages
Chinese (zh)
Other versions
CN115063331B (en)
Inventor
李正平
叶欣荣
徐超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University
Original Assignee
Anhui University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University filed Critical Anhui University
Priority to CN202210666439.2A priority Critical patent/CN115063331B/en
Publication of CN115063331A publication Critical patent/CN115063331A/en
Application granted granted Critical
Publication of CN115063331B publication Critical patent/CN115063331B/en
Legal status: Active

Classifications

    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70 Denoising; Smoothing
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06V 10/30 Noise filtering
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06T 2207/10024 Color image
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator, and relates to the technical field of image processing. For a multi-exposure image sequence of a dynamic scene, the multi-scale block LBP operator is used to extract local texture in bright and dark areas and to remove ghosting caused by moving objects. On this basis, a new brightness-adaptive method is proposed so that the fused image has better visibility. After the weight maps are constructed, the initial weight map, which contains discontinuities and noise, is refined with a fast guided filter, and the final fusion adopts pyramid decomposition and reconstruction.

Description

Ghost-free multi-exposure image fusion algorithm based on multi-scale block LBP operator
Technical Field
The invention relates to the technical field of image processing, and in particular to a ghost-free multi-exposure image fusion algorithm based on a multi-scale block LBP operator.
Background
At present, multi-exposure image fusion methods fall into two categories: hardware-based and software-based. Hardware-based methods use dedicated high-dynamic-range devices to capture and display real scenes, but such devices are expensive and not widely available. Software-based methods, by contrast, are easy to implement, inexpensive, and suitable for ordinary cameras. Existing software-based solutions are mainly divided into two classes: HDR imaging techniques and multi-exposure image fusion (MEF). HDR imaging estimates the camera response function (CRF) from a sequence of low-dynamic-range exposures to produce a high-dynamic-range image, which is then compressed by tone mapping into a low-dynamic-range image that can be shown on an ordinary display. However, HDR imaging is complex, time-consuming, and ill-suited to ordinary cameras. Multi-exposure fusion does not need to construct an HDR image: it extracts the pixels with the most information, the best exposure and the highest quality from the input low-dynamic-range exposures, fuses them, and obtains a result that can be displayed directly on ordinary display devices without further processing. Compared with HDR imaging, multi-exposure fusion has lower computational complexity and higher speed, making it the first choice for ordinary cameras. Existing multi-exposure fusion techniques nevertheless have several defects: the spatial neighborhood of each pixel is not fully considered, so texture details, especially in bright and dark areas, are not well preserved and halos appear at image edges; the fused image fails to retain the characteristics of the source sequence, distorting its colors; and fused images of dynamic scenes suffer from ghosting artifacts.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator. It addresses three problems of current algorithms: the spatial neighborhood of each pixel is not fully considered, so texture details, especially in bright and dark areas, are not well preserved and halos appear at image edges; the fused image fails to retain the characteristics of the source sequence, distorting its colors; and fused images of dynamic scenes are affected by ghosting artifacts.
(II) technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme: a ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator, comprising the following steps:
extracting a contrast weight map W_i^C(x, y), a luminance weight map W_i^L(x, y) and a spatial consistency weight map W_i^S(x, y);
Weight map estimation
Different weight maps are combined by pixel-wise multiplication, specifically calculated as follows:

W_i^init(x, y) = W_i^C(x, y) × W_i^L(x, y) × W_i^S(x, y)

wherein W_i^C(x, y) is the contrast weight map, W_i^L(x, y) is the luminance weight map, W_i^S(x, y) is the spatial consistency weight map, and W_i^init(x, y) is the combined initial weight map;
after the initial weight map is generated, it is normalized so that the weights at each pixel (x, y) sum to 1, calculated as follows:

Ŵ_i(x, y) = W_i^init(x, y) / ( Σ_{j=1}^{K} W_j^init(x, y) + ε )

wherein ε is a positive number, K is the number of input images, and W_i^init(x, y) is the initial weight map;
weight map refinement
Mapping the initial weights
Figure BDA00036931312600000211
And simultaneously, as a guide image and an input image, refining the initial weight map by adopting a quick guide filter, and specifically calculating as follows:
Figure BDA0003693131260000031
Figure BDA0003693131260000032
representing the weight map after refinement, FGF r,ep,∈ (I, W) represents rapid guide filtering operation, r, ep belongs to the parameters of the filter, r represents the window radius of the filter, ep is the regularization parameter of the filter, e belongs to the sub-sampling rate, and I, W respectively represent a guide image and an image to be filtered;
the refined weight map W_i^FGF(x, y) is normalized to obtain the normalized weight map, calculated as follows:

W_i(x, y) = W_i^FGF(x, y) / ( Σ_{j=1}^{K} W_j^FGF(x, y) + ε )

wherein ε is a positive number, W_i(x, y) denotes the normalized weight map, K is the number of input images, and W_i^FGF(x, y) denotes the refined weight map;
image fusion
Decomposing the source image into a Laplacian pyramid, decomposing the final weight map into a Gaussian pyramid, and fusing the Laplacian pyramid of the source image and the pyramid of the corresponding weight map at each level respectively as follows:
Figure BDA0003693131260000036
G{W i (x,y)} l representing the decomposition of the weight map into Gaussian pyramids, L { I } i (x,y)} l Means decomposition of the input image into laplacian pyramids, L { F } l Is a new Laplacian pyramid after fusion, L represents the number of layers of the pyramid, and finally, the L { F } l And (5) reconstructing to obtain a final fused image.
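For orientation, the whole pipeline above can be sketched in a few lines of Python. This is a hedged illustration only: every helper name (contrast_weight, luminance_weight_map, spatial_consistency, combine_and_normalize, refine_weights, fuse) is a placeholder for one of the steps just listed, sketched separately in the detailed description below, not an API from the patent.

```python
# Hedged overview of the fusion pipeline; the helpers named here are
# placeholders for the steps described above, not functions from the patent.
import cv2
import numpy as np

def fuse_exposures(paths):
    imgs = [cv2.imread(p).astype(np.float32) / 255.0 for p in paths]
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in imgs]
    contrast = [contrast_weight(g) for g in grays]          # Scharr + MB-LBP texture
    luminance = [luminance_weight_map(im) for im in imgs]   # Gaussian/Cauchy curve
    spatial = spatial_consistency(imgs)                     # MB-LBP anti-ghost term
    weights = combine_and_normalize(contrast, luminance, spatial)
    weights = refine_weights(weights)                       # fast guided filter
    return fuse(imgs, weights)                              # pyramid decomposition
```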
Preferably, extracting the contrast weight map W_i^C(x, y) comprises the following steps:
calculating the normalized average brightness L(x, y) at pixel (x, y) of the multi-exposure image sequence, specifically as follows:

L(x, y) = (1/K) Σ_{i=1}^{K} L_i(x, y)

wherein L(x, y) is the normalized average brightness at pixel (x, y) of the multi-exposure image sequence, L_i(x, y) denotes the luminance value of the pixel at position (x, y) of the i-th image in the input sequence, and K is the number of input images;
dividing each image into a normally exposed region, a bright region and a dark region according to the average brightness at pixel (x, y): the average brightness at each pixel (x, y) determines the bright region B_i(x, y), the dark region D_i(x, y) and the normally exposed region N_i(x, y) of each image in the source sequence by thresholding the grayscale image with the luminance threshold α (the piecewise rule is given in the patent as a formula image), K being the number of input images;
for the normally exposed region of a source image, the Scharr operator is used to extract its texture and edges; the local contrast at each pixel (x, y) is obtained from the horizontal and vertical Scharr convolutions over N_i(x, y), wherein G_x and G_y represent the texture variation in the horizontal and vertical directions, respectively, and N_i(x, y) denotes the normally exposed region of the i-th image in the input sequence;
the texture-change weight of the normally exposed region is then calculated from the convolution results as the gradient magnitude:

W_i^N(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 )

wherein W_i^N(x, y) is the texture-change weight map at pixel (x, y) of the normally exposed region of the i-th image in the input sequence, and G_x, G_y represent the texture variation in the horizontal and vertical directions, respectively;
the multi-scale block LBP operator is used to extract the texture and edges of the bright and dark regions, calculated as follows:

S_i(x, y) = MBLBP(IN_i(x, y))

wherein IN_i(x, y) denotes the bright and dark regions of the input image, MBLBP(·) is the multi-scale block LBP operator, and S_i(x, y) is the coded value at pixel (x, y), i.e. the LBP feature value, which reflects the texture information of the center pixel (x, y) and its neighborhood;
a fast Laplacian filter is used to enhance the texture detail information in S_i(x, y) while retaining the edge information, yielding W_i^BD(x, y), the texture-change weight map of the bright and dark regions after enhancement (the filter formula is given in the patent as an image);
the two weights are combined to obtain the final contrast weight map W_i^C(x, y), wherein W_i^N(x, y) is the texture-change weight map at pixel (x, y) of the normally exposed region of the i-th image in the input sequence and W_i^BD(x, y) is the texture-change weight map obtained by enhancing the bright and dark regions of the input sequence with the fast Laplacian filter (the combining formula is given in the patent as an image).
Preferably, extracting the luminance weight map W_i^L(x, y) comprises the following steps:
a curve combining a Gaussian curve and a Cauchy curve is used to construct the brightness weight values of the red, green and blue channels; a higher brightness weight is assigned to pixels in well-exposed areas, and a lower brightness weight to pixels in bright and dark areas of the image; the three channel weights are given in the patent as formula images, wherein R_l denotes the brightness weight value of the red channel, G_l the brightness weight value of the green channel, and B_l the brightness weight value of the blue channel;
the luminance weight map W_i^L(x, y) is then extracted by combining the three channel weights (the combining formula is given in the patent as an image).
Preferably, the adaptive function η(rl, R) is given by the patent's formula image, wherein l_{r,i}(x, y) denotes the luminance value of the red-channel pixel at position (x, y) of the i-th image in the input sequence.
Preferably, the adaptive function η(gl, G) is given by the patent's formula image, wherein l_{g,i}(x, y) denotes the luminance value of the green-channel pixel at position (x, y) of the i-th image in the input sequence.
Preferably, the adaptive function η(bl, B) is given by the patent's formula image, wherein l_{b,i}(x, y) denotes the luminance value of the blue-channel pixel at position (x, y) of the i-th image in the input sequence.
Preferably, extracting the spatial consistency weight map W_i^S(x, y) comprises the following steps:
first, the LBP feature is calculated for each pixel of the source image sequence, with the specific algorithm as follows:

T_i^r(x, y) = MBLBP(I_i^r(x, y)), T_i^g(x, y) = MBLBP(I_i^g(x, y)), T_i^b(x, y) = MBLBP(I_i^b(x, y))

wherein I_i^r(x, y), I_i^g(x, y) and I_i^b(x, y) are the pixel values of the i-th input image in the R, G and B channels at pixel (x, y);
for any two different images I_i(x, y) and I_j(x, y) (i ≠ j) in the image sequence, the Euclidean distances between T_i(x, y) and T_j(x, y) at pixel (x, y) are calculated to measure their local similarity in the R, G and B channels, respectively:

d_ij^r(x, y) = || T_i^r(x, y) − T_j^r(x, y) ||_2, and likewise d_ij^g(x, y) and d_ij^b(x, y);

the local similarity between images i and j is calculated as follows:

D_ij(x, y) = d_ij^r(x, y)^2 × d_ij^g(x, y)^2 × d_ij^b(x, y)^2

then, the spatial consistency weight term of the images in the motion scene is constructed as a Gaussian-type function of the local similarities (the exact form is given in the patent as a formula image), wherein the standard deviation σ_d controls the influence of the local similarity D_ij(x, y) on the weight W_i^S(x, y);
finally, the weight map is refined with morphological operators to remove the influence of noise:

W_i^S(x, y) = ( W_i^S(x, y) ⊕ s_1 ) ⊖ s_2

wherein s_1 and s_2 are the structuring elements of dilation and erosion, ⊕ denotes the dilation operation, and ⊖ denotes the erosion operation.
(III) Advantageous effects
Compared with existing algorithms, the ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator provided by the invention has the following advantages. The algorithm fully preserves the texture details of the image and enhances the detail information of bright and dark areas; it retains the characteristics of the source image sequence to the greatest extent without losing color information; and it can process image sequences shot in dynamic scenes, so the fused image is not affected by ghosting artifacts. The algorithm contributes a region-division texture-extraction method based on the multi-scale block LBP; a new brightness-adaptive algorithm tailored to the brightness characteristics of the images, which gives the fused image better visibility; and, for image sequences in dynamic scenes, a multi-scale block LBP based method for constructing a spatial consistency weight term that effectively removes ghosting artifacts from the fused image.
Drawings
FIG. 1 is a diagram of the algorithmic process of the present invention;
FIG. 2 shows the input multi-exposure image sequences: (a) an image sequence in a static scene; (b) an image sequence in a dynamic scene;
FIG. 3 is a result of processing an image sequence by a prior art algorithm;
FIG. 4 is the result of the processing of the image sequence by the present algorithm;
FIG. 5 is a result of fusing images with a prior art algorithm;
FIG. 6 shows the result of the fusion of images by the present algorithm.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
1. Texture extraction
When fusing images of a static scene without moving objects, two image features should be considered: contrast and brightness. Local contrast is used to preserve important details such as texture and edges. When a multi-resolution method is used to fuse multi-exposure images, it retains sufficient detail in normally exposed areas; however, because the texture detail of bright and dark regions is affected by brightness, those regions lose part of their detail. To solve this problem, the invention proposes a region-division texture-detail extraction method based on the multi-scale block LBP. The bright region B_i(x, y), dark region D_i(x, y) and normally exposed region N_i(x, y) of each image in the source sequence are determined by thresholding the average brightness at each pixel (x, y) of the grayscale image against the luminance threshold α (the piecewise rule is given in the patent as a formula image); K is the number of input images. The normalized average brightness L(x, y) at pixel (x, y) of the multi-exposure image sequence is calculated as

L(x, y) = (1/K) Σ_{i=1}^{K} L_i(x, y)

wherein L_i(x, y) denotes the luminance value of the pixel at position (x, y) of the i-th image in the input sequence.
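As a minimal sketch of this region division (assuming a symmetric threshold α applied to the normalized average luminance; the patent's exact piecewise rule is a formula image):

```python
# Hedged sketch: partition each exposure into dark / normal / bright regions by
# thresholding the normalized average luminance. The symmetric rule and the
# default alpha are assumptions; the patent gives the rule as a formula image.
import numpy as np

def partition_regions(gray_seq, alpha=0.2):
    """gray_seq: list of K grayscale images scaled to [0, 1]."""
    L = np.mean(np.stack(gray_seq, axis=0), axis=0)   # normalized average luminance
    dark = L < alpha                                  # dark region D_i
    bright = L > 1.0 - alpha                          # bright region B_i
    normal = ~(dark | bright)                         # normally exposed region N_i
    return dark, normal, bright
```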
In the normally exposed area of the image, the Scharr operator, which has higher accuracy, is used to extract texture and edges. The horizontal and vertical Scharr convolutions give G_x and G_y, the texture variation in the horizontal and vertical directions, and the texture-change weight of the normally exposed region is then computed from the convolution results as the gradient magnitude:

W_i^N(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 )

wherein W_i^N(x, y) denotes the texture-change weight at pixel (x, y) of the normally exposed region of the i-th image in the input sequence.
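A minimal sketch of this step with OpenCV's Scharr operator (treating the texture-change weight as the gradient magnitude, which is a reconstruction; the patent's formula is an image):

```python
# Hedged sketch: local-contrast weight of the normally exposed region from the
# horizontal and vertical Scharr responses.
import cv2
import numpy as np

def contrast_weight_normal(gray, normal_mask):
    """gray: float32 image in [0, 1]; normal_mask: boolean mask N_i."""
    gx = cv2.Scharr(gray, cv2.CV_32F, 1, 0)    # texture variation G_x
    gy = cv2.Scharr(gray, cv2.CV_32F, 0, 1)    # texture variation G_y
    w = np.sqrt(gx ** 2 + gy ** 2)             # gradient magnitude
    return w * normal_mask.astype(np.float32)  # keep only the normal region
```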
In the bright and dark areas of the image, a multi-scale block LBP operator is adopted for texture and edge extraction. The operator has rotation invariance and grayscale invariance, is robust to illumination, and extracts the texture detail information of these regions well. It is calculated as follows:

S_i(x, y) = MBLBP(IN_i(x, y))

wherein IN_i(x, y) denotes the bright and dark areas of the input image and S_i(x, y) is the coded value of the pixel at (x, y), i.e. the LBP feature value, which reflects the texture information of the center pixel (x, y) and its neighborhood. A fast Laplacian filter is then used to enhance the texture detail information in S_i(x, y) while retaining the edge information, yielding the texture-change weight map W_i^BD(x, y) of the bright and dark regions (the filter formula is given in the patent as an image).
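A sketch of a multi-scale block LBP in this spirit follows. It is an assumption-laden illustration: each classic LBP pixel comparison is replaced by a comparison of cell means, per-scale codes are averaged, and np.roll wraps at the borders where a real implementation would pad.

```python
# Hedged sketch of MB-LBP: compare the mean of each surrounding cell in a 3x3
# grid with the center cell mean and pack the comparisons into an 8-bit code.
import cv2
import numpy as np

def mb_lbp(gray, scales=(3, 9, 15)):
    """gray: float32 image; scales: block side lengths (3 cells per side)."""
    codes = np.zeros(gray.shape, dtype=np.float32)
    for s in scales:
        cell = s // 3                                     # side of one cell
        mean = cv2.blur(gray.astype(np.float32), (cell, cell))
        code = np.zeros_like(mean)
        offsets = [(-cell, -cell), (-cell, 0), (-cell, cell), (0, cell),
                   (cell, cell), (cell, 0), (cell, -cell), (0, -cell)]
        for bit, (dy, dx) in enumerate(offsets):
            neighbor = np.roll(np.roll(mean, dy, axis=0), dx, axis=1)
            code += (neighbor >= mean) * (1 << bit)       # 8 binary comparisons
        codes += code
    return codes / len(scales)                            # average over scales
```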
Finally, the final contrast weight map W_i^C(x, y) is obtained by combining the two texture-change weights W_i^N(x, y) and W_i^BD(x, y) (the combining formula is given in the patent as an image).
2. Luminance extraction
When a picture is taken with an ordinary camera, some areas of the picture appear dark (under-exposed) and some appear bright (over-exposed). Both under-exposure and over-exposure cause serious loss of image information and degrade the visual quality of the image. The following brightness extraction method is proposed: a curve formed by combining a Gaussian curve and a Cauchy curve is used to construct the brightness weight values R_l, G_l and B_l of the red, green and blue channels (the three channel formulas are given in the patent as images); a higher brightness weight is assigned to pixels in well-exposed areas, and a lower brightness weight to pixels in bright and dark areas of the image.
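A minimal sketch of such a well-exposedness curve (the 0.5 center, the widths and the equal blend are assumptions; the patent's three channel formulas are images):

```python
# Hedged sketch: brightness weight from a blended Gaussian + Cauchy curve that
# peaks near mid-tones and decays toward the dark and bright extremes.
import numpy as np

def luminance_weight(channel, mu=0.5, sigma=0.2, gamma=0.2):
    """channel: R, G or B values scaled to [0, 1]."""
    gauss = np.exp(-((channel - mu) ** 2) / (2 * sigma ** 2))
    cauchy = 1.0 / (1.0 + ((channel - mu) / gamma) ** 2)
    return 0.5 * gauss + 0.5 * cauchy
```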
since some pixels in the input image sequence may be in bright or dark areas by nature, rather than the image being too bright or too dark due to overexposure or underexposure. In the algorithm, a brightness adaptive function is used to adjust the weighted values of the pixels in the bright and dark regions of the RGB three-color channels, where the brightness adaptive function η (rl, R) of the red color channel is taken as an example, that is:
Figure BDA0003693131260000114
l r,i (x, y) represents the luminance value of the pixel in the red channel at the ith image position (x, y) in the input image. When the adaptive functions η (gl, G), η (bl, B) in the green and blue channels are calculated in the same way as in the red channel.
3. Moving object detection
When the input image sequence is captured in a dynamic scene, the effect of moving objects on the fused image must be taken into account; otherwise the final fused image will suffer from ghosting artifacts. To solve this problem, a method is proposed for constructing spatial consistency weights based on the multi-scale block LBP (MB-LBP). First, the LBP feature is calculated for each pixel of the source image sequence, with the specific algorithm as follows:

T_i^r(x, y) = MBLBP(I_i^r(x, y)), T_i^g(x, y) = MBLBP(I_i^g(x, y)), T_i^b(x, y) = MBLBP(I_i^b(x, y))

wherein I_i^r(x, y), I_i^g(x, y) and I_i^b(x, y) are the pixel values of the i-th input image in the R, G and B channels at pixel (x, y). For any two different images I_i(x, y) and I_j(x, y) (i ≠ j) in the image sequence, the Euclidean distances between T_i(x, y) and T_j(x, y) at pixel (x, y) measure their local similarity in the R, G and B channels, respectively:

d_ij^r(x, y) = || T_i^r(x, y) − T_j^r(x, y) ||_2, and likewise d_ij^g(x, y) and d_ij^b(x, y).

The local similarity between images i and j is calculated as follows:

D_ij(x, y) = d_ij^r(x, y)^2 × d_ij^g(x, y)^2 × d_ij^b(x, y)^2
Then, the spatial consistency weight term of the images in the motion scene is constructed as a Gaussian-type function of the local similarities (the exact form is given in the patent as a formula image), wherein the standard deviation σ_d controls the influence of the local similarity D_ij(x, y) on the weight W_i^S(x, y) and is set here to 0.05. The design idea of the algorithm is as follows: if pixel (x, y) of image I_i belongs to a motion region, the local similarities D_ij(x, y) between I_i and every I_j (i ≠ j) at (x, y) all increase, the spatial consistency weight W_i^S(x, y) at that pixel decreases, and consequently the weight of image I_i at pixel (x, y) decreases.
Finally, the weight map is refined with morphological operators to remove the influence of noise:

W_i^S(x, y) = ( W_i^S(x, y) ⊕ s_1 ) ⊖ s_2

wherein s_1 and s_2 are the structuring elements of dilation and erosion, ⊕ denotes the dilation operation, and ⊖ denotes the erosion operation.
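A sketch of the whole spatial-consistency step, under explicit assumptions: the Gaussian form of the weight and the dilation-then-erosion clean-up are reconstructions of formula images, and mb_lbp is the sketch given earlier.

```python
# Hedged sketch of the spatial-consistency (anti-ghost) weight.
import cv2
import numpy as np

def spatial_consistency(rgb_seq, sigma_d=0.05):
    """rgb_seq: list of K HxWx3 float images in [0, 1]."""
    T = [np.stack([mb_lbp(img[..., c]) for c in range(3)], axis=-1) / 255.0
         for img in rgb_seq]                       # per-channel MB-LBP codes
    weights = []
    for i, Ti in enumerate(T):
        D = np.zeros(Ti.shape[:2], dtype=np.float32)
        for j, Tj in enumerate(T):
            if i == j:
                continue
            d = np.abs(Ti - Tj)                    # per-channel code distance
            D += d[..., 0] ** 2 * d[..., 1] ** 2 * d[..., 2] ** 2
        w = np.exp(-D / (2 * sigma_d ** 2))        # assumed Gaussian form
        s = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        w = cv2.erode(cv2.dilate(w, s), s)         # dilation with s1, erosion with s2
        weights.append(w)
    return weights
```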
4. Weight map estimation
The previous calculations yield three image features: local contrast, the luminance feature and spatial consistency. In this step these weight terms are combined to obtain the initial weight map. So that the proposed method can extract the highest-quality regions from the weight maps, pixel-wise multiplication is used to combine the different weight maps, calculated as follows:

W_i^init(x, y) = W_i^C(x, y) × W_i^L(x, y) × W_i^S(x, y)

After the initial weight map is generated, it needs to be normalized so that the weights at each pixel (x, y) sum to 1, calculated as follows:

Ŵ_i(x, y) = W_i^init(x, y) / ( Σ_{j=1}^{K} W_j^init(x, y) + ε )

wherein ε is a small positive number that avoids a zero denominator.
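A minimal sketch of this combination and normalization:

```python
# Hedged sketch: pixel-wise product of the three weight maps, normalized so the
# weights over the K images sum to 1 at every pixel.
import numpy as np

def combine_and_normalize(contrast, luminance, spatial, eps=1e-12):
    """Each argument: list of K float weight maps with identical shapes."""
    init = [c * l * s for c, l, s in zip(contrast, luminance, spatial)]
    total = np.sum(init, axis=0) + eps    # per-pixel sum over the K images
    return [w / total for w in init]
```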
5. Weight map refinement
The initial weight map generally contains noise and discontinuities, so it must be refined before the final fusion. The algorithm refines the initial weight map with a fast guided filter, specifically calculated as follows:

W_i^FGF(x, y) = FGF_{r, ep, ∈}(Ŵ_i, Ŵ_i)

wherein W_i^FGF(x, y) denotes the refined weight map and FGF_{r, ep, ∈}(I, W) denotes the fast guided filtering operation; r, ep and ∈ are the parameters of the filter, where r is the window radius, ep is the regularization parameter controlling the smoothing strength, and ∈ is the sub-sampling rate; I and W denote the guide image and the image to be filtered, respectively. In this algorithm the weight map Ŵ_i(x, y) serves as both the guide image and the input image. Finally, the refined weight map is normalized to obtain the final weight map:

W_i(x, y) = W_i^FGF(x, y) / ( Σ_{j=1}^{K} W_j^FGF(x, y) + ε )
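A minimal sketch of this refinement (cv2.ximgproc.guidedFilter requires the opencv-contrib-python package; the radius and regularization values here are assumptions, and plain guided filtering stands in for the sub-sampled fast variant):

```python
# Hedged sketch: refine each weight map with a guided filter, using the weight
# map itself as the guide, then renormalize across the sequence.
import cv2
import numpy as np

def refine_weights(weights, radius=8, eps=1e-3):
    refined = [cv2.ximgproc.guidedFilter(w.astype(np.float32),
                                         w.astype(np.float32), radius, eps)
               for w in weights]
    total = np.sum(refined, axis=0) + 1e-12
    return [np.clip(w, 0, None) / total for w in refined]
```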
6. Image fusion
In the algorithm, each source image is decomposed into a Laplacian pyramid and the corresponding weight map into a Gaussian pyramid, and the two pyramids are fused level by level:

L{F}^l = Σ_{i=1}^{K} G{W_i(x, y)}^l × L{I_i(x, y)}^l

wherein G{W_i(x, y)}^l denotes the decomposition of the weight map into a Gaussian pyramid, L{I_i(x, y)}^l denotes the decomposition of the input image into a Laplacian pyramid, L{F}^l is the new fused Laplacian pyramid, and l denotes the pyramid level. Finally, L{F}^l is reconstructed to obtain the final fused image.
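A minimal sketch of this Mertens-style pyramid fusion (the number of levels is an assumption; image sides are assumed compatible with repeated halving):

```python
# Hedged sketch: blend Laplacian pyramids of the images with Gaussian pyramids
# of the weights, then collapse the fused pyramid.
import cv2
import numpy as np

def gaussian_pyr(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyr(img, levels):
    g = gaussian_pyr(img, levels)
    pyr = [g[l] - cv2.pyrUp(g[l + 1], dstsize=g[l].shape[1::-1])
           for l in range(levels - 1)]
    pyr.append(g[-1])                                 # coarsest level
    return pyr

def fuse(images, weights, levels=5):
    """images: list of HxWx3 float32; weights: list of HxW float32."""
    fused = None
    for img, w in zip(images, weights):
        lp = laplacian_pyr(img, levels)
        gp = gaussian_pyr(w, levels)
        blended = [lap * gw[..., None] for lap, gw in zip(lp, gp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]
    for lvl in range(levels - 2, -1, -1):             # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=fused[lvl].shape[1::-1]) + fused[lvl]
    return np.clip(out, 0, 1)
```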
Quality evaluation:
Q^{AB/F}: an objective quality metric for fused images. It reflects how much visual information is transferred from the input images into the fusion, i.e. the degree to which edge detail is preserved; a higher value means the fused image retains more of the edge detail of the source sequence.
MEF-SSIM: measures the structural similarity between the input multi-exposure image sequence and the fused image. Its value ranges from 0 to 1; a higher value means higher structural similarity between the result and the source sequence, i.e. better image quality. The MEF-SSIM used here is a full-reference evaluation metric.
Table 1, MEF-SSIM test results, and Table 2, Q^{AB/F} test results, are given in the patent as images.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (7)

1. A ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator, characterized by comprising the following steps:
extracting a contrast weight map W_i^C(x, y), a luminance weight map W_i^L(x, y) and a spatial consistency weight map W_i^S(x, y);
weight map estimation
the different weight maps are combined by pixel-wise multiplication, specifically calculated as follows:

W_i^init(x, y) = W_i^C(x, y) × W_i^L(x, y) × W_i^S(x, y)

wherein W_i^C(x, y) is the contrast weight map, W_i^L(x, y) is the luminance weight map, W_i^S(x, y) is the spatial consistency weight map, and W_i^init(x, y) is the combined initial weight map;
after the initial weight map is generated, it is normalized so that the weights at each pixel (x, y) sum to 1, calculated as follows:

Ŵ_i(x, y) = W_i^init(x, y) / ( Σ_{j=1}^{K} W_j^init(x, y) + ε )

wherein ε is a positive number, K is the number of input images, and W_i^init(x, y) is the initial weight map;
weight map refinement
the initial weight map Ŵ_i(x, y) is used simultaneously as the guide image and the input image, and the initial weight map is refined with a fast guided filter, specifically calculated as follows:

W_i^FGF(x, y) = FGF_{r, ep, ∈}(Ŵ_i, Ŵ_i)

wherein W_i^FGF(x, y) denotes the refined weight map, FGF_{r, ep, ∈}(I, W) denotes the fast guided filtering operation, r, ep and ∈ are the parameters of the filter (r is the window radius of the filter, ep is the regularization parameter of the filter, and ∈ is the sub-sampling rate), and I and W denote the guide image and the image to be filtered, respectively;
the refined weight map W_i^FGF(x, y) is normalized to obtain the normalized weight map, calculated as follows:

W_i(x, y) = W_i^FGF(x, y) / ( Σ_{j=1}^{K} W_j^FGF(x, y) + ε )

wherein ε is a positive number, W_i(x, y) denotes the normalized weight map, K is the number of input images, and W_i^FGF(x, y) denotes the refined weight map;
image fusion
the source images are decomposed into Laplacian pyramids, the final weight maps are decomposed into Gaussian pyramids, and at each level the Laplacian pyramid of each source image is fused with the Gaussian pyramid of the corresponding weight map, as follows:

L{F}^l = Σ_{i=1}^{K} G{W_i(x, y)}^l × L{I_i(x, y)}^l

wherein G{W_i(x, y)}^l denotes the decomposition of the weight map into a Gaussian pyramid, L{I_i(x, y)}^l denotes the decomposition of the input image into a Laplacian pyramid, L{F}^l is the new fused Laplacian pyramid, and l denotes the pyramid level; finally, L{F}^l is reconstructed to obtain the final fused image.
2. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator of claim 1, wherein extracting the contrast weight map W_i^C(x, y) comprises the following steps:
calculating the normalized average brightness L(x, y) at pixel (x, y) of the multi-exposure image sequence, specifically as follows:

L(x, y) = (1/K) Σ_{i=1}^{K} L_i(x, y)

wherein L(x, y) is the normalized average brightness at pixel (x, y) of the multi-exposure image sequence, L_i(x, y) denotes the luminance value of the pixel at position (x, y) of the i-th image in the input sequence, and K is the number of input images;
dividing each image into a normally exposed region, a bright region and a dark region according to the average brightness at pixel (x, y): the average brightness at each pixel (x, y) determines the bright region B_i(x, y), the dark region D_i(x, y) and the normally exposed region N_i(x, y) of each image in the source sequence by thresholding the grayscale image with the luminance threshold α (the piecewise rule is given in the patent as a formula image), K being the number of input images;
for the normally exposed region of a source image, the Scharr operator is used to extract its texture and edges; the local contrast at each pixel (x, y) is obtained from the horizontal and vertical Scharr convolutions over N_i(x, y), wherein G_x and G_y represent the texture variation in the horizontal and vertical directions, respectively, and N_i(x, y) denotes the normally exposed region of the i-th image in the input sequence;
the texture-change weight of the normally exposed region is then calculated from the convolution results as the gradient magnitude:

W_i^N(x, y) = sqrt( G_x(x, y)^2 + G_y(x, y)^2 )

wherein W_i^N(x, y) is the texture-change weight map at pixel (x, y) of the normally exposed region of the i-th image in the input sequence;
the multi-scale block LBP operator is used to extract the texture and edges of the bright and dark regions, calculated as follows:

S_i(x, y) = MBLBP(IN_i(x, y))

wherein IN_i(x, y) denotes the bright and dark regions of the input image, MBLBP(·) is the multi-scale block LBP operator, and S_i(x, y) is the coded value at pixel (x, y);
a fast Laplacian filter is used to enhance the texture detail information in S_i(x, y) while retaining the edge information, yielding W_i^BD(x, y), the texture-change weight map of the bright and dark regions after enhancement (the filter formula is given in the patent as an image);
the two weights are combined to obtain the final contrast weight map W_i^C(x, y), wherein W_i^N(x, y) is the texture-change weight map at pixel (x, y) of the normally exposed region of the i-th image in the input sequence and W_i^BD(x, y) is the texture-change weight map obtained by enhancing the bright and dark regions of the input sequence with the fast Laplacian filter.
3. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator of claim 1, wherein extracting the luminance weight map W_i^L(x, y) comprises the following steps:
a curve combining a Gaussian curve and a Cauchy curve is used to construct the brightness weight values of the red, green and blue channels; a higher brightness weight is assigned to pixels in well-exposed areas, and a lower brightness weight to pixels in bright and dark areas of the image; the three channel weights are given in the patent as formula images, wherein R_l denotes the brightness weight value of the red channel, G_l the brightness weight value of the green channel, and B_l the brightness weight value of the blue channel;
the luminance weight map W_i^L(x, y) is then extracted by combining the three channel weights (the combining formula is given in the patent as an image).
4. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator of claim 3, wherein the adaptive function η(rl, R) is given by the patent's formula image, wherein l_{r,i}(x, y) denotes the luminance value of the red-channel pixel at position (x, y) of the i-th image in the input sequence.
5. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator of claim 3, wherein the adaptive function η(gl, G) is given by the patent's formula image, wherein l_{g,i}(x, y) denotes the luminance value of the green-channel pixel at position (x, y) of the i-th image in the input sequence.
6. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator of claim 3, wherein the adaptive function η(bl, B) is given by the patent's formula image, wherein l_{b,i}(x, y) denotes the luminance value of the blue-channel pixel at position (x, y) of the i-th image in the input sequence.
7. The ghost-free multi-exposure image fusion algorithm based on the multi-scale block LBP operator of claim 1, wherein extracting the spatial consistency weight map W_i^S(x, y) comprises the following steps:
first, the LBP feature is calculated for each pixel of the source image sequence, with the specific algorithm as follows:

T_i^r(x, y) = MBLBP(I_i^r(x, y)), T_i^g(x, y) = MBLBP(I_i^g(x, y)), T_i^b(x, y) = MBLBP(I_i^b(x, y))

wherein I_i^r(x, y), I_i^g(x, y) and I_i^b(x, y) are the pixel values of the i-th input image in the R, G and B channels at pixel (x, y);
for any two different images I_i(x, y) and I_j(x, y) (i ≠ j) in the image sequence, the Euclidean distances between T_i(x, y) and T_j(x, y) at pixel (x, y) are calculated to measure their local similarity in the R, G and B channels, respectively:

d_ij^r(x, y) = || T_i^r(x, y) − T_j^r(x, y) ||_2, and likewise d_ij^g(x, y) and d_ij^b(x, y);

the local similarity between images i and j is calculated as follows:

D_ij(x, y) = d_ij^r(x, y)^2 × d_ij^g(x, y)^2 × d_ij^b(x, y)^2

then, the spatial consistency weight term of the images in the motion scene is constructed as a Gaussian-type function of the local similarities (the exact form is given in the patent as a formula image), wherein the standard deviation σ_d controls the influence of the local similarity D_ij(x, y) on the weight W_i^S(x, y);
finally, the weight map is refined with morphological operators to remove the influence of noise:

W_i^S(x, y) = ( W_i^S(x, y) ⊕ s_1 ) ⊖ s_2

wherein s_1 and s_2 are the structuring elements of dilation and erosion, ⊕ denotes the dilation operation, and ⊖ denotes the erosion operation.
CN202210666439.2A 2022-06-14 2022-06-14 Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method Active CN115063331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210666439.2A CN115063331B (en) 2022-06-14 2022-06-14 Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210666439.2A CN115063331B (en) 2022-06-14 2022-06-14 Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method

Publications (2)

Publication Number Publication Date
CN115063331A 2022-09-16
CN115063331B CN115063331B (en) 2024-04-12

Family

ID=83200284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210666439.2A Active CN115063331B (en) 2022-06-14 2022-06-14 Multi-scale block LBP operator-based ghost-free multi-exposure image fusion method

Country Status (1)

Country Link
CN (1) CN115063331B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115760663A (en) * 2022-11-14 2023-03-07 辉羲智能科技(上海)有限公司 Method for synthesizing high dynamic range image from low dynamic range image based on multi-frame multi-exposure
CN116630218A (en) * 2023-07-02 2023-08-22 中国人民解放军战略支援部队航天工程大学 Multi-exposure image fusion method based on edge-preserving smooth pyramid

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112819736A (en) * 2021-01-13 2021-05-18 浙江理工大学 Workpiece character image local detail enhancement fusion method based on multiple exposures
US20220020126A1 (en) * 2020-07-20 2022-01-20 Samsung Electronics Co., Ltd. Guided multi-exposure image fusion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220020126A1 (en) * 2020-07-20 2022-01-20 Samsung Electronics Co., Ltd. Guided multi-exposure image fusion
CN112819736A (en) * 2021-01-13 2021-05-18 浙江理工大学 Workpiece character image local detail enhancement fusion method based on multiple exposures

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115760663A (en) * 2022-11-14 2023-03-07 辉羲智能科技(上海)有限公司 Method for synthesizing high dynamic range image from low dynamic range image based on multi-frame multi-exposure
CN115760663B (en) * 2022-11-14 2023-09-22 辉羲智能科技(上海)有限公司 Method for synthesizing high dynamic range image based on multi-frame multi-exposure low dynamic range image
CN116630218A (en) * 2023-07-02 2023-08-22 中国人民解放军战略支援部队航天工程大学 Multi-exposure image fusion method based on edge-preserving smooth pyramid
CN116630218B (en) * 2023-07-02 2023-11-07 中国人民解放军战略支援部队航天工程大学 Multi-exposure image fusion method based on edge-preserving smooth pyramid

Also Published As

Publication number Publication date
CN115063331B (en) 2024-04-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant