CN115330653A - Multi-source image fusion method based on side window filtering - Google Patents

Multi-source image fusion method based on side window filtering

Info

Publication number
CN115330653A
CN115330653A (Application CN202210982221.8A)
Authority
CN
China
Prior art keywords
image
infrared
vis
fusion
visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210982221.8A
Other languages
Chinese (zh)
Inventor
宋江鲁奇
杨庆友
周慧鑫
张鑫
李欢
秦翰林
王炳健
王财顺
刘志宇
梅峻溪
张嘉嘉
王珂
罗云麟
滕翔
赖睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202210982221.8A priority Critical patent/CN115330653A/en
Publication of CN115330653A publication Critical patent/CN115330653A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS · G06 COMPUTING; CALCULATING OR COUNTING · G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (common parent classes)
    • G06T 5/00 Image enhancement or restoration · G06T 5/50 using two or more images, e.g. averaging or subtraction
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 7/00 Image analysis · G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement · G06T 2207/10 Image acquisition modality · G06T 2207/10048 Infrared image
    • G06T 2207/20 Special algorithmic details · G06T 2207/20212 Image combination · G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-source image fusion method based on side window filtering, which fuses infrared and visible light images using side window filtering, a saliency detection method combining rarity color statistics with gradient energy optimization, and a multi-level fusion strategy. The method comprises the following steps: first, the infrared and visible light images to be fused are each decomposed at multiple scales by side window filtering, generating a base layer image and detail layer images; saliency detection combining rarity color statistics with gradient energy optimization is performed on the infrared and visible light images respectively to obtain saliency maps; a fusion weight map is then obtained through saliency comparison and filtering optimization; the fusion weights guide the fusion of the base layer images to obtain a fused base layer, while a multi-level fusion strategy fuses the detail layers; finally, the fused base layer and fused detail layers are reconstructed to obtain the fused image.

Description

Multi-source image fusion method based on side window filtering
Technical Field
The invention belongs to the field of multi-source image processing technology, and particularly relates to a multi-source image fusion method based on side window filtering.
Background
Due to the limitations of the imaging principle and the technical level of the sensor, the information collected by a single sensor is always limited, and it is difficult to meet the demand for more comprehensive scene information in specific applications. Therefore, how to integrate the images acquired by various sensors to generate an image that is rich in information and more conducive to human perception has gradually become a research hotspot.
When different sensors jointly image the same scene, two forms are mainly distinguished according to the type and number of imaging detectors: (1) joint imaging by sensors based on different imaging principles, and (2) integration of images taken by the same sensor under different parameter settings.
Among multi-sensor joint imaging schemes, the combination of infrared and visible light has many advantages. An infrared detector images by receiving external thermal radiation; it highlights the target clearly, resists interference strongly and works around the clock, but its imaging resolution is generally low, it cannot describe the detailed texture of a scene well, and the image can only be displayed in gray scale. A visible light sensor images by receiving reflected light; it offers high spatial resolution and clear texture details and matches human visual perception well, but it is sensitive to illumination conditions and easily affected by environmental factors such as severe weather. Under these circumstances it is therefore necessary to combine the complementary dominant information of the infrared and visible light images to obtain complete information about the scene.
Research on multi-source image fusion can be traced back to the 1980s, when Daily et al. first applied the technique in the field of remote sensing. Since then, multi-source image fusion has developed considerably and many excellent algorithms have emerged; most fusion algorithms can be classified as multi-scale decomposition based, sparse representation based, subspace based, saliency based, or neural network based methods. However, existing algorithms suffer from defects such as noise interference and information loss, leaving considerable room for improvement.
Disclosure of Invention
To overcome the defects of the prior art, the present invention aims to provide a multi-source image fusion method based on side window filtering, so as to solve the prior art problem of information loss across scales during separation, better retain image information, maintain the contrast of the target and the continuity of salient regions, and significantly improve the contrast and definition of the target in the fused image.
In order to achieve the purpose, the invention comprises the following main steps:
a multi-source image fusion method based on side window filtering comprises the following steps:
step (1), inputting the multi-source images to be fused, namely an infrared image I_IR and a visible light image I_Vis shot in the same scene, where I_IR and I_Vis are equal in size;
step (2), performing side window filtering on the infrared image I_IR and the visible light image I_Vis respectively to obtain an infrared base layer image B_IR and a visible light base layer image B_Vis;
step (3), performing saliency detection on the infrared image I_IR and the visible light image I_Vis respectively using saliency detection combining rarity color statistics with gradient energy optimization, generating an infrared saliency map U_IR and a visible light saliency map U_Vis;
step (4), performing saliency comparison on the infrared saliency map U_IR and the visible light saliency map U_Vis to obtain the initial fusion weights of the base layer, and optimizing them to obtain the final fusion weights W̄_IR and W̄_Vis;
step (5), using W̄_IR and W̄_Vis to guide the fusion of the infrared base layer image B_IR and the visible light base layer image B_Vis, generating a fused base layer image FuB;
step (6), fusing the infrared detail layers and the visible light detail layers using a mixed fusion strategy based on the gradient characteristic (GC) and intensity variance (IV) of the image, obtaining the fused detail layer images FuD_i;
step (7), adding the fused base layer image FuB and the fused detail layer images FuD_i to obtain the fused image FuI.
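For orientation, the overall flow of steps (1) to (7) can be summarized in the following minimal Python/NumPy sketch. It is an illustration only, not part of the claimed method; all function names (side_window_decompose, rcsage_saliency, fusion_weights, fuse_detail) are hypothetical placeholders for the procedures detailed below.

```python
import numpy as np

def fuse(ir, vis, n_levels=3):
    """Hypothetical driver for the fusion pipeline of steps (1)-(7)."""
    # Step (2): side-window-filter decomposition into base and detail layers.
    base_ir, details_ir = side_window_decompose(ir, n_levels)
    base_vis, details_vis = side_window_decompose(vis, n_levels)

    # Step (3): composite saliency maps (rarity color statistics and gradient energy).
    u_ir, u_vis = rcsage_saliency(ir), rcsage_saliency(vis)

    # Steps (4)-(5): saliency comparison, weight optimization, base layer fusion.
    w_ir, w_vis = fusion_weights(u_ir, u_vis)
    fub = w_ir * base_ir + w_vis * base_vis

    # Step (6): per-level mixed GC/IV fusion of the detail layers.
    fud = [fuse_detail(d_ir, d_vis) for d_ir, d_vis in zip(details_ir, details_vis)]

    # Step (7): reconstruction of the fused image.
    return fub + np.sum(fud, axis=0)
```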
The side window filtering, which uses kernel function F to perform multi-scale decomposition on input image I, includes the following steps:
step 1, randomly selecting a pixel point from an input image I;
step 2, obtaining eight side windows from the downward (D), right (R), upward (U), left (L), southwest (SW), southeast (SE), northeast (NE) and Northwest (NW) sides of the selected pixel points;
step 3, calculating the side window filtering (SWF) output of any pixel of the input image according to the following formulas:

$$I_m = \frac{1}{N_m}\sum_{j \in \omega_l^m} w_{lj}\, q_j, \qquad N_m = \sum_{j \in \omega_l^m} w_{lj}, \quad m \in S$$

$$I_t = \arg\min_{m \in S} \left\| q_l - I_m \right\|_2^2$$

where m denotes one of the side windows, S = {L, R, U, D, NW, NE, SW, SE} is the set of side windows, I_m is the side window output obtained by applying the kernel function F in side window m, N_m is the sum of the weights within that side window, ω_l^m is the side window in one of the eight directions of the l-th target pixel under the kernel function F, q_j is the intensity of the input image I at the j-th pixel, w_lj is the weight assigned by the kernel function F to the j-th pixel in the vicinity of the l-th target pixel, and I_t is the side window output with the minimum L2 distance to the intensity q_l of the target pixel, i.e. the final side window filtering output;

step 4, after all pixel points of the input image I have been selected, the f-th filtering result I_f is obtained, f = 1, 2, ..., n, with I_0 = I; subtracting I_f from the (f-1)-th filtering result I_{f-1} gives the detail layer image of the current level, expressed as D_f = I_{f-1} - I_f; the final filtering result is the base layer image B, where n is the number of decomposition levels;
wherein, taking the input image I as the infrared image I_IR and the visible light image I_Vis respectively yields the infrared base layer image B_IR and the visible light base layer image B_Vis.
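To make the computation of steps 1 to 4 concrete, a minimal sketch of one side window filtering pass with a box kernel follows (Python/NumPy with OpenCV; the use of cv2.boxFilter with per-window anchors is an implementation assumption, not taken from the patent):

```python
import cv2
import numpy as np

def swf_box(img, r=1):
    """One pass of side window filtering with a box kernel."""
    img = img.astype(np.float32)
    k, s = r + 1, 2 * r + 1
    # (ksize, anchor) pairs for the eight side windows L, R, U, D, NW, NE, SW, SE;
    # the anchor places the target pixel on the appropriate edge/corner of the window.
    windows = [((k, s), (r, r)), ((k, s), (0, r)),   # L, R
               ((s, k), (r, r)), ((s, k), (r, 0)),   # U, D
               ((k, k), (r, r)), ((k, k), (0, r)),   # NW, NE
               ((k, k), (r, 0)), ((k, k), (0, 0))]   # SW, SE
    outs = np.stack([cv2.boxFilter(img, -1, ksize, anchor=anchor, normalize=True)
                     for ksize, anchor in windows])
    # Keep, per pixel, the side window mean closest to the input intensity q_l
    # (for scalar intensities the absolute difference equals the L2 distance).
    best = np.abs(outs - img).argmin(axis=0)
    return np.take_along_axis(outs, best[None], axis=0)[0]
```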
Compared with the prior art, the invention has the following advantages:
First, the invention performs side window filtering on all pixel values of the infrared and visible light images to complete the multi-scale decomposition, overcoming the prior art defect of information loss across scales during separation and retaining image information better.
Second, the invention proposes a composite saliency detection method combining rarity color statistics with gradient energy optimization and applies it to the saliency detection of the infrared and visible light images, so that the contrast of the target and the continuity of salient regions are better preserved and the contrast and definition of the target in the fused image are significantly improved.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a flow chart of the multi-scale decomposition based on side window filtering according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating the definition of side windows in side window filtering.
FIG. 4 is a flow chart of construction of a fusion weight graph of a composite significance detection method based on rarity color statistics in combination with gradient energy optimization (RCSAGE) in an embodiment of the present invention.
FIG. 5 is a graph of the infrared and visible light image saliency detection results in an example of the present invention. Wherein (a) is IR, (b) is VI, (c) is the IR saliency map, and (d) is the VI saliency map.
FIG. 6 is a diagram of the infrared and visible image weight generation process in accordance with an embodiment of the present invention. Wherein, (a) is IR initial weight map, (b) is VI initial weight map, (c) is IR optimization weight map, and (d) is VI optimization weight map.
FIG. 7 is a graph showing the comparison of the fusion effect of nine different fusion algorithms on the "camp" sequence; wherein (a) is an infrared image; (b) is a visible light image; (c) is BF; (d) is GFF; (e) is CBF; (f) NSCT; (g) is MST-SR; (h) is GTF; (i) is MF; (j) is QDBI; (k) is FMSPD; (l) is the process of the present invention.
FIG. 8 is a graph showing the results of comparison of the fusion effect of nine different fusion algorithms on a "Kaptein" sequence; wherein (a) is an infrared image; (b) is a visible light image; (c) is BF; (d) is GFF; (e) is CBF; (f) NSCT; (g) is MST-SR; (h) is GTF; (i) is MF; (j) is QDBI; (k) is FMSPD; (l) is the process of the present invention.
FIG. 9 is a comparison result chart of the fusion effect of nine different fusion algorithms on the "Marne" sequence; wherein (a) is an infrared image; (b) is a visible light image; (c) is BF; (d) is GFF; (e) is CBF; (f) NSCT; (g) is MST-SR; (h) is GTF; (i) is MF; (j) is QDBI; (k) is FMSPD; (l) is the process of the present invention.
FIG. 10 is a graph showing the results of comparison of the fusion effect of nine different fusion algorithms on the "Tank" sequence; wherein (a) is an infrared image; (b) is a visible light image; (c) is BF; (d) is GFF; (e) is CBF; (f) NSCT; (g) is MST-SR; (h) is GTF; (i) is MF; (j) is QDBI; (k) is FMSPD; (l) is the process of the present invention.
FIG. 11 is a graph showing the comparison of the fusion effect of nine different fusion algorithms on the "Road" sequence; wherein (a) is an infrared image; (b) is a visible light image; (c) is BF; (d) is GFF; (e) is CBF; (f) NSCT; (g) is MST-SR; (h) is GTF; (i) is MF; (j) is QDBI; (k) is FMSPD; (l) is the process of the present invention.
FIG. 12 is a graph showing the comparison of the fusion effect of nine different fusion algorithms on the "Kayak" sequence; wherein (a) is an infrared image; (b) is a visible light image; (c) is BF; (d) is GFF; (e) is CBF; (f) NSCT; (g) is MST-SR; (h) is GTF; (i) is MF; (j) is QDBI; (k) is FMSPD; (l) is the process of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
As mentioned above, existing multi-source image fusion algorithms generally suffer from noise interference, information loss and other defects, and still need further improvement. The invention therefore provides a multi-source image fusion method based on side window filtering. Given infrared and visible light images shot in the same scene, the images to be fused are first decomposed at multiple scales by side window filtering, generating base layer and detail layer images; saliency detection combining rarity color statistics with gradient energy optimization is performed on the infrared and visible light images respectively to obtain saliency maps; a fusion weight map is then obtained through saliency comparison and filtering optimization; the fusion weights guide the fusion of the base layer images to obtain a fused base layer, while a multi-level fusion strategy fuses the detail layers; finally, the fused base layer and fused detail layers are reconstructed to obtain the fused image.
To demonstrate the effectiveness of the method, the proposed fusion algorithm is analyzed qualitatively and quantitatively using 6 groups of infrared and visible light images in different environments from the public TNO dataset. In the experiments, the visible light and infrared image sequences of 6 different scenes in the TNO image fusion dataset, namely "Camp", "Kaptein", "Marne", "Tank", "Road" and "Kayak", are fused respectively. Referring to FIG. 1, the basic flow of image fusion on the TNO image data is as follows:
(1) Input the multi-source images to be fused.
An infrared image I_IR to be fused and a visible light image I_Vis to be fused are input respectively, where I_IR and I_Vis are shot in the same scene and are equal in size.
(2) Perform side window filtering on the infrared image I_IR to obtain the infrared base layer image B_IR.
(2a) The invention uses box filtering (Box-Filter) as the kernel function F of the side window filtering to perform the multi-scale decomposition of the input image I, so as to enhance the edge-preserving capability and reduce algorithm complexity, as shown in FIG. 2.
(2b) Take the infrared image I_IR as the input image I, and arbitrarily select a pixel point from it.
(2c) Eight side windows are obtained on the down (D), right (R), up (U), left (L), southwest (SW), southeast (SE), northeast (NE) and northwest (NW) sides of the selected pixel point. The structure of the side windows in side window filtering is defined in FIG. 3(a) for the continuous case, with parameters θ and r: θ is the angle of the window relative to the horizontal line, and r is the radius of the filtering window, which is set manually and fixed for all windows. ρ ∈ {0, r}, and (x, y) is the position of the target pixel. Keeping (x, y) fixed and changing θ adjusts the direction of the window while its side remains aligned with pixel l. To simplify the operation, eight side windows are defined for the discrete case, as shown in FIG. 3(b)-(d), corresponding to θ = k × π/2, k ∈ [0, 3].
(2d) Apply the kernel function F in each of the eight side windows to obtain eight different outputs in turn, and select the side window output with the minimum L2 distance to the input intensity as the final output, completing the side window filtering.
Specifically, the side window filtering (SWF) output of an arbitrary pixel of the input image is calculated as follows:

$$I_m = \frac{1}{N_m}\sum_{j \in \omega_l^m} w_{lj}\, q_j, \qquad N_m = \sum_{j \in \omega_l^m} w_{lj}, \quad m \in S$$

$$I_t = \arg\min_{m \in S} \left\| q_l - I_m \right\|_2^2$$

where m denotes one of the side windows, S = {L, R, U, D, NW, NE, SW, SE} is the set of side windows, I_m is the side window output obtained by applying the kernel function F in side window m, N_m is the sum of the weights within that side window, ω_l^m is the side window in one of the eight directions of the l-th target pixel (i.e. the pixel point selected in step (2b)) under the kernel function F, q_j is the intensity of the input image I at the j-th pixel, w_lj is the weight assigned by the kernel function F to the j-th pixel in the vicinity of the l-th target pixel, and I_t is the side window output with the minimum L2 distance to the intensity q_l of the target pixel, i.e. the final side window filtering output.
(2e) Judge whether all pixel points of the infrared image I_IR have been selected. If so, the f-th filtering result I_IR^f is obtained, f = 1, 2, ..., n, with initial state I_IR^0 = I_IR; subtracting I_IR^f from the (f-1)-th filtering result I_IR^{f-1} gives the infrared detail layer image of the current level, expressed as D_IR^f = I_IR^{f-1} - I_IR^f; the final filtering result is the infrared base layer image B_IR, and execution continues with step (3). Otherwise, step (2b) is executed. Here f is the current filtering pass and n is the number of decomposition levels, taken as 3 in this embodiment.
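The iteration of steps (2b) to (2e) over the whole image amounts to the following short sketch (an illustration under stated assumptions; swf_box is the single-pass box-kernel filter sketched earlier):

```python
def side_window_decompose(img, n_levels=3, r=1):
    """Multi-scale decomposition: repeated side window filtering yields
    detail layers D_f = I_{f-1} - I_f and a final base layer B."""
    current = img.astype('float32')
    details = []
    for _ in range(n_levels):                # f = 1, ..., n (n = 3 here)
        filtered = swf_box(current, r)       # f-th filtering result I_f
        details.append(current - filtered)   # detail layer of the current level
        current = filtered
    return current, details                  # base layer B, detail layers D_1..D_n
```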
(3) Perform side window filtering on the visible light image I_Vis to obtain the visible light base layer image B_Vis.
(3a) As in step (2a), box filtering (Box-Filter) is likewise used as the kernel function F of the side window filtering, as shown in FIG. 3.
(3b) Take the visible light image I_Vis as the input image I, and arbitrarily select a pixel point from it.
(3c) As in step (2c), eight side windows are obtained on the down (D), right (R), up (U), left (L), southwest (SW), southeast (SE), northeast (NE) and northwest (NW) sides of the selected pixel point.
(3d) As in step (2d), apply the kernel function F in each of the eight side windows to obtain eight different outputs in turn, and select the side window output with the minimum L2 distance to the input intensity as the final output, completing the side window filtering. The side window filtering (SWF) output of any pixel of the image is also calculated as in step (2d).
(3e) Judge whether all pixel points of the visible light image I_Vis have been selected. If so, the f-th filtering result I_Vis^f is obtained, f = 1, 2, ..., n, with initial state I_Vis^0 = I_Vis; subtracting I_Vis^f from the previous filtering result I_Vis^{f-1} gives the visible light detail layer image of the current level, D_Vis^f = I_Vis^{f-1} - I_Vis^f; the final filtering result is the visible light base layer image B_Vis, and execution continues with step (4). Otherwise, step (3b) is executed. Here f is the current filtering pass and n is the number of decomposition levels, taken as 3 in this algorithm. It is easy to see that the value of f is the same in step (2e) and step (3e).
(4) Perform saliency detection on the infrared image I_IR and the visible light image I_Vis using the composite saliency detection method combining rarity color statistics with gradient energy optimization, obtaining the infrared saliency map U_IR and the visible light saliency map U_Vis. Referring to FIG. 4, the specific steps are as follows:
(4a) Calculate the saliency value S(I_k) of a designated pixel I_k in the input image I according to the following formula:

$$S(I_k) = \sum_{I_o \in I} \left\| I_k - I_o \right\|$$

where I_k ∈ [0, 255], I is the input image, I_o is any pixel in the input image, and ||·|| denotes the distance between color values. This can be expanded as

$$S(I_k) = \|I_k - I_1\| + \|I_k - I_2\| + \dots + \|I_k - I_N\|$$

where N is the total number of pixels in the input image and I_1 ~ I_N denote the first to N-th pixels; the value of any pixel I_o of the input image is known.
(4b) Calculate the rarity color statistic T(I_k) of the designated pixel I_k according to the following formula:

[formula rendered as an image in the source: T(I_k) is defined in terms of the rarity N(I_k) of the designated pixel I_k and a minimum value σ]

where N(I_k) is the rarity of the designated pixel I_k, and σ denotes a minimum value.
(4c) Calculate the gradient energy GE(x, y) of the input image as follows:

$$GE(x,y) = \sum_{(i,j) \in \omega} \left( I_h(i,j)^2 + I_v(i,j)^2 + I_b(i,j)^2 \right)$$

where ω is the neighbourhood of pixel (x, y), whose window radius is set to 3 in the present invention, and I_h, I_v, I_b represent the gradient characteristics in the horizontal, vertical and diagonal directions respectively.
(4d) Calculate the composite saliency detection result combining rarity color statistics with gradient energy optimization for the input image according to the following formula:

$$U = T(I_k) * GE(x, y)$$

For the infrared image I_IR: U_IR = T(I_IR) * GE_IR. For the visible light image I_Vis: U_Vis = T(I_Vis) * GE_Vis. Here U_IR denotes the saliency detection result of the infrared image, i.e. the infrared saliency map U_IR, and U_Vis denotes the saliency detection result of the visible light image, i.e. the visible light saliency map U_Vis.
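The histogram trick below evaluates S(I_k) for all 256 gray levels at once, and a neighbourhood sum of squared gradients stands in for GE(x, y). This is a sketch under stated assumptions: the rarity statistic T(I_k) of step (4b) is omitted because its exact formula is rendered only as an image in the source, and the squared-sum form of the gradient energy is itself an assumption.

```python
import cv2
import numpy as np

def color_distance_saliency(img):
    """S(I_k) = sum over all pixels I_o of ||I_k - I_o||, for an 8-bit
    grayscale image, computed once per gray level via the histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    levels = np.arange(256)
    table = np.abs(levels[:, None] - levels[None, :]) @ hist  # table[k] = S(k)
    return table[img.astype(np.uint8)].astype(np.float32)

def gradient_energy(img, radius=3):
    """Assumed GE(x, y): window sum of squared horizontal, vertical and
    diagonal gradients over a radius-3 neighbourhood."""
    img = img.astype(np.float32)
    ih = np.gradient(img, axis=1)                    # horizontal gradient I_h
    iv = np.gradient(img, axis=0)                    # vertical gradient I_v
    ib = np.roll(img, (-1, -1), axis=(0, 1)) - img   # diagonal difference I_b (wraps at borders)
    energy = ih ** 2 + iv ** 2 + ib ** 2
    k = 2 * radius + 1
    return cv2.filter2D(energy, -1, np.ones((k, k), np.float32))  # window sum
```

Per step (4d), the saliency map would then be the pixel-wise product of the rarity statistic and this gradient energy.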
(5) Perform saliency comparison on the infrared saliency map U_IR and the visible light saliency map U_Vis to obtain the initial fusion weights of the base layer, and optimize them to obtain the final fusion weights W̄_IR and W̄_Vis. The specific steps are as follows:
(5a) Perform saliency comparison on the infrared saliency map U_IR and the visible light saliency map U_Vis to obtain the initial fusion weights. Specifically, the initial fusion weights O_IR(x, y) and O_Vis(x, y) of the infrared and visible light image base layers at (x, y) are calculated as follows:

$$O_{IR}(x,y) = \begin{cases} 1, & U_{IR}(x,y) \ge U_{Vis}(x,y) \\ 0, & \text{otherwise} \end{cases} \qquad O_{Vis}(x,y) = 1 - O_{IR}(x,y)$$

where U_IR(x, y) and U_Vis(x, y) denote the saliency of the infrared and visible light images at (x, y) respectively; the values O_IR(x, y) constitute the initial fusion weight map O_IR of the infrared image base layer, and the values O_Vis(x, y) constitute the initial fusion weight map O_Vis of the visible light image base layer.
(5b) The initial weight maps typically contain noise and may not align exactly with object boundaries in the source images, which degrades the quality of the fused image. Therefore, anisotropic guided filtering (AGF) is adopted to optimize the initial fusion weights, yielding the optimized fusion weights W_IR and W_Vis of the infrared and visible light images, which can be expressed as:

W_IR = AGF(O_IR)
W_Vis = AGF(O_Vis)
(5c) Normalize W_IR and W_Vis to obtain the final fusion weights W̄_IR and W̄_Vis.
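A compact sketch of steps (5a) to (5c) follows. The binary comparison rule for the initial weights and the substitution of OpenCV's guided filter (cv2.ximgproc.guidedFilter, from opencv-contrib-python) for the anisotropic guided filtering (AGF) named in the patent are both assumptions.

```python
import cv2
import numpy as np

def fusion_weights(u_ir, u_vis, radius=8, eps=1e-2):
    """Saliency comparison -> edge-aware smoothing -> normalization."""
    o_ir = (u_ir >= u_vis).astype(np.float32)  # assumed binary initial weight O_IR
    o_vis = 1.0 - o_ir                         # complementary weight O_Vis
    # Guided filtering as a stand-in for AGF; the saliency maps act as guides.
    w_ir = cv2.ximgproc.guidedFilter(u_ir.astype(np.float32), o_ir, radius, eps)
    w_vis = cv2.ximgproc.guidedFilter(u_vis.astype(np.float32), o_vis, radius, eps)
    total = w_ir + w_vis + 1e-12               # guard against division by zero
    return w_ir / total, w_vis / total         # final weights, summing to 1
```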
(6) Use the final fusion weights W̄_IR and W̄_Vis of step (5) to guide the fusion of the base layer images B_IR and B_Vis; the fused base layer image FuB is calculated according to the following formula:

$$FuB(x,y) = \bar{W}_{IR}(x,y)\, B_{IR}(x,y) + \bar{W}_{Vis}(x,y)\, B_{Vis}(x,y)$$
(7) Generate the fused detail layer images FuD_i.
The method fuses the infrared and visible light detail layers using a mixed fusion strategy based on the gradient characteristic (GC) and intensity variance (IV) of the image. The gradient characteristic reflects the contrast of fine image details: the stronger the gradient characteristic, the more prominent the edge features of the image. The intensity variance reflects the local energy of the image: the larger the local energy, the clearer the corresponding region of the image.
the composite fusion strategy for a detail layer can be expressed as follows, calculating the value of each element in the fused detail layer:
Figure RE-GDA0003891642060000098
wherein the content of the first and second substances,
Figure RE-GDA0003891642060000099
representing information at an ith detail layer (x, y) of the infrared detail layers,
Figure RE-GDA00038916420600000910
representing information at an ith detail layer (x, y) of the visible light detail layers,
Figure RE-GDA0003891642060000101
and
Figure RE-GDA0003891642060000102
respectively representing the gradient characteristics of the ith detail layer in the infrared detail layer and the visible detail layer,
Figure RE-GDA0003891642060000103
and
Figure RE-GDA0003891642060000104
respectively representing the intensity variance, fuD, of the ith segment in the infrared segment and in the visible segment i (x, y) represents the fusion result of the ith detail layer (x, y) of the infrared detail layer and the ith detail layer (x, y) of the visible light detail layer.
(7c) All the values FuD_i(x, y) constitute the fused detail layer image FuD_i.
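Since the exact combination formula is only an image in the source, the sketch below implements one plausible reading of the mixed strategy above: at each pixel, keep the detail coefficient whose product of gradient characteristic and local intensity variance is larger. The choose-max rule and the window size are assumptions.

```python
import cv2
import numpy as np

def fuse_detail(d_ir, d_vis, radius=3):
    """Assumed GC*IV activity comparison between two detail layers."""
    def activity(d):
        gx = np.gradient(d, axis=1)
        gy = np.gradient(d, axis=0)
        gc = np.hypot(gx, gy)                     # gradient characteristic (GC)
        k = (2 * radius + 1,) * 2
        mean = cv2.blur(d, k)
        iv = np.maximum(cv2.blur(d * d, k) - mean ** 2, 0)  # intensity variance (IV)
        return gc * iv
    d_ir = d_ir.astype(np.float32)
    d_vis = d_vis.astype(np.float32)
    return np.where(activity(d_ir) >= activity(d_vis), d_ir, d_vis)
```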
(8) Add the fused base layer image FuB and the fused detail layer images FuD_i to obtain the fused image FuI, calculated according to the following formula:

$$FuI = FuB + \sum_{i=1}^{n} FuD_i$$

where n denotes the number of decomposition levels.
subjective evaluation is popular and straight in fusion quality evaluationHowever, subjective evaluation is easily affected by human factors, and when the difference between the fused images is small, accurate judgment is difficult to be made only through the subjective evaluation. Therefore, the invention introduces objective evaluation indexes for objective evaluation on the basis of subjective evaluation. The objective evaluation indexes adopted by the invention are as follows: entropy of information (EN), average Gradient (AG), structural Similarity (SSIM), image fidelity (VIF), edge preservation factor (Q) AB/F ). Wherein EN represents the information content contained in the fused image, AG is the definition measurement of the image, SSIM and VIF represent the information retention degree of the fused image to the source image, and Q AB/F Representing the degree of preservation of edge information in the fused image.
The larger the values of the five evaluation indexes adopted by the invention, the better the fusion effect. In the table, each row gives the values of one evaluation index computed on the fusion results of the different algorithms, and the best method under each index is shown in bold.
The following table gives the average values of the fusion results of each method over the six groups of image sequences. It can be seen that, in comparison experiments with various advanced algorithms in different scenes, the numerical results of the proposed algorithm are better under most evaluation criteria, i.e. the fusion effect is better.

[Table rendered as an image in the source: average EN, AG, SSIM, VIF and Q^{AB/F} values of each method over the six image sequences; the numerical entries are not recoverable here.]
Referring to FIG. 5 and FIG. 6, the saliency detection results and the weight generation process for the infrared and visible light images are shown. Subjective evaluation judges the quality of the fused image on the basis of the human visual system, and the quality of the visual effect indicates the quality of the fusion result. Six groups of infrared and visible light source images of different scenes are selected from the public dataset for comparison experiments, and the experimental results are analyzed qualitatively.
An ideal infrared and visible light fused image not only contains the salient target information of the infrared image but also retains the clear texture details of the visible light image. The comparison results of the proposed method and the 9 selected comparison algorithms on the six scenes "Camp", "Kaptein", "Marne", "Tank", "Road" and "Kayak" are shown in FIG. 7 to FIG. 12. All source images are strictly registered, and defects in the results of the comparison algorithms are marked at the corresponding positions with red boxes.
In fig. 7 to 12, (a) is an infrared image, (b) is a visible light image, (c) is a BF algorithm fusion result, (d) is a GFF algorithm fusion result, (e) is a CBF algorithm fusion result, (f) is an NSCT fusion result, (g) is an MST-SR algorithm fusion result, (h) is a GTF algorithm fusion result, (i) is an MF algorithm fusion result, (j) is a QDBI algorithm fusion result, (k) is an FMSPD algorithm fusion result, and (l) is an algorithm fusion result provided by the present invention.
In conclusion, in comparison experiments with various advanced algorithms under different scenes, the algorithm provided by the invention has optimal comprehensive performance in subjective evaluation and objective evaluation, and the method has excellent algorithm performance and wide applicability.

Claims (9)

1. A multi-source image fusion method based on side window filtering is characterized by comprising the following steps:
step (1), inputting the multi-source images to be fused, namely an infrared image I_IR and a visible light image I_Vis shot in the same scene, where I_IR and I_Vis are equal in size;
step (2), performing side window filtering on the infrared image I_IR and the visible light image I_Vis respectively to obtain an infrared base layer image B_IR and a visible light base layer image B_Vis;
step (3), performing saliency detection on the infrared image I_IR and the visible light image I_Vis respectively using a method combining rarity color statistics with gradient energy optimization, generating an infrared saliency map U_IR and a visible light saliency map U_Vis;
step (4), performing saliency comparison on the infrared saliency map U_IR and the visible light saliency map U_Vis to obtain the initial fusion weights of the base layer, and optimizing them to obtain the final fusion weights W̄_IR and W̄_Vis;
step (5), using W̄_IR and W̄_Vis to guide the fusion of the infrared base layer image B_IR and the visible light base layer image B_Vis, generating a fused base layer image FuB;
step (6), fusing the infrared detail layers and the visible light detail layers using a mixed fusion strategy based on the gradient characteristic (GC) and intensity variance (IV) of the image, obtaining the fused detail layer images FuD_i;
step (7), adding the fused base layer image FuB and the fused detail layer images FuD_i to obtain the fused image FuI.
2. The multi-source image fusion method based on side window filtering according to claim 1, wherein the side window filtering comprises the following steps:
step 1, randomly selecting a pixel point from an input image I;
step 2, obtaining eight side windows from the downward (D), right (R), upward (U), left (L), southwest (SW), southeast (SE), northeast (NE) and Northwest (NW) sides of the selected pixel points;
step 3, calculating the side window filtering (SWF) output of any pixel of the input image according to the following formulas:

$$I_m = \frac{1}{N_m}\sum_{j \in \omega_l^m} w_{lj}\, q_j, \qquad N_m = \sum_{j \in \omega_l^m} w_{lj}, \quad m \in S$$

$$I_t = \arg\min_{m \in S} \left\| q_l - I_m \right\|_2^2$$

where m denotes one of the side windows, S = {L, R, U, D, NW, NE, SW, SE} is the set of side windows, I_m is the side window output obtained by applying the kernel function F in side window m, N_m is the sum of the weights within that side window, ω_l^m is the side window in one of the eight directions of the l-th target pixel under the kernel function F, q_j is the intensity of the input image I at the j-th pixel, w_lj is the weight assigned by the kernel function F to the j-th pixel in the vicinity of the l-th target pixel, and I_t is the side window output with the minimum L2 distance to the intensity q_l of the target pixel, i.e. the final side window filtering output;

step 4, after all pixel points of the input image I have been selected, the f-th filtering result I_f is obtained, f = 1, 2, ..., n, with I_0 = I; subtracting I_f from the (f-1)-th filtering result I_{f-1} gives the detail layer image of the current level, expressed as D_f = I_{f-1} - I_f; the final filtering result is the base layer image B, where n is the number of decomposition levels;

wherein, taking the input image I as the infrared image I_IR and the visible light image I_Vis respectively yields the infrared base layer image B_IR and the visible light base layer image B_Vis.
3. The multi-source image fusion method based on side window filtering according to claim 2, wherein the kernel function F of the side window filtering is a box filter.
4. The multi-source image fusion method based on the side window filtering, according to claim 1, characterized in that the specific steps of the step (3) are as follows:
step 1, calculating the saliency value S(I_k) of a designated pixel I_k in the input image I according to the following formula:

S(I_k) = ||I_k - I_1|| + ||I_k - I_2|| + ... + ||I_k - I_N||

where I is the input image, I_k ∈ [0, 255], ||·|| denotes the distance between color values, N is the total number of pixels in the input image, and I_1 ~ I_N denote the first to N-th pixels;
step 2, calculating the rarity color statistic T(I_k) of the designated pixel I_k according to the following formula:

[formula rendered as an image in the source: T(I_k) is defined in terms of N(I_k) and σ]

where N(I_k) is the rarity of the designated pixel I_k of step 1, and σ denotes a minimum value;
step 3, calculating the gradient energy GE(x, y) of the input image according to the following formula:

$$GE(x,y) = \sum_{(i,j) \in \omega} \left( I_h(i,j)^2 + I_v(i,j)^2 + I_b(i,j)^2 \right)$$

where ω is the neighbourhood of pixel (x, y), whose window radius is set to 3, and I_h, I_v, I_b represent the gradient characteristics in the horizontal, vertical and diagonal directions respectively;
step 4, calculating the composite saliency detection result combining rarity color statistics with gradient energy optimization for the input image according to the following formula:

$$U = T(I_k) * GE(x, y);$$

taking the input image I as the infrared image I_IR and the visible light image I_Vis respectively yields the infrared saliency map U_IR and the visible light saliency map U_Vis.
5. The multi-source image fusion method based on the side window filtering of claim 1, wherein the specific steps of the step (4) are as follows:
step (4a), performing saliency comparison on the infrared saliency map U_IR and the visible light saliency map U_Vis to obtain the initial fusion weights;
step (4b), optimizing the initial fusion weights using anisotropic guided filtering (AGF) to obtain the optimized fusion weights W_IR and W_Vis of the infrared and visible light images;
step (4c), normalizing W_IR and W_Vis to obtain the final fusion weights W̄_IR and W̄_Vis.
6. The multi-source image fusion method based on side window filtering according to claim 5, wherein step (4a) calculates the initial fusion weights O_IR(x, y) and O_Vis(x, y) of the infrared and visible light image base layers at (x, y) according to the following formulas:

$$O_{IR}(x,y) = \begin{cases} 1, & U_{IR}(x,y) \ge U_{Vis}(x,y) \\ 0, & \text{otherwise} \end{cases} \qquad O_{Vis}(x,y) = 1 - O_{IR}(x,y)$$

where U_IR(x, y) and U_Vis(x, y) denote the saliency of the infrared and visible light images at (x, y) respectively; the values O_IR(x, y) constitute the initial fusion weight map O_IR of the infrared image base layer, and the values O_Vis(x, y) constitute the initial fusion weight map O_Vis of the visible light image base layer;

and step (4b) optimizes the initial fusion weights using anisotropic guided filtering (AGF), denoted as W_IR = AGF(O_IR) and W_Vis = AGF(O_Vis).
7. The multi-source image fusion method based on side window filtering according to claim 1 or 5, wherein in step (5) the fused base layer image FuB is calculated according to the following formula:

$$FuB(x,y) = \bar{W}_{IR}(x,y)\, B_{IR}(x,y) + \bar{W}_{Vis}(x,y)\, B_{Vis}(x,y)$$
8. The multi-source image fusion method based on side window filtering according to claim 1, wherein step (6) calculates the value of each element of the fused detail layer according to the following formula:

[formula rendered as an image in the source: FuD_i(x, y) is determined from the two detail layers by comparing their gradient characteristics and intensity variances]

where D_IR^i(x, y) denotes the information at (x, y) of the i-th infrared detail layer, D_Vis^i(x, y) denotes the information at (x, y) of the i-th visible light detail layer, GC_IR^i and GC_Vis^i denote the gradient characteristics of the i-th infrared and visible light detail layers respectively, IV_IR^i and IV_Vis^i denote the intensity variances of the i-th infrared and visible light detail layers respectively, FuD_i(x, y) denotes the fusion result at (x, y) of the i-th infrared and visible light detail layers, and all values FuD_i(x, y) constitute the fused detail layer image FuD_i.
9. The multi-source image fusion method based on side window filtering according to claim 5, wherein in step (7) the fused image FuI is calculated according to the following formula:

$$FuI = FuB + \sum_{i=1}^{n} FuD_i$$

where n denotes the number of decomposition levels.
CN202210982221.8A 2022-08-16 2022-08-16 Multi-source image fusion method based on side window filtering Pending CN115330653A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210982221.8A CN115330653A (en) 2022-08-16 2022-08-16 Multi-source image fusion method based on side window filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210982221.8A CN115330653A (en) 2022-08-16 2022-08-16 Multi-source image fusion method based on side window filtering

Publications (1)

Publication Number Publication Date
CN115330653A true CN115330653A (en) 2022-11-11

Family

ID=83923649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210982221.8A Pending CN115330653A (en) 2022-08-16 2022-08-16 Multi-source image fusion method based on side window filtering

Country Status (1)

Country Link
CN (1) CN115330653A (en)


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578304A (en) * 2022-12-12 2023-01-06 四川大学 Multi-band image fusion method and system combining saliency region detection
CN115578304B (en) * 2022-12-12 2023-03-10 四川大学 Multi-band image fusion method and system combining saliency region detection
CN116167956A (en) * 2023-03-28 2023-05-26 无锡学院 ISAR and VIS image fusion method based on asymmetric multi-layer decomposition
CN116167956B (en) * 2023-03-28 2023-11-17 无锡学院 ISAR and VIS image fusion method based on asymmetric multi-layer decomposition
CN117745555A (en) * 2023-11-23 2024-03-22 广州市南沙区北科光子感知技术研究院 Fusion method of multi-scale infrared and visible light images based on double partial differential equation
CN117372276A (en) * 2023-12-04 2024-01-09 长春理工大学 Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering
CN117372276B (en) * 2023-12-04 2024-03-08 长春理工大学 Multispectral and panchromatic image fusion panchromatic sharpening method based on side window filtering

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN115330653A (en) Multi-source image fusion method based on side window filtering
CN106780485B (en) SAR image change detection method based on super-pixel segmentation and feature learning
CN111738314B (en) Deep learning method of multi-modal image visibility detection model based on shallow fusion
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN108596975B (en) Stereo matching algorithm for weak texture region
CN111462027B (en) Multi-focus image fusion method based on multi-scale gradient and matting
CN112288758B (en) Infrared and visible light image registration method for power equipment
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
CN110866882B (en) Layered joint bilateral filtering depth map repairing method based on depth confidence
CN111709888B (en) Aerial image defogging method based on improved generation countermeasure network
CN111161160B (en) Foggy weather obstacle detection method and device, electronic equipment and storage medium
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN109671031B (en) Multispectral image inversion method based on residual learning convolutional neural network
CN114782298B (en) Infrared and visible light image fusion method with regional attention
CN111444929B (en) Saliency map calculation method and system based on fuzzy neural network
CN108614998B (en) Single-pixel infrared target detection method
CN112734822A (en) Stereo matching algorithm based on infrared and visible light images
Junwu et al. An infrared and visible image fusion algorithm based on LSWT-NSST
CN112465735A (en) Pedestrian detection method, device and computer-readable storage medium
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras
CN114549642B (en) Low-contrast infrared dim target detection method
Kim et al. Cross fusion-based low dynamic and saturated image enhancement for infrared search and tracking systems
CN114639002A (en) Infrared and visible light image fusion method based on multi-mode characteristics
Babu et al. An efficient image dahazing using Googlenet based convolution neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination