CN107248150A - Multi-scale image fusion method based on guided-filter salient-region extraction - Google Patents

Multi-scale image fusion method based on guided-filter salient-region extraction Download PDF

Info

Publication number
CN107248150A
CN107248150A CN201710638504.XA
Authority
CN
China
Prior art keywords
image
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710638504.XA
Other languages
Chinese (zh)
Inventor
崔光茫
赵巨峰
公晓丽
辛青
逯鑫淼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201710638504.XA priority Critical patent/CN107248150A/en
Publication of CN107248150A publication Critical patent/CN107248150A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a multi-scale image fusion method based on guided-filter salient-region extraction, comprising the following steps: input the visible and infrared images of the same scene and carry out multi-scale decomposition with the non-subsampled contourlet transform, dividing each image into detail layers of several different scales; compute the local-standard-deviation map of each layer image; on the basis of the local-standard-deviation maps, compute the corresponding binarized saliency weight maps; extract the salient-region information with a guided filter; fuse the images layer by layer with reference to the salient-region maps; and reconstruct the final fusion result by weighted accumulation. The invention realizes an effective multi-scale decomposition and employs a guided-filter-based salient-region extraction algorithm that effectively extracts the salient-region information of the corresponding layers, so that the fusion result better retains the saliency information of each source image and achieves a good visual fusion effect.

Description

Multi-scale image fusion method based on guided-filter salient-region extraction
Technical field
The present invention relates to computer image processing, and more particularly to a multi-scale infrared and visible image fusion method based on guided-filter salient-region extraction.
Background art
With the development of sensor technology, imaging sensors of different wavebands have come into wide use, and the image fusion techniques developed alongside them have become a research focus. Multi-band image fusion fuses the information of the images gathered by different image sensors of the same scene to obtain a fused image of richer information, and it has important applications in imaging and detection across the military and civilian fields.
Infrared and visible image fusion combines the information of the heat-radiating target regions in the infrared image with the scene detail information in the visible image, and retains the characteristic information of both images simultaneously in the fusion result. Researchers at home and abroad have proposed many image fusion algorithms, mainly employing multi-scale image decomposition tools, pyramid decomposition, principal component analysis, morphological top-hat transforms, and the like; the resulting fusion combines good texture and contrast features and extracts the key features of the visible and infrared images. Judging from the research trends of fusion algorithms, how to effectively extract the salient feature information of multi-source images and achieve fine fusion of image detail information remains the pressing open problem for infrared and visible image fusion algorithms.
Summary of the invention
The present invention proposes a multi-scale image fusion method based on guided-filter salient-region extraction. The non-subsampled contourlet transform (NSCT) decomposes the input infrared and visible images into multiple scales; within the detail layers of each scale, an effective image fusion is carried out with a guided-filter-based salient-region extraction method, ensuring that the visually salient region information of every decomposition layer is retained; a weighted reconstruction finally yields a fusion result with a good visual-enhancement effect.
Using NSCT multi-scale decomposition and guided-filter salient-region extraction, the present invention proposes a multi-scale infrared and visible image fusion method whose main ideas are:
1. The NSCT multi-scale decomposition tool realizes an effective multi-scale decomposition, ensuring a coarse-to-fine layered treatment of the fused information and helping to enrich the information content of the fusion result. The fusion results of the detail layers at the different scales are then reconstructed into the final fused image by weighted accumulation, and a reasonable setting of the weights yields a good visual-information-enhancement effect.
2. A guided-filter-based salient-region extraction algorithm effectively extracts the salient-region information of the corresponding layer. The designed algorithm first obtains a binarized salient-region weight map, which serves as the input image of the guided-filter operation; this map marks the strong-edge, detail-rich regions of large local standard deviation in the original image, so the filter result reflects the salient properties of human vision. With the original image as the guidance image of the guided-filter operation, edge saliency information continuously distributed over [0, 1] is obtained from the original image; combined with the blur-parameter settings of the guidance, coarse-to-fine saliency-map results are obtained at the different scales, so that the salient-region information of the multi-scale decomposition image of every layer is well preserved.
A multi-scale image fusion method based on guided-filter salient-region extraction comprises the following steps:
(1) Multi-scale image decomposition using the non-subsampled contourlet transform.
Input the visible image f and the infrared image g of the same scene and apply multi-scale image decomposition to each, using the non-subsampled contourlet transform (NSCT) to obtain decomposition layers of different detail scales, expressed as:
f_i = Multi_NSCT(f, i)    (1)
g_i = Multi_NSCT(g, i)    (2)
where i = 1, 2, ..., N, N being the number of NSCT decomposition levels; Multi_NSCT denotes the NSCT multi-scale image decomposition framework; and f_i and g_i are the visible and infrared detail layers of the corresponding scale.
NSCT has good multi-scale and time-frequency localization properties, together with anisotropic multi-directional selectivity. Decomposing the image with NSCT lets each decomposed layer preserve the image edge information of its scale, which benefits the final result. Moreover, the decomposition involves no down-sampling operation, so every decomposition layer keeps the original image resolution and the reconstruction incurs no up-sampling information loss.
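By way of illustration, the decomposition interface of Eqs. (1)-(2) can be sketched as below. NSCT itself is not available in standard Python packages, so the sketch substitutes an undecimated difference-of-Gaussians pyramid, which shares NSCT's full-resolution, no-down-sampling layering but not its directional selectivity; all names are illustrative, not from the patent.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multi_scale_decompose(img, n_levels):
        """Return n_levels full-resolution detail layers plus a base layer.

        Stand-in for Multi_NSCT of Eqs. (1)-(2): like NSCT, no down-sampling,
        so summing all returned layers reconstructs the input exactly.
        """
        layers = []
        current = np.asarray(img, dtype=np.float64)
        for i in range(n_levels):
            smoothed = gaussian_filter(current, sigma=2.0 ** i)  # coarser each level
            layers.append(current - smoothed)                    # detail layer f_i
            current = smoothed
        layers.append(current)                                   # residual base layer
        return layers

Because every layer keeps the original resolution, the sum of all returned layers reproduces the input, mirroring the loss-free reconstruction property stated above for NSCT.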
(2) Compute the local-standard-deviation maps.
For each decomposition-layer image obtained in step (1), traverse the image with a local window and compute the local-standard-deviation map, expressed as:

S_{f_i} = LocalStd(f_i, W)    (3)
S_{g_i} = LocalStd(g_i, W)    (4)

where W is a local window of size T × T, LocalStd is the windowed standard-deviation computation, and S_{f_i} and S_{g_i} are the local-standard-deviation maps of the corresponding visible and infrared layers. Image regions of larger local variance take larger values in these maps.
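A minimal sketch of the windowed computation in Eqs. (3)-(4), using the identity std = sqrt(E[x^2] - E[x]^2); the patent does not prescribe an implementation, so the helper below is an assumption.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_std(layer, T=11):
        """Local-standard-deviation map over a sliding T x T window (LocalStd)."""
        mean = uniform_filter(layer, size=T)
        mean_sq = uniform_filter(layer * layer, size=T)
        var = np.maximum(mean_sq - mean * mean, 0.0)  # clamp rounding negatives
        return np.sqrt(var)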
(3) Obtain the binarized saliency weight maps.
For the local-standard-deviation maps obtained in step (2), compare the infrared and visible maps of the same layer pixel by pixel and binarize the outcome, expressed as:

P^k_{f_i} = { 1, if S^k_{f_i} = max(S^k_{f_i}, S^k_{g_i}); 0, else },  k ∈ S_{f_i}    (5)
P^k_{g_i} = { 1, if S^k_{g_i} = max(S^k_{f_i}, S^k_{g_i}); 0, else },  k ∈ S_{g_i}    (6)

where S^k_{f_i} and S^k_{g_i} are the values of the corresponding visible and infrared layer maps at pixel k. This selection criterion ensures that each weight map embodies the saliency information of its own waveband. Then, the closing operation of image morphology is applied to obtain the final binarized saliency weight maps:

P_{f_i} = imclose(P^k_{f_i})    (7)
P_{g_i} = imclose(P^k_{g_i})    (8)
where imclose(·) is the closing operation in morphological image processing, and P_{f_i} and P_{g_i} are the binarized saliency weight maps of the corresponding layers. The image closing effectively eliminates small disconnected regions in the saliency weight map and yields smoother salient-region contours.
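The winner-take-all comparison of Eqs. (5)-(6) and the closing of Eqs. (7)-(8) can be sketched as follows; the structuring-element size is an assumed parameter not given in the patent.

    import numpy as np
    from scipy.ndimage import binary_closing

    def binary_saliency_weights(S_f, S_g, close_size=5):
        """Binarized saliency weight maps P_f, P_g from two local-std maps."""
        P_f = S_f >= S_g                      # Eq. (5): 1 where the visible layer wins
        P_g = S_g >= S_f                      # Eq. (6): 1 where the infrared layer wins
        structure = np.ones((close_size, close_size), dtype=bool)
        P_f = binary_closing(P_f, structure=structure)  # Eq. (7)
        P_g = binary_closing(P_g, structure=structure)  # Eq. (8)
        return P_f.astype(np.float64), P_g.astype(np.float64)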
(4) Extract the salient-region maps based on guided filtering.
For the binarized saliency weight maps obtained in step (3), guided filtering yields the salient-region extraction results, expressed as:

Map_{f_i} = GF(P_{f_i}, f, r_i, μ_i)    (9)
Map_{g_i} = GF(P_{g_i}, g, r_i, μ_i)    (10)

where GF(·) denotes the guided-filter operation; P_{f_i} and P_{g_i} are the input images of the guided filtering; f and g are the guidance images; r_i and μ_i are the guided-filter window size and blur degree of the corresponding layer; and Map_{f_i} and Map_{g_i} are the salient-region extraction results of the corresponding visible and infrared layers.
In the salient-region extraction, the binarized saliency weight map serves as the input image: it marks the strong-edge, detail-rich regions of large local variance in the original image, exactly the regions that human vision attends to most, and, combined with the guided-filtering process, it ensures that the filter result reflects the salient properties of human vision. With the original image as the guidance image, the guided-filter operation refines the binarized salient-region weight map into edge saliency information continuously distributed over [0, 1] in the original image; different guided-filter blur-parameter settings then yield coarse-to-fine saliency maps of different scales, so that the salient-region information of the multi-scale decomposition image of every layer is well preserved.
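For reference, a compact guided filter in the standard box-filter formulation of He et al., used as in Eqs. (9)-(10) with the binary weight map as input and the original image as guide; reading the patent's blur degree μ_i as the usual regularizer eps is an assumption.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(P, I, r, eps):
        """GF(P, I, r, eps): edge-aware transfer of P onto the structure of guide I."""
        size = 2 * r + 1                       # window edge length from radius r
        mean_I = uniform_filter(I, size)
        mean_P = uniform_filter(P, size)
        corr_IP = uniform_filter(I * P, size)
        corr_II = uniform_filter(I * I, size)
        var_I = corr_II - mean_I * mean_I
        cov_IP = corr_IP - mean_I * mean_P
        a = cov_IP / (var_I + eps)             # local linear coefficients of q = a*I + b
        b = mean_P - a * mean_I
        return uniform_filter(a, size) * I + uniform_filter(b, size)

A larger eps blurs the output map more, which matches the coarse-to-fine μ_i schedule described above.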
(5) Image fusion combined with salient-region extraction.
Using the salient-region extraction results of step (4), image fusion is carried out within each scale layer, so that the fusion result preserves the salient-region detail information of the different images, specifically expressed as:

M_i = ([f_i × Map_{f_i} + g_i × (1 − Map_{f_i})] + [f_i × (1 − Map_{g_i}) + g_i × Map_{g_i}]) / 2,  (i = 1, 2, ..., N)    (11)

where M_i denotes the fusion result of each layer.
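Eq. (11) translates directly into code; a minimal sketch, with illustrative names:

    def fuse_layer(f_i, g_i, Map_f, Map_g):
        """Per-layer fusion rule of Eq. (11): average of the two symmetric blends."""
        term_f = f_i * Map_f + g_i * (1.0 - Map_f)   # visible-saliency blend
        term_g = f_i * (1.0 - Map_g) + g_i * Map_g   # infrared-saliency blend
        return 0.5 * (term_f + term_g)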
(6) Weighted reconstruction of the fused image.
The layer fusion results obtained in step (5) are reconstructed into the final fusion result by weighted accumulation, expressed as:

M_fusion = Σ_{i=1}^{N} λ_i M_i    (12)

where λ_i is the reconstruction weight of each layer's fusion result and M_fusion is the final visible-infrared fusion result; setting reasonable weights yields an information-enhanced fused image.
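Eq. (12) as a sketch; how the residual base layer of the decomposition re-enters the sum is not specified in the patent, so adding it back unweighted is an assumption.

    import numpy as np

    def weighted_reconstruct(fused_layers, weights, base=None):
        """Weighted accumulation of Eq. (12): M_fusion = sum_i lambda_i * M_i."""
        result = sum(w * m for w, m in zip(weights, fused_layers))
        if base is not None:
            result = result + base             # assumed handling of the base layer
        return np.clip(result, 0.0, 1.0)       # assuming inputs normalized to [0, 1]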
The beneficial effects of the invention are: for visible and infrared image fusion, the non-subsampled contourlet transform (NSCT) provides the multi-scale decomposition of the images; within the detail layers of each scale, a guided-filter-based salient-region extraction method drives an effective image fusion that preserves the visually salient region information of every decomposition layer; and a weighted reconstruction yields a fusion result with a good visual-enhancement effect. With the present invention, one simply inputs the visible and infrared images of the same scene to carry out an effective multi-scale image fusion and obtain a high-quality fusion result. The invention is applicable to fields such as remote sensing, military reconnaissance, security monitoring, and industrial production.
Brief description of the drawings
Fig. 1 is the algorithm flow chart;
Fig. 2(a) is the input infrared image;
Fig. 2(b) is the input visible image;
Fig. 3(a) is the local-standard-deviation map of the infrared image;
Fig. 3(b) is the local-standard-deviation map of the visible image;
Fig. 4(a) is the binarized saliency weight map of the infrared image;
Fig. 4(b) is the binarized saliency weight map of the visible image;
Fig. 5(a) is the salient-region map of the infrared image;
Fig. 5(b) is the salient-region map of the visible image;
Fig. 6 is the infrared and visible image fusion result.
Detailed description of the embodiments
The technical solution of the invention is described below, clearly and completely, through a specific embodiment in conjunction with the accompanying drawings.
The flow chart of the method of the invention is shown in Fig. 1.
Fig. 2 shows an example pair of infrared and visible images of the same scene: Fig. 2(a) is the input infrared image and Fig. 2(b) the input visible image.
Figs. 3-5 illustrate how the guided-filter salient-region maps are obtained. Figs. 3(a) and (b) are the local-standard-deviation maps of the infrared and visible images, which highlight the scene regions of large local standard deviation; Figs. 4(a) and (b) are the binarized saliency weight maps of the infrared and visible images, reflecting the human-vision saliency information of each image; Figs. 5(a) and (b) are the salient-region maps of the infrared and visible images, which reflect the regions human vision is most interested in; introduced into the image fusion framework, they help produce fusion results with better subjective visual quality.
In the present embodiment, N = 4 is set, i.e., the NSCT multi-scale decomposition tool decomposes the input images into 4 scales, yielding 4 detail layers from coarse to fine.
Then, for each decomposition scale, the corresponding original infrared and visible images serve as guidance images for the salient-region extraction. In the computation of the local-standard-deviation maps, the local window W is of size T = 11; in the guided-filter salient-region extraction, the filter sizes and blur degrees are set to r_i = {10, 7, 7, 7} and μ_i = {0.1, 0.001, 0.00001, 0.000001} (i = 1, 2, 3, 4). The finer the decomposition layer, the smaller the corresponding guided-filter blur degree, so as to better preserve the detail information of the salient regions.
Guided by the saliency-map distributions, the infrared and visible images of each scale are fused, which well preserves the edges and details of the images and highlights the regions salient to human vision.
Finally, weighted reconstruction of the per-scale fusion results yields the final infrared and visible fusion result. The reconstruction weights are chosen as λ_i = {0.60, 0.31, 0.45, 0.75} (i = 1, 2, 3, 4). The final fusion result, shown in Fig. 6, well preserves the salient-region information of the infrared and visible images and has excellent visual quality.
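Wiring the helpers sketched above together with the embodiment's settings gives an end-to-end illustration; visible and infrared are assumed to be pre-registered float images in [0, 1], and averaging the two base layers is an assumption.

    N, T = 4, 11
    r = [10, 7, 7, 7]                                  # guided-filter sizes r_i
    mu = [1e-1, 1e-3, 1e-5, 1e-6]                      # blur degrees mu_i
    lam = [0.60, 0.31, 0.45, 0.75]                     # reconstruction weights lambda_i

    f_layers = multi_scale_decompose(visible, N)       # stand-in for NSCT, Eqs. (1)-(2)
    g_layers = multi_scale_decompose(infrared, N)
    fused = []
    for i in range(N):
        S_f, S_g = local_std(f_layers[i], T), local_std(g_layers[i], T)   # Eqs. (3)-(4)
        P_f, P_g = binary_saliency_weights(S_f, S_g)                      # Eqs. (5)-(8)
        Map_f = guided_filter(P_f, visible, r[i], mu[i])                  # Eq. (9)
        Map_g = guided_filter(P_g, infrared, r[i], mu[i])                 # Eq. (10)
        fused.append(fuse_layer(f_layers[i], g_layers[i], Map_f, Map_g))  # Eq. (11)
    base = 0.5 * (f_layers[N] + g_layers[N])           # assumed base-layer handling
    M_fusion = weighted_reconstruct(fused, lam, base)  # Eq. (12)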

Claims (1)

1. A multi-scale image fusion method based on guided-filter salient-region extraction, characterized in that the method comprises the following steps:
(1) carry out multi-scale image decomposition using the non-subsampled contourlet transform;
input the visible image f and the infrared image g of the same scene, apply multi-scale image decomposition to each, and use NSCT to obtain decomposition layers of different detail scales, expressed as:
f_i = Multi_NSCT(f, i)    (1)
g_i = Multi_NSCT(g, i)    (2)
where i = 1, 2, ..., N, N being the number of NSCT decomposition levels; Multi_NSCT denotes the NSCT multi-scale image decomposition framework; f_i and g_i are the visible and infrared detail layers of the corresponding scale; NSCT is the non-subsampled contourlet decomposition;
(2) compute the local-standard-deviation maps;
for each decomposition layer obtained in step (1), traverse the image with a local window to compute the local-standard-deviation map, expressed as:
S_{f_i} = LocalStd(f_i, W)    (3)
S_{g_i} = LocalStd(g_i, W)    (4)
where W is a local window of size T × T, LocalStd is the windowed standard-deviation computation, and S_{f_i} and S_{g_i} are the local-standard-deviation maps of the corresponding visible and infrared layers;
(3) obtain the binarized saliency weight maps;
for the local-standard-deviation maps obtained in step (2), compare the infrared and visible maps of the same layer, and combine with the image closing operation, to obtain the binarized saliency weight maps, expressed as:
P^k_{f_i} = { 1, if S^k_{f_i} = max(S^k_{f_i}, S^k_{g_i}); 0, else },  k ∈ S_{f_i}    (5)
P^k_{g_i} = { 1, if S^k_{g_i} = max(S^k_{f_i}, S^k_{g_i}); 0, else },  k ∈ S_{g_i}    (6)
where S^k_{f_i} and S^k_{g_i} are the values of the corresponding visible and infrared layer maps at pixel k; then apply the closing operation of image morphology to obtain the final binarized saliency weight maps:
P_{f_i} = imclose(P^k_{f_i})    (7)
P_{g_i} = imclose(P^k_{g_i})    (8)
where imclose(·) is the closing operation in morphological image processing, and P_{f_i} and P_{g_i} are the binarized saliency weight maps of the corresponding layers;
(4) extract the salient-region maps based on guided filtering;
for the binarized saliency weight maps obtained in step (3), use guided filtering to obtain the salient-region extraction results, expressed as:
Map_{f_i} = GF(P_{f_i}, f, r_i, μ_i)    (9)
Map_{g_i} = GF(P_{g_i}, g, r_i, μ_i)    (10)
where GF(·) denotes the guided-filter operation; P_{f_i} and P_{g_i} are the input images of the guided filtering; f and g are the guidance images; r_i and μ_i are the guided-filter window size and blur degree of the corresponding layer; and Map_{f_i} and Map_{g_i} are the salient-region extraction results of the corresponding visible and infrared layers;
(5) image fusion combined with salient-region extraction;
using the salient-region extraction results of step (4), carry out image fusion within each scale layer so that the fusion result preserves the salient-region detail information of the different images, specifically expressed as:
M_i = ([f_i × Map_{f_i} + g_i × (1 − Map_{f_i})] + [f_i × (1 − Map_{g_i}) + g_i × Map_{g_i}]) / 2,  (i = 1, 2, ..., N)    (11)
where M_i denotes the fusion result of each layer;
(6) weighted reconstruction of the fused image;
for the layer fusion results obtained in step (5), use weighted accumulation to reconstruct the final fusion result, expressed as:
M_fusion = Σ_{i=1}^{N} λ_i M_i    (12)
where λ_i is the reconstruction weight of each layer's fusion result and M_fusion is the final visible-infrared fusion result; setting reasonable weights yields an information-enhanced fused-image result.
CN201710638504.XA 2017-07-31 2017-07-31 Multi-scale image fusion method based on guided-filter salient-region extraction Pending CN107248150A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710638504.XA CN107248150A (en) 2017-07-31 2017-07-31 Multi-scale image fusion method based on guided-filter salient-region extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710638504.XA CN107248150A (en) 2017-07-31 2017-07-31 Multi-scale image fusion method based on guided-filter salient-region extraction

Publications (1)

Publication Number Publication Date
CN107248150A 2017-10-13

Family

ID=60013287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710638504.XA Pending CN107248150A (en) 2017-07-31 2017-07-31 Multi-scale image fusion method based on guided-filter salient-region extraction

Country Status (1)

Country Link
CN (1) CN107248150A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977950A (en) * 2017-12-06 2018-05-01 上海交通大学 Fast and effective video image fusion method based on multi-scale guided filtering
CN108364273A (en) * 2018-01-30 2018-08-03 中南大学 A kind of method of multi-focus image fusion under spatial domain
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method
CN109344699A (en) * 2018-08-22 2019-02-15 天津科技大学 Winter jujube disease recognition method based on depth of seam division convolutional neural networks
CN109360175A (en) * 2018-10-12 2019-02-19 云南大学 Infrared and visible light image fusion method
CN109754385A (en) * 2019-01-11 2019-05-14 中南大学 Rapid fusion method for unregistered multi-focus images
CN109816617A (en) * 2018-12-06 2019-05-28 重庆邮电大学 Multi-mode medical image fusion method based on guided filtering and graph theory saliency
CN110009551A (en) * 2019-04-09 2019-07-12 浙江大学 Real-time blood vessel enhancement method with CPU-GPU cooperative processing
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method
CN110211081A (en) * 2019-05-24 2019-09-06 南昌航空大学 A kind of multi-modality medical image fusion method based on image attributes and guiding filtering
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image fusion method and device, storage device
CN110349117A (en) * 2019-06-28 2019-10-18 重庆工商大学 A kind of infrared image and visible light image fusion method, device and storage medium
CN110930311A (en) * 2018-09-19 2020-03-27 杭州萤石软件有限公司 Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111681243A (en) * 2020-08-17 2020-09-18 广东利元亨智能装备股份有限公司 Welding image processing method and device and electronic equipment
CN112132753A (en) * 2020-11-06 2020-12-25 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112837253A (en) * 2021-02-05 2021-05-25 中国人民解放军火箭军工程大学 Night infrared medium-long wave image fusion method and system
WO2021120406A1 (en) * 2019-12-17 2021-06-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement
CN117745555A (en) * 2023-11-23 2024-03-22 广州市南沙区北科光子感知技术研究院 Fusion method of multi-scale infrared and visible light images based on double partial differential equation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521818A (en) * 2011-12-05 2012-06-27 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)
US20140064636A1 (en) * 2007-11-29 2014-03-06 Sri International Multi-scale adaptive fusion with contrast normalization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064636A1 (en) * 2007-11-29 2014-03-06 Sri International Multi-scale adaptive fusion with contrast normalization
CN102521818A (en) * 2011-12-05 2012-06-27 西北工业大学 Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU Lei et al., "Visible and infrared image fusion method based on multi-scale decomposition and salient region extraction," Laser & Optoelectronics Progress, online pre-publication, HTTP://KNS.CNKI.NET/KCMS/DETAIL/31.1690.TN.20170623.1053.014.HTML *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977950A (en) * 2017-12-06 2018-05-01 上海交通大学 Fast and effective video image fusion method based on multi-scale guided filtering
CN107977950B (en) * 2017-12-06 2021-06-04 上海交通大学 Rapid and effective video image fusion method based on multi-scale guided filtering
CN108364273A (en) * 2018-01-30 2018-08-03 中南大学 A kind of method of multi-focus image fusion under spatial domain
CN108364273B (en) * 2018-01-30 2022-02-25 中南大学 Method for multi-focus image fusion in spatial domain
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method
CN109344699A (en) * 2018-08-22 2019-02-15 天津科技大学 Winter jujube disease recognition method based on depth of seam division convolutional neural networks
CN110930311A (en) * 2018-09-19 2020-03-27 杭州萤石软件有限公司 Method and device for improving signal-to-noise ratio of infrared image and visible light image fusion
CN109360175A (en) * 2018-10-12 2019-02-19 云南大学 Infrared and visible light image fusion method
CN109816617A (en) * 2018-12-06 2019-05-28 重庆邮电大学 Multi-mode medical image fusion method based on guided filtering and graph theory saliency
CN109816617B (en) * 2018-12-06 2023-05-26 重庆邮电大学 Multi-mode medical image fusion method based on guided filtering and graph theory significance
CN109754385A (en) * 2019-01-11 2019-05-14 中南大学 Rapid fusion method for unregistered multi-focus images
CN110009551A (en) * 2019-04-09 2019-07-12 浙江大学 Real-time blood vessel enhancement method with CPU-GPU cooperative processing
CN110210541A (en) * 2019-05-23 2019-09-06 浙江大华技术股份有限公司 Image fusion method and device, storage device
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method
CN110211081B (en) * 2019-05-24 2023-05-16 南昌航空大学 Multimode medical image fusion method based on image attribute and guided filtering
CN110211081A (en) * 2019-05-24 2019-09-06 南昌航空大学 A kind of multi-modality medical image fusion method based on image attributes and guiding filtering
CN110349117B (en) * 2019-06-28 2023-02-28 重庆工商大学 Infrared image and visible light image fusion method and device and storage medium
CN110349117A (en) * 2019-06-28 2019-10-18 重庆工商大学 A kind of infrared image and visible light image fusion method, device and storage medium
WO2021120406A1 (en) * 2019-12-17 2021-06-24 大连理工大学 Infrared and visible light fusion method based on saliency map enhancement
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111223069B (en) * 2020-01-14 2023-06-02 天津工业大学 Image fusion method and system
CN111681243A (en) * 2020-08-17 2020-09-18 广东利元亨智能装备股份有限公司 Welding image processing method and device and electronic equipment
CN112132753A (en) * 2020-11-06 2020-12-25 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112132753B (en) * 2020-11-06 2022-04-05 湖南大学 Infrared image super-resolution method and system for multi-scale structure guide image
CN112837253A (en) * 2021-02-05 2021-05-25 中国人民解放军火箭军工程大学 Night infrared medium-long wave image fusion method and system
CN117745555A (en) * 2023-11-23 2024-03-22 广州市南沙区北科光子感知技术研究院 Fusion method of multi-scale infrared and visible light images based on double partial differential equation

Similar Documents

Publication Publication Date Title
CN107248150A (en) A kind of Multiscale image fusion methods extracted based on Steerable filter marking area
Chen et al. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition
CN104200452B (en) Method and device for fusing infrared and visible light images based on spectral wavelet transformation
CN104809734B (en) A method of the infrared image based on guiding filtering and visual image fusion
Yang et al. Visual attention guided image fusion with sparse representation
CN104616274B (en) A kind of multi-focus image fusing method based on salient region extraction
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
Guo et al. Three-dimensional wavelet texture feature extraction and classification for multi/hyperspectral imagery
CN109801250A (en) Infrared and visible light image fusion method based on ADC-SCM and low-rank matrix expression
CN105719263A (en) Visible light and infrared image fusion algorithm based on NSCT domain bottom layer visual features
CN108629757A (en) Image interfusion method based on complex shear wave conversion Yu depth convolutional neural networks
CN109447909A (en) The infrared and visible light image fusion method and system of view-based access control model conspicuousness
CN102800070B (en) Multi-modality image fusion method based on region and human eye contrast sensitivity characteristic
CN102005037A (en) Multimodality image fusion method combining multi-scale bilateral filtering and direction filtering
CN109308691A (en) Infrared and visible light image fusion method based on image enhancement and NSCT
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN109360175A (en) A kind of infrared image interfusion method with visible light
CN107169944A (en) A kind of infrared and visible light image fusion method based on multiscale contrast
CN110189284A (en) A kind of infrared and visible light image fusion method
CN103020933A (en) Multi-source image fusion method based on bionic visual mechanism
Huang et al. A multiscale urban complexity index based on 3D wavelet transform for spectral–spatial feature extraction and classification: an evaluation on the 8-channel WorldView-2 imagery
CN106897999A (en) Apple image fusion method based on Scale invariant features transform
Patel et al. A review on infrared and visible image fusion techniques
Lu et al. Infrared and visible image fusion based on tight frame learning via VGG19 network
Zhang et al. Image fusion using online convolutional sparse coding

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20171013