CN109509163A - FGF-based multi-focus image fusion method and system - Google Patents

FGF-based multi-focus image fusion method and system

Info

Publication number
CN109509163A
Authority
CN
China
Prior art keywords
source images
image
fgf
levels
detail
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811194833.0A
Other languages
Chinese (zh)
Other versions
CN109509163B (en)
Inventor
张永新
张传才
赵秀英
伍临莉
徐文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luoyang Normal University
Original Assignee
Luoyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Normal University filed Critical Luoyang Normal University
Priority to CN201811194833.0A priority Critical patent/CN109509163B/en
Publication of CN109509163A publication Critical patent/CN109509163A/en
Application granted granted Critical
Publication of CN109509163B publication Critical patent/CN109509163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/20 Image enhancement or restoration using local operators
    • G06T 5/70 Denoising; Smoothing
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20172 Image enhancement details
    • G06T 2207/20192 Edge enhancement; Edge preservation
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of optical image security and discloses a multi-focus image fusion method and system based on the fast guided filter (Fast Guided Filter, FGF). The source images are smoothed with a mean filter to remove their small-scale structures and are decomposed to obtain the base layer and the detail layer of each source image. A Laplacian filter and then a Gaussian low-pass filter are applied to each source image to obtain its saliency map. The weight map of each source image is obtained by comparing the pixel values of the saliency maps. Using each source image as the guidance image, the FGF decomposes and optimizes the corresponding weight map, yielding an optimized base-layer weight map and detail-layer weight map. According to the fusion rule, the corresponding pixels of the base layers and of the detail layers are fused, and the fused base layer and detail layer are then merged to obtain the fused image. The present invention not only effectively improves the accuracy of focused-region detection in the source images but also greatly improves the subjective and objective quality of the fused image.

Description

FGF-based multi-focus image fusion method and system
Technical field
The invention belongs to the technical field of optical image security, and in particular relates to an FGF-based multi-focus image fusion method and system.
Background art
Because an optical imaging system can form a sharp image on the image plane only of targets that lie within its focal range, targets outside the focal range are imaged with blur. The limited focusing range therefore easily prevents an optical imaging system from imaging all targets in a scene sharply at once. To understand an entire scene completely, a considerable number of similar images would have to be analyzed, which wastes time and effort and also wastes storage space. Obtaining, by image fusion, a single image of the scene in which all objects are sharp, so that the scene information is reflected more completely and faithfully, is of great significance for accurate image analysis and understanding, and multi-focus image fusion is one of the effective technical means of achieving this goal.
Multi-focus image fusion takes several registered focused images of the same scene acquired under identical imaging conditions, detects and extracts the sharp region of each focused image through activity measures, and then merges these regions with a suitable fusion algorithm according to a fusion rule into a single image in which all objects of the scene are sharp. Multi-focus image fusion can characterize the scene target information completely, lays a good foundation for feature extraction, target recognition and tracking, effectively improves the utilization of image information and the reliability of target detection and recognition, extends coverage in space and time, and reduces uncertainty.
The key to multi-focus image fusion is to characterize the focused-region properties accurately and to locate and extract precisely the regions or pixels that lie within the focal range; this is still one of the problems that multi-focus image fusion technology has not solved well. Image fusion has now been studied for more than thirty years. With the continuous development of computers and imaging technology, researchers at home and abroad have proposed hundreds of well-performing fusion algorithms for the problem of determining and extracting the focused regions in multi-focus image fusion. These algorithms fall broadly into two classes: spatial-domain algorithms and transform-domain algorithms. Spatial-domain algorithms select the focused pixels or regions according to the gray values of the source-image pixels with various focus evaluation measures and obtain the fused image according to a fusion rule. Their advantages are simple methods, easy implementation, low computational complexity, and a fused image that preserves the raw source information; their disadvantages are sensitivity to noise and a tendency to produce "blocking artifacts". Transform-domain algorithms transform the source images, process the transform coefficients according to a fusion rule, and obtain the fused image by inverse-transforming the processed coefficients. Their shortcomings are mainly a complex and time-consuming decomposition, large storage for the high-frequency coefficients, and information loss during fusion. Moreover, if a single transform coefficient of the fused image is changed, the spatial-domain gray values of the whole image change, so that enhancing the attributes of one image region introduces unwanted artificial interference. The more common pixel-level multi-focus image fusion algorithms include the following:
(1) The multi-focus image fusion method based on the Laplacian pyramid (Laplacian Pyramid, LAP). Its main process is to decompose the source images into Laplacian pyramids, fuse the high- and low-frequency coefficients with a suitable fusion rule, and inverse-transform the fused pyramid coefficients to obtain the fused image. The method has good time-frequency localization and achieves good results, but the data of the different decomposition levels are redundant and the correlation between levels cannot be determined; its ability to extract detail is weak, high-frequency information is lost heavily during decomposition, and the quality of the fused image is directly affected.
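The following is a minimal sketch of the LAP scheme described above, written in Python with NumPy/OpenCV. The pyramid depth and the max-absolute-value rule for the band-pass levels are common choices assumed here for illustration, not parameters taken from the patent.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Decompose an image into a Laplacian pyramid: levels-1 band-pass layers plus a low-pass top."""
    gp = [img.astype(np.float64)]
    for _ in range(levels - 1):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for i in range(levels - 1):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)      # band-pass detail at scale i
    lp.append(gp[-1])              # low-pass residual at the top
    return lp

def fuse_lap(img1, img2, levels=4):
    lp1, lp2 = laplacian_pyramid(img1, levels), laplacian_pyramid(img2, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lp1[:-1], lp2[:-1])]
    fused.append(0.5 * (lp1[-1] + lp2[-1]))      # average the low-pass tops
    out = fused[-1]
    for band in reversed(fused[:-1]):            # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=(band.shape[1], band.shape[0])) + band
    return np.clip(out, 0, 255).astype(np.uint8)
```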
(2) The multi-focus image fusion method based on the wavelet transform (Discrete Wavelet Transform, DWT). Its main process is to decompose the source images with the wavelet transform, fuse the high- and low-frequency coefficients with a suitable fusion rule, and inverse-transform the fused wavelet coefficients to obtain the fused image. The method has good time-frequency localization and achieves good results, but the two-dimensional wavelet basis is built from one-dimensional bases by tensor products: it is optimal for representing isolated singular points of an image but cannot sparsely represent its singular lines and surfaces. In addition, the DWT is a down-sampled transform and lacks shift invariance, which easily causes information loss during fusion and distorts the fused image.
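A minimal DWT fusion sketch in the same spirit, using PyWavelets; the wavelet ('db2'), the decomposition level, and the averaged-approximation / max-absolute-detail rule are illustrative assumptions.

```python
import numpy as np
import pywt

def fuse_dwt(img1, img2, wavelet="db2", level=3):
    c1 = pywt.wavedec2(img1.astype(np.float64), wavelet, level=level)
    c2 = pywt.wavedec2(img2.astype(np.float64), wavelet, level=level)
    fused = [0.5 * (c1[0] + c2[0])]                           # approximation: average
    for bands1, bands2 in zip(c1[1:], c2[1:]):                # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(bands1, bands2)))  # keep larger detail
    return pywt.waverec2(fused, wavelet)
```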
(3) The multi-focus image fusion method based on the non-subsampled contourlet transform (Non-subsampled Contourlet Transform, NSCT). Its main process is to decompose the source images with the NSCT, fuse the high- and low-frequency coefficients with a suitable fusion rule, and inverse-transform the fused coefficients to obtain the fused image. The method can achieve good fusion results, but it runs slowly and its decomposition coefficients occupy a large amount of storage.
(4) The multi-focus image fusion method based on principal component analysis (Principal Component Analysis, PCA). Its main process is to rearrange each source image into a column vector in row-major or column-major order, compute the covariance, obtain the eigenvectors of the covariance matrix, determine the eigenvector corresponding to the first principal component and from it the fusion weight of each source image, and fuse the images by weighted averaging. When the source images share strong common features the method gives good fusion results; when the features of the source images differ greatly it easily introduces false information into the fused image and distorts the fusion result. The method is simple and fast, but the gray value of a single pixel cannot express the focus property of the image region it belongs to, so the fused image suffers from soft edges and low contrast.
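A minimal sketch of the PCA weighting step described above; the variable names are illustrative, and the weights come from the first eigenvector of the 2x2 covariance of the flattened source images.

```python
import numpy as np

def fuse_pca(img1, img2):
    x = np.stack([img1.astype(np.float64).ravel(),
                  img2.astype(np.float64).ravel()])   # 2 x (M*N) data matrix
    vals, vecs = np.linalg.eigh(np.cov(x))            # eigen-decomposition of the 2x2 covariance
    pc = np.abs(vecs[:, np.argmax(vals)])             # first principal component
    w1, w2 = pc / pc.sum()                            # normalised fusion weights
    return w1 * img1.astype(np.float64) + w2 * img2.astype(np.float64)
```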
(5) The multi-focus image fusion method based on spatial frequency (Spatial Frequency, SF). Its main process is to divide the source images into blocks, compute the SF of each block, compare the SFs of corresponding blocks, and merge the blocks with the larger SF value into the fused image. The method is simple and easy to implement, but the block size is hard to determine adaptively. If the blocks are too large, out-of-focus pixels are included, the fusion quality and the contrast of the fused image drop, and blocking artifacts appear. If the blocks are too small, their ability to characterize regional sharpness is limited, blocks are easily chosen wrongly, consistency between neighboring sub-blocks becomes poor, and obvious detail differences appear at their boundaries, producing "blocking artifacts". In addition, the focus property of an image sub-block is difficult to describe precisely, and how to describe it accurately from the local features of the sub-block directly affects the accuracy of focused-block selection and the quality of the fused image.
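A minimal sketch of SF block fusion as described above; the 16x16 block size is an illustrative assumption, and the row/column frequencies follow the usual SF definition.

```python
import numpy as np

def spatial_frequency(block):
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))   # row (horizontal) frequency
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))   # column (vertical) frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_sf(img1, img2, block=16):
    out = img1.copy()
    h, w = img1.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            b1 = img1[i:i + block, j:j + block]
            b2 = img2[i:i + block, j:j + block]
            if spatial_frequency(b2) > spatial_frequency(b1):
                out[i:i + block, j:j + block] = b2   # keep the sharper block
    return out
```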
(6) The multi-focus image fusion method based on convolutional sparse representation (convolutional sparse representation, CSR). Its main process is to decompose the source images by CSR into base layers and detail layers, fuse the base layers and the detail layers, and finally merge the fused base layer and detail layer into the fused image. The method does not depend directly on the focus properties of the source images; it determines the focused regions from the salient features of the base and detail layers of the source images and is therefore robust to noise.
(7) The multi-focus image fusion method based on cartoon-texture decomposition (cartoon-texture decomposition, CTD). Its main process is to decompose each multi-focus source image into a cartoon component and a texture component, fuse the cartoon components and the texture components separately, and merge the fused cartoon and texture components into the fused image. Its fusion rule is designed from the focus properties of the cartoon and texture components of the images rather than directly from the focus properties of the source images, so it is robust to noise and to scratch damage.
(8) The multi-focus image fusion method based on guided filtering (Guided Filter Fusion, GFF). Its main process is to decompose each image with a guided image filter into a base layer containing large-scale intensity changes and a detail layer containing small-scale detail, to construct fusion weight maps from the saliency and spatial consistency of the base and detail layers, to fuse the base layers and detail layers of the source images on this basis, and finally to merge the fused base layer and detail layer into the final fused image. The method can achieve good fusion results but lacks robustness to noise.
The eight methods above are the more common multi-focus image fusion methods, but among them the wavelet transform (DWT) cannot fully exploit the geometric features of the image data itself and cannot represent an image optimally or most "sparsely", so the fused image is prone to offsets and information loss; the non-subsampled contourlet transform (NSCT) has a complex decomposition, runs slowly, and its decomposition coefficients occupy a large amount of storage; principal component analysis (PCA) easily reduces the contrast of the fused image and degrades the fusion quality. Convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF) are new methods proposed in recent years that all achieve good fusion results; guided filtering (GFF) in particular performs edge-preserving, shift-invariant processing based on a local linear model, is computationally efficient, and within an iterative framework can restore large-scale edges while removing small details near those edges. The first four common fusion methods all have their own drawbacks and find it difficult to reconcile speed with fusion quality, which limits their application and popularization. The eighth method is currently the fusion algorithm with the better performance, but the guided filter does not filter the source images directly, so part of the source-image information is easily lost, and the averaged-weight fusion also affects the fusion performance to some extent.
In conclusion problem of the existing technology is:
In the prior art, (1) traditional Space domain mainly uses region partitioning method to carry out, region division size It is excessive to will lead to exterior domain in focus and be located at the same area, cause fused image quality to decline;Region division is undersized, son Provincial characteristics cannot sufficiently reflect the provincial characteristics, be easy to cause the judgement inaccuracy of focal zone pixel and generate and falsely drop, make Consistency is poor between obtaining adjacent area, obvious detail differences occurs in intersection, generates " blocking artifact ".(2) traditional based on more rulers It spends in the multi-focus image fusion method decomposed, is always handled whole picture multi-focus source images as single entirety, detailed information It extracts imperfect, the detailed information such as source images Edge texture cannot be preferably indicated in blending image, affect blending image pair The integrality of source images potential information description, and then influence fused image quality.
Summary of the invention
In view of the problems in the prior art, the present invention provides an FGF-based multi-focus image fusion method and system that can effectively eliminate "blocking artifacts", extend the depth of field of the optical imaging system, and significantly improve the subjective and objective quality of the fused image. It overcomes the problems of inaccurate determination of focused regions in multi-focus image fusion, ineffective extraction of the edge and texture information of the source images, incomplete characterization of fine detail in the fused image, partial loss of detail, "blocking artifacts", and decline in contrast.
(1) The source images are smoothed with a mean filter and decomposed to obtain the base layer and detail layer of each source image. (2) A Laplacian filter and then a Gaussian low-pass filter are applied to each source image to obtain its saliency map. (3) The weight map of each source image is obtained by comparing the pixel values of the corresponding saliency maps. (4) Using each source image as the guidance image, the FGF decomposes and optimizes the corresponding weight map, yielding an optimized base-layer weight map and detail-layer weight map. (5) According to the fusion rule, the corresponding pixels of the base layers and detail layers are fused using the optimized base-layer and detail-layer weight maps. (6) The fused base layer and detail layer are merged to obtain the fused image.
The invention is realized as follows: the source images are first decomposed into base layers and detail layers with a mean filter; saliency detection is then performed on the source images with Laplacian filtering and Gaussian low-pass filtering to obtain their saliency maps; the weight maps of the source images are then obtained by comparing the pixel values of the saliency maps; using the source images as guidance images, the FGF decomposes and optimizes the weight maps to obtain optimized base-layer and detail-layer weight maps; based on the resulting decision matrices, the corresponding pixels of the base layers and detail layers are fused according to the fusion rule; finally, the fused base layer and detail layer are merged to obtain the fused image.
Further, the FGF-based multi-focus image fusion method fuses a pair of registered multi-focus images I1 and I2, where I1 and I2 are grayscale images and I1, I2 ∈ ℝ^(M×N), with ℝ^(M×N) the space of M×N matrices and M and N positive integers. It specifically includes:
(1) Smooth the multi-focus images I1 and I2 with the mean filter AF to remove the small-scale structures in the source images I1 and I2, obtaining the source base layers (B1, B2) and the source detail layers (D1, D2), where (B1, B2) = AF(I1, I2) and (D1, D2) = (I1, I2) - (B1, B2).
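A minimal sketch of step (1), assuming the grayscale inputs I1 and I2 are already loaded as NumPy arrays; the 31x31 averaging window is an illustrative assumption, since the text does not fix the filter size.

```python
import cv2
import numpy as np

def decompose(src, ksize=31):
    src = src.astype(np.float64)
    base = cv2.blur(src, (ksize, ksize))   # B = AF(I): average-filtered base layer
    return base, src - base                # D = I - B: small-scale detail layer

# I1, I2: registered grayscale source images of size M x N (assumed loaded elsewhere)
B1, D1 = decompose(I1)
B2, D2 = decompose(I2)
```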
(2) Filter the source images with LF to obtain the high-pass filtered images H1 and H2, and apply the low-pass filter GLF to H1 and H2 to obtain the source saliency maps S1 and S2, where (H1, H2) = LF(I1, I2) and (S1, S2) = GLF(H1, H2).
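A minimal sketch of step (2); taking the absolute value of the Laplacian response before the Gaussian low-pass, and the kernel sizes, are illustrative assumptions rather than parameters specified in the text.

```python
import cv2
import numpy as np

def saliency(src, g_ksize=11, sigma=5):
    high = cv2.Laplacian(src.astype(np.float64), cv2.CV_64F, ksize=3)   # H = LF(I)
    return cv2.GaussianBlur(np.abs(high), (g_ksize, g_ksize), sigma)    # S = GLF(H)

S1, S2 = saliency(I1), saliency(I2)
```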
(3) Construct the weight matrices P1 and P2 corresponding to the source images I1 and I2 by comparing the pixels S1(i, j) and S2(i, j) of the corresponding saliency maps (a sketch of this comparison follows the definitions below), where:
S1(i, j) is pixel (i, j) of the saliency map of source image I1;
S2(i, j) is pixel (i, j) of the saliency map of source image I2;
P1(i, j) is element (i, j) of the weight matrix of source image I1;
P2(i, j) is element (i, j) of the weight matrix of source image I2;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i, column j of a saliency map S.
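A minimal sketch of the comparison in step (3), using the saliency maps S1 and S2 from the step (2) sketch; the binary rule and the >= tie-break are assumptions, since the comparison formula itself is not reproduced in this text.

```python
import numpy as np

P1 = (S1 >= S2).astype(np.float64)   # P1(i, j) = 1 where source image I1 is the more salient
P2 = 1.0 - P1                        # P2(i, j) = 1 where source image I2 is the more salient
```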
(4) Using the source images I1 and I2 as guidance images, apply the FGF to the weight matrices P1 and P2 for decomposition and optimization, obtaining the weight matrices W1^B, W2^B, W1^D and W2^D, where (W1^B, W1^D) = FGF(P1, I1) and (W2^B, W2^D) = FGF(P2, I2).
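A minimal sketch of step (4). The fast guided filter is implemented here as the standard guided filter computed on a subsampled guidance/input pair with the linear coefficients upsampled back to full resolution; the radii, regularization values, subsampling factor, and the use of one parameter pair for the base-layer weights and another for the detail-layer weights are illustrative assumptions in the spirit of guided-filter fusion, not values given in the patent.

```python
import cv2
import numpy as np

def _box(img, r):
    return cv2.boxFilter(img, -1, (2 * r + 1, 2 * r + 1))

def fast_guided_filter(p, guide, r=8, eps=0.1, s=4):
    """Filter input map p with guidance image `guide` (both float64, guide scaled to [0, 1])."""
    h, w = guide.shape
    small = lambda x: cv2.resize(x, (w // s, h // s), interpolation=cv2.INTER_NEAREST)
    I, P, rs = small(guide), small(p), max(r // s, 1)
    mean_I, mean_p = _box(I, rs), _box(P, rs)
    cov_Ip = _box(I * P, rs) - mean_I * mean_p
    var_I = _box(I * I, rs) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                    # local linear coefficients
    b = mean_p - a * mean_I
    mean_a, mean_b = _box(a, rs), _box(b, rs)
    full = lambda x: cv2.resize(x, (w, h), interpolation=cv2.INTER_LINEAR)
    return full(mean_a) * guide + full(mean_b)    # q = a*I + b at full resolution

def optimise_weights(P, I, r_b=45, eps_b=0.3, r_d=7, eps_d=1e-6):
    g = I.astype(np.float64) / 255.0
    W_B = fast_guided_filter(P, g, r_b, eps_b)    # smoother map for the base layer
    W_D = fast_guided_filter(P, g, r_d, eps_d)    # edge-aware map for the detail layer
    return W_B, W_D

W1_B, W1_D = optimise_weights(P1, I1)
W2_B, W2_D = optimise_weights(P2, I2)
```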
(5) Based on the source base layers (B1, B2) and detail layers (D1, D2), construct the fused base layer FB and the fused detail layer FD according to the optimized weight matrices W1^B, W2^B, W1^D and W2^D, where FB = W1^B·B1 + W2^B·B2 and FD = W1^D·D1 + W2^D·D2.
(6) Construct the fused image F, obtaining the fused grayscale image, where F = FB + FD.
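A minimal sketch of steps (5) and (6), continuing from the previous sketches; the per-pixel normalization that makes the two weights sum to one is an added assumption so that the weighted combination stays within the gray-value range.

```python
import numpy as np

eps = 1e-12
W1_Bn = W1_B / (W1_B + W2_B + eps);  W2_Bn = 1.0 - W1_Bn
W1_Dn = W1_D / (W1_D + W2_D + eps);  W2_Dn = 1.0 - W1_Dn

FB = W1_Bn * B1 + W2_Bn * B2                       # fused base layer
FD = W1_Dn * D1 + W2_Dn * D2                       # fused detail layer
F = np.clip(FB + FD, 0, 255).astype(np.uint8)      # final fused grayscale image
```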
Further, erosion and dilation operations are applied to the weight matrices constructed in step (4), and the processed matrices are used to construct the fused image.
Another object of the present invention is to provide an FGF-based multi-focus image fusion system.
Another object of the present invention is to provide a smart-city multi-focus image fusion system using the above FGF-based multi-focus image fusion method.
Another object of the present invention is to provide a medical-imaging multi-focus image fusion system using the above FGF-based multi-focus image fusion method.
Another object of the present invention is to provide a security-monitoring multi-focus image fusion system using the above FGF-based multi-focus image fusion method.
The advantages and positive effects of the present invention are as follows:
(1) The invention first decomposes the source images into base layers and detail layers with a mean filter, then performs saliency detection on the source images with Laplacian high-pass filtering and Gaussian low-pass filtering to obtain their saliency maps; the weight maps of the source images are then obtained by comparing the pixel values of the saliency maps; using the source images as guidance images, the FGF decomposes and optimizes the weight maps to obtain base-layer and detail-layer weight maps, with which the source base layers and detail layers are fused; the fused base layer and detail layer are then merged to obtain the fused image of the source images. This secondary fusion of the source images improves the accuracy of the decision on the focused-region properties of the source images, favors the extraction of targets from the sharp regions, transfers detail such as edge texture from the source images better, and effectively improves the subjective and objective quality of the fused image.
(2) The image fusion framework of the invention is flexible and easy to implement and can be used for other types of image fusion tasks.
(3) Because the fusion algorithm smooths the source images with a mean filter, the influence of source-image noise on the quality of the fused image is effectively suppressed.
The image fusion framework of the invention is flexible, decides the focused-region properties of the source images with high accuracy, accurately extracts the target detail of the focused regions, represents image detail features clearly, effectively eliminates "blocking artifacts", and effectively improves the subjective and objective quality of the fused image.
Brief description of the drawings
Fig. 1 is a flow chart of the FGF-based multi-focus image fusion method provided by an implementation case of the present invention.
Fig. 2 shows the source images 'Disk' to be fused, provided by implementation case 1 of the present invention.
Fig. 3 shows the fusion results of nine image fusion methods, namely Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), guided filtering (GFF) and the method of the invention (Proposed), applied to the multi-focus images 'Disk' of Fig. 2(a) and (b).
Fig. 4 shows the source images 'Book' to be fused, provided by implementation case 2 of the present invention.
Fig. 5 shows the fusion results of the nine fusion methods, Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), guided filtering (GFF) and the method of the invention (Proposed), applied to the multi-focus images 'Book' of Fig. 4(a) and (b).
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the invention is further elaborated below in combination with implementation cases. It should be understood that the specific implementation cases described here are only used to explain the present invention and are not intended to limit it.
In the prior art, fusion algorithms in the field of multi-focus image fusion determine the focused regions of the source images inaccurately and extract detail information incompletely; detail such as the edge texture of the source images cannot be well represented in the fused image, and the fusion results are poor.
The application principle of the invention is described in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the FGF-based multi-focus image fusion method provided by an implementation case of the present invention comprises:
S101: the source images are first decomposed into base layers and detail layers with a mean filter.
S102: saliency detection is then performed on the source images with Laplacian high-pass filtering and Gaussian low-pass filtering to obtain the saliency maps of the source images, and the weight maps of the source images are obtained by comparing the pixel values of the corresponding saliency maps.
S103: using the source images as guidance images, the FGF decomposes and optimizes the weight maps, yielding optimized base-layer and detail-layer weight maps, with which the source base layers and detail layers are fused.
S104: finally, the fused base layer and detail layer are merged to obtain the fused image.
The invention is further described below with reference to the detailed procedure.
The detailed procedure of the FGF-based multi-focus image fusion method provided by an implementation case of the present invention includes:
Smooth the multi-focus images I1 and I2 with the mean filter AF to remove the small-scale structures in the source images I1 and I2, obtaining the source base layers (B1, B2) and detail layers (D1, D2), where (B1, B2) = AF(I1, I2) and (D1, D2) = (I1, I2) - (B1, B2).
Filter the source images with LF to obtain the high-pass filtered images H1 and H2, and apply the low-pass filter GLF to H1 and H2 to obtain the source saliency maps S1 and S2, where (H1, H2) = LF(I1, I2) and (S1, S2) = GLF(H1, H2). Then construct the weight matrices P1 and P2 corresponding to the source images by comparing the pixels S1(i, j) and S2(i, j) of the corresponding saliency maps, where:
S1(i, j) is pixel (i, j) of the saliency map of source image I1;
S2(i, j) is pixel (i, j) of the saliency map of source image I2;
P1(i, j) is element (i, j) of the weight matrix of source image I1;
P2(i, j) is element (i, j) of the weight matrix of source image I2;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i, column j of a saliency map S.
Using the source images I1 and I2 as guidance images, apply the FGF to the weight matrices P1 and P2 for decomposition and optimization, obtaining the weight matrices W1^B, W2^B, W1^D and W2^D, where (W1^B, W1^D) = FGF(P1, I1) and (W2^B, W2^D) = FGF(P2, I2).
Based on the source base layers (B1, B2) and detail layers (D1, D2), construct the fused base layer FB and the fused detail layer FD according to the optimized weight matrices W1^B, W2^B, W1^D and W2^D, where FB = W1^B·B1 + W2^B·B2 and FD = W1^D·D1 + W2^D·D2.
Construct the fused image F, obtaining the fused grayscale image, where F = FB + FD.
The invention is further described below with reference to specific implementation cases.
Fig. 2 shows the source images 'Disk' to be fused, provided by implementation case 1 of the present invention.
Implementation case 1
Following the scheme of the present invention, this implementation case fuses the two source images shown in Fig. 2(a) and (b); the result is shown as 'Proposed' in Fig. 3. At the same time, eight image fusion methods, Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF), are applied to the two source images of Fig. 2(a) and (b), the fused images of the different fusion methods are evaluated, and the results listed in Table 1 are obtained.
Table 1. Quality evaluation of the fused images of the multi-focus image pair 'Disk'
Implementation case 2:
Following the scheme of the present invention, this implementation case fuses the two source images shown in Fig. 4(a) and (b); the result is shown as 'Proposed' in Fig. 5.
At the same time, the eight image fusion methods Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF) are applied to the two source images of Fig. 4(a) and (b), the fused images of the different methods in Fig. 5 are evaluated, and the results listed in Table 2 are obtained.
Table 2. Quality evaluation of the fused images of the multi-focus image pair 'Book'
In Tables 1 and 2, 'Method' denotes the fusion method; the eight comparison methods are Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF). 'Running Time' denotes the running time in seconds. MI denotes mutual information, an objective fusion-quality index based on mutual information. Q^(AB/F) denotes the total amount of edge information transferred from the source images.
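A minimal sketch of the MI index referenced in the tables, computed as the sum of the mutual information between the fused image and each source; the 256-bin joint histogram is an illustrative choice, and Q^(AB/F) is not sketched here because its gradient-based definition needs more machinery.

```python
import numpy as np

def mutual_information(a, b, bins=256):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint distribution of gray levels
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, src1, src2):
    return mutual_information(fused, src1) + mutual_information(fused, src2)
```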
As can be seen from Fig. 3 and Fig. 5, the fused images of the frequency-domain methods, including Laplacian pyramid (LAP), wavelet transform (DWT) and the non-subsampled contourlet transform (NSCT), all suffer from artifacts, blur and poor contrast. Among the spatial-domain methods, the fused image of principal component analysis (PCA) has the worst contrast, the fused image of the spatial-frequency (SF) method shows the "blocking artifact" phenomenon, and convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF) give relatively good fusion quality but still contain a small number of blurred parts. The subjective visual quality of the fused images produced by the method of the present invention for the multi-focus images 'Disk' (Fig. 3) and 'Book' (Fig. 5) is clearly better than that of the other fusion methods.
It can be seen from the fused images that the method of the present invention extracts the edges and textures of the objects in the focused regions of the source images much better than the other methods, transfers the target information of the focused regions of the source images into the fused image well, and preserves detail such as the edges and textures of the source images. It effectively captures the target detail of the focused regions and improves the image fusion quality. The method of the present invention has good subjective quality.
As can be seen from Tables 1 and 2, the objective image-quality index MI of the fused images of the method of the present invention is on average 1.5 higher than the corresponding index of the fused images of the other methods, and the objective image-quality index Q^(AB/F) is 0.04 higher than that of the fused images of the other methods. This shows that the fused images obtained by the method have good objective quality.
The above is only a preferred implementation case of the present invention and is not intended to limit the invention. Any modification, equivalent replacement and improvement made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. An FGF-based multi-focus image fusion method, characterized in that the FGF-based multi-focus image fusion method comprises the following steps:
(1) decomposing the source images with a mean filter (Average Filtering, AF) to obtain the base layer and the detail layer of each source image;
(2) filtering the source images successively with a Laplacian filter (Laplacian Filter, LF) and a Gaussian low-pass filter (Gaussian Low-pass Filter, GLF) to obtain the saliency map of each source image;
(3) constructing the weight map of each source image by comparing the pixel values of the corresponding saliency maps, using each source image as a guidance image, and decomposing and optimizing the weight maps with the FGF to obtain optimized base-layer and detail-layer weight maps;
(4) fusing the corresponding pixels of the source base layers and detail layers, according to the fusion rule, based on the base-layer and detail-layer weight maps;
(5) merging the fused base layer and detail layer to obtain the fused image.
2. The FGF-based multi-focus image fusion method according to claim 1, characterized in that the FGF-based multi-focus image fusion method fuses a pair of registered multi-focus images I1 and I2, where I1 and I2 are grayscale images and I1, I2 ∈ ℝ^(M×N), with ℝ^(M×N) the space of M×N matrices and M and N positive integers; the method specifically includes:
(1) smoothing the multi-focus images I1 and I2 with the mean filter AF to remove the small-scale structures in the source images I1 and I2, obtaining the source base layers (B1, B2) and detail layers (D1, D2), where (B1, B2) = AF(I1, I2) and (D1, D2) = (I1, I2) - (B1, B2);
(2) filtering the source images with LF to obtain the high-pass filtered images H1 and H2, and applying the low-pass filter GLF to H1 and H2 to obtain the source saliency maps S1 and S2, where (H1, H2) = LF(I1, I2) and (S1, S2) = GLF(H1, H2);
(3) constructing the weight matrices P1 and P2 corresponding to the source images by comparing the pixels S1(i, j) and S2(i, j) of the corresponding saliency maps, where:
S1(i, j) is pixel (i, j) of the saliency map of source image I1;
S2(i, j) is pixel (i, j) of the saliency map of source image I2;
P1(i, j) is element (i, j) of the weight matrix of source image I1;
P2(i, j) is element (i, j) of the weight matrix of source image I2;
i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N;
S(i, j) is the element in row i, column j of a saliency map S;
(4) using the source images I1 and I2 as guidance images, applying the FGF to the weight matrices P1 and P2 for decomposition and optimization to obtain the weight matrices W1^B, W2^B, W1^D and W2^D, where (W1^B, W1^D) = FGF(P1, I1) and (W2^B, W2^D) = FGF(P2, I2);
(5) based on the source base layers (B1, B2) and detail layers (D1, D2), constructing the fused base layer FB and the fused detail layer FD according to the optimized weight matrices W1^B, W2^B, W1^D and W2^D, where FB = W1^B·B1 + W2^B·B2 and FD = W1^D·D1 + W2^D·D2;
(6) constructing the fused image F to obtain the fused grayscale image, where F = FB + FD.
3. The FGF-based multi-focus image fusion method according to claim 2, characterized in that the weight matrices constructed in step (4) are subjected to optimization and decomposition processing, the base layers and detail layers of the source images are fused using the processed weight matrices, and the fused image is then constructed.
4. An FGF-based multi-focus image fusion system implementing the FGF-based multi-focus image fusion method according to claim 1.
5. A smart-city multi-focus image fusion system using the FGF-based multi-focus image fusion method according to claim 1.
6. A medical-imaging multi-focus image fusion system using the FGF-based multi-focus image fusion method according to claim 1.
7. A security-monitoring multi-focus image fusion system using the FGF-based multi-focus image fusion method according to claim 1.
CN201811194833.0A 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system Active CN109509163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811194833.0A CN109509163B (en) 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811194833.0A CN109509163B (en) 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system

Publications (2)

Publication Number Publication Date
CN109509163A true CN109509163A (en) 2019-03-22
CN109509163B CN109509163B (en) 2022-11-11

Family

ID=65746461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811194833.0A Active CN109509163B (en) 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system

Country Status (1)

Country Link
CN (1) CN109509163B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211081A (en) * 2019-05-24 2019-09-06 南昌航空大学 A kind of multi-modality medical image fusion method based on image attributes and guiding filtering
CN110648302A (en) * 2019-10-08 2020-01-03 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111815549A (en) * 2020-07-09 2020-10-23 湖南大学 Night vision image colorization method based on guided filtering image fusion
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN117391985A (en) * 2023-12-11 2024-01-12 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011051134A1 (en) * 2009-10-30 2011-05-05 Siemens Aktiengesellschaft A body fluid analyzing system and an imaging processing device and method for analyzing body fluids
CN103455991A (en) * 2013-08-22 2013-12-18 西北大学 Multi-focus image fusion method
CN104036479A (en) * 2013-11-11 2014-09-10 西北大学 Multi-focus image fusion method based on non-negative matrix factorization
CN107909560A (en) * 2017-09-22 2018-04-13 洛阳师范学院 A kind of multi-focus image fusing method and system based on SiR
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011051134A1 (en) * 2009-10-30 2011-05-05 Siemens Aktiengesellschaft A body fluid analyzing system and an imaging processing device and method for analyzing body fluids
CN103455991A (en) * 2013-08-22 2013-12-18 西北大学 Multi-focus image fusion method
CN104036479A (en) * 2013-11-11 2014-09-10 西北大学 Multi-focus image fusion method based on non-negative matrix factorization
CN107909560A (en) * 2017-09-22 2018-04-13 洛阳师范学院 A kind of multi-focus image fusing method and system based on SiR
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Xianhong et al., "Infrared and visible image fusion combining guided filtering and convolutional sparse representation", Optics and Precision Engineering *
LIU Shuaiqi et al., "Multi-focus image fusion algorithm combining guided filtering and the complex contourlet transform", Journal of Signal Processing *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211081A (en) * 2019-05-24 2019-09-06 南昌航空大学 A kind of multi-modality medical image fusion method based on image attributes and guiding filtering
CN110648302A (en) * 2019-10-08 2020-01-03 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN110648302B (en) * 2019-10-08 2022-04-12 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN111223069A (en) * 2020-01-14 2020-06-02 天津工业大学 Image fusion method and system
CN111223069B (en) * 2020-01-14 2023-06-02 天津工业大学 Image fusion method and system
CN111815549A (en) * 2020-07-09 2020-10-23 湖南大学 Night vision image colorization method based on guided filtering image fusion
CN112884690A (en) * 2021-02-26 2021-06-01 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN112884690B (en) * 2021-02-26 2023-01-06 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN117391985A (en) * 2023-12-11 2024-01-12 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system
CN117391985B (en) * 2023-12-11 2024-02-20 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system

Also Published As

Publication number Publication date
CN109509163B (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN109509163A (en) A kind of multi-focus image fusing method and system based on FGF
CN107909560A (en) A kind of multi-focus image fusing method and system based on SiR
CN106339998B (en) Multi-focus image fusing method based on contrast pyramid transformation
CN102360421B (en) Face identification method and system based on video streaming
CN109509164B (en) Multi-sensor image fusion method and system based on GDGF
CN108830818B (en) Rapid multi-focus image fusion method
CN105894483B (en) A kind of multi-focus image fusing method based on multi-scale image analysis and block consistency checking
CN108399611B (en) Multi-focus image fusion method based on gradient regularization
CN103455991B (en) A kind of multi-focus image fusing method
Canny A Variational Approach to Edge Detection.
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
He et al. Infrared and visible image fusion based on target extraction in the nonsubsampled contourlet transform domain
CN108230282A (en) A kind of multi-focus image fusing method and system based on AGF
CN107169479A (en) Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication
CN106447640B (en) Multi-focus image fusing method and device based on dictionary learning, rotation guiding filtering
Xiao et al. Image Fusion
CN109492647A (en) A kind of power grid robot barrier object recognition methods
Chen et al. Structural characterization and measurement of nonwoven fabrics based on multi-focus image fusion
CN103065291A (en) Image fusion method based on promoting wavelet transform and correlation of pixel regions
Li et al. Automatic gauge detection via geometric fitting for safety inspection
CN113763300A (en) Multi-focus image fusion method combining depth context and convolution condition random field
Luo et al. Infrared and visible image fusion based on VPDE model and VGG network
Zhang et al. Medical Image Fusion Based on Low‐Level Features
Li et al. An improved image registration and fusion algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant