CN103247042B - An image fusion method based on similar blocks - Google Patents


Info

Publication number
CN103247042B
CN103247042B (application CN201310198572.0A)
Authority
CN
China
Prior art keywords
source images
image
represent
similar
shared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310198572.0A
Other languages
Chinese (zh)
Other versions
CN103247042A (en)
Inventor
屈小波 (Qu Xiaobo)
李磊 (Li Lei)
赖宗英 (Lai Zongying)
陈忠 (Chen Zhong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen University
Original Assignee
Xiamen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen University filed Critical Xiamen University
Priority to CN201310198572.0A priority Critical patent/CN103247042B/en
Publication of CN103247042A publication Critical patent/CN103247042A/en
Application granted granted Critical
Publication of CN103247042B publication Critical patent/CN103247042B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image fusion method based on similar blocks, relating to digital image processing. Provided is an image fusion method based on similar blocks that gives excellent results and is easy to operate. 1) Build the self-similar structure shared by multiple source images: perform similar-block matching across several source images to obtain a shared similar-block structure; these shared similar blocks contain the self-similarity of the images. 2) Weighting: after image features are extracted within the shared similar blocks, determine the finally chosen pixel value by a weighting method. Because the self-similarity of the source images themselves is exploited, the method is robust to the characteristics of the source images, and in practice the parameters can be chosen from a wider range. The weighting makes the fusion result smoother.

Description

An image fusion method based on similar blocks
Technical field
The present invention relates to digital image processing, and in particular to a multi-focus image fusion method based on similar blocks.
Background technology
Multi-focus image fusion is widely used in fields such as satellite remote sensing, military applications, and medicine. Common multi-focus image fusion methods fall roughly into two classes: spatial-domain methods and transform-domain methods. The advantage of spatial-domain methods is that every pixel of the fused image comes from the source images, so the source image information is preserved intact; the drawback is that block artifacts often appear. Transform-domain methods can effectively suppress block artifacts, but because transform-domain operations are nonlinear, the fusion result is distorted relative to the source images.
Among spatial-domain methods, W. Huang and Z. Jing (Evaluation of focus measures in multi-focus image fusion, Pattern Recognition Letters, vol. 28, pp. 493-500, 2007) compared common methods and evaluation indices and drew conclusions. To combat the block artifacts that appear in fusion results, these methods often take corrective measures, such as applying consistency filtering to the image, region growing, and so on.
Among transform-domain methods, Zhang Zhong et al. (A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proceedings of the IEEE, vol. 87, pp. 1315-1326, 1999) surveyed common transform-domain methods; however, these methods are often highly sensitive, and their parameters are hard to choose well.
In addition, Qu Xiaobo et al. (Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain, Acta Automatica Sinica, vol. 34, pp. 1508-1514, 2008) successfully combined transform-domain image fusion with neural networks and achieved good results.
In 2006, K. Dabov et al. (Image denoising with block-matching and 3D filtering, Proceedings of SPIE, vol. 6064, pp. 354-365, 2006) proposed a similar-block matching method to solve the image denoising problem. That method exploits the self-similarity of images and succeeded in denoising.
Summary of the invention
The object of the present invention is to provide an image fusion method based on similar blocks that gives excellent results and is easy to operate.
The present invention comprises the following steps:
1) Build the self-similar structure shared by multiple source images: perform similar-block matching among several source images to obtain a shared similar-block structure; these shared similar blocks contain the self-similarity of the images.
2) Weighting: after image features are extracted within the shared similar blocks, determine the finally chosen pixel value by a weighting method.
In step 2), the weighting method may be as follows: let the extracted image sharpness index be C, with C_1 and C_2 denoting the sharpness of source image 1 and source image 2; let M denote a counter of the same size as the source image, and for the j-th pixel let M_1^j and M_2^j denote the counters of source image 1 and source image 2. Given a shared similar block D, when C_1 ≥ C_2:

M_1^j = M_1^j + 1, j ∈ D;

when C_1 < C_2:

M_2^j = M_2^j + 1, j ∈ D;

After the features of the whole image have been extracted, let S_1^j and S_2^j denote the values of source image 1 and source image 2 at the j-th pixel respectively; the j-th pixel F^j of the fused image F is then obtained as:

F^j = (M_1^j · S_1^j + M_2^j · S_2^j) / (M_1^j + M_2^j).
The specific method is as follows:
1) Build the self-similar structure shared by multiple source images: the whole image is regarded as a set of regions, which may overlap; each region is regarded as a set of image blocks, which may also overlap one another. Within each region, a central block is first chosen, and similarity matching is performed between every other image block and the central block:

η = ‖P_q − P_r‖

where η denotes the degree of similarity, P_q and P_r denote the matrices formed by the pixels of the image blocks centered at pixel q and pixel r respectively, and ‖A‖ is the matrix norm, defined as:

‖A‖ = ( Σ_{i=1}^{I} Σ_{j=1}^{J} |a_ij|² )^{1/2}

where a_ij denotes the value of each pixel in image block A.

Using the similarity values obtained, the image blocks that differ least from one another are found to form a similar structure within one source image. Let η_{q_i} denote the similarity between the image block centered at pixel i and the central block, and suppose the similarity values found satisfy:

η_{q_1} ≤ η_{q_2} ≤ … ≤ η_{q_k} ≤ η_{q_{k+1}} ≤ … ≤ η_{q_{(n−m+1)²}}

where (n−m+1)² means that a region of size n contains (n−m+1)² image blocks of size m. The shared similar structure is then obtained across the different source images. Let L_i denote the similar structure in the i-th source image and q_j^i the j-th image block similar to the central block in the i-th image, and suppose

L_1 = {q_1^1, q_2^1, …, q_k^1}, L_2 = {q_1^2, q_2^2, …, q_k^2}

Then

L_g^{12} = L_1 ∩ L_2

where L_1 denotes the similar structure in source image 1, L_2 denotes the similar structure in source image 2, and L_g^{12} denotes the shared similar structure, which embodies the self-similarity of the source images.
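The matching and intersection just described can be sketched in code. The following is a minimal illustration under stated assumptions (square regions, blocks identified by their top-left corners, NumPy arrays, and my own function names), not the patented implementation:

```python
import numpy as np

def block_distance(p_q, p_r):
    # eta = ||P_q - P_r||: Frobenius norm of the pixel-wise difference
    return np.sqrt(np.sum((p_q - p_r) ** 2))

def top_k_similar(region, m, k, center):
    """Top-left corners of the k blocks of size m x m in `region`
    most similar to the block whose top-left corner is `center`."""
    n = region.shape[0]
    cy, cx = center
    central = region[cy:cy + m, cx:cx + m]
    scored = []
    for y in range(n - m + 1):            # (n - m + 1)^2 candidate blocks
        for x in range(n - m + 1):
            eta = block_distance(region[y:y + m, x:x + m], central)
            scored.append((eta, (y, x)))
    scored.sort(key=lambda t: t[0])       # eta_{q_1} <= eta_{q_2} <= ...
    return {pos for _, pos in scored[:k]} # the similar structure L_i

def shared_structure(region1, region2, m, k, center):
    # L_g^{12} = L_1 ∩ L_2: blocks similar to the central block in BOTH images
    return top_k_similar(region1, m, k, center) & top_k_similar(region2, m, k, center)
```

Because the central block has distance 0 to itself in each image, it always belongs to both L_1 and L_2, so the shared structure is never empty.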
2) Weighting: in the spatial domain, fusion results often suffer from block artifacts, and a similar problem arises within a single scale of the transform domain. In general, weighting makes the fusion result smoother. The weighting rule adopted by the present invention is as follows: after image features are extracted from the shared similar structure, let the extracted image feature (generally a sharpness index here) be C, with C_1 and C_2 denoting the sharpness of source image 1 and source image 2; for the j-th pixel, let M_1^j and M_2^j denote the counters of source image 1 and source image 2. Given a shared similar block D, when C_1 ≥ C_2:

M_1^j = M_1^j + 1, j ∈ D;

when C_1 < C_2:

M_2^j = M_2^j + 1, j ∈ D;

After the features of the whole image have been extracted, let S_1^j and S_2^j denote the values of source image 1 and source image 2 at the j-th pixel respectively; the j-th pixel F^j of the fused image F is then obtained as:

F^j = M_1^j / (M_1^j + M_2^j) · S_1^j + M_2^j / (M_1^j + M_2^j) · S_2^j

where F denotes the fused image and S denotes a source image.

It follows from the formula above that each pixel value of the fused image lies within the closed interval whose endpoints are the two source-image pixel values.
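The closed-interval property can be checked numerically. A minimal sketch (the function name fuse_pixel is mine, not the patent's): since the counters are non-negative and the two weights sum to 1, F^j is a convex combination of S_1^j and S_2^j.

```python
import numpy as np

def fuse_pixel(m1, m2, s1, s2):
    # F^j = M_1^j/(M_1^j + M_2^j) * S_1^j + M_2^j/(M_1^j + M_2^j) * S_2^j
    total = m1 + m2
    return (m1 / total) * s1 + (m2 / total) * s2

# For any non-negative counters (not both zero), the fused value stays
# inside the closed interval bounded by the two source pixel values.
for m1, m2 in [(1, 0), (0, 1), (3, 5), (7, 7)]:
    f = fuse_pixel(m1, m2, s1=40.0, s2=200.0)
    assert min(40.0, 200.0) <= f <= max(40.0, 200.0)
```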
The application in the transform domain is as follows:
After a given transform is applied to several source images, the coefficients of the source images at different scales under that transform are obtained; performing the operations of steps 1) and 2) above on the coefficients within each scale constitutes the application of the present invention in the transform domain.
The beneficial effects of the invention are: because the self-similarity of the source images themselves is exploited, the invention is robust to the characteristics of the source images, and in practice the parameters can be chosen from a wider range. The weighting rule above makes the fusion result smoother.
Brief description of the drawings
Fig. 1 is source image 1.
Fig. 2 is source image 2.
Fig. 3 is an example of similar-block matching and of forming the shared similar structure.
Fig. 4 shows the weights of source image 1 obtained by the weighting rule.
Fig. 5 is the fusion result obtained in the spatial domain.
Embodiment
The embodiment of the present invention performs the fusion operation on 256-level grayscale images. The grayscale images used are two 512 × 512 pictures in TIFF format; the two source images are shown in Fig. 1 and Fig. 2.
Because the present invention is robust with respect to its parameters, typical parameter values are chosen in the implementation. The parameters are chosen as in Table 1:
Table 1: parameter values in the embodiment
The specific implementation process is as follows:
Step 1: build the self-similar structure shared by multiple source images
First, each source image is border-extended; this embodiment uses symmetric extension, with the image boundary as the axis of symmetry and an extension amplitude of 20 pixels.
After border extension, each source image becomes 552 × 552. The source image is divided into regions of size n × n (n = 16 in this embodiment), and each region is divided into image blocks of size m × m (m = 8 in this embodiment); the distance between any two adjacent regions, and between any two adjacent image blocks, is 1 pixel. Each source image therefore contains (552 − 16 + 1)² regions, and each region contains (16 − 8 + 1)² image blocks.
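The border extension and the region/block counting can be reproduced with NumPy. A small sketch under the embodiment's parameter values; reading the symmetric extension as np.pad's 'symmetric' mode is my assumption:

```python
import numpy as np

image = np.zeros((512, 512))                     # stand-in for one source image
pad = 20                                         # extension amplitude: 20 pixels
extended = np.pad(image, pad, mode='symmetric')  # mirror about the image boundary
assert extended.shape == (552, 552)

n, m = 16, 8                       # region size and block size of the embodiment
num_regions = (552 - n + 1) ** 2   # regions slide with stride 1
blocks_per_region = (n - m + 1) ** 2
assert num_regions == 288369 and blocks_per_region == 81
```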
As shown in Fig. 3, the outermost frame f_1 denotes source image 1, and R_1 inside f_1 denotes one region of source image 1. Within region R_1, a central block 31 is first chosen; central block 31 is marked with striped shading (as shown in Fig. 3). f_2 denotes source image 2, and R_2 denotes the region of source image 2 at the same position as R_1; R_2 also has a central block.
Within a region, similarity matching is performed between every other image block and the central block:

η = ‖P_q − P_r‖

where η denotes the degree of similarity, P_q and P_r denote the matrices formed by the pixels of the image blocks centered at pixel q and pixel r respectively, and ‖A‖ is the matrix norm, defined as:

‖A‖ = ( Σ_{i=1}^{I} Σ_{j=1}^{J} |a_ij|² )^{1/2}

where a_ij denotes the value of each element of matrix A.

Using the similarity values obtained, the image blocks that differ least from one another are found to form a similar structure within one source image. Let η_{q_i} denote the similarity between the image block centered at pixel i and the central block, and suppose the similarity values found satisfy:

η_{q_1} ≤ η_{q_2} ≤ … ≤ η_{q_k} ≤ η_{q_{k+1}} ≤ … ≤ η_{q_{(n−m+1)²}}

where (n−m+1)² means that a region of size n contains (n−m+1)² image blocks of size m. Once this ordering is found, the k image blocks most similar to the central block are chosen from among all the image blocks (k = 16 in this embodiment; for visual simplicity, Fig. 3 depicts the case k = 5), as shown in Fig. 3.

The shared similar structure 33 is then obtained across the different source images. Let L_i denote the similar structure in the i-th source image and q_j^i the j-th image block similar to the central block in the i-th image. Suppose the 5 blocks drawn around the central block in Fig. 3 are the 5 most similar image blocks selected; they are called similar blocks 32. Taking the set of similar blocks in each of the two images and intersecting the two sets:

L_1 = {q_1^1, q_2^1, …, q_k^1}, L_2 = {q_1^2, q_2^2, …, q_k^2}

L_g^{12} = L_1 ∩ L_2

where L_1 denotes the set of similar blocks in source image 1, called the similar structure of source image 1, L_2 denotes the similar structure in source image 2, and L_g^{12} denotes the shared similar structure, indicated in Fig. 3 by the cross-hatched shading of shared similar structure 33. The shared similar structure 33 embodies the self-similarity of the source images.
An index representing sharpness is extracted within the shared similar structure of the two images. This embodiment adopts the Sum-Modified-Laplacian (SML) index, defined as:

SML(x_0, y_0) = Σ_{x=x_0−N}^{x_0+N} Σ_{y=y_0−N}^{y_0+N} ∇²_ML f(x, y)

where ∇²_ML denotes the modified Laplacian, defined as:

∇²_ML f(x_0, y_0) = |2 f(x_0, y_0) − f(x_0 − step, y_0) − f(x_0 + step, y_0)| + |2 f(x_0, y_0) − f(x_0, y_0 − step) − f(x_0, y_0 + step)|

where f(x_0, y_0) denotes the pixel value at position (x_0, y_0), and step denotes the step length of each move, taken as 1 here.
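The two SML formulas translate directly into code. A sketch, in which the boundary handling (symmetric padding) and the default window half-width N = 1 are my assumptions:

```python
import numpy as np

def modified_laplacian(f, step=1):
    # |2f(x,y) - f(x-step,y) - f(x+step,y)| + |2f(x,y) - f(x,y-step) - f(x,y+step)|
    f = np.pad(f, step, mode='symmetric')       # assumed boundary handling
    c = f[step:-step, step:-step]
    up, down = f[:-2 * step, step:-step], f[2 * step:, step:-step]
    left, right = f[step:-step, :-2 * step], f[step:-step, 2 * step:]
    return np.abs(2 * c - up - down) + np.abs(2 * c - left - right)

def sml(f, x0, y0, N=1, step=1):
    # SML(x0, y0): sum of the modified Laplacian over a (2N+1) x (2N+1) window
    ml = modified_laplacian(f, step)
    return float(ml[x0 - N:x0 + N + 1, y0 - N:y0 + N + 1].sum())
```

On a uniform image the modified Laplacian is zero everywhere, and the SML grows near sharp transitions, which is why it serves as a per-block focus measure.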
Step 2: weighting
After the SML has been computed over the shared similar structure, let the extracted image feature (here, the SML value) be C, with C_1 and C_2 denoting the sharpness of source image 1 and source image 2; for the j-th pixel, let M_1^j and M_2^j denote the counters of source image 1 and source image 2. Given the shared similar block D, when C_1 ≥ C_2:

M_1^j = M_1^j + 1, j ∈ D.

When C_1 < C_2:

M_2^j = M_2^j + 1, j ∈ D.

After the features of the whole image have been extracted, M_1^j and M_2^j are the weights of the two source images at pixel j. The weights of the first source image at all pixels form the matrix M_1, which, displayed as an image, is shown in Fig. 4. With S_1^j and S_2^j denoting the values of source image 1 and source image 2 at the j-th pixel respectively, the j-th pixel of the fused image F is obtained as:

F^j = M_1^j / (M_1^j + M_2^j) · S_1^j + M_2^j / (M_1^j + M_2^j) · S_2^j

where F denotes the fused image and S denotes a source image. The fusion result is shown in Fig. 5.
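Putting the counters and the weighted average together, step 2 as a whole can be sketched as below. This is a simplified illustration, not the patented implementation: the shared block positions and per-block sharpness scores are taken as given, and the fallback for pixels covered by no shared block (a plain average) is my own assumption.

```python
import numpy as np

def fuse(s1, s2, shared_blocks, sharpness1, sharpness2, m):
    """s1, s2: source images; shared_blocks: top-left corners of the shared
    similar blocks D; sharpness1/2: per-block scores C_1, C_2 (e.g. SML)."""
    m1 = np.zeros(s1.shape)  # counter M_1
    m2 = np.zeros(s2.shape)  # counter M_2
    for (y, x), c1, c2 in zip(shared_blocks, sharpness1, sharpness2):
        if c1 >= c2:
            m1[y:y + m, x:x + m] += 1  # M_1^j = M_1^j + 1, j in D
        else:
            m2[y:y + m, x:x + m] += 1  # M_2^j = M_2^j + 1, j in D
    uncovered = (m1 + m2) == 0
    m1[uncovered] = 1.0  # assumed fallback: average where no block voted
    m2[uncovered] = 1.0
    total = m1 + m2
    return (m1 / total) * s1 + (m2 / total) * s2
```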

Claims (2)

1. An image fusion method based on similar blocks, characterized by comprising the following steps:
1) building the self-similar structure shared by multiple source images: performing similar-block matching among several source images to obtain a shared similar-block structure, the shared similar blocks containing the self-similarity of the images; the specific method of building the self-similar structure shared by multiple source images is as follows: the whole image is regarded as a set of regions, which may overlap; each region is regarded as a set of image blocks, which may also overlap one another; within each region, a central block is first chosen, and similarity matching is performed between every other image block and the central block:

η = ‖P_q − P_r‖

wherein η denotes the degree of similarity, P_q and P_r denote the matrices formed by the pixels of the image blocks centered at pixel q and pixel r respectively, and ‖A‖ is the matrix norm, defined as:

‖A‖ = ( Σ_{i=1}^{I} Σ_{j=1}^{J} |a_ij|² )^{1/2}

wherein a_ij denotes the value of each pixel in image block A;

using the similarity values obtained, the image blocks that differ least from one another are found to form a similar structure within one source image; let η_{q_i} denote the similarity between the image block centered at pixel i and the central block, and suppose the similarity values found satisfy:

η_{q_1} ≤ η_{q_2} ≤ … ≤ η_{q_k} ≤ η_{q_{k+1}} ≤ … ≤ η_{q_{(n−m+1)²}}

wherein (n−m+1)² means that a region of size n contains (n−m+1)² image blocks of size m; the shared similar structure is then obtained across the different source images; let L_i denote the similar structure in the i-th source image and q_j^i the j-th image block similar to the central block in the i-th image, and suppose

L_1 = {q_1^1, q_2^1, …, q_k^1}, L_2 = {q_1^2, q_2^2, …, q_k^2}

then

L_g^{12} = L_1 ∩ L_2

wherein L_1 denotes the similar structure in source image 1, L_2 denotes the similar structure in source image 2, and L_g^{12} denotes the shared similar structure, which embodies the self-similarity of the source images;
2) weighting: after image features are extracted within the shared similar blocks, determining the finally chosen pixel value by the weighting method.
2. The image fusion method based on similar blocks according to claim 1, characterized in that in step 2), the weighting method is: let the extracted image sharpness index be C, with C_1 and C_2 denoting the sharpness of source image 1 and source image 2; let M denote a counter of the same size as the source image, and for the j-th pixel let M_1^j and M_2^j denote the counters of source image 1 and source image 2; given a shared similar block D, when C_1 ≥ C_2:

M_1^j = M_1^j + 1, j ∈ D;

when C_1 < C_2:

M_2^j = M_2^j + 1, j ∈ D;

after the features of the whole image have been extracted, let S_1^j and S_2^j denote the values of source image 1 and source image 2 at the j-th pixel respectively; the j-th pixel F^j of the fused image F is then obtained as:

F^j = (M_1^j · S_1^j + M_2^j · S_2^j) / (M_1^j + M_2^j).
CN201310198572.0A 2013-05-24 2013-05-24 An image fusion method based on similar blocks Expired - Fee Related CN103247042B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310198572.0A CN103247042B (en) 2013-05-24 2013-05-24 An image fusion method based on similar blocks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310198572.0A CN103247042B (en) 2013-05-24 2013-05-24 An image fusion method based on similar blocks

Publications (2)

Publication Number Publication Date
CN103247042A CN103247042A (en) 2013-08-14
CN103247042B true CN103247042B (en) 2015-11-11

Family

ID=48926547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310198572.0A Expired - Fee Related CN103247042B (en) 2013-05-24 2013-05-24 An image fusion method based on similar blocks

Country Status (1)

Country Link
CN (1) CN103247042B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106327539B * 2015-07-01 2019-06-28 Peking University Sample-based image reconstruction method and device
CN107087107B * 2017-05-05 2019-11-29 Institute of Computing Technology, Chinese Academy of Sciences Image processing apparatus and method based on dual cameras
CN108537264B (en) * 2018-03-30 2021-09-07 西安电子科技大学 Heterogeneous image matching method based on deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1402191A * 2002-09-19 2003-03-12 Shanghai Jiao Tong University Multi-focus image fusion method based on block division
CN102005033A * 2010-11-16 2011-04-06 Institute of Remote Sensing Applications, Chinese Academy of Sciences Method for suppressing noise by image smoothing
CN202134044U * 2011-07-06 2012-02-01 Chang'an University An image stitching device based on corner-block extraction and matching


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image denoising with block-matching and 3D filtering; Kostadin Dabov et al.; Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning; 2006-12-31; full text *
Latest advances in image fusion research; Yan Jingwen et al.; Journal of Xiamen University of Technology; 2007-12-31; vol. 15, no. 4; full text *

Also Published As

Publication number Publication date
CN103247042A (en) 2013-08-14

Similar Documents

Publication Publication Date Title
CN103279935B Thermal remote sensing image super-resolution reconstruction method and system based on the MAP algorithm
CN110738252A Spatial-autocorrelation machine-learning method and system for downscaling satellite precipitation data
CN111080724A Infrared and visible light fusion method
CN101872472B Method for super-resolution reconstruction of facial images based on sample learning
CN110119780A Hyperspectral image super-resolution reconstruction method based on generative adversarial networks
CN110223234A Image super-resolution reconstruction method using a deep residual network with cascaded shrinkage and expansion
CN105069825A Image super-resolution reconstruction method based on deep belief networks
CN105046672A Method for image super-resolution reconstruction
CN106339998A Multi-focus image fusion method based on contrast pyramid transformation
CN104268847A Infrared and visible light image fusion method based on interactive non-local means filtering
CN103413286A Learning-based joint reconstruction method for high-dynamic-range, high-definition images
CN104537678B Method for removing cloud and mist from a single remote sensing image
CN107689036A Real-time image enhancement method based on deep bilateral learning
CN101299235A Method for face super-resolution reconstruction based on kernel principal component analysis
CN107220957B Remote sensing image fusion method using rolling guidance filtering
CN107203969B High-magnification image super-resolution reconstruction method with medium-scale constraints
CN103971340A Dynamic-range compression and detail enhancement method for high-bit-width digital images
CN103247042B An image fusion method based on similar blocks
CN105139371A Multi-focus image fusion method based on PCNN and LP transformation
CN104517126A Air quality assessment method based on image analysis
CN103778616A Region-based contrast pyramid image fusion method
Horinouchi et al. Image velocimetry for clouds with relaxation labeling based on deformation consistency
CN103971354A Method for reconstructing a low-resolution infrared image into a high-resolution infrared image
He et al. Remote sensing image super-resolution using deep–shallow cascaded convolutional neural networks
CN116343053B Automatic solid-waste extraction method based on fusion of optical remote sensing images and SAR remote sensing images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20151111