CN103593833A - Multi-focus image fusion method based on compressed sensing and energy rule - Google Patents
- Publication number
- CN103593833A CN103593833A CN201310512370.9A CN201310512370A CN103593833A CN 103593833 A CN103593833 A CN 103593833A CN 201310512370 A CN201310512370 A CN 201310512370A CN 103593833 A CN103593833 A CN 103593833A
- Authority
- CN
- China
- Prior art keywords
- image
- observation vector
- image block
- energy
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a multi-focus image fusion method based on compressed sensing and an energy rule. It mainly addresses the problems that, when fusing multi-focus images under compressed sensing, the image blocks are highly similar and their information is therefore hard to extract. The method observes the two multi-focus images with a block scrambled-Hadamard measurement matrix to obtain observation vectors of the two input images; computes a fusion weight from the energy and data similarity of each pair of observation vectors; forms the observation vector of the fused image from these weights; and reconstructs that vector with a gradient projection method on a CDF 9/7 wavelet basis to obtain the fused image. Compared with classic multi-focus image fusion methods, this method retains information from both images simultaneously and adjusts the fusion weight with a similarity-based adjusting term, which resolves the excessive similarity of multi-focus image blocks and improves the fusion result. The method can be used for the fusion of multi-focus images.
Description
Technical field
The invention belongs to the technical field of image processing and relates to a multi-focus image fusion method. It can be used for subsequent processing of multi-focus images such as image interpretation, target detection, and target recognition.
Background technology
Image fusion extracts and combines, according to a fusion rule, the complementary information of several original images of the same target or scene, yielding a single more accurate, comprehensive, and reliable image, i.e. the fused image. The original images may come from the same sensor or from different sensors; the fused image makes full use of the complementary information they provide and supplies more reliable input to subsequent processing such as image interpretation, target detection, and target recognition. Image fusion methods fall into three classes: pixel-level, feature-level, and decision-level fusion. All current methods, however, require the complete original images, which increases the storage burden of the computer; and as the scale of sensor data grows ever larger, the computational complexity also becomes a major challenge for image processing.
Donoho et al. proposed compressed sensing, see D. L. Donoho, "Compressed sensing", IEEE Trans. Inform. Theory, 2006, 52(4): 1289-1306. The method shows that an image can be sampled at a rate far below that required by the Nyquist sampling theorem without losing any useful information, so that the original image can be perfectly reconstructed. Classic image fusion methods based on this technique include the following.
Wan et al. proposed a compressive image fusion method, see T. Wan, N. Canagarajah, A. Achim, "Compressive image fusion", in Proc. IEEE Int. Conf. Image Process., pp. 1308-1311, 2008. This article was the first to perform image fusion in the compressed sensing domain and proposed an absolute-value-maximum fusion rule in that domain: the images are first FFT-transformed, the measurements are then fused, and an inverse transform finally yields the fused image. Because the observation is carried out on the FFT coefficients, the original images must be available before the experiment; moreover, under random observation the measured values lose the property that larger transform coefficients carry more information, so the absolute-value-maximum fusion rule is to some extent arbitrary and unreasonable.
Luo et al. proposed an image fusion method based on compressed sensing, see X. Y. Luo, J. Zhang, J. Y. Yang, and Q. H. Dai, "Image fusion in compressed sensing", in Proc. IEEE Int. Conf. on Image Process., pp. 2205-2208, IEEE, Piscataway, NJ, 2009. The method fuses directly in the observation domain using the entropy of the observation vectors and then reconstructs to obtain the fused image. When applied to multi-focus fusion, the image blocks are so similar that their entropies are too close; the information of the in-focus regions of the two images cannot be extracted effectively, and the fusion result is poor.
Summary of the invention
The object of the invention is to remedy the deficiencies of the prior art described above by proposing a multi-focus image fusion method based on compressed sensing and an energy rule, so as to improve the fusion result when the image blocks are highly similar.
The key to realizing this object is the design of a new energy-based fusion rule that introduces data similarity to resolve the high similarity of multi-focus image blocks and improve the fusion result. The implementation steps are as follows:
(1) Input two multi-focus images A and B, partition each into n pairs of N × N image blocks, and pull each block into a column vector, denoted A_i and B_i, i = 1, 2, ..., n, where n is the number of block pairs and N is the block window size, N > 1;
(2) Observe each image block i with its observation matrix Φ_i to obtain the observation vectors y_Ai and y_Bi of the two image blocks;
(3) Fuse each pair of image blocks i according to the following rule to obtain the observation vector y_Fi of the fused image block:
(3a) From the block observation vectors y_Ai, y_Bi of the multi-focus images A and B and their energies E_Ai, E_Bi, compute the fusion weights of each pair of image blocks, with

ω_B = 1 - ω_A,
where E_Ai, E_Bi are the energies of the block observation vectors y_Ai, y_Bi respectively, DS(y_Ai, y_Bi) is the data similarity of y_Ai and y_Bi, ω_A, ω_B are the fusion weights of the image blocks, and T is an adjustable parameter, T ≠ -1; the data-similarity term serves as an adjusting term that regulates the fusion weights of the rule;
(3b) Using the fusion weights ω_A, ω_B of each pair of image blocks, fuse the observation vectors y_Ai, y_Bi of each block pair of A and B into the observation vector y_Fi of the fused image block:

y_Fi = ω_A · y_Ai + ω_B · y_Bi;
(3c) Repeat steps (3a) and (3b) until all pairs of image blocks have been fused;
(4) Stack all fused observation vectors y_Fi into a single column to obtain the observation vector y of the fused image:

y = [y_F1^T, y_F2^T, ..., y_Fn^T]^T,

where the superscript T denotes transposition;
(5) Reconstruct the observation vector y of the fused image with the gradient projection method on the CDF 9/7 wavelet basis to obtain the sparse representation coefficients θ; then apply the inverse wavelet transform to θ to obtain the fused all-in-focus image F.
Compared with the prior art, the present invention has the following advantages:
1. The invention formulates the fusion rule from the energy and data similarity of the two block observation vectors, so the fused vector contains information from both observation vectors simultaneously, which improves the fusion result;
2. The invention introduces data similarity into the fusion rule as an adjusting term for the fusion weights. When the image blocks are highly similar, this term makes the fused observation vector contain the average information of the two observation vectors; when the similarity is low, it further strengthens the weight of the higher-energy observation vector, so the fused vector carries more information and the fusion result improves.
Brief description of the drawings
Fig. 1 is the overall flowchart of the present invention;
Fig. 2 shows the two source images of the multi-focus Clock image;
Fig. 3 shows the two source images of the multi-focus Pepsi image;
Fig. 4 shows the results of fusing the multi-focus Clock image with the present invention and three existing methods;
Fig. 5 shows the results of fusing the multi-focus Pepsi image with the present invention and three existing methods.
Embodiment
With reference to Fig. 1, the specific implementation steps of the present invention are as follows:
Step 1. Input two multi-focus images A and B, partition each into n pairs of N × N image blocks, and pull each block into a column vector, denoted A_i and B_i, i = 1, 2, ..., n, where n is the number of block pairs and N is the block window size, N > 1. In the examples of the present invention N takes the values 8, 16, 32, and 64.
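The blocking in step 1 can be sketched in NumPy as follows. This is a minimal illustration, assuming the image dimensions are divisible by N; the column-major vectorisation order is an assumption, since the patent only says each block is pulled into a column.

```python
import numpy as np

def image_to_block_columns(img, N):
    """Split an image into non-overlapping N x N blocks and pull each
    block into a column vector of length N*N (step 1 of the method)."""
    H, W = img.shape
    cols = []
    for r in range(0, H, N):
        for c in range(0, W, N):
            block = img[r:r + N, c:c + N]
            cols.append(block.reshape(-1, order="F"))  # "pull into a column"
    return np.stack(cols, axis=1)  # shape (N*N, number of blocks)

# Example: a 512 x 512 image with N = 32 yields 256 block columns of length 1024.
img = np.zeros((512, 512))
blocks = image_to_block_columns(img, 32)
print(blocks.shape)  # (1024, 256)
```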
Step 2. The compressed sensing observation of an image is a linear process; for accurate reconstruction, the observation matrix and the sparse transform matrix must satisfy the restricted isometry property. This example adopts a scrambled Hadamard matrix as the block observation matrix Φ_i; it is incoherent with the sparse transform matrices formed by most fixed orthogonal bases and satisfies the restricted isometry property, which guarantees accurate reconstruction of the image. Each image block i is observed with Φ_i to obtain the observation vectors y_Ai and y_Bi of the two image blocks:

y_Ai = Φ_i × A_i

y_Bi = Φ_i × B_i,

where Φ_i has size M × N². The sampling rate of the present invention is controlled by the number of rows M of Φ_i; in the experiments the sampling rate of the image is set through M, with 1 < M < N, i = 1, 2, ..., n.
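A sketch of the block observation in step 2. The patent does not spell out the scrambling scheme, so the construction below, a random column permutation plus a random row selection of a Sylvester-type Hadamard matrix, is one common realisation and an assumption, not necessarily the patented operator.

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (n a power of two), Sylvester construction."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def scrambled_hadamard(M, n2, rng):
    """M x n2 measurement matrix: randomly permute the columns of an
    n2 x n2 Hadamard matrix and keep M randomly chosen rows."""
    H = hadamard(n2) / np.sqrt(n2)            # orthonormal rows
    col_perm = rng.permutation(n2)
    row_sel = rng.choice(n2, size=M, replace=False)
    return H[np.ix_(row_sel, col_perm)]

# Observe a pair of vectorised 8 x 8 blocks (length N^2 = 64) with M = 32.
rng = np.random.default_rng(0)
N, M = 8, 32
Phi = scrambled_hadamard(M, N * N, rng)
Ai = rng.standard_normal(N * N)
Bi = rng.standard_normal(N * N)
yAi, yBi = Phi @ Ai, Phi @ Bi
print(Phi.shape, yAi.shape, yBi.shape)  # (32, 64) (32,) (32,)
```

Because the Hadamard rows are orthonormal, the selected rows remain mutually orthonormal after column permutation, which is convenient for reconstruction.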
Step 3. Fuse each image block pair i according to the following rule to obtain the observation vector y_Fi of the fused image block:

(3a) From the block observation vectors y_Ai, y_Bi of the multi-focus images A and B and their energies E_Ai, E_Bi, compute the fusion weight ω_A of y_Ai and the fusion weight ω_B of y_Bi, with

ω_B = 1 - ω_A,

where T is an adjustable parameter, T ≠ -1;
E_Ai is the energy of the block observation vector y_Ai of multi-focus image A: E_Ai = ||y_Ai||_2^2;

E_Bi is the energy of the block observation vector y_Bi of multi-focus image B: E_Bi = ||y_Bi||_2^2, where || · ||_2 denotes the 2-norm of a vector;
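The energy definition above is just the squared 2-norm of the observation vector, e.g.:

```python
import numpy as np

def observation_energy(y):
    """Energy of an observation vector as defined in step (3a):
    the squared l2 norm, E = ||y||_2^2."""
    return float(np.dot(y, y))

y = np.array([3.0, 4.0])
print(observation_energy(y))  # 25.0
```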
DS(y_Ai, y_Bi) is the data similarity of the observation vectors y_Ai and y_Bi:

DS(y_Ai, y_Bi) = [c(y_Ai, y_Bi) · d(y_Ai, y_Bi)]^α · [a(y_Ai, y_Bi)]^β,
where α is the exponent of the product of c(y_Ai, y_Bi) and d(y_Ai, y_Bi), β is the exponent of a(y_Ai, y_Bi), α and β are adjustable parameters that may take arbitrary values, c(y_Ai, y_Bi) denotes the numerical intensity of the block observation vectors y_Ai and y_Bi, d(y_Ai, y_Bi) denotes their numerical-distribution intensity, and a(y_Ai, y_Bi) denotes their degree of correlation;
The numerical intensity c(y_Ai, y_Bi) is computed from the mean of y_Ai, the mean of y_Bi, the median of y_Ai, and the median of y_Bi, together with the error-proofing coefficient c_1 of c(y_Ai, y_Bi), which prevents the denominator from becoming zero during the computation; in the present invention c_1 = 0.000001;
The numerical-distribution intensity d(y_Ai, y_Bi) is computed from the mean absolute deviation of y_Ai, the mean absolute deviation of y_Bi, the median absolute deviation of y_Ai, and the median absolute deviation of y_Bi, together with the error-proofing coefficient c_2 of d(y_Ai, y_Bi), which prevents the denominator from becoming zero during the computation; in the present invention c_2 = 0.000001;
The degree of correlation a(y_Ai, y_Bi) is computed from the correlation coefficient of each pair of block observation vectors y_Ai and y_Bi of the multi-focus images A and B and from their symmetric uncertainty;

For the specific definition of the data similarity DS(y_Ai, y_Bi), see "Classification-based image-fusion framework for compressive imaging", Journal of Electronic Imaging, vol. 19, issue 3, pp. 1-6;
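The component formulas for c, d, and a did not survive as text (they appear to have been equation images in the original), so the sketch below assumes plausible forms consistent with the prose: symmetric ratios of the means and medians for c, of the mean and median absolute deviations for d, and the absolute correlation coefficient for a (the symmetric-uncertainty factor is omitted). It illustrates the structure of DS, not the patented formulas.

```python
import numpy as np

C1 = C2 = 1e-6  # error-proofing coefficients from the patent

def _sym_ratio(u, v, c):
    # Symmetric similarity of two scalars; c guards against a zero denominator.
    return (2.0 * u * v + c) / (u * u + v * v + c)

def data_similarity(yA, yB, alpha=1.0, beta=1.0):
    """Assumed sketch of DS(yA, yB) = [c * d]**alpha * a**beta."""
    # c: numerical intensity from the means and medians of the two vectors
    c = _sym_ratio(np.mean(yA), np.mean(yB), C1) * \
        _sym_ratio(np.median(yA), np.median(yB), C1)
    # d: distribution intensity from mean/median absolute deviations
    madA = np.mean(np.abs(yA - np.mean(yA)))
    madB = np.mean(np.abs(yB - np.mean(yB)))
    mdA = np.median(np.abs(yA - np.median(yA)))
    mdB = np.median(np.abs(yB - np.median(yB)))
    d = _sym_ratio(madA, madB, C2) * _sym_ratio(mdA, mdB, C2)
    # a: degree of correlation (absolute Pearson correlation here)
    a = abs(np.corrcoef(yA, yB)[0, 1])
    return (c * d) ** alpha * a ** beta

yA = np.array([1.0, 2.0, 3.0, 4.0])
print(round(data_similarity(yA, yA), 6))  # identical vectors -> 1.0
```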
(3b) Using the fusion weights ω_A, ω_B of each pair of image blocks, fuse the observation vectors y_Ai and y_Bi of each block pair of A and B into the observation vector y_Fi of the fused image block:

y_Fi = ω_A · y_Ai + ω_B · y_Bi;
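Step (3b) is a plain convex combination of the two observation vectors:

```python
import numpy as np

def fuse_observations(yA, yB, wA):
    """Step (3b): fuse a pair of block observation vectors with the
    weights from step (3a); wB = 1 - wA."""
    wB = 1.0 - wA
    return wA * yA + wB * yB

yA = np.array([2.0, 0.0])
yB = np.array([0.0, 2.0])
print(fuse_observations(yA, yB, 0.75))  # [1.5 0.5]
```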
(3c) Repeat steps (3a) and (3b) until all pairs of image blocks have been fused.
Step 4. Stack all fused observation vectors y_Fi into a single column to obtain the observation vector y of the fused image:

y = [y_F1^T, y_F2^T, ..., y_Fn^T]^T,

where the superscript T denotes transposition.
Step 5. Reconstruct the observation vector of the fused image to obtain the sparse representation coefficients, then apply the inverse wavelet transform to the coefficients to obtain the fused all-in-focus image F.
(5a) Reconstruct the observation vector y of the fused image with the gradient projection method on the CDF 9/7 wavelet basis to obtain the sparse representation coefficients θ. The gradient projection method achieves accurate reconstruction from relatively few measurements and runs fast; for the concrete steps of the algorithm see "Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems", IEEE Journal of Selected Topics in Signal Processing, vol. 1, issue 4, pp. 586-597;
(5b) Apply the inverse wavelet transform to the sparse representation coefficients θ; the resulting all-in-focus image blocks are assembled into one image, the fused all-in-focus image F.
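The reconstruction in step (5a) solves an l1-regularised least-squares problem. The patent uses the GPSR solver; the sketch below substitutes ISTA, a different solver for the same objective, purely to keep the example short, and it works on the coefficient vector directly rather than through a CDF 9/7 wavelet basis, so it is an illustration of the recovery principle, not the patented pipeline.

```python
import numpy as np

def ista(Phi, y, tau=0.1, n_iter=200):
    """Proximal-gradient (ISTA) stand-in for the GPSR solver of step (5a):
    minimise 0.5*||y - Phi @ theta||^2 + tau*||theta||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    theta = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ theta - y)
        z = theta - grad / L
        theta = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)  # soft threshold
    return theta

# Toy demo: recover a 3-sparse coefficient vector from 40 random measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((40, 100)) / np.sqrt(40)
theta_true = np.zeros(100)
theta_true[[5, 40, 77]] = [2.0, -3.0, 1.5]
y = Phi @ theta_true
theta_hat = ista(Phi, y, tau=0.02, n_iter=500)
print(np.argsort(-np.abs(theta_hat))[:3])
```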
The effect of the present invention is further illustrated by the following simulation experiments:
1, experiment condition and method
The hardware platform is: Intel(R) Core(TM) [email protected], 4 GB RAM;
The software platform is MATLAB R2011a. The experiments use two groups of registered multi-focus images, the Clock image and the Pepsi image, each of size 512 × 512; both groups come from the image fusion website http://www.imagefusion.org/. The Clock image is shown in Fig. 2, where Fig. 2(a) is the source image focused on the left and Fig. 2(b) is the source image focused on the right; the Pepsi image is shown in Fig. 3, where Fig. 3(a) is the source image focused on the left and Fig. 3(b) is the source image focused on the right.
The comparison methods in the experiments are three existing fusion methods, of which:
Method 3 is the entropy-based method; see the article "Image fusion in compressed sensing", in Proc. IEEE Int. Conf. on Image Process., pp. 2205-2208, 2009.
2, emulation content
Simulation 1: The two Clock images shown in Fig. 2 are fused with the method of the present invention and the three existing fusion methods; the results are shown in Fig. 4, where:
Fig. 4(a) is the fusion result obtained by existing method 1, with a window size of 8 × 8;
Fig. 4(b) is the fusion result obtained by existing method 1, with a window size of 16 × 16;
Fig. 4(c) is the fusion result obtained by existing method 1, with a window size of 32 × 32;
Fig. 4(d) is the fusion result obtained by existing method 1, with a window size of 64 × 64;
Fig. 4(e) is the fusion result obtained by existing method 2, with a window size of 8 × 8;
Fig. 4(f) is the fusion result obtained by existing method 2, with a window size of 16 × 16;
Fig. 4(g) is the fusion result obtained by existing method 2, with a window size of 32 × 32;
Fig. 4(h) is the fusion result obtained by existing method 2, with a window size of 64 × 64;
Fig. 4(i) is the fusion result obtained by existing method 3, with a window size of 8 × 8;
Fig. 4(j) is the fusion result obtained by existing method 3, with a window size of 16 × 16;
Fig. 4(k) is the fusion result obtained by existing method 3, with a window size of 32 × 32;
Fig. 4(l) is the fusion result obtained by existing method 3, with a window size of 64 × 64;
Fig. 4(m) is the fusion result obtained by the present invention, with a window size of 8 × 8;
Fig. 4(n) is the fusion result obtained by the present invention, with a window size of 16 × 16;
Fig. 4(o) is the fusion result obtained by the present invention, with a window size of 32 × 32;
Fig. 4(p) is the fusion result obtained by the present invention, with a window size of 64 × 64.
Simulation 2: The two Pepsi images shown in Fig. 3 are fused with the method of the present invention and the three existing fusion methods; the results are shown in Fig. 5, where:
Fig. 5(a) is the fusion result obtained by existing method 1, with a window size of 8 × 8;
Fig. 5(b) is the fusion result obtained by existing method 1, with a window size of 16 × 16;
Fig. 5(c) is the fusion result obtained by existing method 1, with a window size of 32 × 32;
Fig. 5(d) is the fusion result obtained by existing method 1, with a window size of 64 × 64;
Fig. 5(e) is the fusion result obtained by existing method 2, with a window size of 8 × 8;
Fig. 5(f) is the fusion result obtained by existing method 2, with a window size of 16 × 16;
Fig. 5(g) is the fusion result obtained by existing method 2, with a window size of 32 × 32;
Fig. 5(h) is the fusion result obtained by existing method 2, with a window size of 64 × 64;
Fig. 5(i) is the fusion result obtained by existing method 3, with a window size of 8 × 8;
Fig. 5(j) is the fusion result obtained by existing method 3, with a window size of 16 × 16;
Fig. 5(k) is the fusion result obtained by existing method 3, with a window size of 32 × 32;
Fig. 5(l) is the fusion result obtained by existing method 3, with a window size of 64 × 64;
Fig. 5(m) is the fusion result obtained by the present invention, with a window size of 8 × 8;
Fig. 5(n) is the fusion result obtained by the present invention, with a window size of 16 × 16;
Fig. 5(o) is the fusion result obtained by the present invention, with a window size of 32 × 32;
Fig. 5(p) is the fusion result obtained by the present invention, with a window size of 64 × 64.
3, experimental result
The fusion results of the present invention are compared with those of the three existing methods to evaluate its effect. Table 1 gives the objective evaluation indices for the Clock multi-focus image fusion, and Table 2 gives those for the Pepsi multi-focus image fusion.
Table 1. Objective evaluation indices for Clock multi-focus image fusion
Table 1 (continued)
Table 2. Objective evaluation indices for Pepsi multi-focus image fusion
In Tables 1 and 2, BlockSize is the window size, Qabf the edge preservation degree, EI the entropy, AG the average gradient, MI the mutual information, and STD the standard deviation.
Edge preservation degree (Qabf): measures how well the fused image preserves the edge information of the input images; its range is [0, 1], and values closer to 1 indicate better edge preservation.
Entropy (EI): represents the amount of information contained in the image; the larger the value, the more information the image contains.
Average gradient (AG): reflects the contrast of small details and the texture of the image, and thus its clarity; the larger the value, the clearer the image.
Mutual information (MI): represents the amount of information the fused image acquires from the source images; the larger the value, the more source information the fused image contains.
Standard deviation (STD): evaluates the contrast of the image; the larger the contrast, the more information the image carries and the better the fusion.
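Two of the objective indices are simple to state in code. The AG discretisation below (RMS of horizontal and vertical first differences) is a common formulation and is an assumption insofar as the patent does not give the exact expression:

```python
import numpy as np

def std_metric(img):
    """STD index: standard deviation of the fused image (contrast)."""
    return float(np.std(img))

def average_gradient(img):
    """AG index: mean magnitude of the horizontal/vertical first
    differences, a common formulation of the average-gradient measure."""
    gx = np.diff(img, axis=1)[:-1, :]   # trim both to a common (H-1, W-1) shape
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

img = np.tile(np.arange(8.0), (8, 1))   # horizontal ramp test image
print(std_metric(img) > 0, average_gradient(img) > 0)  # True True
```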
Comparing the present invention with existing method 1: according to the two groups of objective indices in Tables 1 and 2, the mutual information index of the present invention is better than that of method 1, while the other indices are worse. This is because method 1 tends to produce considerable noise and streak artifacts, so the objective indices do not truly reflect the fusion quality; from the fusion results in Figs. 4 and 5, the visual effect of the present invention is better than that of method 1.
Comparing the present invention with existing method 2: according to Tables 1 and 2, at a window size of 32 × 32 every objective index of the present invention except the entropy is better. Moreover, method 2 must reconstruct before fusing, so its fusion time and data transmission are both roughly double those of the present invention; from Figs. 4 and 5, the visual effects of the two methods are comparable.
Comparing the present invention with existing method 3: according to Tables 1 and 2, at a window size of 32 × 32 every objective index of the present invention except the mutual information is better; from Figs. 4 and 5, the visual effects of the two methods are comparable.
In summary, at a window size of 32 × 32 the present invention achieves satisfactory results in both objective indices and visual effect, which demonstrates that its performance in multi-focus image fusion is better than that of the existing methods.
Claims (4)
1. A multi-focus image fusion method based on compressed sensing and an energy rule, comprising the steps of:
(1) inputting two multi-focus images A and B, partitioning each into n pairs of N × N image blocks, and pulling each block into a column vector, denoted A_i and B_i, i = 1, 2, ..., n, where n is the number of block pairs and N is the block window size, N > 1;
(2) observing each image block i with the observation matrix Φ_i to obtain the observation vectors y_Ai and y_Bi of the two image blocks;
(3) fusing each pair of image blocks i according to the following rule to obtain the observation vector y_Fi of the fused image block:
(3a) from the block observation vectors y_Ai, y_Bi of the multi-focus images A and B and their energies E_Ai, E_Bi, computing the fusion weights of each pair of image blocks, with

ω_B = 1 - ω_A,
where E_Ai, E_Bi are the energies of the block observation vectors y_Ai, y_Bi respectively, DS(y_Ai, y_Bi) is the data similarity of y_Ai and y_Bi, ω_A, ω_B are the fusion weights of the image blocks, and T is an adjustable parameter, T ≠ -1;
(3b) using the fusion weights ω_A, ω_B of each pair of image blocks, fusing the observation vectors y_Ai, y_Bi of each block pair of A and B into the observation vector y_Fi of the fused image block:

y_Fi = ω_A · y_Ai + ω_B · y_Bi;
(3c) repeating steps (3a) and (3b) until all pairs of image blocks have been fused;
(4) stacking all fused observation vectors y_Fi into a single column to obtain the observation vector y of the fused image:

y = [y_F1^T, y_F2^T, ..., y_Fn^T]^T,

where the superscript T denotes transposition;
(5) reconstructing the observation vector y of the fused image with the gradient projection method on the CDF 9/7 wavelet basis to obtain the sparse representation coefficients θ, then applying the inverse wavelet transform to θ to obtain the fused all-in-focus image F.
2. The multi-focus image fusion method based on compressed sensing and an energy rule according to claim 1, wherein the observation matrix Φ_i of the image blocks in step (2) is a scrambled Hadamard matrix, and the observation of each image block i uses the formulas:

y_Ai = Φ_i × A_i

y_Bi = Φ_i × B_i,

where Φ_i has size M × N², 1 < M < N, i = 1, 2, ..., n.
3. The multi-focus image fusion method based on compressed sensing and an energy rule according to claim 1, wherein the energies E_Ai, E_Bi of the block observation vectors y_Ai, y_Bi in step (3a) are computed as:

E_Ai = ||y_Ai||_2^2

E_Bi = ||y_Bi||_2^2,

where E_Ai is the energy of the block observation vector y_Ai of multi-focus image A, E_Bi is the energy of the block observation vector y_Bi of multi-focus image B, and || · ||_2 denotes the 2-norm of a vector.
4. The multi-focus image fusion method based on compressed sensing and an energy rule according to claim 1, wherein the data similarity DS(y_Ai, y_Bi) in step (3a) is computed as:

DS(y_Ai, y_Bi) = [c(y_Ai, y_Bi) · d(y_Ai, y_Bi)]^α · [a(y_Ai, y_Bi)]^β,

where c(y_Ai, y_Bi) denotes the numerical intensity of the block observation vectors y_Ai, y_Bi, d(y_Ai, y_Bi) denotes their numerical-distribution intensity, a(y_Ai, y_Bi) denotes their degree of correlation, α is the exponent of the product of c(y_Ai, y_Bi) and d(y_Ai, y_Bi), β is the exponent of a(y_Ai, y_Bi), and α, β are adjustable parameters that may take arbitrary values; the expressions for c(y_Ai, y_Bi), d(y_Ai, y_Bi), and a(y_Ai, y_Bi) involve:
the mean of the block observation vector y_Ai, the mean of y_Bi, the median of y_Ai, the median of y_Bi, the mean absolute deviation of y_Ai, the mean absolute deviation of y_Bi, the median absolute deviation of y_Ai, the median absolute deviation of y_Bi, the correlation coefficient of y_Ai and y_Bi, and the symmetric uncertainty of y_Ai and y_Bi, where c_1 is the error-proofing coefficient of c(y_Ai, y_Bi), used to prevent its denominator from becoming zero during the computation, c_2 is the error-proofing coefficient of d(y_Ai, y_Bi), used to prevent its denominator from becoming zero during the computation, and in the experiments c_1 = c_2 = 0.000001.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310512370.9A CN103593833A (en) | 2013-10-25 | 2013-10-25 | Multi-focus image fusion method based on compressed sensing and energy rule |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103593833A true CN103593833A (en) | 2014-02-19 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103903242B (en) * | 2014-04-14 | 2016-08-31 | 苏州经贸职业技术学院 | Adaptive targets compressed sensing fusion tracking method based on video sensor network |
CN106651749A (en) * | 2015-11-02 | 2017-05-10 | 福建天晴数码有限公司 | Graph fusion method and system based on linear equation |
CN107231250A (en) * | 2017-04-25 | 2017-10-03 | 全球能源互联网研究院 | One kind is based on electric network information physical system perception data compressive sampling method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101425137A (en) * | 2008-11-10 | 2009-05-06 | 北方工业大学 | Face Image Fusion Method Based on Laplacian Pyramid |
CN102254314A (en) * | 2011-07-17 | 2011-11-23 | 西安电子科技大学 | Visible-light/infrared image fusion method based on compressed sensing |
CN102393958A (en) * | 2011-07-16 | 2012-03-28 | 西安电子科技大学 | Multi-focus image fusion method based on compressive sensing |
CN103164850A (en) * | 2013-03-11 | 2013-06-19 | 南京邮电大学 | Method and device for multi-focus image fusion based on compressed sensing |
Non-Patent Citations (2)
Title |
---|
Xiaoyan Luo et al., "Classification-based image-fusion framework for compressive imaging", Journal of Electronic Imaging * |
Jia Jian et al., "Multi-sensor image fusion based on the nonsubsampled Contourlet transform", Acta Electronica Sinica * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20140219 |