CN102968781A - Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation - Google Patents
- Publication number
- CN102968781A CN102968781A CN2012105290228A CN201210529022A CN102968781A CN 102968781 A CN102968781 A CN 102968781A CN 2012105290228 A CN2012105290228 A CN 2012105290228A CN 201210529022 A CN201210529022 A CN 201210529022A CN 102968781 A CN102968781 A CN 102968781A
- Authority
- CN
- China
- Prior art keywords
- coefficient
- source images
- band
- nsct
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
Abstract
The invention provides an image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and sparse representation. A learned dictionary is applied to the low-frequency sub-band coefficients, whose sparsity is poor, and the common and unique coefficients of the source images are extracted by sparse representation, so that the sparsity of the low-frequency sub-band is improved; the fusion weights are adjusted adaptively according to the activity level of the unique coefficients. The high-frequency directional sub-band coefficients, whose sparsity is already high, are fused by taking the maximum of the sum of absolute values of the directional sub-bands at the same scale, so that the salient features of the source images are captured and the fusion result is ultimately improved.
Description
Technical field
The present invention relates to an image fusion method.
Background technology
In recent years, the non-subsampled contourlet transform (Non-Subsampled Contourlet Transform, NSCT), with its shift-invariant, multi-resolution, multi-directional and anisotropic image representation ability, has effectively overcome the inability of traditional wavelets to handle 2-D and higher-dimensional singularities, and has been applied successfully in the image fusion field with superior fusion results. In image fusion, however, we want the extracted image representation coefficients to be highly sparse and feature-preserving, so that a better fusion result can be obtained by fusing only a small number of coefficients. Yet very few of the low-frequency sub-band coefficients obtained by the NSCT are approximately zero; that is, the low-frequency information of the image is not represented sparsely, and fusing it directly is unfavorable for extracting the features of the source images. Considering that the low-frequency sub-band contains the main energy of the image and largely determines the quality of the fusion result, we seek to improve the sparsity of the low-frequency sub-band coefficients in order to obtain a better fusion result.
Summary of the invention
To overcome the deficiency of the prior art that the low-frequency sub-band coefficients after the NSCT, which contain the main energy of the image, have poor sparsity and are unfavorable for extracting useful information for fusion, the invention provides an image fusion method based on NSCT and sparse representation. A dictionary is learned for the low-frequency sub-band coefficients of poor sparsity, and the common and unique coefficients of the source images are extracted by sparse representation, so as to improve the sparsity of the low-frequency sub-band; the fusion weights are then adjusted adaptively according to the activity level of the unique coefficients. The high-frequency directional sub-band coefficients, whose sparsity is already high, are fused by taking the maximum of the sum of absolute values of the directional sub-bands at the same scale, so as to capture the salient features of the source images and ultimately improve the fusion result.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
1 Dictionary training part:
Suppose the source images have been registered; there are K source images of size M × N, denoted I_1, …, I_K.
(1.1) Decompose each training image with the NSCT. After a J-level NSCT decomposition (J is generally 3 to 5 levels), 1 low-frequency sub-band coefficient matrix and ∑_(j=1)^J 2^(l_j) high-frequency directional sub-band coefficient matrices are obtained, where l_j is the number of directional decomposition levels at scale j. The training images can be the source images themselves, or images acquired in the same way as the source images;
(1.2) Initialize the dictionary D ∈ R^(n×m), where n is the size of a dictionary atom and m is the number of atoms per sub-dictionary. To guarantee the over-completeness of the dictionary while limiting computational complexity, n = 64 and m = 256 are usually taken;
(1.3) Slide a window of size √n × √n over the low-frequency sub-band coefficients with a step size of 1, extracting blocks in order from the top left to the bottom right; stretch each block into a column and arrange the columns in order to form a matrix;
(1.4) Train the dictionary D on the above matrix with the K-SVD algorithm, and save the dictionary;
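Steps (1.3)-(1.4) can be sketched in Python. This is a minimal illustration under assumptions, not the patented implementation: the toy image, dictionary size and iteration counts are chosen only to keep the example small, and `omp` is a simple greedy orthogonal matching pursuit used as the sparse-coding stage of a compact K-SVD loop.

```python
import numpy as np

def extract_patches(band, p=8):
    """Step (1.3): slide a p x p window with stride 1 over a 2-D sub-band
    and stack each patch as one column of a (p*p) x P matrix."""
    H, W = band.shape
    cols = [band[i:i + p, j:j + p].ravel()
            for i in range(H - p + 1) for j in range(W - p + 1)]
    return np.array(cols).T  # shape (p*p, number_of_patches)

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms of D to
    approximate y, re-solving least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def ksvd(V, m=256, k=4, iters=5, seed=0):
    """Step (1.4), compact K-SVD: alternate OMP coding of all columns
    with per-atom rank-1 SVD updates of the dictionary and coefficients."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((V.shape[0], m))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    for _ in range(iters):
        A = np.column_stack([omp(D, v, k) for v in V.T])
        for j in range(m):
            users = np.nonzero(A[j])[0]     # columns using atom j
            if users.size == 0:
                continue
            # error matrix without atom j's contribution
            E = V[:, users] - D @ A[:, users] + np.outer(D[:, j], A[j, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            A[j, users] = s[0] * Vt[0]
    return D

band = np.random.default_rng(1).standard_normal((16, 16))  # toy low-frequency sub-band
V = extract_patches(band)          # 64 x 81 patch matrix
D = ksvd(V, m=32, k=3, iters=2)    # small dictionary for the toy example
```

In practice the patent's n = 64, m = 256 would mean 8 × 8 patches and a 64 × 256 dictionary; the smaller sizes above only keep the toy run fast.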
2 Image fusion part:
(2.1) Decompose the source images with the NSCT according to the method of step (1.1);
(2.2) Fuse the low-frequency sub-band coefficients of the source images in the following steps:
1. Arrange the low-frequency sub-band of each source image into a matrix V_k, k = 1, …, K, according to the method of step (1.3);
2. Express the matrices V_k of all the source images jointly as
[V_1; V_2; …; V_K] = [D, D, 0, …, 0; D, 0, D, …, 0; …; D, 0, 0, …, D] · [α_C; α_1^u; α_2^u; …; α_K^u] (1)
where α_C denotes the common sparse representation coefficient, which is contained in all the source images; α_k^u denotes the unique sparse representation coefficient of the k-th image, which is contained only in the k-th source image; 0 denotes the n × m all-zero matrix; and D is the dictionary trained in step (1.4).
Equation (1) can then be abbreviated as
V = D′α (2)
3. To make α in equation (2) as sparse as possible, solve the following with the orthogonal matching pursuit (OMP) algorithm:
α̂ = arg min ||α||_0 s.t. ||V − D′α||_2 ≤ ε (3)
4. Fuse the low-frequency sub-band coefficients according to the contribution of each source image to the fusion:
α_f = α_C + ∑_(k=1)^K (n_k / ∑_(i=1)^K n_i) α_k^u (4)
where n_k denotes the activity level of the unique coefficient α_k^u, which reflects the energy, i.e. the importance, of the feature:
n_k = ||α_k^u||_1 (5)
5. The fused low-frequency sub-band coefficients can be reconstructed as
V_f = D α_f (6)
6. Traverse the matrix V_f, arrange each column of the matrix into a block of size √n × √n, put the blocks back at their corresponding positions in the order of extraction, and average: that is, accumulate the sub-band coefficients at the same position and divide by the number of accumulations, obtaining the fused low-frequency sub-band coefficients of the source images.
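The six steps of (2.2) can be sketched end to end in Python. This is a minimal illustration under stated assumptions, not the patented implementation: the equation images for (1), (4) and (5) are not reproduced in the text, so the joint dictionary layout, the activity measure n_k = ||α_k^u||_1 and the weighted rule α_f = α_C + Σ_k (n_k/Σ_i n_i)·α_k^u are the editor's reading of the surrounding description; the `omp` routine is a simple greedy stand-in, and all names and toy sizes are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit for one column."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def joint_dictionary(D, K):
    """Assumed layout of D' in equation (2): block-row k is
    [D, 0, .., D, .., 0], so that V_k = D @ (alpha_C + alpha_k_unique)."""
    n, m = D.shape
    rows = [np.hstack([D] + [D if j == k else np.zeros((n, m)) for j in range(K)])
            for k in range(K)]
    return np.vstack(rows)  # shape (K*n, (K+1)*m)

def fuse_low_frequency(D, Vs, k_sparse=4):
    """Steps 2-5: jointly code the stacked patch matrices, split alpha into
    common/unique parts, weight unique parts by their L1 activity
    (assumed rule), and reconstruct V_f = D @ alpha_f."""
    K, (n, m) = len(Vs), D.shape
    Dp = joint_dictionary(D, K)
    V = np.vstack(Vs)                                    # stack V_1..V_K
    A = np.column_stack([omp(Dp, v, k_sparse) for v in V.T])
    alpha_c = A[:m]
    uniques = [A[(k + 1) * m:(k + 2) * m] for k in range(K)]
    acts = np.array([np.abs(u).sum() for u in uniques])  # n_k = ||alpha_k^u||_1
    w = acts / acts.sum() if acts.sum() > 0 else np.full(K, 1.0 / K)
    alpha_f = alpha_c + sum(w[k] * uniques[k] for k in range(K))
    return D @ alpha_f

def patches(band, p=8):
    """Step 1: stride-1 p x p patches stacked as columns."""
    H, W = band.shape
    return np.array([band[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)]).T

def reassemble(Vf, shape, p=8):
    """Step 6: put each column back as a p x p block at its extraction
    position and average the overlapping contributions."""
    out, counts = np.zeros(shape), np.zeros(shape)
    H, W = shape
    idx = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            out[i:i + p, j:j + p] += Vf[:, idx].reshape(p, p)
            counts[i:i + p, j:j + p] += 1
            idx += 1
    return out / counts

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 96)); D /= np.linalg.norm(D, axis=0)
bands = [rng.standard_normal((12, 12)) for _ in range(2)]   # two toy sub-bands
Vf = fuse_low_frequency(D, [patches(b) for b in bands])
fused_band = reassemble(Vf, (12, 12))
```

Note that `reassemble` exactly inverts `patches` when applied to an unmodified patch matrix, since averaging identical overlapping copies returns the original values.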
(2.3) Fuse the high-frequency sub-band coefficients in the following 2 steps:
1. Compute the directional sub-band information of each source image at scale 2^(-l), where l is the number of directional decompositions at scale 2^(-l); empirically, 2 ≤ l ≤ 4:
S_k^l(n, m) = ∑_i |V_(l,i)^k(n, m)| (7)
where V_(l,i)^k(n, m) denotes the directional sub-band coefficient value at scale 2^(-l), direction i, and pixel location (n, m).
2. Fuse by choosing the maximum of the sum of absolute values of the directional sub-bands at the same scale:
V_(l,i)^F(n, m) = V_(l,i)^(k*)(n, m), k* = arg max_k S_k^l(n, m) (8)
where 1 ≤ l ≤ J and 1 ≤ i ≤ l_j; V_(l,i)^F(n, m) and V_(l,i)^(k*)(n, m) denote the directional sub-band coefficient values of the fused image and of the k*-th source image, respectively, at scale 2^(-l), direction i, and pixel location (n, m); k* denotes the index, among the K source images, of the source image whose directional sub-band information at scale l is maximal; S_k^l(n, m) denotes the sub-band information of the k-th source image at scale l, obtained from the definition in equation (7).
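The high-frequency rule of step (2.3) can be sketched as follows. This is a sketch under the reading that the "directional sub-band information" of equation (7) is the per-pixel sum of absolute coefficient values over all directions at one scale, and that the winning source's coefficients are copied in every direction at that scale; since the equation images are not reproduced, that reading, plus all names and toy sizes, are assumptions.

```python
import numpy as np

def fuse_directional_subbands(subbands):
    """subbands[k][i] is the coefficient array of source k, direction i,
    at one scale. Per pixel: S_k = sum_i |V_{l,i}^k| (assumed eq. (7)),
    k* = argmax_k S_k, and the k*-th source's coefficients are copied in
    every direction at this scale (assumed eq. (8))."""
    K, n_dir = len(subbands), len(subbands[0])
    # per-pixel directional information for each source, shape (K, H, W)
    S = np.stack([sum(np.abs(b) for b in subbands[k]) for k in range(K)])
    k_star = np.argmax(S, axis=0)            # winning source per pixel
    fused = []
    for i in range(n_dir):
        stacked = np.stack([subbands[k][i] for k in range(K)])
        fused.append(np.take_along_axis(stacked, k_star[None], axis=0)[0])
    return fused

# toy example: two sources, three directions, 4 x 4 coefficient arrays
rng = np.random.default_rng(2)
src_a = [rng.standard_normal((4, 4)) for _ in range(3)]
src_b = [rng.standard_normal((4, 4)) for _ in range(3)]
fused = fuse_directional_subbands([src_a, src_b])
```

Selecting one winner per pixel for all directions at a scale (rather than per direction) keeps the fused directional coefficients mutually consistent, which matches the description's claim that salient features show large modulus values on all directional sub-bands at the same scale.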
(2.4) Apply the inverse NSCT to the fused low-frequency sub-band coefficients and the fused high-frequency sub-band coefficients to obtain the final fused image F.
The beneficial effects of the invention are as follows:
The present invention can effectively improve the sparsity of the image low-frequency sub-band after the NSCT: by learning a dictionary in the NSCT domain and solving for the sparse representation coefficients of the low-frequency sub-band, the sparsity of the low-frequency sub-band after the NSCT is significantly improved, which is more favorable for extracting the essential features of the image and thus improves the image fusion result.
The present invention also has advantages over fusion methods based on NSCT alone or on sparse representation alone: compared with an NSCT-only fusion method its sparsity is higher, so image features are extracted more effectively; compared with a sparse-representation-only fusion method it has multi-scale, multi-directional analysis ability, which better matches the way the human eye observes images. The method of the invention therefore achieves better fusion results than both classes of methods.
Description of drawings
Fig. 1 illustrates the extraction of common and unique features, wherein (a) is the infrared source image, (b) is the visible-light source image, (c) is the infrared image low-frequency sub-band, (d) is the visible image low-frequency sub-band, (e) is the unique feature of the infrared low-frequency sub-band, (f) is the unique feature of the visible low-frequency sub-band, and (g) is the common feature of the infrared and visible low-frequency sub-bands.
Fig. 2 shows the fusion results of several methods, wherein (a) is the DWT method, (b) is the NSCT method, (c) is the SOMP method, (d) is the JSR method, and (e) is the proposed method.
Fig. 3 is a flowchart of the image fusion method of the present invention.
Embodiment
An image fusion method based on NSCT and sparse representation mainly comprises two parts, dictionary training and image fusion, described as follows:
1 Dictionary training part:
Suppose the source images have been registered; there are K source images of size M × N, denoted I_1, …, I_K.
(1.1) Decompose each training image with the NSCT. After a J-level NSCT decomposition, 1 low-frequency sub-band coefficient matrix and ∑_(j=1)^J 2^(l_j) high-frequency directional sub-band coefficient matrices are obtained, where l_j is the number of directional decomposition levels at scale j. The training images can be the source images themselves, or images acquired in the same way as the source images;
(1.2) Initialize the dictionary D ∈ R^(n×m), where n is the size of a dictionary atom and m is the number of atoms per sub-dictionary;
(1.3) Slide a window of size √n × √n over the low-frequency sub-band coefficients with a step size of 1, extracting blocks in order from the top left to the bottom right; stretch each block into a column and arrange the columns in order to form a matrix;
(1.4) Train the dictionary D on the above matrix with the K-SVD algorithm, and save the dictionary;
2 Image fusion part:
(2.1) Decompose the source images with the NSCT as in step (1.1);
(2.2) Fuse the low-frequency sub-band coefficients of the source images in the following steps:
1. Arrange the low-frequency sub-band of each source image into a matrix V_k, k = 1, …, K, according to the method of step (1.3);
2. Express the matrices V_k of all the source images jointly as
[V_1; V_2; …; V_K] = [D, D, 0, …, 0; D, 0, D, …, 0; …; D, 0, 0, …, D] · [α_C; α_1^u; α_2^u; …; α_K^u] (1)
where α_C denotes the common sparse representation coefficient, which is contained in all the source images; α_k^u denotes the unique sparse representation coefficient, which is contained only in the corresponding source image; and 0 denotes the n × m all-zero matrix.
Equation (1) can then be abbreviated as
V = D′α (2)
3. To make α in equation (2) as sparse as possible, solve the following with the orthogonal matching pursuit (OMP) algorithm:
α̂ = arg min ||α||_0 s.t. ||V − D′α||_2 ≤ ε (3)
4. Fuse the low-frequency sub-band coefficients according to the contribution of each source image to the fusion:
α_f = α_C + ∑_(k=1)^K (n_k / ∑_(i=1)^K n_i) α_k^u (4)
where n_k denotes the activity level of the unique coefficient α_k^u, which reflects the energy, i.e. the importance, of the feature:
n_k = ||α_k^u||_1 (5)
5. The fused low-frequency sub-band coefficients can be reconstructed as
V_f = D α_f (6)
6. Traverse the matrix V_f, arrange each column of the matrix into a block of size √n × √n, put the blocks back at their corresponding positions in the order of extraction, and average: that is, accumulate the sub-band coefficients at the same position and divide by the number of accumulations, obtaining the fused low-frequency sub-band coefficients of the source images.
(2.3) Because the NSCT basis has rich directions and shapes, it can capture salient features of the image in the high-frequency directional sub-bands, such as edges, linear features and region boundaries. These salient features exhibit large modulus values on all directional sub-bands at the same scale, while the modulus values of non-salient coefficients are almost zero. Therefore, fuse the high-frequency sub-band coefficients in the following 2 steps:
1. Compute the directional sub-band information of each source image at scale 2^(-l):
S_k^l(n, m) = ∑_i |V_(l,i)^k(n, m)| (7)
where V_(l,i)^k(n, m) denotes the directional sub-band coefficient value at scale 2^(-l), direction i, and pixel location (n, m).
2. Fuse by choosing the maximum of the sum of absolute values of the directional sub-bands at the same scale:
V_(l,i)^F(n, m) = V_(l,i)^(k*)(n, m), k* = arg max_k S_k^l(n, m) (8)
where 1 ≤ l ≤ J and 1 ≤ i ≤ l_j.
(2.4) Apply the inverse NSCT to the fused low-frequency sub-band coefficients and the fused high-frequency sub-band coefficients to obtain the final fused image F.
The present invention is further described below with reference to the drawings and embodiments.
Example 1. Using sparse representation to improve the sparsity of the NSCT low-frequency sub-band and to extract the unique and common features of the source images.
1 The dictionary learning steps in this example are as follows:
(1.1) Decompose the infrared and visible source images with the NSCT, using the "9-7" pyramid decomposition and the "c-d" directional filter bank; the numbers of directions taken at the high-frequency levels are, in order, 2^4, 2^3, 2^2 and 2^2;
(1.2) Initialize the dictionary D ∈ R^(64×256);
(1.3) Slide an 8 × 8 window with a step size of 1 over the low-frequency sub-band coefficients, extracting blocks in order from the top left to the bottom right; stretch each block into a column and arrange the columns in order to form a matrix; the infrared low-frequency sub-band matrix is denoted V_1 and the visible-light low-frequency sub-band matrix V_2;
(1.4) Train the dictionary D on the above matrices with the K-SVD algorithm, taking the commonly used value ε = 0.01 for the K-SVD error tolerance, and save the dictionary;
2 The steps in this example for sparsely representing the low-frequency sub-bands and extracting the unique and common features of the source images are as follows:
(2.1) Express the low-frequency sub-band matrices of the infrared and visible source images as
[V_1; V_2] = [D, D, 0; D, 0, D] · [α_C; α_1^u; α_2^u] (9)
where α_C denotes the common sparse representation coefficient, which is contained in both source images; α_1^u and α_2^u denote the unique sparse representation coefficients, contained only in the infrared source image and the visible source image respectively; and 0 denotes the 64 × 256 all-zero matrix.
Equation (9) can then be abbreviated as
V = D′α (10)
(2.2) Solve the following with the OMP algorithm:
α̂ = arg min ||α||_0 s.t. ||V − D′α||_2 ≤ ε (11)
(2.3) Compute the extracted common feature matrix VI_C of the source images, the unique feature matrix VI_1^u of the infrared image and the unique feature matrix VI_2^u of the visible image:
VI_C = D × α_C, VI_1^u = D × α_1^u, VI_2^u = D × α_2^u (12)
(2.4) Traverse each of the matrices VI_C, VI_1^u and VI_2^u, arrange each column of the matrix into an 8 × 8 block, put the blocks back at their corresponding positions in the order of extraction, and average: that is, accumulate the sub-band coefficients at the same position and divide by the number of accumulations, obtaining the common low-frequency sub-band coefficient image I_C of the infrared and visible images and the respective unique low-frequency sub-band coefficient images of the infrared and visible images.
Fig. 1 (e)-(g) show the images reconstructed after extracting the unique and common features of the source-image low-frequency sub-band coefficients. The darker a pixel of the image, the closer its value is to 0; the more zero values, the sparser the representation. It can be seen that the original infrared and visible images have poor sparsity, and the essential features of the source images are not extracted. The sparsity achieved by the present invention is clearly improved: features unique to the infrared low-frequency sub-band, such as the person, features unique to the visible low-frequency sub-band, such as the roof, road and tree contours, and features common to both, such as building walls and roads, are extracted, which benefits subsequent processing such as fusion.
Example 2. An image fusion example of the present invention.
The proposed method is compared with the traditional DWT-based image fusion method, an NSCT-based image fusion method with currently superior performance [reference source not found], and the sparse-representation-based fusion methods SOMP [reference source not found] and JSR [reference source not found]. The first two are transform-domain methods; the latter two are fusion methods based on sparse representation in the image domain. The experiments use registered infrared and visible images of size 240 × 320. The DWT uses a 3-level db4 wavelet decomposition; the NSCT parameters are the same as in the cited document [reference source not found], i.e. the "9-7" pyramid decomposition and the "c-d" directional filter bank, with the numbers of directions taken at the high-frequency levels being, in order, 2^4, 2^3, 2^2 and 2^2. The dictionary size for sparse representation is 64 × 256, and ε = 0.01.
The implementation steps of the image fusion method based on NSCT and sparse representation in this example are as follows:
The dictionary learning steps are the same as in Example 1; the image fusion steps are:
(1) Decompose the infrared and visible source images with the NSCT; the NSCT parameters are consistent with the learning part, i.e. the "9-7" pyramid decomposition and the "c-d" directional filter bank are adopted, with the numbers of directions taken at the high-frequency levels being, in order, 2^4, 2^3, 2^2 and 2^2;
(2) Fuse the low-frequency sub-band coefficients of the source images:
1. Obtain the common coefficient α_C of the infrared and visible sources, the unique feature coefficient α_1^u of the infrared source image and the unique feature coefficient α_2^u of the visible image by steps (2.1) and (2.2) of Example 1;
2. Fuse the low-frequency sub-band coefficients according to the following formula:
α_f = α_C + ∑_(k=1)^2 (n_k / ∑_(i=1)^2 n_i) α_k^u (13)
where n_k denotes the activity level of the unique coefficient α_k^u, which reflects the energy, i.e. the importance, of the feature:
n_k = ||α_k^u||_1 (14)
3. The fused low-frequency sub-band coefficients can be reconstructed as
V_f = D α_f (15)
4. Traverse the matrix V_f, arrange each column of the matrix into an 8 × 8 block, put the blocks back at their corresponding positions in the order of extraction, and average: that is, accumulate the sub-band coefficients at the same position and divide by the number of accumulations, obtaining the fused low-frequency sub-band coefficients of the source images;
(3) Fuse the high-frequency sub-band coefficients:
1. Compute the directional sub-band information of the source images at scale 2^(-l):
S_k^l(n, m) = ∑_i |V_(l,i)^k(n, m)| (16)
where 1 ≤ l ≤ 4 and 1 ≤ i ≤ l_j (l_1 = 2^4, l_2 = 2^3, l_3 = 2^2, l_4 = 2^2).
2. Fuse by choosing the maximum of the sum of absolute values of the directional sub-bands at the same scale, as in equation (8).
(4) Apply the inverse NSCT to the fused low-frequency sub-band coefficients and the fused high-frequency sub-band coefficients to obtain the final fused image F.
The fusion results are shown in Fig. 2. As can be seen from Fig. 2, the contrast between the house and the surrounding scene is weakened in the DWT result; in the NSCT result the depth of the trees and road is still weak and the texture unclear, and the person and house are not prominent; SOMP and JSR make the person prominent, but over-smooth regions with obvious texture features such as leaves; the proposed method fuses ground objects such as railings, houses, leaves and roads more clearly, the person is also distinct, the continuity is good, and the visual effect is the best.
To quantitatively evaluate the performance of the different fusion methods for infrared and visible image fusion, the indices rooted cross entropy RCE (Rooted Cross Entropy), Q_W, Q_E and Q_abf are adopted for comparison. The index RCE evaluates the overall difference between the fused image and the source images, the smaller the better; Q_W is a window-weighted fusion quality evaluation of the source images and the fused image; Q_E and Q_abf reflect, locally and globally respectively, how well the fused image preserves the edges of the source images. The values of Q_W, Q_E and Q_abf all lie in [0, 1], and the closer to 1, the better the fusion quality.
Table 1. Performance indices of the several fusion methods
Table 1 lists the performance indices of the several fusion methods (bold indicates the best value). From the data in Table 1 it can be seen that, compared with the methods that fuse directly in the transform domain (DWT and NSCT) and the single-scale sparse-representation methods in the image domain (SOMP and JSR), the proposed method can both perform multi-scale analysis of the source images and further improve the sparsity of the representation coefficients, strengthening the detail expressiveness of the fused image and extracting and fusing more useful information from the source images, and therefore achieves better fusion results.
Claims (5)
1. An image fusion method based on NSCT and sparse representation, characterized by comprising the following steps:
Suppose the source images have been registered; there are K source images of size M × N, denoted I_1, …, I_K.
(1.1) decompose each training image with the NSCT; after a J-level NSCT decomposition, 1 low-frequency sub-band coefficient matrix and ∑_(j=1)^J 2^(l_j) high-frequency directional sub-band coefficient matrices are obtained, where l_j is the number of directional decomposition levels at scale j;
(1.2) initialize the dictionary D ∈ R^(n×m), where n is the size of a dictionary atom and m is the number of atoms per sub-dictionary;
(1.3) slide a window of size √n × √n over the low-frequency sub-band coefficients with a step size of 1, extracting blocks in order from the top left to the bottom right; stretch each block into a column and arrange the columns in order to form a matrix;
(1.4) train the dictionary D on the above matrix with the K-SVD algorithm, and save the dictionary;
(2.1) decompose the source images with the NSCT according to the method of step (1.1);
(2.2) fuse the low-frequency sub-band coefficients of the source images in the following steps:
1. arrange the low-frequency sub-band of each source image into a matrix V_k, k = 1, …, K, according to the method of step (1.3);
2. express the matrices V_k of all the source images jointly as
[V_1; V_2; …; V_K] = [D, D, 0, …, 0; D, 0, D, …, 0; …; D, 0, 0, …, D] · [α_C; α_1^u; α_2^u; …; α_K^u] (1)
where α_C denotes the common sparse representation coefficient, which is contained in all the source images; α_k^u denotes the unique sparse representation coefficient of the k-th image, which is contained only in the k-th source image; and 0 denotes the n × m all-zero matrix;
equation (1) can then be abbreviated as
V = D′α (2)
3. to make α in equation (2) as sparse as possible, solve the following with the orthogonal matching pursuit algorithm:
α̂ = arg min ||α||_0 s.t. ||V − D′α||_2 ≤ ε (3)
4. fuse the low-frequency sub-band coefficients according to the contribution of each source image to the fusion:
α_f = α_C + ∑_(k=1)^K (n_k / ∑_(i=1)^K n_i) α_k^u (4)
where n_k denotes the activity level of the unique coefficient α_k^u, which reflects the energy, i.e. the importance, of the feature:
n_k = ||α_k^u||_1 (5)
5. the fused low-frequency sub-band coefficients can be reconstructed as
V_f = D α_f (6)
6. traverse the matrix V_f, arrange each column of the matrix into a block of size √n × √n, put the blocks back at their corresponding positions in the order of extraction, and average: that is, accumulate the sub-band coefficients at the same position and divide by the number of accumulations, obtaining the fused low-frequency sub-band coefficients of the source images;
(2.3) fuse the high-frequency sub-band coefficients in the following 2 steps:
1. compute the directional sub-band information of each source image at scale 2^(-l), where l is the number of directional decompositions at scale 2^(-l):
S_k^l(n, m) = ∑_i |V_(l,i)^k(n, m)| (7)
where V_(l,i)^k(n, m) denotes the directional sub-band coefficient value at scale 2^(-l), direction i, and pixel location (n, m);
2. fuse by choosing the maximum of the sum of absolute values of the directional sub-bands at the same scale:
V_(l,i)^F(n, m) = V_(l,i)^(k*)(n, m), k* = arg max_k S_k^l(n, m) (8)
where 1 ≤ l ≤ J and 1 ≤ i ≤ l_j; V_(l,i)^F(n, m) and V_(l,i)^(k*)(n, m) denote the directional sub-band coefficient values of the fused image and of the k*-th source image, respectively, at scale 2^(-l), direction i, and pixel location (n, m); k* denotes the index, among the K source images, of the source image whose directional sub-band information at scale l is maximal; S_k^l(n, m) denotes the sub-band information of the k-th source image at scale l, obtained from the definition in equation (7).
2. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said J is taken as 3 to 5.
3. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said training images are the source images themselves, or images acquired in the same way as the source images.
4. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: said n = 64 and m = 256.
5. The image fusion method based on NSCT and sparse representation according to claim 1, characterized in that: the value range of said l is 2 ≤ l ≤ 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210529022.8A CN102968781B (en) | 2012-12-11 | 2012-12-11 | Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210529022.8A CN102968781B (en) | 2012-12-11 | 2012-12-11 | Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102968781A true CN102968781A (en) | 2013-03-13 |
CN102968781B CN102968781B (en) | 2015-01-28 |
Family
ID=47798907
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210529022.8A Expired - Fee Related CN102968781B (en) | 2012-12-11 | 2012-12-11 | Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102968781B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104091343A (en) * | 2014-07-22 | 2014-10-08 | 西北工业大学 | Image quality evaluation method based on sparse structure |
CN104200451A (en) * | 2014-08-28 | 2014-12-10 | 西北工业大学 | Image fusion method based on non-local sparse K-SVD algorithm |
CN105894476A (en) * | 2016-04-21 | 2016-08-24 | 重庆大学 | Fused SAR image noise reduction processing method based on dictionary learning |
CN106056564A (en) * | 2016-05-27 | 2016-10-26 | 西华大学 | Edge sharp image fusion method based on joint thinning model |
CN106683066A (en) * | 2017-01-13 | 2017-05-17 | 西华大学 | Image fusion method based on joint sparse model |
CN106709896A (en) * | 2016-12-08 | 2017-05-24 | 江苏大学 | Medical image fusion method |
CN106886977A (en) * | 2017-02-08 | 2017-06-23 | 徐州工程学院 | A kind of many figure autoregistrations and anastomosing and splicing method |
CN106981058A (en) * | 2017-03-29 | 2017-07-25 | 武汉大学 | A kind of optics based on sparse dictionary and infrared image fusion method and system |
CN108765249A (en) * | 2018-05-18 | 2018-11-06 | 西安理工大学 | The color watermark method being combined based on direction anisotropic and singular value decomposition |
CN108898568A (en) * | 2018-04-25 | 2018-11-27 | 西北大学 | Image composition method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101303764A (en) * | 2008-05-16 | 2008-11-12 | 西安电子科技大学 | Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave |
CN102521818A (en) * | 2011-12-05 | 2012-06-27 | 西北工业大学 | Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101303764A (en) * | 2008-05-16 | 2008-11-12 | 西安电子科技大学 | Method for self-adaption amalgamation of multi-sensor image based on non-lower sampling profile wave |
CN102521818A (en) * | 2011-12-05 | 2012-06-27 | 西北工业大学 | Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform) |
Non-Patent Citations (2)
Title |
---|
SONGFENG YIN ET AL: "《Infrared and Visible Image Fusion based on NSCT and Fuzzy Logic》", 《PROCEEDING OF THE 2010 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION》, 7 August 2010 (2010-08-07), pages 671 - 675 * |
XING Su-xia et al.: "Image fusion based on NSCT transform and its robustness analysis", Infrared Technology, vol. 33, no. 1, 31 January 2011 (2011-01-31), pages 45 - 55 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104091343B (en) * | 2014-07-22 | 2017-01-18 | Northwestern Polytechnical University | Image quality evaluation method based on sparse structure |
CN104091343A (en) * | 2014-07-22 | 2014-10-08 | Northwestern Polytechnical University | Image quality evaluation method based on sparse structure |
CN104200451A (en) * | 2014-08-28 | 2014-12-10 | Northwestern Polytechnical University | Image fusion method based on non-local sparse K-SVD algorithm |
CN104200451B (en) * | 2014-08-28 | 2017-05-10 | Northwestern Polytechnical University | Image fusion method based on non-local sparse K-SVD algorithm |
CN105894476B (en) * | 2016-04-21 | 2018-07-27 | Chongqing University | SAR image denoising method based on dictionary learning fusion |
CN105894476A (en) * | 2016-04-21 | 2016-08-24 | Chongqing University | SAR image denoising method based on dictionary learning fusion |
CN106056564A (en) * | 2016-05-27 | 2016-10-26 | Xihua University | Edge-sharp image fusion method based on joint sparse model |
CN106056564B (en) * | 2016-05-27 | 2018-10-16 | Xihua University | Edge-sharp image fusion method based on joint sparse model |
CN106709896A (en) * | 2016-12-08 | 2017-05-24 | Jiangsu University | Medical image fusion method |
CN106683066A (en) * | 2017-01-13 | 2017-05-17 | Xihua University | Image fusion method based on joint sparse model |
CN106886977A (en) * | 2017-02-08 | 2017-06-23 | Xuzhou Institute of Technology | Multi-image automatic registration and fusion stitching method |
CN106981058A (en) * | 2017-03-29 | 2017-07-25 | Wuhan University | Optical and infrared image fusion method and system based on sparse dictionary |
CN108898568A (en) * | 2018-04-25 | 2018-11-27 | Northwest University | Image synthesis method and device |
CN108765249A (en) * | 2018-05-18 | 2018-11-06 | Xi'an University of Technology | Color watermarking method based on combination of directional anisotropy and singular value decomposition |
CN108765249B (en) * | 2018-05-18 | 2022-01-07 | Xi'an University of Technology | Color watermarking method based on combination of directional anisotropy and singular value decomposition |
Also Published As
Publication number | Publication date |
---|---|
CN102968781B (en) | 2015-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102968781B (en) | Image fusion method based on NSCT (Non Subsampled Contourlet Transform) and sparse representation | |
CN101847257B (en) | Image denoising method based on non-local means and multi-level directional images | |
CN107341786A (en) | Infrared and visible light image fusion method based on wavelet transform and joint sparse representation | |
CN100557633C (en) | Multi-source image fusion method based on comb wavelet and Laplacian pyramid decomposition | |
CN100573559C (en) | Adaptive multi-scale texture image segmentation method based on wavelet and mean shift | |
CN103093441B (en) | Image denoising method based on transform-domain non-local means and bivariate model | |
CN101493935B (en) | Synthetic aperture radar image segmentation method based on shearlet hidden Markov model | |
CN109658351B (en) | Hyperspectral image denoising method combining L0 gradient constraint and local low-rank matrix recovery | |
CN102063715B (en) | Method for fusing typhoon cloud pictures based on NSCT (Nonsubsampled Controurlet Transformation) and particle swarm optimization algorithm | |
CN107657217A (en) | Fusion method for infrared and visible light video based on moving object detection | |
CN104021537A (en) | Infrared and visible image fusion method based on sparse representation | |
Wang | A multi-scale approach for delineating individual tree crowns with very high resolution imagery | |
CN106981058A (en) | Optical and infrared image fusion method and system based on sparse dictionary | |
CN102521818A (en) | Fusion method of SAR (Synthetic Aperture Radar) images and visible light images on the basis of NSCT (Non Subsampled Contourlet Transform) | |
CN104008537A (en) | Novel noise image fusion method based on CS-CT-CHMM | |
CN101615290A (en) | Face image super-resolution reconstruction method based on canonical correlation analysis | |
CN102682306B (en) | Wavelet pyramid polarization texture primitive feature extracting method for synthetic aperture radar (SAR) images | |
CN102855616A (en) | Image fusion method based on multi-scale dictionary learning | |
CN101398487A (en) | Method for reducing synthetic aperture radar speckle noise | |
CN103679661A (en) | Saliency-analysis-based adaptive remote sensing image fusion method | |
CN110570387B (en) | Image fusion method based on feature level Copula model similarity | |
CN103839243A (en) | Multi-channel satellite cloud image fusion method based on Shearlet transform | |
CN104268833A (en) | Image fusion method based on shift-invariant shearlet transform | |
CN103077507B (en) | Beta algorithm-based multiscale SAR (Synthetic Aperture Radar) image denoising method | |
CN104680182B (en) | Polarimetric SAR classification method based on NSCT and discriminative dictionary learning | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2015-01-28; Termination date: 2015-12-11 |
EXPY | Termination of patent right or utility model |