CN1948995A - Multispectral and panchromatic image fusion method of supercomplex principal element weighting - Google Patents

Multispectral and panchromatic image fusion method of supercomplex principal element weighting Download PDF

Info

Publication number
CN1948995A
CN1948995A, CN200610118103A
Authority
CN
China
Prior art keywords
image
supercomplex
matrix
multispectral
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200610118103
Other languages
Chinese (zh)
Other versions
CN100465661C (en)
Inventor
杨惠娟 (Yang Huijuan)
张建秋 (Zhang Jianqiu)
胡波 (Hu Bo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CNB2006101181033A priority Critical patent/CN100465661C/en
Publication of CN1948995A publication Critical patent/CN1948995A/en
Application granted granted Critical
Publication of CN100465661C publication Critical patent/CN100465661C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a multispectral and panchromatic image fusion method based on hypercomplex (quaternion) principal component weighting. It comprises the following steps: each pixel value of the panchromatic image is vectorized; the RGB multispectral image and the vectorized panchromatic image are each represented by a quaternion matrix; the quaternion singular value decomposition is applied to obtain the singular values of the two quaternion matrices; principal component analysis is performed on these singular values; and the eigenvector corresponding to the largest eigenvalue is used as the weights for the weighted image fusion. Because the method treats the RGB multispectral image as a vector-valued image, it avoids the color distortion of the IHS and PCA fusion methods. Analysis and simulation show that the method produces no visible spectral distortion and outperforms the IHS, PCA and wavelet transform fusion methods.

Description

Multispectral and panchromatic image fusion method of supercomplex principal element weighting
Technical field
The invention belongs to the technical field of image fusion, and specifically relates to a multispectral and panchromatic image fusion method based on hypercomplex (quaternion) principal component weighting.
Background art
At present, earth observation satellites provide more and more multi-spatial, multi-resolution, multi-temporal and multispectral images covering the same area, supplying rich data for topographic mapping and map updating, land-use classification, crop and forest classification, ice/snow and flood monitoring, and so on.
The fusion method based on the intensity-hue-saturation (IHS) transform [1] has become a standard procedure in image analysis; it can be used for color enhancement of highly correlated image data and for fusion processing that improves spatial resolution. The standard IHS transform method is suitable for the situation in which the panchromatic image and the intensity component obtained from the multispectral image are highly correlated. When the spectral range of the panchromatic image does not cover all bands of the multispectral image, or when the panchromatic and multispectral images are not acquired at the same time, the intensity component produced by the IHS transform differs greatly from the panchromatic image, and the fusion result of the IHS transform method then exhibits serious spectral distortion. Likewise, the principal component analysis (PCA) fusion method simply uses the high-resolution panchromatic image to replace the first principal component of the low-resolution multispectral image; part of the spectral information carried by that first principal component is therefore lost, and the spectral distortion of the fused image is severe [2]. The high-pass filtering fusion method preserves the information of the multispectral image, but filters out much of the texture information when filtering the high-resolution image [3]. The wavelet transform fusion method can preserve the spectral information of the multispectral image well, but its enhancement effect depends on the number of wavelet decomposition levels, and blocking artifacts easily appear in the result of the inverse transform [4].
All of the above methods perform a simple substitution of some component and do not fully take the complete spectral information of the panchromatic image into account. Reference [5] points out that a real vector signal exists only through all of its components together; ignoring or altering any one component makes it impossible to reconstruct the original vector signal. Moreover, simply treating a vector signal as a collection of separate component signals may destroy the geometric relationship that should be maintained between the components of the original vector signal, and the destruction of this geometric relationship distorts the signal after processing.
Summary of the invention
We know that each pixel of a multispectral image is represented by a red (R), green (G), blue (B) tristimulus vector signal. If some transform is applied, such as the IHS or PCA substitution mentioned above, the particular spatial relationship among the RGB primaries may be destroyed, and the destruction of this relationship causes color distortion in the processed image. The image fusion method proposed by the present invention describes the RGB components of the multispectral image as a whole with a hypercomplex (quaternion) vector, so that the particular spatial relationship among the RGB primaries of the multispectral image is maintained in subsequent processing and color distortion of the processed image is avoided.
The image fusion method proposed by the present invention takes the complete spectral information of the multispectral image into account. While the multispectral image represented in RGB is described directly with quaternion vector pixels, each pixel value of the panchromatic image is vectorized, that is, expanded into a three-dimensional vector [6]; to make the magnitude of the vector pixel consistent with that of the scalar pixel, each pixel value of the panchromatic image is multiplied by 1/√3. The vectorized panchromatic image and the RGB multispectral image are then each modeled as a whole with quaternion matrices, the quaternion singular value decomposition is applied to the models, principal component analysis is carried out on the resulting characteristic information, and the panchromatic and multispectral images are fused by weighting. Because the complete information of both the panchromatic image and the multispectral image is used, the fusion result of the present invention does not produce color distortion. For this reason, the method of the invention is called the multispectral and panchromatic image fusion method of hypercomplex (quaternion) principal component weighting.
The steps of the image fusion method of the present invention are summarized as follows:
1) First, the low-resolution multispectral image is enlarged by a linear interpolation algorithm Z to the same size M × N as the panchromatic image, giving the images MS_i (i = 1, 2, 3).
2) Each pixel of the panchromatic image is expanded into a three-dimensional vector, giving the vectorized panchromatic image PSM_i (i = 1, 2, 3).
3) The pixels of MS_i and PSM_i (i = 1, 2, 3) are each modeled as a whole with quaternions according to formula (2).
4) The enlarged multispectral image MS_i (i = 1, 2, 3) and the vectorized panchromatic image PSM_i (i = 1, 2, 3), both represented as quaternion matrices, are partitioned into n × n blocks, and the quaternion singular value decomposition (QSVD) yields the n singular values of MS_i and of PSM_i, respectively.
5) Principal component analysis is applied to the two rows of singular values: the correlation matrix of the matrix formed by the two rows of singular values is computed, and its eigenvalues and eigenvectors are obtained. The eigenvector corresponding to the largest eigenvalue is used as the weights for the weighted image fusion.
In the above method, the interpolation algorithm Z uses nearest-neighbor interpolation: for every new pixel, the closest pixel in the original image is found and its gray value is assigned to the new pixel, so that the low-resolution multispectral image is simply expanded to the size M × N.
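A minimal sketch of this nearest-neighbor enlargement step, assuming the image bands are held in NumPy arrays; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def nearest_neighbor_upscale(band, out_shape):
    """Enlarge one image band to (M, N) by assigning to every new pixel the
    gray value of the closest pixel in the original image (the interpolation
    algorithm Z of the method)."""
    m_lo, n_lo = band.shape
    M, N = out_shape
    rows = np.minimum(np.arange(M) * m_lo // M, m_lo - 1)
    cols = np.minimum(np.arange(N) * n_lo // N, n_lo - 1)
    return band[np.ix_(rows, cols)]

# Example: enlarge the three bands of a low-resolution multispectral image
# ms_lo (shape m x n x 3) to the panchromatic size (M, N):
# MS = np.stack([nearest_neighbor_upscale(ms_lo[..., i], (M, N)) for i in range(3)], axis=-1)
```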
In the above method, the quaternion is used to model the multispectral image represented by the three color components R, G, B directly [7]:

q(x, y) = r(x, y) i + g(x, y) j + b(x, y) k        (1)

In formula (1), q(x, y) is the quaternion model of the pixel represented by the three color components R, G, B, and r(x, y), g(x, y), b(x, y) are the R, G and B components of the multispectral image, respectively. This expression describes the three RGB color components of the multispectral image as a single vector by means of a quaternion; whatever transform is applied afterwards, the integrity of this vector is preserved, so the relative positions among the RGB components of the vector pixel cannot change.
To better fuse the spatial information of the high-resolution panchromatic image with the spectral information of the low-resolution multispectral image, we model the panchromatic image and the multispectral image with quaternions separately. However, because the pixels of the panchromatic image are scalars, each scalar pixel of the panchromatic image has to be expanded into a three-dimensional vector pixel [6]; to keep the magnitude of the vector pixel consistent with that of the scalar pixel, each pixel value of the panchromatic image PAN is multiplied by 1/√3. This yields the quaternion models of the RGB multispectral image and of the vectorized panchromatic image:
QPAN(x,y) = \frac{1}{\sqrt{3}}\,PAN(x,y)\,i + \frac{1}{\sqrt{3}}\,PAN(x,y)\,j + \frac{1}{\sqrt{3}}\,PAN(x,y)\,k \qquad (2)
QZ(x,y) = R(x,y)\,i + G(x,y)\,j + B(x,y)\,k
where PAN(x, y) is each pixel value of the panchromatic image, and R(x, y), G(x, y), B(x, y) are the RGB components of the multispectral image, respectively. To obtain the characteristics of the quaternion models, the quaternion singular value decomposition is needed.
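A minimal sketch of the two quaternion models above, assuming each quaternion image is stored as an array whose last axis holds the (real, i, j, k) components; the storage layout and helper names are assumptions made for illustration.

```python
import numpy as np

def quaternion_from_rgb(ms):
    """Pure-quaternion model QZ(x, y) = R i + G j + B k of an RGB
    multispectral image ms of shape (M, N, 3), stored as an (M, N, 4)
    array ordered (real, i, j, k)."""
    M, N, _ = ms.shape
    q = np.zeros((M, N, 4), dtype=float)
    q[..., 1:] = ms            # real part stays zero for a pure quaternion
    return q

def quaternion_from_pan(pan):
    """Vectorized panchromatic model QPAN(x, y) = (1/sqrt(3)) PAN (i + j + k):
    the scalar pixel is expanded into a three-component vector pixel whose
    magnitude equals the scalar pixel value."""
    M, N = pan.shape
    q = np.zeros((M, N, 4), dtype=float)
    q[..., 1:] = (pan / np.sqrt(3.0))[..., np.newaxis]
    return q
```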
Reference [8] proves that a quaternion matrix admits a singular value decomposition. The quaternion singular value decomposition (QSVD) is defined as follows: for any quaternion matrix QZ ∈ H^{n×n} of rank n (the rank of a quaternion matrix QZ is n if and only if the rank of its complex adjoint matrix χ_QZ is 2n), there exist two unitary quaternion matrices U, V (a unitary quaternion matrix W ∈ H^{n×n} satisfies W W^H = W^H W = I_n, where I_n ∈ R^{n×n} is the identity matrix) such that

QZ = U \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} V^H

where Σ_r is a real diagonal matrix whose n nonzero entries are the singular values of QZ, and U ∈ H^{n×n}, V ∈ H^{n×n}.
The QSVD can also be written as

QZ = \sum_{i=1}^{n} \sigma_i\, u_i\, v_i^{H}

where the superscript H denotes the quaternion conjugate-transpose operator, u_i and v_i are the column vectors of the left and right singular matrices, and σ_i are the real singular values. From the relation between the quaternion matrices H^{n×n} and the complex matrices C^{2n×2n}, the QSVD algorithm can be derived as follows.
The singular values of the quaternion matrix QZ can be obtained from the SVD of its complex adjoint matrix χ_QZ. The SVD of the complex matrix χ_QZ is

\chi_{QZ} = U^{\chi_{QZ}} \begin{pmatrix} \Sigma_{2n} & 0 \\ 0 & 0 \end{pmatrix} \left( V^{\chi_{QZ}} \right)^{H} = \sum_{n'=1}^{2n} \sigma_{n'}\, u_{n'}^{\chi_{QZ}} \left( v_{n'}^{\chi_{QZ}} \right)^{H}

where U^{\chi_{QZ}} ∈ C^{2n×2n}, V^{\chi_{QZ}} ∈ C^{2n×2n}, u_{n'}^{\chi_{QZ}} and v_{n'}^{\chi_{QZ}} are the column vectors of the left and right singular matrices, and σ_{n'} are the real singular values. These column vectors have the block structure

u_{n'}^{\chi_{QZ}} = \begin{pmatrix} u'_{n'} \\ -\,(u''_{n'})^{*} \end{pmatrix}, \qquad v_{n'}^{\chi_{QZ}} = \begin{pmatrix} v'_{n'} \\ -\,(v''_{n'})^{*} \end{pmatrix}

with u'_{n'}, u''_{n'} ∈ C^{n}, v'_{n'}, v''_{n'} ∈ C^{n}, and \Sigma_{2n} = \mathrm{diag}(\sigma_1, \sigma_1, \sigma_2, \sigma_2, \ldots, \sigma_n, \sigma_n).
Thus, the singular values of the quaternion matrix QZ ∈ H^{n×n} can be recovered from the SVD of the complex matrix χ_QZ ∈ C^{2n×2n}. The QSVD algorithm can be summarized as follows:
1) Compute the SVD of the complex adjoint matrix χ_QZ of the quaternion matrix QZ, obtaining the singular values of χ_QZ;
2) the entries of the quaternion singular value diagonal matrix Σ_n and those of the complex singular value diagonal matrix Σ_{2n} are related by n' = 2i - 1;
3) the n right singular vectors (or left singular vectors) of the quaternion matrix QZ and the right singular vectors (or left singular vectors) of the complex matrix χ_QZ are related by n' = 2i - 1, with

u_i = u'_{n'} + u''_{n'}\, j
v_i = v'_{n'} + v''_{n'}\, j

This shows that the SVD of the n × n quaternion matrix QZ is equivalent to the SVD of the 2n × 2n complex adjoint matrix χ_QZ; in this way the singular values of QZ are obtained.
In the same way, the singular values of QPAN can be obtained: the singular values of the complex adjoint matrix χ_QPAN of the quaternion matrix QPAN are computed, and from them the singular values of QPAN are recovered.
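The recovery of quaternion singular values from the complex adjoint matrix, as summarized above, can be sketched as follows. The adjoint layout [[A, B], [-conj(B), conj(A)]] for Q = A + B·j is one common convention and, together with the helper name, is an assumption made here for illustration.

```python
import numpy as np

def quaternion_singular_values(A, B):
    """Singular values of the quaternion matrix Q = A + B*j, where A and B
    are its complex n x n parts.  The SVD of the 2n x 2n complex adjoint
    matrix yields every quaternion singular value twice (the relation
    n' = 2i - 1 above), so keeping every other sorted value recovers the
    n singular values of Q."""
    chi = np.block([[A, B],
                    [-np.conj(B), np.conj(A)]])   # complex adjoint matrix
    s = np.linalg.svd(chi, compute_uv=False)      # sorted, each value doubled
    return s[::2]

# For a pure-quaternion block Q = r*i + g*j + b*k the complex parts are
# A = 1j * r_block and B = g_block + 1j * b_block (blocks are real arrays).
```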
By the QSVD, the singular values of QZ and of QPAN are obtained; next, principal component analysis is applied to these singular values. The concrete steps are as follows.
In general, let the raw data of the source images be expressed as
X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix} = (x_{ij})_{m \times n} \qquad (1)
where m and n are the number of source images to be fused and the number of pixels in each image, respectively, and each row of the matrix above represents one source image to be fused. In the present invention, the n singular values of MS_i and the n singular values of PSM_i obtained in step (4) are taken as the first and second rows of X, so that the information of each image is characterized by these characteristic values. The correlation matrix of X is then computed and its eigenvalue decomposition is performed, giving the eigenvalues and eigenvectors. Finally, the eigenvector corresponding to the largest eigenvalue is used as the weights of QZ and QPAN for the weighted fusion.
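A sketch of this principal component weighting step, assuming the two rows of X hold the n quaternion singular values of QZ and of QPAN; normalizing the weight vector to unit sum is an illustrative choice, not stated in the patent.

```python
import numpy as np

def principal_component_weights(sv_ms, sv_pan):
    """Stack the n singular values of QZ and QPAN as the two rows of X,
    compute the correlation matrix of X, and return the eigenvector of its
    largest eigenvalue as the fusion weights."""
    X = np.vstack([sv_ms, sv_pan])        # 2 x n, one row per source image
    R = np.corrcoef(X)                    # 2 x 2 correlation matrix
    vals, vecs = np.linalg.eigh(R)        # eigenvalues in ascending order
    w = np.abs(vecs[:, -1])               # eigenvector of the largest eigenvalue
    return w / w.sum()

# Weighted fusion of the two quaternion images (sketch):
# w_ms, w_pan = principal_component_weights(sv_ms, sv_pan)
# F = w_ms * QZ + w_pan * QPAN
```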
Embodiment
To illustrate the core of the present invention, we use the multispectral image and the panchromatic image of the Shanghai area (north latitude 31°44′60.0000″, east longitude 121°53′60.0000″) acquired by the Landsat 7 ETM+ sensor on June 14, 2000. The panchromatic image has a spatial resolution of 15 m and the multispectral image a spatial resolution of 30 m.
Since Landsat 7 ETM+ does not provide a true 15 m-resolution multispectral image for comparison, we degrade the panchromatic image and the multispectral image to 30 m and 60 m, respectively, so that the true 30 m-resolution multispectral image of Landsat 7 ETM+ can serve as the reference. The 30 m panchromatic image and the 60 m multispectral image are fused, and the fusion result is compared with the 30 m-resolution multispectral image.
The performance of the present invention is verified by simulation below. To measure the enhancement of spatial information in the remote sensing image fusion process, the SDD parameter [9] is adopted to evaluate the fusion results. The SDD parameter is the standard deviation of the difference between the fused image and the low-resolution multispectral image, defined as follows:
SDD = \sqrt{ \frac{1}{MN} \sum_{x} \sum_{y} \Big( \big( F(x,y) - MS_i(x,y) \big) - \big( \bar{F} - \overline{MS}_i \big) \Big)^{2} }
In the formula, F is the fused image and F̄ is the mean of its pixel values (MS̄_i is defined analogously). In general, the SDD parameter of the fused image should be close to that of the high-resolution multispectral image; in that case the spatial information contained in the fused image matches the spatial information contained in the high-resolution multispectral image. If the SDD parameter of the fused image is larger than that of the high-resolution multispectral image, too much panchromatic spatial information may have been injected into the multispectral image, changing the spectral characteristics of the fused image.
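A minimal sketch of the SDD computation for one band, following the definition above; the function name is illustrative.

```python
import numpy as np

def sdd(fused_band, ms_band):
    """Standard deviation of the difference between a fused band and the
    corresponding low-resolution multispectral band (the SDD parameter)."""
    diff = fused_band.astype(float) - ms_band.astype(float)
    return np.sqrt(np.mean((diff - diff.mean()) ** 2))   # equals diff.std()
```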
To measure the preservation of spectral signatures in the remote sensing image fusion process, we adopt the following statistical parameters:
1) Peak signal-to-noise ratio (PSNR)
If the difference between the fused image F(x, y) and the standard reference image R(x, y) is regarded as noise, and the standard reference image R(x, y) as the information, the peak signal-to-noise ratio PSNR of the fused image is defined as [10]:
PSNR = 10 \times \log_{10} \frac{ MN \big[ \max(F(x,y)) - \min(F(x,y)) \big] }{ \sum_{x} \sum_{y} \big[ R(x,y) - F(x,y) \big]^{2} }
The unit of the peak signal-to-noise ratio is the decibel (dB). In general, the larger the computed PSNR, the closer the spectral signature of the fused image is to that of the standard reference image, and the better the fusion.
2) Correlation coefficient (CC)
The correlation coefficient between the fused image F and the standard reference image R reflects the degree of similarity of the spectral signatures of the two images; it is defined as follows:
CC = \frac{ \sum_{x} \sum_{y} \big( F(x,y) - \bar{F} \big) \big( R(x,y) - \bar{R} \big) }{ \sqrt{ \Big( \sum_{x} \sum_{y} \big( F(x,y) - \bar{F} \big)^{2} \Big) \Big( \sum_{x} \sum_{y} \big( R(x,y) - \bar{R} \big)^{2} \Big) } }
The larger the computed correlation coefficient, the more similar the spectral signatures of the fused image and the standard reference image, and the better the fusion. The peak signal-to-noise ratio and the correlation coefficient are computed separately on each band of the fused image and the high-resolution multispectral image.
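The two per-band spectral-preservation measures defined above can be sketched as follows; the function names are illustrative, and the PSNR follows the definition given in the text.

```python
import numpy as np

def psnr(fused_band, ref_band):
    """Peak signal-to-noise ratio (dB) between one band of the fused image F
    and the standard reference image R, using the definition above."""
    F = fused_band.astype(float)
    R = ref_band.astype(float)
    num = F.size * (F.max() - F.min())     # MN times the dynamic range of F
    den = np.sum((R - F) ** 2)
    return 10.0 * np.log10(num / den)

def correlation_coefficient(fused_band, ref_band):
    """Correlation coefficient between one band of the fused image and the
    corresponding band of the standard reference image."""
    F = fused_band.astype(float) - fused_band.mean()
    R = ref_band.astype(float) - ref_band.mean()
    return np.sum(F * R) / np.sqrt(np.sum(F ** 2) * np.sum(R ** 2))
```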
3) Relative global error (ERGAS)
The relative global error reflects how the spectrum of the fused image changes over all bands; it is defined as follows [15]:
ERGAS = 100\,\frac{h}{l} \sqrt{ \frac{1}{K} \sum_{i=1}^{K} \frac{ \sum_{x} \sum_{y} \big( F_i(x,y) - R_i(x,y) \big)^{2} }{ \sum_{x} \sum_{y} R_i(x,y) } }
where l is the resolution of the low-resolution multispectral image, h is the resolution of the high-resolution multispectral image, and K is the number of bands participating in the fusion. The smaller the computed relative global error, the closer the fused image is to the standard reference image, and the better the fusion.
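A sketch of the ERGAS computation following the definition above, assuming the fused and reference images are stacked as (M, N, K) arrays; the function name is illustrative.

```python
import numpy as np

def ergas(fused, ref, h, l):
    """Relative global error between a fused image and the reference image,
    both (M, N, K) arrays.  h and l are the resolutions of the high- and
    low-resolution images (for example 15 and 30 in the Landsat 7 ETM+
    experiment)."""
    K = fused.shape[-1]
    total = 0.0
    for i in range(K):
        F = fused[..., i].astype(float)
        R = ref[..., i].astype(float)
        total += np.sum((F - R) ** 2) / np.sum(R)   # per-band relative error
    return 100.0 * (h / l) * np.sqrt(total / K)
```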
In the evaluation of practical remote sensing image fusion, the SDD parameter, which reflects the enhancement of spatial information, is considered together with the peak signal-to-noise ratio PSNR, the correlation coefficient CC and the relative global error ERGAS, which reflect the preservation of spectral information. An optimal remote sensing image fusion method should not only improve the spatial resolution of the fused image but also preserve the spectral signature of the original image as much as possible, so a balance must be struck between these two classes of parameters.
Table 1 gives the SDD parameters of the various fused images, with the SDD parameters of the 30 m-resolution multispectral image serving as the standard. The table shows that the SDD parameters of the IHS transform and PCA transform methods are far larger than those of the true image, which indicates that the information injected from the panchromatic image into the multispectral image exceeds the spatial information that a high-resolution multispectral image should contain. For the wavelet transform method, the SDD parameters of the bands are all very close to each other, indicating that similar amounts of panchromatic spatial information are injected into each band; however, they still differ from the SDD parameters of the true image. The SDD parameters of the method proposed by the present invention are the closest to those of the true image, and their distribution over the bands is also similar to that of the true image; this shows that, when injecting the spatial information of the panchromatic image, the present invention can treat each band of the multispectral image differently, in accordance with the real situation.
Table 2 gives the statistical parameters of the various fusion methods for spectral-signature preservation. The hypercomplex principal component weighting fusion method has larger peak signal-to-noise ratios PSNR and correlation coefficients CC on every band, indicating that the spectral signature of its fused image is very close to that of the 30 m-resolution multispectral image; at the same time its relative global error ERGAS over all bands is the smallest, which demonstrates the effectiveness of the proposed method in preserving spectral signatures.
Table 1  Statistical parameters of spatial-information enhancement for the fusion results shown in Figure 1
Parameter   Band   True image   IHS      PCA      Wavelet transform   Proposed method
SDD         R      0.0990       0.2625   0.1954   0.0679              0.0910
            G      0.0960       0.2625   0.2681   0.0683              0.0826
            B      0.0811       0.2622   0.3127   0.0638              0.0779
Table 2  Statistical parameters of spectral-signature preservation for the fusion results shown in Figure 1
Parameter   Band   IHS       PCA       Wavelet transform   Proposed method
PSNR        R      13.1184   14.2312   21.12               22.7437
            G      12.0878   11.4783   22.62               22.8135
            B      11.8573    9.5342   21.02               23.9611
CC          R       0.4596    0.6123    0.92                0.9197
            G       0.1853    0.1875    0.91                0.9324
            B       0.1778    0.1321    0.95                0.9653
ERGAS              10.0477   12.4789    2.5515              2.8577
To further illustrate the effectiveness of the method of the present invention, we degrade the panchromatic image and the multispectral image to 60 m and 120 m, respectively. The degraded images are fused, and the fusion result is compared with the 60 m-resolution multispectral image.
Table 3 gives the SDD parameters of the various fused images, with the SDD parameters of the 60 m-resolution multispectral image serving as the standard. Table 4 gives the statistical parameters of the various fusion methods for spectral-signature preservation. The hypercomplex principal component weighting fusion method again has larger peak signal-to-noise ratios PSNR and correlation coefficients CC on every band, indicating that the spectral signature of its fused image is very close to that of the 60 m-resolution multispectral image; at the same time its relative global error ERGAS over all bands is the smallest, which demonstrates the effectiveness of the proposed method in preserving spectral signatures.
Table 3  Statistical parameters of spatial-information enhancement for the fusion results shown in Figure 2
Parameter   Band   True image   IHS      PCA      Wavelet transform   Proposed method
SDD         R      0.0969       0.2584   0.1712   0.0712              0.0909
            G      0.0939       0.2587   0.2612   0.0711              0.0869
            B      0.0793       0.2577   0.2124   0.0694              0.0818
Table 4  Statistical parameters of spectral-signature preservation for the fusion results shown in Figure 2
Parameter   Band   IHS       PCA       Wavelet transform   Proposed method
PSNR        R      13.1312   14.2312   21.12               22.7437
            G      12.0990   11.4783   22.62               22.8135
            B      12.0488    9.5342   21.02               23.9611
CC          R       0.4511    0.6101    0.7453              0.8991
            G       0.1868    0.1922    0.8991              0.9054
            B       0.2024    0.1456    0.9501              0.9482
ERGAS               9.8949   10.4121    4.58                3.4002
List of references
[1] T. M. Tu, S. C. Su, H. C. Shyu, and P. S. Huang. A new look at IHS-like image fusion methods [J]. Inf. Fusion, vol. 2, no. 3, pp. 177-186, 2001.
[2] Yesou H, Besnus Y, Polet Y. Extraction of spectral information from Landsat TM data and merger with SPOT panchromatic imagery - a contribution to the study of geological structure [J]. ISPRS Journal of Photogrammetry and Remote Sens., 1993, 48(5): 23-26.
[3] Shettigara V K. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set [J]. Photogrammetric Engineering and Remote Sens., 1992, 58(5): 561-567.
[4] Nunez J, Otazu X, Fors O, et al. Multiresolution-based image fusion with additive wavelet decomposition [J]. IEEE Transactions on Geoscience and Remote Sens., 1999, 37(3): 1024-1211.
[5] C. E. Moxey, S. J. Sangwine and T. A. Ell. Hypercomplex correlation techniques for vector images [J]. IEEE Trans. Signal Processing, vol. 51, no. 7, pp. 1941-1953, July 2003.
[6] C. E. Moxey, S. J. Sangwine and T. A. Ell. Color-grayscale image registration using hypercomplex phase correlation [C]. IEEE ICIP, 2002, pp. 385-388.
[7] S. C. Pei, J. J. Ding, J. H. Chang. Efficient implementation of quaternion Fourier transform, convolution, and correlation by 2-D FFT [J]. IEEE Trans. Signal Processing, vol. 49, no. 11, pp. 2783-2797, Nov. 2001.
[8] F. Zhang. Quaternions and matrices of quaternions [J]. Linear Algebra and its Applications, 1997, pp. 21-57.
[9] M. Gonzalez-Audicana, J. L. Saleta, R. G. Catalan, et al. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition [J]. IEEE Trans. Geosci. Remote Sens., vol. 42, no. 6, pp. 1291-1299, 2004.
[10] Wang Haihui, Peng Jiaxiong, et al. A study on the evaluation of multi-source remote sensing image fusion performance [J]. Computer Engineering and Applications, vol. 25, pp. 33-37, 2003 (in Chinese).

Claims (4)

1. A multispectral and panchromatic image fusion method of hypercomplex (quaternion) principal component weighting, characterized in that the concrete steps are as follows:
(1) first, the low-resolution multispectral image is enlarged by a linear interpolation algorithm to the same size M × N as the panchromatic image, giving the images MS_i (i = 1, 2, 3);
(2) each pixel of the panchromatic image is expanded into a three-dimensional vector, giving the vectorized panchromatic image PSM_i (i = 1, 2, 3);
(3) the pixels of MS_i and PSM_i (i = 1, 2, 3) are each represented as a whole with quaternions:

QPAN(x,y) = \frac{1}{\sqrt{3}}\,PAN(x,y)\,i + \frac{1}{\sqrt{3}}\,PAN(x,y)\,j + \frac{1}{\sqrt{3}}\,PAN(x,y)\,k \qquad (2)
QZ(x,y) = R(x,y)\,i + G(x,y)\,j + B(x,y)\,k

where PAN(x, y) is each pixel value of the panchromatic image and R(x, y), G(x, y), B(x, y) are the RGB components of the multispectral image, respectively;
(4) the enlarged multispectral image MS_i (i = 1, 2, 3) and the vectorized panchromatic image PSM_i (i = 1, 2, 3), both represented as quaternion matrices, are partitioned into n × n blocks, and the quaternion singular value decomposition yields the n singular values of MS_i and of PSM_i, respectively;
(5) principal component analysis is applied to the obtained quaternion singular values: the correlation matrix of the matrix formed by the two rows of singular values is computed, its eigenvalues and eigenvectors are obtained, and the eigenvector corresponding to the largest eigenvalue is used as the weights for the weighted image fusion F.
2. The method according to claim 1, characterized in that the interpolation algorithm Z uses nearest-neighbor interpolation: for every new pixel, the closest pixel in the original image is found and its gray value is assigned to the new pixel, so that the low-resolution multispectral image is simply expanded to the size M × N.
3. The method according to claim 1, characterized in that the block-wise quaternion singular value decomposition of the interpolated multispectral image MS_i (i = 1, 2, 3) and the vectorized panchromatic image PSM_i (i = 1, 2, 3) after quaternion modeling proceeds as follows: compute the singular value decomposition of the complex adjoint matrix χ_QZ of the quaternion matrix QZ, and recover from it the singular values of the quaternion matrix QZ; compute the singular value decomposition of the complex adjoint matrix of the quaternion matrix QPAN, and recover from it the singular values of the quaternion matrix QPAN.
4. The method according to claim 1, characterized in that the principal component analysis of the quaternion singular values proceeds as follows: let the raw data of the source images be expressed as

X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1n} \\ x_{21} & x_{22} & \cdots & x_{2n} \\ \vdots & \vdots & & \vdots \\ x_{m1} & x_{m2} & \cdots & x_{mn} \end{pmatrix} = (x_{ij})_{m \times n} \qquad (1)

where m and n are the number of source images to be fused and the number of pixels in each image, respectively, and each row of the matrix above represents one source image to be fused; the n singular values of MS_i and the n singular values of PSM_i obtained in step (4) are taken as the first and second rows of X, so that the information of each image is characterized by these characteristic values; the correlation matrix of X is then computed and its eigenvalue decomposition is performed, giving the eigenvalues and eigenvectors.
CNB2006101181033A 2006-11-09 2006-11-09 Multispectral and panchromatic image fusion method of supercomplex principal element weighting Expired - Fee Related CN100465661C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2006101181033A CN100465661C (en) 2006-11-09 2006-11-09 Multispectral and panchromatic image fusion method of supercomplex principal element weighting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB2006101181033A CN100465661C (en) 2006-11-09 2006-11-09 Multispectral and panchromatic image fusion method of supercomplex principal element weighting

Publications (2)

Publication Number Publication Date
CN1948995A true CN1948995A (en) 2007-04-18
CN100465661C CN100465661C (en) 2009-03-04

Family

ID=38018570

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2006101181033A Expired - Fee Related CN100465661C (en) 2006-11-09 2006-11-09 Multispectral and panchromatic image fusion method of supercomplex principal element weighting

Country Status (1)

Country Link
CN (1) CN100465661C (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183460B (en) * 2007-11-27 2010-10-13 西安电子科技大学 Color picture background clutter quantizing method
CN101216557B (en) * 2007-12-27 2011-07-20 复旦大学 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
CN102194221A (en) * 2011-04-14 2011-09-21 西北工业大学 Image fusion method for WorldView-2 remote sensing images
CN102270337A (en) * 2011-08-11 2011-12-07 西北工业大学 Image interpolation method for multispectral remote sensing image
CN102693551A (en) * 2011-03-22 2012-09-26 江苏瑞蚨通软件科技有限公司(中外合资) Method for realizing three-dimensional reconstruction by multi-spectral image fusion
CN103116881A (en) * 2013-01-27 2013-05-22 西安电子科技大学 Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101247466B (en) * 2008-02-28 2010-12-15 复旦大学 Color distorted image estimation method based on hypercomplex number color rotation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1251145C (en) * 2003-11-27 2006-04-12 上海交通大学 Pyramid image merging method being integrated with edge and texture information

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101183460B (en) * 2007-11-27 2010-10-13 西安电子科技大学 Color picture background clutter quantizing method
CN101216557B (en) * 2007-12-27 2011-07-20 复旦大学 Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
CN102693551A (en) * 2011-03-22 2012-09-26 江苏瑞蚨通软件科技有限公司(中外合资) Method for realizing three-dimensional reconstruction by multi-spectral image fusion
CN102194221A (en) * 2011-04-14 2011-09-21 西北工业大学 Image fusion method for WorldView-2 remote sensing images
CN102270337A (en) * 2011-08-11 2011-12-07 西北工业大学 Image interpolation method for multispectral remote sensing image
CN103116881A (en) * 2013-01-27 2013-05-22 西安电子科技大学 Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion

Also Published As

Publication number Publication date
CN100465661C (en) 2009-03-04

Similar Documents

Publication Publication Date Title
CN111260576B (en) Hyperspectral unmixing algorithm based on de-noising three-dimensional convolution self-coding network
CN110660038B (en) Multispectral image and full-color image fusion method based on generation countermeasure network
Zhang et al. One-two-one networks for compression artifacts reduction in remote sensing
CN111127374B (en) Pan-sharing method based on multi-scale dense network
WO2018024030A1 (en) Saliency-based method for extracting road target from night vision infrared image
CN1948995A (en) Multispectral and panchromatic image fusion method of supercomplex principal element weighting
CN101216557B (en) Residual hypercomplex number dual decomposition multi-light spectrum and full-color image fusion method
CN111080567A (en) Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
Saeedi et al. A new pan-sharpening method using multiobjective particle swarm optimization and the shiftable contourlet transform
CN102005037B (en) Multimodality image fusion method combining multi-scale bilateral filtering and direction filtering
Yilmaz et al. A theoretical and practical survey of image fusion methods for multispectral pansharpening
CN103810755B (en) Compressed sensing spectrum picture method for reconstructing based on documents structured Cluster rarefaction representation
CN102982517B (en) Remote-sensing image fusion method based on local correlation of light spectrum and space
CN105139339A (en) Polarization image super-resolution reconstruction method based on multi-level filtering and sample matching
CN105160647A (en) Panchromatic multi-spectral image fusion method
CN103116881A (en) Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion
CN114254715A (en) Super-resolution method, system and application of GF-1WFV satellite image
CN103679661A (en) Significance analysis based self-adaptive remote sensing image fusion method
CN111008936A (en) Multispectral image panchromatic sharpening method
Zhang et al. Translate SAR data into optical image using IHS and wavelet transform integrated fusion
CN114266957A (en) Hyperspectral image super-resolution restoration method based on multi-degradation mode data augmentation
CN116563101A (en) Unmanned aerial vehicle image blind super-resolution reconstruction method based on frequency domain residual error
Sulaiman et al. A robust pan-sharpening scheme for improving resolution of satellite images in the domain of the nonsubsampled shearlet transform
CN1296871C (en) Remote sensitive image fusing method based on residual error
CN105260992B (en) The traffic image denoising method reconstructed based on robust principal component decomposition and feature space

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090304

Termination date: 20111109