CN103745487B - Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior - Google Patents

Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior

Info

Publication number: CN103745487B
Application number: CN201310713709.1A
Authority: CN (China)
Prior art keywords: theta, distribution, wavelet, matrix, data
Legal status: Active (granted)
Priority date: 2013-12-20
Publication dates: 2014-04-23 (application CN103745487A), 2016-07-06 (grant CN103745487B)
Inventors: 魏巍 (Wei Wei), 张艳宁 (Zhang Yanning), 张磊 (Zhang Lei), 李飞 (Li Fei), 孟庆洁 (Meng Qingjie)
Assignee: Northwestern Polytechnical University
Application filed by Northwestern Polytechnical University; original language: Chinese (zh)

Abstract

The invention discloses a Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior, addressing the technical problem of the low accuracy of existing hyperspectral image compressive sensing methods that combine spectral unmixing. In the compression stage, linear unmixing exploits the low-rank character inherent in hyperspectral data, a wavelet transform converts the abundances into a structurally sparse signal, and compressed sensing then yields the compressed data. In the reconstruction stage, a suitable end-member matrix is selected from a spectral library, a structured sparse prior on the abundance wavelet coefficients is introduced, a Bayesian inference method based on Gibbs sampling accurately reconstructs the abundance matrix, and finally the linear mixing model rebuilds the original data. Accuracy is improved by about 10% over the compressed-sensing algorithms of the background art.

Description

Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior
Technical Field
The invention relates to hyperspectral unmixing compressive sensing methods, and in particular to a Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior.
Background
Because hyperspectral images carry a huge volume of data, compression algorithms with high compression ratios are of profound significance for the acquisition, transmission, and processing of hyperspectral images. Existing hyperspectral image compression algorithms fall into two main classes. The first is traditional coding-based compression, extended from the compression of ordinary images, which focuses on removing redundancy within and between the bands of a hyperspectral image; it includes clustered differential pulse code modulation (C-DPCM), the three-dimensional discrete wavelet transform (3D-DWT), the three-dimensional discrete cosine transform (3D-DCT), and so on. Such algorithms, however, can only compress after image acquisition, still require a large amount of hardware to acquire and buffer the data, and usually achieve a low compression ratio. The second class is compression based on compressive sensing, which can accurately reconstruct the original sparse signal from a small number of collected sample points and thus provides a theoretical basis for reducing the hardware consumption required for data acquisition; this class concentrates on the reconstruction process.
The literature "A compressive sensing and unmixing scheme for hyperspectral data processing, IEEE Transactions on Image Processing, 2012, 21(3): 1200-1210" discloses a hyperspectral image compressive sensing method that combines spectral unmixing. Hyperspectral data are decomposed into an abundance matrix and an end-member matrix by linear unmixing; the spatial gradient of the abundance matrix is sparse, so compression and reconstruction can be carried out in a compressed sensing fashion. First, compressed data are obtained with a compressed sensing matrix; suitable end members are selected with the help of a spectral library; the abundance matrix with sparse gradients is then reconstructed by compressed sensing; finally, the hyperspectral data to be acquired are rebuilt with the linear mixing model. However, gradient sparsity cannot uncover the deeper structured sparsity of the abundance matrix, and the augmented Lagrangian method used to reconstruct the abundances cannot adaptively choose the weight of the gradient-sparsity regularization term. These two factors directly limit the reconstruction accuracy of the compression algorithm.
Disclosure of Invention
To overcome the poor accuracy of the existing hyperspectral image compressive sensing method that combines spectral unmixing, the invention provides a Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior. In the compression stage, the method exploits the low-rank property inherent in hyperspectral data through linear unmixing, converts the abundances into a structurally sparse signal by a wavelet transform, and then obtains compressed data by compressed sensing. In the reconstruction stage, a suitable end-member matrix is selected from a spectral library, a structured sparse prior on the abundance wavelet coefficients is introduced, the abundance matrix is accurately reconstructed with a Bayesian inference method based on Gibbs sampling, and the original data are finally rebuilt with the linear mixing model. On the Urban data acquired by the HYDICE sensor, the normalized mean square error is below 0.4 at a compression ratio of 1:16; on the Indiana data acquired by AVIRIS it is likewise below 0.4 at 1:16. Accuracy improves by about 10 percent over the compressed sensing algorithm of the background art.
The technical solution adopted by the invention to solve the technical problem is as follows: a Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior, characterized by comprising the following steps.
Step 1: For hyperspectral data $X=[x_1,\dots,x_{n_b}]\in\mathbb{R}^{n_p\times n_b}$, $n_p$ denotes the total number of pixels in the hyperspectral data and $n_b$ the number of bands; each $x_i$, $i=1,\dots,n_b$, is the image of one band unrolled row by row into a column vector. The $n_b$ reflectance values of a pixel across the bands constitute the spectrum of that pixel. This spectrum is described with a linear mixing model, which regards any mixed spectrum as a linear combination of all end members, the proportion of an end member in the mixed spectrum being called its abundance. $X$ is expressed as the product of the abundance matrix and the end-member matrix:

$$X = HW \qquad (1)$$

where $H=[h_1,\dots,h_{n_e}]\in\mathbb{R}^{n_p\times n_e}$ is the abundance matrix, $h_i$ gives the proportion of the $i$-th end member in the mixed spectrum of every pixel, $n_e$ denotes the number of end members, and $W=[w_1^T,\dots,w_{n_e}^T]^T\in\mathbb{R}^{n_e\times n_b}$ is the end-member matrix whose rows $w_i$ are the end members.

A discrete wavelet transform is applied to the abundances:

$$\Theta = \Psi^T H \qquad (2)$$
$$Y = \Psi^T X = \Psi^T H W = \Theta W$$

where $\Psi\in\mathbb{R}^{n_p\times n_p}$ is the wavelet basis, $\Theta=[\theta_1,\dots,\theta_{n_e}]$ holds the wavelet coefficients of the abundances, and $Y$ holds the wavelet coefficients of the whole data. Each $\theta_i=\Psi^T h_i$ is structurally sparse.
Step 2: Obtain the compressed data, then reconstruct the abundance matrix from its wavelet coefficients, and finally reconstruct the whole data:
1. generation of m × n using random variables satisfying a Gaussian distributionpA matrix of sizes, and each column is normalized to obtain a random perceptual matrixSampling the wavelet transformed original data by phi to obtain compressed data
F=ΦY=ΦΨTX(3)
Wherein m represents a pair length of npAfter compression of the signal, typically m < np
2. For a scene of limited extent, the influence of environmental factors on the spectra is ignored and the number of end members is assumed to be limited. Using the ASTER spectral library, a subset of spectra is extracted from the library as end members for the specific scene, finally giving a matrix $W$ containing $n_e$ end members.
3. Estimate the abundance matrix from the compressed data by Bayesian inference.
(1) From equations (2) and (3), obtain by least squares the data $G\in\mathbb{R}^{m\times n_e}$ in which the wavelet coefficients of the abundance matrix are compressed independently:

$$G = F W^T (W W^T)^{-1} = \Phi\Theta + N \qquad (4)$$

where $N\in\mathbb{R}^{m\times n_e}$ is the noise term introduced by the least-squares step.
(2) Write equation (4) column-wise as $g_i=\Phi\theta_i+n_i$. Assuming the noise $n_i$ follows a Gaussian distribution with mean 0 and covariance $\beta_i^{-1}I$, and that the noise vectors are independent and identically distributed, the likelihood of $\theta_i$, $i=1,2,\dots,n_e$, is

$$p(g_i\mid\theta_i,\beta_i^{-1}) = (2\pi\beta_i^{-1})^{-m/2}\exp\!\left(-\frac{\beta_i}{2}\,\|g_i-\Phi\theta_i\|_2^2\right) \qquad (5)$$
$\beta_i$ is the precision of the noise vector; it is given a gamma prior:

$$p(\beta_i\mid\kappa_0,\upsilon_0)=\mathrm{Gamma}(\kappa_0,\upsilon_0) \qquad (6)$$

where $\kappa_0=10^{-6}$ and $\upsilon_0=10^{-6}$ are the shape and scale parameters of the gamma distribution.
(3) Establish a structured sparse prior on each element of $\theta_i$. The structured sparsity is embodied in the tree structure of the wavelet coefficients: when a parent node is 0, its children are very likely to be 0, and vice versa:

$$p(\theta_{s,ji}\mid\beta_{s,i}^{-1}) = (1-\pi_{s,ji})\,\delta_0 + \pi_{s,ji}\,\mathcal{N}(0,\beta_{s,i}^{-1}) \qquad (7)$$

where $s=1,\dots,L$ indexes the scales of the wavelet transform ($L$ is the total number of scales), $\theta_{s,ji}$ is the $j$-th element of $\theta_i$ at scale $s$, and $\delta_0$ is the distribution concentrated entirely at 0. $\pi_{s,ji}$ is a mixing weight: as $\pi_{s,ji}\to 0$, $\theta_{s,ji}$ is 0; as $\pi_{s,ji}\to 1$, $\theta_{s,ji}$ is drawn from the Gaussian. $\pi_{s,ji}$ is determined by the tree structure of the wavelet transform:
$$\pi_{s,ji} = \begin{cases} \pi^r & \text{if } s=1 \\ \pi_s^0 & \text{if } 2\le s\le L,\ \theta_{pa(s,ji)}=0 \\ \pi_s^1 & \text{if } 2\le s\le L,\ \theta_{pa(s,ji)}\ne 0 \end{cases} \qquad (8)$$

$\theta_{pa(s,ji)}$ is the parent node of $\theta_{s,ji}$: when $\theta_{pa(s,ji)}=0$, $\pi_{s,ji}$ is set so that $\theta_{s,ji}$ is 0 with high probability; when $\theta_{pa(s,ji)}\ne 0$, $\pi_{s,ji}$ is set so that $\theta_{s,ji}$ is nonzero with high probability. $\beta_{s,i}$ is the precision of the Gaussian in the prior on $\theta_{s,ji}$. The priors on $\pi^r$, $\pi_s^0$, $\pi_s^1$ and $\beta_{s,i}$ are as follows:
$$\pi^r\sim\mathrm{Beta}(\epsilon_0^r,\gamma_0^r),\quad \pi_s^0\sim\mathrm{Beta}(\epsilon_0^s,\gamma_0^s),\quad \pi_s^1\sim\mathrm{Beta}(\epsilon_1^s,\gamma_1^s),\quad \beta_{s,i}\sim\mathrm{Gamma}(\kappa_1,\upsilon_1) \qquad (9)$$

where $\kappa_1=10^{-6}$, $\upsilon_1=10^{-6}$, $[\epsilon_0^r,\gamma_0^r]=[0.9,\,0.1]\times N_1$, $[\epsilon_0^s,\gamma_0^s]=[1/n_p,\,1/n_p]\times N_s$, and $N_s$, $s=1,\dots,L$, is the number of wavelet coefficients at scale $s$.
(4) Sample the wavelet coefficients $\theta_{s,ji}$. Steps (2) and (3) provide the likelihood of $\theta_{s,ji}$, its structured sparse prior, and the prior distributions of the other unknown parameters. The wavelet coefficients $\Theta$ to be estimated are obtained by Bayesian inference:

$$p(\Theta\mid G) \propto \int p(G\mid\Theta,\beta)\,p(\Theta\mid\Omega)\,p(\Omega)\,p(\beta)\,d\Omega\,d\beta \qquad (10)$$
where $\Omega=\{\{\beta_{s,i}\}_{s=2:L,\,i=1:n_e},\ \pi^r,\ \{\pi_s^0,\pi_s^1\}_{s=2:L}\}$ and $\beta=[\beta_1,\beta_2,\dots,\beta_{n_e}]^T$. Assuming all noise precisions and the elements of $\beta$ and $\Omega$ are independent and identically distributed, $p(\beta)$ and $p(\Omega)$ are as follows:

$$p(\beta)=\prod_{i=1}^{n_e}\mathrm{Gamma}(\kappa_0,\upsilon_0),\qquad p(\Omega)=\prod_{i=1}^{n_e}\prod_{s=1}^{L}\mathrm{Gamma}(\kappa_1,\upsilon_1)\times\mathrm{Beta}(\epsilon_0^r,\gamma_0^r)\times\prod_{s=2}^{L}\mathrm{Beta}(\epsilon_0^s,\gamma_0^s)\,\mathrm{Beta}(\epsilon_1^s,\gamma_1^s) \qquad (11)$$
A Markov chain Monte Carlo method based on Gibbs sampling is employed, approximating the posterior distribution of the variables to be estimated by drawing samples from the corresponding conditional distributions. The Gibbs sampler for $\theta_{s,ji}$ is:

$$p(\theta_{s,ji}\mid\sim) = (1-\pi_{s,ji})\,\delta_0 + \pi_{s,ji}\,\mathcal{N}(\mu_{s,ji},\beta_{s,ji}^{-1}),\qquad \beta_{s,ji} = \beta_{s,i} + \beta_i\,\Phi_j^T\Phi_j,\qquad \mu_{s,ji} = \beta_{s,ji}^{-1}\,\beta_i\,\Phi_j^T\Big(g_i - \sum_{k\ne j}\Phi_k\,\theta_{ki}\Big) \qquad (12)$$

where $\sim$ denotes the other parameters required in the conditional distribution and $\Phi_j$ is the $j$-th column of $\Phi$.
(5) Sample the noise precision $\beta_i$. Its conditional distribution is the gamma distribution

$$p(\beta_i\mid\sim)=\mathrm{Gamma}(\kappa_{\beta_i},\upsilon_{\beta_i}),\qquad \kappa_{\beta_i}=\kappa_0+\frac{m}{2},\qquad \upsilon_{\beta_i}=\upsilon_0+\frac{1}{2}(g_i-\Phi\theta_i)^T(g_i-\Phi\theta_i) \qquad (13)$$
(6) Sample the hyper-parameters $\beta_{s,i}$, $\pi^r$, $\pi_{s,i}^0$, $\pi_{s,i}^1$. The conditional distributions are:

$$p(\beta_{s,i}\mid\sim)=\mathrm{Gamma}(\kappa_{\beta_{s,i}},\upsilon_{\beta_{s,i}}),\qquad \kappa_{\beta_{s,i}}=\kappa_1+\frac{1}{2}\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0),\qquad \upsilon_{\beta_{s,i}}=\upsilon_1+\frac{1}{2}\sum_{j=1}^{N_s}\theta_{s,ji}^2 \qquad (14)$$

$$p(\pi^r\mid\sim)=\mathrm{Beta}(\epsilon_{\pi^r},\gamma_{\pi^r}),\qquad \epsilon_{\pi^r}=\epsilon_0^r+\sum_{j=1}^{N_1}1(\theta_{1,ji}\ne 0),\qquad \gamma_{\pi^r}=\gamma_0^r+\sum_{j=1}^{N_1}1(\theta_{1,ji}=0) \qquad (15)$$

$$p(\pi_{s,i}^0\mid\sim)=\mathrm{Beta}(\epsilon_{\pi_{s,i}^0},\gamma_{\pi_{s,i}^0}),\qquad \epsilon_{\pi_{s,i}^0}=\epsilon_0^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0,\ \theta_{pa(s,ji)}=0),\qquad \gamma_{\pi_{s,i}^0}=\gamma_0^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}=0,\ \theta_{pa(s,ji)}=0) \qquad (16)$$

$$p(\pi_{s,i}^1\mid\sim)=\mathrm{Beta}(\epsilon_{\pi_{s,i}^1},\gamma_{\pi_{s,i}^1}),\qquad \epsilon_{\pi_{s,i}^1}=\epsilon_1^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0,\ \theta_{pa(s,ji)}\ne 0),\qquad \gamma_{\pi_{s,i}^1}=\gamma_1^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}=0,\ \theta_{pa(s,ji)}\ne 0) \qquad (17)$$

where $1(x)$ equals 1 when $x$ is true and 0 otherwise.
(7) Cycle through steps (4), (5), and (6), sampling every wavelet coefficient in $\Theta$ in turn, until a convergence or termination condition is met, and obtain the final estimate $\hat{\Theta}=\frac{1}{M_s}\sum_{t=1}^{M_s}\Theta^{(t)}$, where $M_s$ is the total number of collected samples and $\Theta^{(t)}$ is the result of each sampling pass. The termination condition used is the iteration count: 400 burn-in iterations to stabilize the sampler, followed by 200 sampling iterations.
4. Obtain the reconstructed hyperspectral data from the linear mixing model (1): $\hat{X}=\Psi\hat{\Theta}W$.
The beneficial effects of the invention are as follows. In the compression stage, the method exploits the low-rank property inherent in hyperspectral data through linear unmixing, converts the abundances into a structurally sparse signal by a wavelet transform, and then obtains compressed data by compressed sensing. In the reconstruction stage, a suitable end-member matrix is selected from a spectral library, a structured sparse prior on the abundance wavelet coefficients is introduced, the abundance matrix is accurately reconstructed with a Bayesian inference method based on Gibbs sampling, and the original data are finally rebuilt with the linear mixing model. On the Urban data acquired by the HYDICE sensor the normalized mean square error is below 0.4 at a compression ratio of 1:16, and on the Indiana data acquired by AVIRIS it is likewise below 0.4 at 1:16; accuracy improves by about 10 percent over the compressed sensing algorithm of the background art.
The present invention will be described in detail with reference to the following embodiments.
Detailed Description
The Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior comprises the following concrete steps.
For hyperspectral data $X=[x_1,\dots,x_{n_b}]\in\mathbb{R}^{n_p\times n_b}$, $n_p$ denotes the total number of pixels and $n_b$ the number of bands; each $x_i$, $i=1,\dots,n_b$, is the image of one band unrolled row by row into a column vector. The $n_b$ reflectance values of a pixel across the bands constitute its spectrum. The unique spectrum of a pure substance is called an end member. Because of factors such as low spatial resolution, the spectrum of a pixel is usually a mixture of the spectra of several pure ground objects. This spectral mixing can be described with a linear or a nonlinear mixing model; only the linear mixing model is considered here. It regards any mixed spectrum as a linear combination of all end members, the proportion of an end member in the mixed spectrum being called its abundance. $X$ can thus be expressed as the product of the abundance matrix and the end-member matrix:

$$X = HW \qquad (1)$$

where $H=[h_1,\dots,h_{n_e}]\in\mathbb{R}^{n_p\times n_e}$ is the abundance matrix, $h_i$ gives the proportion of the $i$-th end member in the mixed spectrum of every pixel, $n_e$ denotes the number of end members, and $W=[w_1^T,\dots,w_{n_e}^T]^T\in\mathbb{R}^{n_e\times n_b}$ is the end-member matrix whose rows $w_i$ are the end members. Since adjacent pixels have similar abundances for each end member, the abundance matrix inherits the similarity of spatially adjacent pixels. To extract structurally sparse signals from the abundance matrix, a discrete wavelet transform (DWT) is applied to the abundances:

$$\Theta = \Psi^T H \qquad (2)$$
$$Y = \Psi^T X = \Psi^T H W = \Theta W$$

where $\Psi\in\mathbb{R}^{n_p\times n_p}$ is the wavelet basis, $\Theta=[\theta_1,\dots,\theta_{n_e}]$ holds the wavelet coefficients of the abundances, and $Y$ holds the wavelet coefficients of the whole data. Each $\theta_i=\Psi^T h_i$ is structurally sparse. Concretely, the sparse wavelet coefficients must first be reconstructed from the compressed data with compressed sensing; the abundance matrix is then rebuilt from the wavelet coefficients, and finally the whole data are reconstructed. The concrete steps are as follows:
1. Obtain the compressed data.
Generate an $m\times n_p$ matrix from random variables drawn from a Gaussian distribution and normalize each column to obtain the random sensing matrix $\Phi\in\mathbb{R}^{m\times n_p}$. The wavelet-transformed raw data are sampled with $\Phi$ to obtain the compressed data $F\in\mathbb{R}^{m\times n_b}$ as follows:

$$F = \Phi Y = \Phi\Psi^T X \qquad (3)$$

where $m$ is the length of the signal after a signal of length $n_p$ is compressed; typically $m \ll n_p$.
2. Select the end members.
For a scene of limited extent, the influence of environmental factors on the spectra is ignored and the number of end members is assumed to be limited. Using the ASTER spectral library, a subset of spectra is extracted from the library as end members for the specific scene, finally giving a matrix $W$ containing $n_e$ end members. A fixed scene usually contains only a small number of end members, so linear unmixing can exploit the low-rank character of the hyperspectral data.
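A minimal sketch of this selection step, in place of the toy random $W$ above: a real system would read band-resampled spectra from the ASTER library, so the stand-in library array and the chosen indices below are placeholders, not the library's actual layout.

```python
import numpy as np

rng = np.random.default_rng(1)
n_b = 128
library = rng.random((100, n_b))   # stand-in for band-resampled library spectra
chosen = [3, 17, 42, 55, 60]       # indices judged relevant to the scene
W = library[chosen, :]             # end-member matrix W with n_e = 5 rows
print(W.shape)                     # (5, 128)
```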
3. Reconstruct the abundance matrix by Bayesian inference.
To reconstruct the original data accurately, Bayesian inference is used here to estimate the abundance matrix from the compressed data. The concrete implementation comprises the following steps:
(1) From equations (2) and (3), obtain by least squares the data $G\in\mathbb{R}^{m\times n_e}$ in which the wavelet coefficients of the abundance matrix are compressed independently:

$$G = F W^T (W W^T)^{-1} = \Phi\Theta + N \qquad (4)$$

where $N\in\mathbb{R}^{m\times n_e}$ is the noise term introduced by the least-squares step.
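Continuing the toy sketch above, the least-squares decoupling of equation (4) is a single line; in the noiseless toy case $G$ coincides with $\Phi\Theta$ exactly.

```python
import numpy as np

# Requires H, W, F, Phi, PsiT from the previous sketches.
Theta = PsiT @ H                                  # true coefficients, eq. (2)
G = F @ W.T @ np.linalg.inv(W @ W.T)              # G = F W^T (W W^T)^(-1), eq. (4)
print(np.allclose(G, Phi @ Theta))                # True: noiseless toy case
```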
(2) Construct a Gaussian likelihood for the wavelet coefficients. Write equation (4) column-wise as $g_i=\Phi\theta_i+n_i$. Assuming the noise $n_i$ follows a Gaussian distribution with mean 0 and covariance $\beta_i^{-1}I$, and that the noise vectors are independent and identically distributed, the likelihood of $\theta_i$, $i=1,2,\dots,n_e$, is

$$p(g_i\mid\theta_i,\beta_i^{-1}) = (2\pi\beta_i^{-1})^{-m/2}\exp\!\left(-\frac{\beta_i}{2}\,\|g_i-\Phi\theta_i\|_2^2\right) \qquad (5)$$
$\beta_i$ is the precision of the noise vector; it is given here a gamma prior:

$$p(\beta_i\mid\kappa_0,\upsilon_0)=\mathrm{Gamma}(\kappa_0,\upsilon_0) \qquad (6)$$

where $\kappa_0=10^{-6}$ and $\upsilon_0=10^{-6}$ are the shape and scale parameters of the gamma distribution.
(3) Establish a structured sparse prior on each element of $\theta_i$. The structured sparsity is embodied in the tree structure of the wavelet coefficients: when a parent node is 0, its children are very likely to be 0, and vice versa:

$$p(\theta_{s,ji}\mid\beta_{s,i}^{-1}) = (1-\pi_{s,ji})\,\delta_0 + \pi_{s,ji}\,\mathcal{N}(0,\beta_{s,i}^{-1}) \qquad (7)$$

where $s=1,\dots,L$ indexes the scales of the wavelet transform ($L$ is the total number of scales), $\theta_{s,ji}$ is the $j$-th element of $\theta_i$ at scale $s$, and $\delta_0$ is the distribution concentrated entirely at 0. $\pi_{s,ji}$ is a mixing weight: as $\pi_{s,ji}\to 0$, $\theta_{s,ji}$ is 0; as $\pi_{s,ji}\to 1$, $\theta_{s,ji}$ is drawn from the Gaussian. $\pi_{s,ji}$ is determined by the tree structure of the wavelet transform:
$$\pi_{s,ji} = \begin{cases} \pi^r & \text{if } s=1 \\ \pi_s^0 & \text{if } 2\le s\le L,\ \theta_{pa(s,ji)}=0 \\ \pi_s^1 & \text{if } 2\le s\le L,\ \theta_{pa(s,ji)}\ne 0 \end{cases} \qquad (8)$$

$\theta_{pa(s,ji)}$ is the parent node of $\theta_{s,ji}$: when $\theta_{pa(s,ji)}=0$, $\pi_{s,ji}$ is set so that $\theta_{s,ji}$ is 0 with high probability; when $\theta_{pa(s,ji)}\ne 0$, $\pi_{s,ji}$ is set so that $\theta_{s,ji}$ is nonzero with high probability. $\beta_{s,i}$ is the precision of the Gaussian in the prior on $\theta_{s,ji}$. The priors on $\pi^r$, $\pi_s^0$, $\pi_s^1$ and $\beta_{s,i}$ are as follows:
$$\pi^r\sim\mathrm{Beta}(\epsilon_0^r,\gamma_0^r),\quad \pi_s^0\sim\mathrm{Beta}(\epsilon_0^s,\gamma_0^s),\quad \pi_s^1\sim\mathrm{Beta}(\epsilon_1^s,\gamma_1^s),\quad \beta_{s,i}\sim\mathrm{Gamma}(\kappa_1,\upsilon_1) \qquad (9)$$

where $\kappa_1=10^{-6}$, $\upsilon_1=10^{-6}$, $[\epsilon_0^r,\gamma_0^r]=[0.9,\,0.1]\times N_1$, $[\epsilon_0^s,\gamma_0^s]=[1/n_p,\,1/n_p]\times N_s$, and $N_s$, $s=1,\dots,L$, is the number of wavelet coefficients at scale $s$.
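To make the tree structure of equation (8) concrete, the sketch below assumes a 1-D dyadic layout in which coefficient $j$ at scale $s$ has parent $\lfloor j/2\rfloor$ at scale $s-1$; this parent convention is an illustrative assumption, not a detail fixed by the text.

```python
import numpy as np

def mixing_weight(s, j, theta, pi_r, pi_s0, pi_s1):
    """pi_{s,ji} of equation (8) for the spike-and-slab prior of equation (7).

    theta : list of 1-D arrays; theta[s - 1] holds scale s of one column theta_i
    pi_r  : root-scale weight (used when s == 1)
    pi_s0 : per-scale weights used when the parent coefficient is zero
    pi_s1 : per-scale weights used when the parent coefficient is nonzero
    """
    if s == 1:
        return pi_r
    parent = theta[s - 2][j // 2]          # theta_{pa(s,ji)} under this layout
    return pi_s0[s - 1] if parent == 0 else pi_s1[s - 1]

# Example: three dyadic scales with 1, 2 and 4 coefficients.
theta_i = [np.array([0.7]), np.array([0.0, 1.2]), np.array([0.0, 0.0, 0.3, -0.4])]
print(mixing_weight(3, 2, theta_i, 0.9, [None, 0.1, 0.1], [None, 0.9, 0.9]))  # 0.9
```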
(4) Sample the wavelet coefficients $\theta_{s,ji}$. Steps (2) and (3) provide the likelihood of $\theta_{s,ji}$, its structured sparse prior, and the prior distributions of the other unknown parameters. The wavelet coefficients $\Theta$ to be estimated can be obtained by Bayesian inference:

$$p(\Theta\mid G) \propto \int p(G\mid\Theta,\beta)\,p(\Theta\mid\Omega)\,p(\Omega)\,p(\beta)\,d\Omega\,d\beta \qquad (10)$$
where $\Omega=\{\{\beta_{s,i}\}_{s=2:L,\,i=1:n_e},\ \pi^r,\ \{\pi_s^0,\pi_s^1\}_{s=2:L}\}$ and $\beta=[\beta_1,\beta_2,\dots,\beta_{n_e}]^T$. Assuming all noise precisions and the elements of $\beta$ and $\Omega$ are independent and identically distributed, $p(\beta)$ and $p(\Omega)$ are as follows:

$$p(\beta)=\prod_{i=1}^{n_e}\mathrm{Gamma}(\kappa_0,\upsilon_0),\qquad p(\Omega)=\prod_{i=1}^{n_e}\prod_{s=1}^{L}\mathrm{Gamma}(\kappa_1,\upsilon_1)\times\mathrm{Beta}(\epsilon_0^r,\gamma_0^r)\times\prod_{s=2}^{L}\mathrm{Beta}(\epsilon_0^s,\gamma_0^s)\,\mathrm{Beta}(\epsilon_1^s,\gamma_1^s) \qquad (11)$$
Since equation (10) is difficult to solve analytically, a Markov chain Monte Carlo method based on Gibbs sampling is employed here; the posterior distribution of the variables to be estimated is approximated by drawing samples from the corresponding conditional distributions. The Gibbs sampler for $\theta_{s,ji}$ is:

$$p(\theta_{s,ji}\mid\sim) = (1-\pi_{s,ji})\,\delta_0 + \pi_{s,ji}\,\mathcal{N}(\mu_{s,ji},\beta_{s,ji}^{-1}),\qquad \beta_{s,ji} = \beta_{s,i} + \beta_i\,\Phi_j^T\Phi_j,\qquad \mu_{s,ji} = \beta_{s,ji}^{-1}\,\beta_i\,\Phi_j^T\Big(g_i - \sum_{k\ne j}\Phi_k\,\theta_{ki}\Big) \qquad (12)$$

where $\sim$ denotes the other parameters required in the conditional distribution and $\Phi_j$ is the $j$-th column of $\Phi$.
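A hedged sketch of one draw from the conditional of equation (12): the posterior mean and precision follow our reading of the printed formula, and the posterior mixing weight is derived from prior (7) and likelihood (5) in the standard spike-and-slab way; this is an illustration, not the patented implementation.

```python
import numpy as np

def sample_theta_entry(j, theta_i, g_i, Phi, beta_i, beta_si, pi_sji, rng):
    """One conditional draw of a single wavelet coefficient (cf. equation (12))."""
    phi_j = Phi[:, j]
    # Residual with column j's current contribution removed.
    residual = g_i - Phi @ theta_i + phi_j * theta_i[j]
    beta_post = beta_si + beta_i * (phi_j @ phi_j)        # posterior precision
    mu = beta_i * (phi_j @ residual) / beta_post          # posterior mean
    # Posterior odds of the nonzero (Gaussian) component versus the spike,
    # the standard spike-and-slab ratio for a N(0, 1/beta_si) slab.
    log_odds = (np.log(pi_sji) - np.log1p(-pi_sji)
                + 0.5 * np.log(beta_si / beta_post)
                + 0.5 * beta_post * mu ** 2)
    p_nonzero = 1.0 / (1.0 + np.exp(-log_odds))
    theta_i[j] = (mu + rng.standard_normal() / np.sqrt(beta_post)
                  if rng.random() < p_nonzero else 0.0)
    return theta_i
```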
(5) Sample the noise precision $\beta_i$. Its conditional distribution is the gamma distribution

$$p(\beta_i\mid\sim)=\mathrm{Gamma}(\kappa_{\beta_i},\upsilon_{\beta_i}),\qquad \kappa_{\beta_i}=\kappa_0+\frac{m}{2},\qquad \upsilon_{\beta_i}=\upsilon_0+\frac{1}{2}(g_i-\Phi\theta_i)^T(g_i-\Phi\theta_i) \qquad (13)$$
(6) Sample the hyper-parameters $\beta_{s,i}$, $\pi^r$, $\pi_{s,i}^0$, $\pi_{s,i}^1$. The conditional distributions are:

$$p(\beta_{s,i}\mid\sim)=\mathrm{Gamma}(\kappa_{\beta_{s,i}},\upsilon_{\beta_{s,i}}),\qquad \kappa_{\beta_{s,i}}=\kappa_1+\frac{1}{2}\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0),\qquad \upsilon_{\beta_{s,i}}=\upsilon_1+\frac{1}{2}\sum_{j=1}^{N_s}\theta_{s,ji}^2 \qquad (14)$$

$$p(\pi^r\mid\sim)=\mathrm{Beta}(\epsilon_{\pi^r},\gamma_{\pi^r}),\qquad \epsilon_{\pi^r}=\epsilon_0^r+\sum_{j=1}^{N_1}1(\theta_{1,ji}\ne 0),\qquad \gamma_{\pi^r}=\gamma_0^r+\sum_{j=1}^{N_1}1(\theta_{1,ji}=0) \qquad (15)$$

$$p(\pi_{s,i}^0\mid\sim)=\mathrm{Beta}(\epsilon_{\pi_{s,i}^0},\gamma_{\pi_{s,i}^0}),\qquad \epsilon_{\pi_{s,i}^0}=\epsilon_0^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0,\ \theta_{pa(s,ji)}=0),\qquad \gamma_{\pi_{s,i}^0}=\gamma_0^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}=0,\ \theta_{pa(s,ji)}=0) \qquad (16)$$

$$p(\pi_{s,i}^1\mid\sim)=\mathrm{Beta}(\epsilon_{\pi_{s,i}^1},\gamma_{\pi_{s,i}^1}),\qquad \epsilon_{\pi_{s,i}^1}=\epsilon_1^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0,\ \theta_{pa(s,ji)}\ne 0),\qquad \gamma_{\pi_{s,i}^1}=\gamma_1^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}=0,\ \theta_{pa(s,ji)}\ne 0) \qquad (17)$$

where $1(x)$ equals 1 when $x$ is true and 0 otherwise.
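The conjugate updates (13)-(15) translate directly into Gamma and Beta draws, as in the sketch below. Numpy parameterises the gamma by shape and scale, so the text's rate parameter $\upsilon$ enters as $1/\upsilon$ (our reading of $\mathrm{Gamma}(\kappa,\upsilon)$ as shape-rate). Updates (16) and (17) are analogous to (15) with joint parent/child counts.

```python
import numpy as np

def sample_noise_precision(g_i, Phi, theta_i, k0, v0, rng):
    """Equation (13): conjugate Gamma update for the noise precision beta_i."""
    resid = g_i - Phi @ theta_i
    shape = k0 + len(g_i) / 2.0
    rate = v0 + 0.5 * (resid @ resid)
    return rng.gamma(shape, 1.0 / rate)      # numpy gamma takes shape, scale

def sample_scale_precision(theta_s_i, k1, v1, rng):
    """Equation (14): precision of the Gaussian slab at one scale."""
    shape = k1 + 0.5 * np.count_nonzero(theta_s_i)
    rate = v1 + 0.5 * (theta_s_i ** 2).sum()
    return rng.gamma(shape, 1.0 / rate)

def sample_root_weight(theta_1_i, eps0r, gam0r, rng):
    """Equation (15) read as a Beta posterior: zero/nonzero counts at scale 1."""
    nz = np.count_nonzero(theta_1_i)
    return rng.beta(eps0r + nz, gam0r + (theta_1_i.size - nz))
```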
(7) Cycle through steps (4), (5), and (6), sampling every wavelet coefficient in $\Theta$ in turn, until a convergence or termination condition is met, and obtain the final estimate $\hat{\Theta}=\frac{1}{M_s}\sum_{t=1}^{M_s}\Theta^{(t)}$, where $M_s$ is the total number of collected samples and $\Theta^{(t)}$ is the result of each sampling pass. The termination condition used here is the iteration count: 400 burn-in iterations to stabilize the sampler, followed by 200 sampling iterations.
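The outer sampling loop of step (7) then reduces to a skeleton like the following, with the per-variable draws delegated to the helper sketches above; it is a scaffold for the iteration structure, not a complete implementation.

```python
import numpy as np

def run_gibbs(G, Phi, n_burn=400, n_collect=200, rng=None):
    """Outer loop of step (7): burn in, then average the collected samples."""
    rng = rng or np.random.default_rng(0)
    n_e = G.shape[1]
    Theta = np.zeros((Phi.shape[1], n_e))
    samples = []
    for it in range(n_burn + n_collect):
        for i in range(n_e):
            # (4) resample every coefficient theta_{s,ji} of column i,
            # (5) resample the noise precision beta_i,
            # (6) resample beta_{s,i}, pi_r, pi_s0 and pi_s1,
            # using the helper sketches above; omitted here for brevity.
            pass
        if it >= n_burn:                       # keep post-burn-in samples only
            samples.append(Theta.copy())
    return np.mean(samples, axis=0)            # hat(Theta): posterior mean
```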
4. Reconstruct the hyperspectral data.
Obtain the reconstructed hyperspectral data from the linear mixing model (1): $\hat{X}=\Psi\hat{\Theta}W$.
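Finally, step 4 in the toy setting: invert the orthonormal wavelet analysis and remix with $W$. `Theta_hat` below stands in for the Gibbs output (here the exact coefficients, so the error is essentially zero).

```python
import numpy as np

# Requires PsiT, H, W and X from the earlier sketches.
Theta_hat = PsiT @ H                            # placeholder for the Gibbs output
H_hat = PsiT.T @ Theta_hat                      # Psi is orthonormal: Psi = PsiT^T
X_hat = H_hat @ W                               # hat(X) = Psi hat(Theta) W, eq. (1)
print(np.linalg.norm(X_hat - X) / np.linalg.norm(X))   # normalized error ~ 0
```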

Claims (1)

1. A Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior, characterized by comprising the following steps:
Step 1: for hyperspectral data $X=[x_1,\dots,x_{n_b}]\in\mathbb{R}^{n_p\times n_b}$, $n_p$ denotes the total number of pixels in the hyperspectral data and $n_b$ the number of bands; each $x_i$, $i=1,\dots,n_b$, is the image of one band unrolled row by row into a column vector; the $n_b$ reflectance values of a pixel across the bands constitute the spectrum of that pixel; the spectrum is described by a linear mixing model, which regards any mixed spectrum as a linear combination of all end members, the proportion of an end member in the mixed spectrum being called its abundance; $X$ is expressed as the product of the abundance matrix and the end-member matrix:

$$X = HW \qquad (1)$$

where $H=[h_1,\dots,h_{n_e}]\in\mathbb{R}^{n_p\times n_e}$ is the abundance matrix, $h_i$ gives the proportion of the $i$-th end member in the mixed spectrum of each pixel, $n_e$ denotes the number of end members, and $W=[w_1^T,\dots,w_{n_e}^T]^T\in\mathbb{R}^{n_e\times n_b}$ is the end-member matrix whose rows $w_i$ are the end members;

a discrete wavelet transform is applied to the abundances:

$$\Theta = \Psi^T H, \qquad Y = \Psi^T X = \Psi^T H W = \Theta W \qquad (2)$$

where $\Psi\in\mathbb{R}^{n_p\times n_p}$ is the wavelet basis, $\Theta=[\theta_1,\dots,\theta_{n_e}]$ holds the wavelet coefficients of the abundances, and $Y$ holds the wavelet coefficients of the whole data; each $\theta_i=\Psi^T h_i$ is structurally sparse;
Step 2: obtaining the compressed data, then reconstructing the abundance matrix from its wavelet coefficients, and finally reconstructing the whole data:
step 1, generating an $m\times n_p$ matrix from random variables drawn from a Gaussian distribution and normalizing each column to obtain the random sensing matrix $\Phi\in\mathbb{R}^{m\times n_p}$; sampling the wavelet-transformed original data with $\Phi$ to obtain the compressed data $F\in\mathbb{R}^{m\times n_b}$:

$$F = \Phi Y = \Phi\Psi^T X \qquad (3)$$

where $m$ is the length of the signal after a signal of length $n_p$ is compressed, typically $m \ll n_p$;
Step 2, for a scene of limited extent, ignoring the influence of environmental factors on the spectra and assuming a limited number of end members; using the ASTER spectral library, extracting a subset of spectra from the library as end members for the specific scene; finally determining a matrix $W$ containing $n_e$ end members;
step 3, estimating an abundance value matrix from the compressed data by adopting a Bayesian inference mode;
(1) from equations (2) and (3), obtaining by least squares the data $G\in\mathbb{R}^{m\times n_e}$ in which the wavelet coefficients of the abundance matrix are compressed independently:

$$G = F W^T (W W^T)^{-1} = \Phi\Theta + N \qquad (4)$$

where $N\in\mathbb{R}^{m\times n_e}$ is the noise term introduced by the least-squares step;
(2) writing equation (4) column-wise as $g_i=\Phi\theta_i+n_i$; assuming the noise $n_i$ follows a Gaussian distribution with mean 0 and covariance $\beta_i^{-1}I$, and that the noise vectors are independent and identically distributed, obtaining the likelihood of $\theta_i$, $i=1,2,\dots,n_e$:

$$p(g_i\mid\theta_i,\beta_i^{-1}) = (2\pi\beta_i^{-1})^{-m/2}\exp\!\left(-\frac{\beta_i}{2}\,\|g_i-\Phi\theta_i\|_2^2\right) \qquad (5)$$

$\beta_i$ is the precision of the noise vector and is given a gamma prior:

$$p(\beta_i\mid\kappa_0,\upsilon_0)=\mathrm{Gamma}(\kappa_0,\upsilon_0) \qquad (6)$$

where $\kappa_0=10^{-6}$ and $\upsilon_0=10^{-6}$ are the shape and scale parameters of the gamma distribution;
(3) establishing a structured sparse prior on each element of $\theta_i$; the structured sparsity is embodied in the tree structure of the wavelet coefficients: when a parent node is 0, its children are very likely to be 0, and vice versa:

$$p(\theta_{s,ji}\mid\beta_{s,i}^{-1}) = (1-\pi_{s,ji})\,\delta_0 + \pi_{s,ji}\,\mathcal{N}(0,\beta_{s,i}^{-1}) \qquad (7)$$

where $s=1,\dots,L$ indexes the scales of the wavelet transform, $\theta_{s,ji}$ is the $j$-th element of $\theta_i$ at scale $s$, and $\delta_0$ is the distribution concentrated entirely at 0; $\pi_{s,ji}$ is a mixing weight: as $\pi_{s,ji}\to 0$, $\theta_{s,ji}$ is 0, and as $\pi_{s,ji}\to 1$, $\theta_{s,ji}$ is drawn from the Gaussian; $\pi_{s,ji}$ is determined by the tree structure of the wavelet transform:

$$\pi_{s,ji} = \begin{cases} \pi^r & \text{if } s=1 \\ \pi_s^0 & \text{if } 2\le s\le L,\ \theta_{pa(s,ji)}=0 \\ \pi_s^1 & \text{if } 2\le s\le L,\ \theta_{pa(s,ji)}\ne 0 \end{cases} \qquad (8)$$

$\theta_{pa(s,ji)}$ is the parent node of $\theta_{s,ji}$: when $\theta_{pa(s,ji)}=0$, $\pi_{s,ji}$ is set so that $\theta_{s,ji}$ is 0 with high probability, and when $\theta_{pa(s,ji)}\ne 0$, $\pi_{s,ji}$ is set so that $\theta_{s,ji}$ is nonzero with high probability; $\beta_{s,i}$ is the precision of the Gaussian in the prior; the priors on $\pi^r$, $\pi_s^0$, $\pi_s^1$ and $\beta_{s,i}$ are:

$$\pi^r\sim\mathrm{Beta}(\epsilon_0^r,\gamma_0^r),\quad \pi_s^0\sim\mathrm{Beta}(\epsilon_0^s,\gamma_0^s),\quad \pi_s^1\sim\mathrm{Beta}(\epsilon_1^s,\gamma_1^s),\quad \beta_{s,i}\sim\mathrm{Gamma}(\kappa_1,\upsilon_1) \qquad (9)$$

where $\kappa_1=10^{-6}$, $\upsilon_1=10^{-6}$, $[\epsilon_0^r,\gamma_0^r]=[0.9,\,0.1]\times N_1$, $[\epsilon_0^s,\gamma_0^s]=[1/n_p,\,1/n_p]\times N_s$, and $N_s$, $s=1,\dots,L$, is the number of wavelet coefficients at scale $s$;
(4) sampling the wavelet coefficients $\theta_{s,ji}$; steps (2) and (3) provide the likelihood of $\theta_{s,ji}$, its structured sparse prior, and the prior distributions of the other unknown parameters; the wavelet coefficients $\Theta$ to be estimated are obtained by Bayesian inference:

$$p(\Theta\mid G) \propto \int p(G\mid\Theta,\beta)\,p(\Theta\mid\Omega)\,p(\Omega)\,p(\beta)\,d\Omega\,d\beta \qquad (10)$$
where $\Omega=\{\{\beta_{s,i}\}_{s=2:L,\,i=1:n_e},\ \pi^r,\ \{\pi_s^0,\pi_s^1\}_{s=2:L}\}$ and $\beta=[\beta_1,\beta_2,\dots,\beta_{n_e}]^T$; assuming all noise precisions and the elements of $\beta$ and $\Omega$ are independent and identically distributed, $p(\beta)$ and $p(\Omega)$ are as follows:

$$p(\beta)=\prod_{i=1}^{n_e}\mathrm{Gamma}(\kappa_0,\upsilon_0),\qquad p(\Omega)=\prod_{i=1}^{n_e}\prod_{s=1}^{L}\mathrm{Gamma}(\kappa_1,\upsilon_1)\times\mathrm{Beta}(\epsilon_0^r,\gamma_0^r)\times\prod_{s=2}^{L}\mathrm{Beta}(\epsilon_0^s,\gamma_0^s)\,\mathrm{Beta}(\epsilon_1^s,\gamma_1^s) \qquad (11)$$
adopting a Markov chain Monte Carlo method based on Gibbs sampling and approximating the posterior distribution of the variables to be estimated by drawing samples from the corresponding conditional distributions; the Gibbs sampler for $\theta_{s,ji}$ is:

$$p(\theta_{s,ji}\mid\sim) = (1-\pi_{s,ji})\,\delta_0 + \pi_{s,ji}\,\mathcal{N}(\mu_{s,ji},\beta_{s,ji}^{-1}),\qquad \beta_{s,ji} = \beta_{s,i} + \beta_i\,\Phi_j^T\Phi_j,\qquad \mu_{s,ji} = \beta_{s,ji}^{-1}\,\beta_i\,\Phi_j^T\Big(g_i - \sum_{k\ne j}\Phi_k\,\theta_{ki}\Big) \qquad (12)$$

where $\sim$ denotes the other parameters required in the conditional distribution and $\Phi_j$ is the $j$-th column of $\Phi$;
(5) sampling the noise precision $\beta_i$; its conditional distribution is the gamma distribution

$$p(\beta_i\mid\sim)=\mathrm{Gamma}(\kappa_{\beta_i},\upsilon_{\beta_i}),\qquad \kappa_{\beta_i}=\kappa_0+\frac{m}{2},\qquad \upsilon_{\beta_i}=\upsilon_0+\frac{1}{2}(g_i-\Phi\theta_i)^T(g_i-\Phi\theta_i) \qquad (13)$$
(6) sampling the hyper-parameters $\beta_{s,i}$, $\pi^r$, $\pi_{s,i}^0$, $\pi_{s,i}^1$ from the conditional distributions:

$$p(\beta_{s,i}\mid\sim)=\mathrm{Gamma}(\kappa_{\beta_{s,i}},\upsilon_{\beta_{s,i}}),\qquad \kappa_{\beta_{s,i}}=\kappa_1+\frac{1}{2}\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0),\qquad \upsilon_{\beta_{s,i}}=\upsilon_1+\frac{1}{2}\sum_{j=1}^{N_s}\theta_{s,ji}^2 \qquad (14)$$

$$p(\pi^r\mid\sim)=\mathrm{Beta}(\epsilon_{\pi^r},\gamma_{\pi^r}),\qquad \epsilon_{\pi^r}=\epsilon_0^r+\sum_{j=1}^{N_1}1(\theta_{1,ji}\ne 0),\qquad \gamma_{\pi^r}=\gamma_0^r+\sum_{j=1}^{N_1}1(\theta_{1,ji}=0) \qquad (15)$$

$$p(\pi_{s,i}^0\mid\sim)=\mathrm{Beta}(\epsilon_{\pi_{s,i}^0},\gamma_{\pi_{s,i}^0}),\qquad \epsilon_{\pi_{s,i}^0}=\epsilon_0^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0,\ \theta_{pa(s,ji)}=0),\qquad \gamma_{\pi_{s,i}^0}=\gamma_0^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}=0,\ \theta_{pa(s,ji)}=0) \qquad (16)$$

$$p(\pi_{s,i}^1\mid\sim)=\mathrm{Beta}(\epsilon_{\pi_{s,i}^1},\gamma_{\pi_{s,i}^1}),\qquad \epsilon_{\pi_{s,i}^1}=\epsilon_1^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}\ne 0,\ \theta_{pa(s,ji)}\ne 0),\qquad \gamma_{\pi_{s,i}^1}=\gamma_1^s+\sum_{j=1}^{N_s}1(\theta_{s,ji}=0,\ \theta_{pa(s,ji)}\ne 0) \qquad (17)$$

where $1(x)$ equals 1 when $x$ is true and 0 otherwise;
(7) cycling through steps (4), (5), and (6), sampling every wavelet coefficient in $\Theta$ in turn, until a convergence or termination condition is met, and obtaining the final estimate $\hat{\Theta}=\frac{1}{M_s}\sum_{t=1}^{M_s}\Theta^{(t)}$, where $M_s$ is the total number of collected samples and $\Theta^{(t)}$ is the result of each sampling pass; the termination condition used is the iteration count, with 400 burn-in iterations to stabilize the sampler and 200 sampling iterations;
step 4, obtaining the reconstructed hyperspectral data according to the linear mixed model (1)
CN201310713709.1A 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior Active CN103745487B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310713709.1A 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior

Publications (2)

Publication Number Publication Date
CN103745487A CN103745487A (en) 2014-04-23
CN103745487B true CN103745487B (en) 2016-07-06

Family

ID=50502502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310713709.1A Active CN103745487B (en) 2013-12-20 2013-12-20 Bayesian hyperspectral unmixing compressive sensing method based on structured sparse prior

Country Status (1)

Country Link
CN (1) CN103745487B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359049A * 2007-08-01 2009-02-04 北京师范大学 Remote sensing image fusion method
CN103024398A * 2013-01-15 2013-04-03 山东大学 Sparse matrix based compressed sensing processing method for hyperspectral remote sensing images
CN103247034A * 2013-05-08 2013-08-14 中国科学院光电研究院 Sparse-spectrum-dictionary hyperspectral image reconstruction method by using compressed sensing
CN103400341A * 2013-07-03 2013-11-20 西安电子科技大学 Method for recovering hyperspectral data by combining space and spectral domains based on compressive sensing


