CN104866905B - Learning method of a nonparametric sparse tensor dictionary based on the beta process - Google Patents

Learning method of a nonparametric sparse tensor dictionary based on the beta process

Info

Publication number
CN104866905B
Authority
CN
China
Prior art keywords
dictionary
tensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510204653.6A
Other languages
Chinese (zh)
Other versions
CN104866905A (en)
Inventor
孙艳丰
句福娇
胡永利
尹宝才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201510204653.6A priority Critical patent/CN104866905B/en
Publication of CN104866905A publication Critical patent/CN104866905A/en
Application granted granted Critical
Publication of CN104866905B publication Critical patent/CN104866905B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Complex Calculations (AREA)

Abstract

The invention discloses a learning method of a nonparametric sparse tensor dictionary based on the beta process. The method can not only learn a sparse dictionary but also learn the variance of the error in the sparse representation, and the tensor dictionary learning can exploit the spatial structure information of the original high-dimensional tensor data. The method comprises the steps of: (1) one-dimensional dictionary learning with the beta process; (2) tensor dictionary learning with the beta process; (3) solving the posterior distribution of all variables; (4) sampling with the Gibbs method.

Description

Learning method of nonparametric sparse tensor dictionary based on beta process
Technical Field
The invention belongs to the technical field of sparse coding, and particularly relates to a learning method of a nonparametric sparse tensor dictionary based on a beta process.
Background
Sparse representation, or sparse coding, expresses a signal x approximately as a linear combination of a few atoms of an overcomplete dictionary D = [d_1, d_2, ..., d_M]. Over the past decades, sparse representation has become a very popular tool in image denoising, image super-resolution reconstruction, classification, face recognition and other applications. Mathematically, sparse representation approximates the signal x by the product of the dictionary D and a sparse coefficient vector α, i.e. it solves the minimization problem of the reconstruction error ||x − Dα||_2.
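A minimal numerical sketch of this formulation (dimensions, sparsity pattern and noise level are illustrative choices, not values from the patent): it builds an overcomplete dictionary D, a sparse coefficient vector α, and evaluates the reconstruction error ||x − Dα||_2.

```python
import numpy as np

rng = np.random.default_rng(0)

P, M = 64, 256                           # signal dimension and number of dictionary atoms
D = rng.standard_normal((P, M))
D /= np.linalg.norm(D, axis=0)           # normalize the dictionary atoms (columns)

alpha = np.zeros(M)
alpha[[3, 57, 140]] = [1.5, -0.8, 0.4]   # sparse coefficient vector with 3 nonzero entries

x = D @ alpha + 0.01 * rng.standard_normal(P)   # observed signal x ~ D @ alpha + noise

print(np.linalg.norm(x - D @ alpha))     # reconstruction error ||x - D@alpha||_2, small here
```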
Finding a dictionary that makes the signal representation as sparse as possible is a key issue in sparse representation. In the MOD (Method of Optimal Directions) method the optimal dictionary D is obtained by computing the pseudo-inverse of the sparse coefficient matrix. Lee et al. transform the dictionary learning problem into a least-squares problem and then into a Lagrangian dual problem for solution. Aharon et al. propose the classical K-SVD algorithm to learn an overcomplete sparse dictionary. Probabilistic models for dictionary learning date back to 2003. In the probabilistic setting, Paisley and Carin learn a sparse dictionary with a nonparametric Bayesian method based on the beta process. This dictionary learning method has been applied to many problems in image processing. Owing to the introduction of the beta process, the importance of each dictionary atom can be inferred by the nonparametric Bayesian method.
In practical applications the noise level is usually unknown, so it is unreasonable to fix the noise variance of the sparse representation model in advance. On the other hand, the sample set itself may contain errors, so the confidence that can be placed in it is also unknown. Nonparametric Bayesian methods are suitable for both situations and are used more and more widely in practice, for example in probabilistic matrix factorization (PMF).
In image processing, dictionary learning methods are usually applied after converting 2D data into vector form. However, when an image is turned into a vector, the structural information of the original image is destroyed and the relationships among neighbouring pixels cannot be exploited. At the same time, the increase of the sample dimension means that more sample data are needed to guarantee the accuracy of the algorithm. Therefore, researchers are increasingly inclined to study dictionary learning methods for 2D or multidimensional data.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to provide a learning method of a nonparametric sparse tensor dictionary based on the beta process which can not only learn a sparse dictionary but also learn the variance of the error in the sparse representation, and in which tensor dictionary learning on high-dimensional tensor data can exploit the spatial structure information of the original data.
The technical solution of the invention is as follows: the non-parametric sparse tensor dictionary learning method based on the beta process comprises the following steps:
(1) learning a one-dimensional dictionary in a beta process;
(2) tensor dictionary learning in a beta process;
(3) solving the posterior distribution of all variables;
(4) sampling using the Gibbs method.
The invention extends dictionary learning under the one-dimensional beta process to dictionary learning of high-order tensors, then solves the posterior distribution of all variables and samples them with the Gibbs method, so that not only a sparse dictionary but also the variance of the error in the sparse representation can be learned, and tensor dictionary learning on high-dimensional tensor data can exploit the spatial structure information of the original data.
Detailed Description
The non-parametric sparse tensor dictionary learning method based on the beta process comprises the following steps:
(1) learning a one-dimensional dictionary in a beta process;
(2) tensor dictionary learning in a beta process;
(3) solving the posterior distribution of all variables;
(4) sampling using the Gibbs method.
The invention extends dictionary learning under the one-dimensional beta process to dictionary learning of high-order tensors, then solves the posterior distribution of all variables and samples them with the Gibbs method, so that not only a sparse dictionary but also the variance of the error in the sparse representation can be learned, and tensor dictionary learning on high-dimensional tensor data can exploit the spatial structure information of the original data.
Preferably, the one-dimensional dictionary learning model in the step (1) is formula (5):
x=Dα+ε (5)
where x is a one-dimensional signal and D is a dictionary; the coefficient α is controlled by two parameters, α = z ∘ w, where ∘ denotes the Hadamard (element-wise) product and z is a binary variable, each component of which represents whether the value of the corresponding position of the coefficient α is 0; z is determined by equation (6):
zk~Bernoulli(πk),πk~Beta(a/J,b(J-1)/J) (6)
where z_k denotes the kth component of z, and a and b are two parameters of the beta distribution.
Preferably, in the step (2), given training samples {X_i}, i = 1, …, M, each sample X_i being an N-order tensor, the core tensor C_i corresponding to each sample is divided into two parts, C_i = B_i ∘ Z_i.
For each sample, formula (1) is written as
X_i = (B_i ∘ Z_i) ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N + ε_i        (1)
where the variable Z_i is determined by the beta process and D_n denotes the dictionary on mode n; each component of B_i is denoted b_i^(k_1,…,k_N), and the components are assumed to be independent and to follow the same Gaussian distribution with mean 0 and precision γ_b; the components of the error term ε_i independently follow the same Gaussian distribution with mean 0 and precision γ_e; the prior distribution of every dictionary atom (column) is set to a Gaussian distribution with mean 0 and identity covariance matrix; the hierarchical structure of tensor dictionary learning is then expressed as formula (2),
where d_k^n denotes the kth atom of D_n and N(·) denotes a Gaussian distribution over a tensor; assuming the dictionaries on all modes have the same size, B_i and Z_i are of size K × … × K; π_(k_1,k_2,…,k_N) denotes the value of the corresponding element of Π, i.e. the probability that the element z_i^(k_1,…,k_N) equals 1; the distributions assumed in the above hierarchical model all belong to a set of conjugate exponential distributions.
the likelihood function of equation (2) is equation (3):
wherein D ═ { D ═ D1,D2,…,DN},
Preferably, the step (3) comprises the following substeps:
(3.1) sampling each atom of the mode-n dictionary according to equation (4), where X̃_(n)^i and S̃_(n)^i are respectively the mode-n matrix unfolding forms of the sample and of the sparse coefficient, and s̃_k^i is the kth row of S̃_(n)^i; the posterior distribution of the atom d_k^n is then the Gaussian distribution expressed as formula (7), whose mean and covariance are given by the corresponding expressions;
(3.2) sampling each element of Z_i and B_i according to formula (11): writing (1) in vector form,
i.e. x_i = D(b_i ∘ z_i),
where x_i = vec(X_i), b_i = vec(B_i), z_i = vec(Z_i), and D = D_N ⊗ D_{N−1} ⊗ ⋯ ⊗ D_1 is the Kronecker product of the mode dictionaries;
in this way, each element z_ik of z_i is sampled from the Bernoulli distribution given in formula (11), where p_0 = 1 − π_k and p_1 is the corresponding probability for z_ik = 1;
in addition, each element b_ik of b_i is sampled from a Gaussian distribution whose mean and covariance are given by the corresponding posterior expressions;
(3.3) sampling each element of Π according to equation (8), where M represents the number of samples and K represents the number of columns of the dictionary;
(3.4) sampling γ_b according to equation (9), where M represents the number of samples and K represents the number of columns of the dictionary;
(3.5) sampling γ_e according to equation (10), where M represents the number of samples, K represents the number of columns of the dictionary, and D_n represents the dictionary in the nth mode direction.
Preferably, the method further comprises step (5): optimizing formula (12) using the K-SVD algorithm,
where X̃_(n)^i denotes the residual of the mode-n unfolding of sample X_i after removing the contribution of all atoms except d_k^n, and c_nk^i is the corresponding coefficient row.
The method is described in more detail below.
1 One-dimensional dictionary learning of the beta process
Consider a dictionary learning model:
x=Dα+ε
where x is a one-dimensional signal and D is a dictionary. Inspired by probabilistic factor analysis, Paisley et al. proposed a probabilistic model of sparse dictionary learning controlled by the beta process, in which it is assumed that the coefficient α can be controlled by two parameters, α = z ∘ w:
zk~Bernoulli(πk),πk~Beta(a/J,b(J-1)/J)
where z_k denotes the kth component of z, and a and b are two parameters of the beta distribution. When J → ∞, the above process is called the beta process.
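A minimal sketch of drawing the sparsity pattern from this beta-Bernoulli construction (the number of atoms J and the hyperparameters a, b below are illustrative, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

J = 256            # number of dictionary atoms (illustrative)
a, b = 1.0, 1.0    # beta-distribution hyperparameters (illustrative)

# pi_k ~ Beta(a/J, b(J-1)/J): per-atom usage probabilities
pi = rng.beta(a / J, b * (J - 1) / J, size=J)

# z_k ~ Bernoulli(pi_k): binary indicators selecting the active atoms
z = rng.binomial(1, pi)

# w_k ~ N(0, 1): weights; the sparse coefficient is the Hadamard product alpha = z o w
w = rng.standard_normal(J)
alpha = z * w

print("active atoms:", int(z.sum()))
```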
2 Tensor dictionary learning of the beta process
Given training samples {X_i}, i = 1, …, M, each sample X_i is an N-order tensor. Assume that the core tensor C_i corresponding to each sample can be divided into two parts, C_i = B_i ∘ Z_i. That is, each sample can be written as:
X_i = (B_i ∘ Z_i) ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N + ε_i        (1)
where each component of the variable Z_i is determined by the beta process and D_n denotes the dictionary on mode n. For convenience, each component of B_i is written b_i^(k_1,…,k_N); the components are assumed to be independent and to follow the same Gaussian distribution with mean 0 and precision γ_b. The components of the error term ε_i independently follow the same Gaussian distribution with mean 0 and precision γ_e. The prior distribution of every dictionary atom (column) is set to a Gaussian distribution with mean 0 and identity covariance matrix. The hierarchy of tensor dictionary learning can then be expressed as:
X_i = (B_i ∘ Z_i) ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N + ε_i
d_k^n ~ N(0, I), n = 1, …, N, k = 1, …, K
z_i^(k_1,…,k_N) ~ Bernoulli(π_(k_1,…,k_N)), π_(k_1,…,k_N) ~ Beta(a_0/K^N, b_0(K^N − 1)/K^N)
b_i^(k_1,…,k_N) ~ N(0, γ_b^(−1))
ε_i ~ N(0, γ_e^(−1) I)
γ_b ~ Gamma(c_0, d_0), γ_e ~ Gamma(e_0, f_0)        (2)
Here d_k^n denotes the kth atom of D_n and N(·) denotes a Gaussian distribution over a tensor. In the following calculations we assume that the dictionaries on all modes have the same size, so that B_i and Z_i are of size K × … × K. π_(k_1,k_2,…,k_N) denotes the value of the corresponding element of Π, i.e. the probability that z_i^(k_1,…,k_N) equals 1. The distributions assumed in the above hierarchical model all belong to a set of conjugate exponential distributions, so Gibbs sampling can be used to infer the parameters of the model.
The likelihood function of the above hierarchical model, formula (3), is the product of the Gaussian data term ∏_{i=1}^M N(X_i; (B_i ∘ Z_i) ×_1 D_1 ⋯ ×_N D_N, γ_e^(−1) I) and the prior terms of Z, B, Π, γ_b and γ_e listed in (2), where D = {D_1, D_2, …, D_N}, B = {B_i} and Z = {Z_i}. In this way the posterior distribution of every variable can be derived in turn with Gibbs sampling.
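A minimal sketch of model (1) for a third-order sample (N = 3), with illustrative sizes, sparsity level and noise scale: it draws mode dictionaries D_1, D_2, D_3 and a sparse core B ∘ Z, and forms the sample by successive mode-n products.

```python
import numpy as np

rng = np.random.default_rng(0)

def mode_n_product(T, M, n):
    """Mode-n product T x_n M: multiply mode n of tensor T by the matrix M."""
    Tn = np.moveaxis(T, n, 0)                  # bring mode n to the front
    out = np.tensordot(M, Tn, axes=(1, 0))     # contract M's columns with that mode
    return np.moveaxis(out, 0, n)              # put the (new) mode back in place

N = 3
P, K = (8, 8, 8), 4                            # sample size per mode and dictionary columns
Ds = [rng.standard_normal((P[n], K)) for n in range(N)]    # mode dictionaries D_1..D_N

B = rng.standard_normal((K,) * N)              # Gaussian weights
Z = rng.binomial(1, 0.1, size=(K,) * N)        # binary indicators from the beta process
C = B * Z                                      # sparse core tensor C = B o Z

X = C
for n in range(N):                             # X = C x_1 D_1 x_2 D_2 x_3 D_3
    X = mode_n_product(X, Ds[n], n)
X = X + 0.01 * rng.standard_normal(X.shape)    # additive Gaussian error term epsilon

print(X.shape)                                 # (8, 8, 8)
```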
3 Gibbs sampling
1) Sampling each atom of the mode-n dictionary: to compute the posterior distribution of D_n given all other variables, we collect all the terms related to D_n in the likelihood of model (1). First, define X_(n)^i and C_(n)^i as the mode-n matrix unfolding forms of the sample X_i and of the sparse coefficient tensor C_i = B_i ∘ Z_i respectively, and let S̃_(n)^i = C_(n)^i (D_N ⊗ ⋯ ⊗ D_{n+1} ⊗ D_{n−1} ⊗ ⋯ ⊗ D_1)^T, so that X_(n)^i ≈ D_n S̃_(n)^i; s̃_k^i denotes the kth row of S̃_(n)^i.
Therefore, collecting the terms involving d_k^n and defining the partial residual X̃_(n)^{i,k} = X_(n)^i − Σ_{j≠k} d_j^n s̃_j^i, the posterior distribution of d_k^n can be expressed as a Gaussian,
d_k^n | − ~ N(μ_k^n, Σ_k^n),
where
Σ_k^n = (1 + γ_e Σ_{i=1}^M ‖s̃_k^i‖_2²)^(−1) I,  μ_k^n = γ_e Σ_k^n Σ_{i=1}^M X̃_(n)^{i,k} (s̃_k^i)^T.
2) Sampling each element of Z_i and B_i: first, the tensor dictionary representation of formula (1) is rewritten as a vector operation, x_i = D(b_i ∘ z_i), where x_i = vec(X_i), b_i = vec(B_i) and z_i = vec(Z_i) are vectors and D = D_N ⊗ D_{N−1} ⊗ ⋯ ⊗ D_1 is the Kronecker product of the mode dictionaries.
order:
z is obtained by calculationikObeying bernoulli distribution:
where p_0 = 1 − π_k and p_1 is the corresponding unnormalized probability for z_ik = 1, computed from π_k, γ_e, d_k, b_ik and x_i^{−k}; here π = vec(Π) and π_k denotes the kth element of π.
It can likewise be calculated that b_ik obeys a Gaussian distribution,
b_ik | − ~ N(μ_ik, σ_ik²),
where the mean and covariance are
σ_ik² = (γ_b + γ_e z_ik d_k^T d_k)^(−1),  μ_ik = γ_e σ_ik² z_ik d_k^T x_i^{−k}.
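A small numerical check (illustrative and self-contained) of the vectorized form used in this step: the tensor built by mode-n products equals the Kronecker-product dictionary applied to the vectorized sparse core, vec(X) = (D_3 ⊗ D_2 ⊗ D_1) vec(C) with column-major vectorization.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(1)

def mode_n_product(T, M, n):
    Tn = np.moveaxis(T, n, 0)
    return np.moveaxis(np.tensordot(M, Tn, axes=(1, 0)), 0, n)

P, K = (5, 6, 7), 3
Ds = [rng.standard_normal((P[n], K)) for n in range(3)]               # D_1, D_2, D_3
C = rng.standard_normal((K, K, K)) * rng.binomial(1, 0.2, (K, K, K))  # sparse core B o Z

# Tensor form: X = C x_1 D_1 x_2 D_2 x_3 D_3
X = C.copy()
for n, Dn in enumerate(Ds):
    X = mode_n_product(X, Dn, n)

# Vector form: vec(X) = (D_3 kron D_2 kron D_1) vec(C), column-major (Fortran) vec
D_kron = reduce(np.kron, reversed(Ds))
x_vec = D_kron @ C.flatten(order="F")

print(np.allclose(X.flatten(order="F"), x_vec))   # True
```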
3) Sampling each element of Π: collecting the terms in (3) that involve π_k, one obtains
that π_k obeys a beta distribution:
π_k | − ~ Beta(a_0/K^N + Σ_{i=1}^M z_ik, b_0(K^N − 1)/K^N + M − Σ_{i=1}^M z_ik),
where M is the number of samples and K the number of columns of each mode dictionary.
4) Sampling γ_b: collecting the terms in (3) that involve γ_b, one obtains by calculation
γ_b | − ~ Gamma(c_0 + M K^N/2, d_0 + (1/2) Σ_{i=1}^M ‖b_i‖_2²).
5) Sampling γ_e: collecting in the same way the terms in (3) that involve γ_e, one obtains
γ_e | − ~ Gamma(e_0 + (M/2) ∏_{n=1}^N P_n, f_0 + (1/2) Σ_{i=1}^M ‖x_i − D(b_i ∘ z_i)‖_2²),
where P_n is the size of mode n of each sample.
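A compact sketch of the conjugate updates for π and γ_e inside one Gibbs iteration. The Beta/Gamma forms follow the standard beta-process construction summarized above; the sizes, hyperparameters and placeholder residuals are illustrative, not values prescribed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K_total = 500, 64        # number of samples and total number of core entries (illustrative)
a0, b0 = 1.0, 1.0           # beta hyperparameters (illustrative)
e0, f0 = 1e-4, 1e-4         # gamma hyperparameters of the noise precision (illustrative)
n_entries = 4 * 4 * 4       # number of entries per sample block (illustrative)

Z = rng.binomial(1, 0.05, size=(M, K_total))   # current binary indicators z_ik
residual_sq = rng.chisquare(df=10, size=M)     # ||x_i - D(b_i o z_i)||^2 placeholders

# pi_k | Z ~ Beta(a0/K + sum_i z_ik, b0(K-1)/K + M - sum_i z_ik)
counts = Z.sum(axis=0)
pi = rng.beta(a0 / K_total + counts,
              b0 * (K_total - 1) / K_total + M - counts)

# gamma_e | residuals ~ Gamma(e0 + M*P/2, f0 + 0.5 * sum_i ||residual_i||^2)
gamma_e = rng.gamma(e0 + 0.5 * M * n_entries,
                    1.0 / (f0 + 0.5 * residual_sq.sum()))

print(pi.mean(), gamma_e)
```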
4 Improved dictionary learning algorithm
As can be seen from equation (4), the dictionary D_n obeys a Gaussian distribution, and the logarithm of its posterior distribution is:
where C is a constant. When the parameter γ_e is fixed, maximizing the above logarithmic function is equivalent to minimizing the following optimization function:
E = Σ_{i=1}^M ‖X̃_(n)^i − d_k^n · c_nk^i‖_F² + λ‖d_k^n‖_2²        (12)
where X̃_(n)^i denotes the residual of the mode-n unfolding of sample X_i after removing the contribution of all atoms except d_k^n, and c_nk^i is the corresponding coefficient row. The optimization problem can therefore be solved with the K-SVD algorithm; the resulting method is called the improved beta-process tensor dictionary learning method.
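A minimal sketch of the K-SVD-style single-atom update to which this optimization reduces (the λ regularizer is dropped here and the variable names are illustrative, so this is a simplified reading rather than the patent's exact procedure): the atom and its coefficient row are refit with a rank-one SVD of the residual restricted to the samples that actually use the atom.

```python
import numpy as np

def ksvd_atom_update(X, D, S, k):
    """Refit atom k of D and row k of S so that D @ S better approximates X."""
    users = np.nonzero(S[k, :])[0]             # samples whose sparse code uses atom k
    if users.size == 0:
        return D, S
    # Residual of those samples with the contribution of every atom except k removed
    E = X[:, users] - D @ S[:, users] + np.outer(D[:, k], S[k, users])
    U, sigma, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                          # new atom: leading left singular vector
    S[k, users] = sigma[0] * Vt[0, :]          # new coefficient row for those samples
    return D, S

rng = np.random.default_rng(0)
P, K, M = 64, 32, 200
D = rng.standard_normal((P, K))
D /= np.linalg.norm(D, axis=0)
S = rng.standard_normal((K, M)) * rng.binomial(1, 0.1, (K, M))
X = D @ S + 0.01 * rng.standard_normal((P, M))

for k in range(K):
    D, S = ksvd_atom_update(X, D, S, k)
print(np.linalg.norm(X - D @ S))               # residual shrinks after the sweep
```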
The experimental effect of the method is explained below.
The nonparametric beta-process tensor dictionary learning algorithm is applied to video reconstruction and image denoising. The experimental software environment is Matlab R2012b; the hardware environment is an Intel Core 2 Duo T6400 CPU (2.00 GHz) with 12 GB RAM.
1 Reconstruction of video sequences
Video reconstruction experiments were performed on the DynTex++ database, which contains video sequences of 345 different scenes. Two typical video sequences, 'spring' and 'river', were selected. The 'spring' sequence contains irregular and discontinuous motion, while 'river' exhibits smooth and regular motion as a whole. Each dynamic sequence contains 50 frames, each frame of size 150 × 150, and all sequences are colour video. In the experiment the three channels R, G and B are reconstructed separately.
Sample blocks are randomly extracted from a given video sequence, each sample block being a third-order tensor. The objective is to learn dictionaries D_1, D_2 and D_3, where D_1 and D_2 are the dictionaries in the row and column directions respectively and D_3 is the dictionary in the time direction. The three dictionaries are initialized randomly, with the parameters set to a_0 = b_0 = 1 and c_0 = d_0 = e_0 = f_0 = 1e-4. Each component of Π is initialized to 1. After the dictionaries are learned, the sparse coefficients are solved with the orthogonal matching pursuit (OMP) algorithm, and the three channels R, G and B of the video are reconstructed. The reconstruction result is evaluated with the average reconstruction error of the video sequence, defined as follows:
where the two arguments of the error are the original and the reconstructed video sequences respectively, N denotes the number of frames of the video sequence, and M is the number of pixels in each frame image.
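A small sketch of such an average-error evaluation. The exact error formula is not reproduced in the text above, so the per-pixel mean absolute difference used here is only one plausible reading, stated as an assumption:

```python
import numpy as np

def average_reconstruction_error(original, reconstructed):
    """Mean per-pixel absolute error over N frames of M pixels each (assumed definition)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    N = original.shape[0]          # number of frames
    M = original[0].size           # pixels per frame
    return np.abs(original - reconstructed).sum() / (N * M)

rng = np.random.default_rng(0)
video = rng.random((50, 150, 150))                        # 50 frames of 150 x 150, as in the experiment
reconstruction = video + 0.001 * rng.standard_normal(video.shape)
print(average_reconstruction_error(video, reconstruction))
```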
Table 1 shows the reconstruction results for sample blocks of different sizes (errors ×10^−3). It can be seen that the smaller the sample block, the smaller the reconstruction error; a sample block size of 4 × 4 × 4 was therefore chosen in the following tests.
4×4×4 5×5×5 6×6×6 7×7×7 8×8×8
river 4.27 4.31 6.21 6.80 8.72
spring 2.93 3.46 4.20 4.91 6.82
TABLE 1
The original video sequence can be well reconstructed by using the proposed tensor dictionary learning method.
2 Image denoising
The second experiment illustrates the denoising effect of the proposed nonparametric tensor dictionary learning algorithm. Gaussian noise with standard deviations of 2, 5 and 10 is added to 256 × 256 images, and the denoising effect is examined. This experiment can be seen as the special case of second-order tensor dictionary learning obtained by setting N = 2 in model (2). For convenience, the nonparametric two-dimensional beta-process dictionary method is denoted 2D-BP, the improved two-dimensional beta-process dictionary learning method is denoted 2D-IPBP, and the one-dimensional beta-process dictionary learning algorithm is denoted 1D-BP.
In the training phase, the dictionary is learned from the noisy images. The dictionaries in the two-dimensional dictionary learning model have a separable form, i.e. D_1 and D_2 act in the row and column directions respectively. The dictionary in the 1D-BP algorithm is unstructured and its size is set to 64 × 256, while the two dictionaries D_1 and D_2 in two-dimensional dictionary learning are both of size 8 × 16. All training sample blocks are of size 8 × 8. After the dictionary is obtained, the sparse coefficients are solved with the orthogonal matching pursuit algorithm. PSNR is used to measure the denoising effect.
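A small sketch of the PSNR measure used to report these results (assuming 8-bit images with peak value 255, which is the usual convention but is not stated explicitly above):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and an estimate."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256)).astype(float)
noisy = clean + rng.normal(0, 5, size=clean.shape)   # sigma = 5, as in the experiment
print(psnr(clean, noisy))                            # about 34 dB, matching the 34.15 quoted below
```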
In addition to the proposed 2D-IPBP algorithm, four other methods are compared: (i) two-dimensional synthesis-model-based dictionary learning (2D-SSM); (ii) separable dictionary learning with sparse coefficients solved by the fast iterative shrinkage-thresholding algorithm (FISTA + separable SeDiL); (iii) FISTA + unstructured SeDiL; (iv) the K-SVD algorithm. The first two algorithms are two-dimensional separable-structure dictionary learning methods, and the last two are one-dimensional dictionary learning methods. Table 2 shows the denoising results for noise standard deviations of 5 and 10. The proposed method obtains results comparable to or better than the other four methods. For example, when the noise standard deviation is 5, i.e. the PSNR of the noisy image is 34.15, the PSNR of the 'peppers' image denoised with the present algorithm is 38.31, about 1 dB higher than that of the separable dictionary learning (SeDiL) algorithm. This indicates that the method is effective.
TABLE 2
The following comparison is mainly with the 1D-BP algorithm. Table 3 lists the denoising results on the three noisy images 'house', 'peppers' and 'camera'. It can be seen from the table that the denoising time of the two-dimensional nonparametric dictionary method is much shorter than that of the one-dimensional dictionary, and that the lower the noise level, the worse the denoising result of the 1D-BP method. When the PSNR of the noisy image is 42.11, the PSNR of the image denoised by the 1D-BP algorithm improves only slightly, and in some cases not at all. The proposed method is superior to the one-dimensional nonparametric dictionary learning method in both denoising time and denoising quality.
TABLE 3
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still belong to the protection scope of the technical solution of the present invention.

Claims (3)

1. A learning method of a nonparametric sparse tensor dictionary based on a beta process for video reconstruction or image denoising, characterized in that: when the method is applied to video reconstruction, the training samples are sample blocks randomly extracted from a given video sequence, and when the method is applied to image denoising, the training samples are sample blocks randomly extracted from a given noisy image; the method comprises the following steps:
(1) learning a one-dimensional dictionary in a beta process;
(2) tensor dictionary learning in a beta process;
(3) solving the posterior distribution of all variables;
(4) sampling by utilizing a Gibbs method;
the one-dimensional dictionary learning model in the step (1) is a formula (5):
x=Dα+ε (5)
where x is a one-dimensional signal and D is a dictionary; the coefficient α is controlled by two parameters, α = z ∘ w, where ∘ denotes the Hadamard product and z is a binary variable, each component of which represents whether the value of the corresponding position of the coefficient α is 0; z is determined by equation (6):
zk~Bernoulli(πk),πk~Beta(a/J,b(J-1)/J) (6)
where z_k denotes the kth component of z, and a and b are two parameters of the beta distribution;
in the step (2)
Given training sampleEach sample is an N-order tensorEach sample is takenThe corresponding nuclear tensor is divided into two parts
for each sample, formula (1) is written as
X_i = (B_i ∘ Z_i) ×_1 D_1 ×_2 D_2 ⋯ ×_N D_N + ε_i        (1)
where the variable Z_i is determined by the beta process and D_n denotes the dictionary on mode n; each component of B_i is denoted b_i^(k_1,…,k_N), and the components are assumed to be independent and to follow the same Gaussian distribution with mean 0 and precision γ_b; the components of the error term ε_i independently follow the same Gaussian distribution with mean 0 and precision γ_e; the prior distribution of all dictionary atoms, i.e. dictionary columns, is set to a Gaussian distribution with mean 0 and identity covariance matrix; the hierarchical structure of tensor dictionary learning is then expressed as formula (2),
where d_k^n denotes the kth atom of D_n and N(·) denotes a Gaussian distribution over a tensor; assuming the dictionaries on all modes have the same size, B_i and Z_i are of size K × … × K; π_(k_1,k_2,…,k_N) denotes the value of the corresponding element of Π, i.e. the probability that z_i^(k_1,…,k_N) equals 1; the distributions assumed in the above hierarchical model all belong to a set of conjugate exponential distributions;
the likelihood function of equation (2) is equation (3):
wherein D ═ { D ═ D1,D2,…,DN},
2. The method for learning a non-parametric sparse tensor dictionary based on beta process for video reconstruction or image denoising as recited in claim 1, wherein: the step (3) comprises the following sub-steps:
(3.1) sampling each atom of the mode-n dictionary according to equation (4), where d_k^n denotes the kth column of the mode-n dictionary, X̃_(n)^i and S̃_(n)^i are respectively the mode-n matrix unfolding forms of the sample and of the sparse coefficient, and s̃_k^i is the kth row of S̃_(n)^i; the posterior distribution of d_k^n is then the Gaussian distribution expressed as formula (7), whose mean and covariance are given by the corresponding expressions;
(3.2) sampling each element of Z_i and B_i according to formula (11): writing (1) in vector form, x_i = D(b_i ∘ z_i), where x_i, b_i and z_i are vectors and D is the Kronecker product of the mode dictionaries;
In this way, each element z_ik is sampled as:
p(z_ik | x_i, D, b_i, π_k, γ_e) = Bernoulli(p_1/(p_0 + p_1))        (11)
where p_0 = 1 − π_k and p_1 is the corresponding probability for z_ik = 1;
in addition, each element b_ik of b_i is sampled from a Gaussian distribution whose mean and covariance are given by the corresponding expressions;
(3.3) sampling each element of Π according to equation (8), where M represents the number of samples and K represents the number of columns of the dictionary;
(3.4) sampling γ_b according to equation (9), where M represents the number of samples and K represents the number of columns of the dictionary;
(3.5) sampling γ_e according to equation (10), where M represents the number of samples, K represents the number of columns of the dictionary, and D_n represents the dictionary in the nth mode direction.
3. The learning method of the nonparametric sparse tensor dictionary based on the beta process as recited in claim 2, wherein the method further comprises step (5): optimizing formula (12) using the K-SVD algorithm,
E = Σ_{i=1}^M ‖X̃_(n)^i − d_k^n · c_nk^i‖_F² + λ‖d_k^n‖_2²        (12)
where X̃_(n)^i denotes the residual of the mode-n unfolding of sample X_i after removing the contribution of all atoms except d_k^n, and c_nk^i is the corresponding coefficient row.
CN201510204653.6A 2015-04-27 2015-04-27 A kind of learning method of the sparse tensor dictionary of nonparametric based on beta processes Active CN104866905B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510204653.6A CN104866905B (en) 2015-04-27 2015-04-27 A kind of learning method of the sparse tensor dictionary of nonparametric based on beta processes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510204653.6A CN104866905B (en) 2015-04-27 2015-04-27 A kind of learning method of the sparse tensor dictionary of nonparametric based on beta processes

Publications (2)

Publication Number Publication Date
CN104866905A CN104866905A (en) 2015-08-26
CN104866905B true CN104866905B (en) 2018-01-16

Family

ID=53912725

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510204653.6A Active CN104866905B (en) 2015-04-27 2015-04-27 A kind of learning method of the sparse tensor dictionary of nonparametric based on beta processes

Country Status (1)

Country Link
CN (1) CN104866905B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106447632B (en) * 2016-09-23 2019-04-02 西北工业大学 A kind of RAW image denoising method based on rarefaction representation
CN107561576B (en) * 2017-08-31 2023-10-20 中油奥博(成都)科技有限公司 Seismic signal recovery method based on dictionary learning regularized sparse representation
CN108280466B (en) * 2018-01-12 2021-10-29 西安电子科技大学 Polarization SAR (synthetic aperture radar) feature classification method based on weighted nuclear norm minimization
CN109712074A (en) * 2018-12-20 2019-05-03 黑龙江大学 The remote sensing images super-resolution reconstruction method of two-parameter beta combine processes dictionary
CN113989406B (en) * 2021-12-28 2022-04-01 成都理工大学 Tomography gamma scanning image reconstruction method based on sparse tensor dictionary learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8396310B1 (en) * 2009-09-30 2013-03-12 Rockwell Collins, Inc. Basis learning for sparse image representation and classification and low data rate compression
CN103077507B (en) * 2013-01-25 2015-06-17 西安电子科技大学 Beta algorithm-based multiscale SAR (Synthetic Aperture Radar) image denoising method

Also Published As

Publication number Publication date
CN104866905A (en) 2015-08-26


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant