CN112257739A - Sparse representation classification method based on disturbance compressed sensing - Google Patents


Info

Publication number
CN112257739A
Authority
CN
China
Prior art keywords
class
obtaining
sparse representation
disturbance
sparse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010890870.6A
Other languages
Chinese (zh)
Other versions
CN112257739B (en)
Inventor
曹坤
徐文波
崔宇鹏
许文俊
何新辉
田克冈
Current Assignee
Longwen Huafeng Beijing Technology Co ltd
Beijing University of Posts and Telecommunications
Original Assignee
Longwen Huafeng Beijing Technology Co ltd
Beijing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Longwen Huafeng Beijing Technology Co., Ltd. and Beijing University of Posts and Telecommunications
Priority to CN202010890870.6A
Publication of CN112257739A
Application granted
Publication of CN112257739B
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2136: Feature extraction based on sparsity criteria, e.g. with an overcomplete basis
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/28: Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a sparse representation classification method based on perturbed compressed sensing, which comprises the following steps: obtaining a sparse representation model, obtaining a perturbation dictionary corresponding to each class of test samples, solving an optimization problem to obtain sparse vectors, obtaining the residual corresponding to each class of test samples, and obtaining the class with the minimum residual. Most existing sparse representation methods assume that the training samples can linearly represent a test sample, but this assumption often fails in practical systems. To address this problem, sparse representation classification is modeled as a perturbed compressed sensing model, and a better dictionary matrix and sparse coefficient vector are obtained by solving a perturbed reconstruction problem, thereby improving classification accuracy.

Description

Sparse representation classification method based on disturbance compressed sensing
Technical Field
The invention relates to the technical field of compressed sensing, and in particular to a sparse representation classification method based on perturbed compressed sensing.
Background
The basic idea of Sparse Representation Classification (SRC) is to represent a test sample using the training samples while constraining the sparsity of the representation with the L0 pseudo-norm or its L1 convex relaxation. Sparse representation classification schemes are now widely applied in scenarios such as handwriting recognition and face recognition, and have been extended to machine learning techniques such as semi-supervised classification and dimensionality reduction. The mathematical model of the sparse representation classification method is
y = Dα
where y ∈ R^M is the test sample, D ∈ R^{M×N} is the matrix formed by the training samples, and α ∈ R^N is the sparse representation vector, i.e. α has at most K non-zero entries.
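As an illustration, the model and its dimensions can be sketched with synthetic data (all sizes below are assumed for the example, not taken from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 20, 50, 3                      # sample length, dictionary size, sparsity

D = rng.standard_normal((M, N))          # dictionary formed by N training samples
alpha = np.zeros(N)                      # sparse representation vector
support = rng.choice(N, size=K, replace=False)
alpha[support] = rng.standard_normal(K)  # at most K non-zero coefficients

y = D @ alpha                            # test sample y = D * alpha
```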
In a practical sparse representation classification system, the training and test samples are inevitably affected by various non-ideal factors, such as the Gaussian noise necessarily contained in the samples; lighting, dust and occlusion in image samples; and even slight distortion of the samples caused by hardware problems. If these factors are not considered when designing a sparse representation classification system, its accuracy in practical applications will decrease.
Disclosure of Invention
In order to solve the limitations and defects in the prior art, the invention provides a sparse representation classification method based on perturbation compressed sensing, which comprises the following steps:
obtaining a sparse representation model according to a test sample input to the system, where the sparse representation model is:
y = (D + E)α,  D̃ = D + E
where E is a perturbation matrix representing the error between the known matrix D and the ideal dictionary matrix D̃;
obtaining the perturbation dictionary corresponding to each class of test samples, computed as:
H_l: y_l = (D + E_l)α_l
where y_l is a class-l test sample, E_l is the perturbation dictionary for class-l test samples, and α_l is the sparse vector corresponding to class-l test samples;
solving the following optimization problem to obtain the sparse vector:
min_{α_l, E_l} ||y_l − (D + E_l)α_l||_2^2 + λ||E_l||_F^2  subject to ||α_l||_0 ≤ K
where D̃_l = D + E_l and λ is a regularization parameter;
obtaining the residual corresponding to each class of test samples, computed as:
r_j(y) = ||y − D̃_j α_j||_2, j = 0, 1, ..., L − 1
and obtaining the class with the minimum residual, label(y) = arg min_j ||r_j(y)||_2.
Specifically, the corresponding solving steps include:
Step 101: initialize r_0 = y, T = ∅, D̃ = D, k = 1.
Step 102: obtain the training sample with the highest correlation to the residual and add it to the candidate support set, T = T ∪ {i}.
Step 103: for each candidate class j, select all training samples T_j belonging to class j in the support set and the corresponding coefficients x_j, and calculate the residual corresponding to class j, r_j = y − D[T_j]x_j.
Step 104: obtain the class with the smallest residual, j* = arg min_j ||r_j||_2.
Step 105: obtain the correction parameter corresponding to the class with the smallest residual (the formula is shown only as an image in the original document).
Step 106: obtain the optimized dictionary of the class with the smallest residual, D̃ = D + E (the formula is shown only as an image in the original document).
Step 107: update the residual r_k.
Step 108: if k = K, go to step 109; otherwise set k = k + 1 and return to step 102.
Step 109: obtain the coefficients at the support-set positions, x[T] = D̃[T]† y, and set the remaining positions to zero, x[T^c] = 0.
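A minimal NumPy sketch of steps 101-109. The patent's correction-parameter formula for steps 105-106 appears only as an image, so the closed-form update E = rαᵀ/(λ + ||α||²) used below is an illustrative stand-in, not the patent's exact formula:

```python
import numpy as np

def cdpomp(y, D, labels, K, lam=1.0):
    """Class-based perturbed OMP (sketch). labels[i] is the class of column i of D."""
    N = D.shape[1]
    Dt = D.copy()                               # step 101: optimized dictionary starts as D
    T, r = [], y.copy()
    for k in range(K):
        # step 102: add the training sample most correlated with the residual
        i = int(np.argmax(np.abs(Dt.T @ r)))
        if i not in T:
            T.append(i)
        # steps 103-104: per-class least-squares residuals over the support set
        best_j, best_norm = None, np.inf
        for j in {labels[t] for t in T}:
            cols = [t for t in T if labels[t] == j]
            xj, *_ = np.linalg.lstsq(Dt[:, cols], y, rcond=None)
            rn = np.linalg.norm(y - Dt[:, cols] @ xj)
            if rn < best_norm:
                best_j, best_norm, best_cols, best_x = j, rn, cols, xj
        # steps 105-106: dictionary correction for the winning class
        # (assumed closed form minimizing ||y-(D+E)a||^2 + lam*||E||_F^2 over E)
        a = np.zeros(N)
        a[best_cols] = best_x
        Dt = Dt + np.outer(y - Dt @ a, a) / (lam + a @ a)
        # step 107: update the residual over the full support set
        x, *_ = np.linalg.lstsq(Dt[:, T], y, rcond=None)
        r = y - Dt[:, T] @ x
    # step 109: coefficients at support positions, zeros elsewhere
    x_full = np.zeros(N)
    x_full[T] = x
    return best_j, x_full
```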
Optionally, step 102 includes:
obtaining the set of the S training samples with the highest correlation to the residual and adding them to the candidate support set, T = T ∪ S.
Optionally, the method further includes:
Step 201: initialize r_0 = y, T = ∅, D̃ = D.
Step 202: obtain the set of the K training samples with the highest correlation to the residual and add it to the candidate support set T.
Step 203: compute the coefficients x = D̃[T]† y and select the K training samples with the largest-magnitude coefficients to form the refined candidate support set T.
Step 204: for each candidate class j, select all training samples T_j belonging to class j in the support set and the corresponding coefficients x_j, and compute the residual corresponding to class j, r_j = y − D[T_j]x_j.
Step 205: obtain the class with the smallest residual, j* = arg min_j ||r_j||_2.
Step 206: compute the correction parameter corresponding to the selected class (the formula is shown only as an image in the original document).
Step 207: compute the optimized dictionary of the selected class (the formula is shown only as an image in the original document).
Step 208: update the residual r_k.
Step 209: if ||r_k||_2 ≥ ||r_{k−1}||_2, execute step 210; otherwise return to step 202.
Step 210: compute the coefficients at the support-set positions, x[T] = D̃[T]† y, and set the remaining positions to zero, x[T^c] = 0.
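Steps 201-210 can likewise be sketched in NumPy. As above, the dictionary-correction update is an assumed closed form standing in for the image-only formulas of steps 206-207:

```python
import numpy as np

def cdpsp(y, D, labels, K, lam=1.0, max_iter=10):
    """Class-based perturbed subspace pursuit (sketch of steps 201-210)."""
    N = D.shape[1]
    Dt = D.copy()                                        # step 201
    T, r_prev = [], y.copy()
    for _ in range(max_iter):
        # step 202: merge the K columns most correlated with the residual
        new = np.argsort(-np.abs(Dt.T @ r_prev))[:K]
        cand = sorted(set(T) | {int(i) for i in new})
        # step 203: keep the K largest-magnitude projection coefficients
        xc, *_ = np.linalg.lstsq(Dt[:, cand], y, rcond=None)
        T = [cand[i] for i in np.argsort(-np.abs(xc))[:K]]
        # steps 204-205: class with the smallest least-squares residual
        best_j, best_norm = None, np.inf
        for j in {labels[t] for t in T}:
            cols = [t for t in T if labels[t] == j]
            xj, *_ = np.linalg.lstsq(Dt[:, cols], y, rcond=None)
            rn = np.linalg.norm(y - Dt[:, cols] @ xj)
            if rn < best_norm:
                best_j, best_norm, best_cols, best_x = j, rn, cols, xj
        # steps 206-207: assumed closed-form dictionary correction for that class
        a = np.zeros(N)
        a[best_cols] = best_x
        Dt = Dt + np.outer(y - Dt @ a, a) / (lam + a @ a)
        # step 208: update the residual over the refined support
        xT, *_ = np.linalg.lstsq(Dt[:, T], y, rcond=None)
        r = y - Dt[:, T] @ xT
        # step 209: stop once the residual no longer shrinks
        if np.linalg.norm(r) >= np.linalg.norm(r_prev):
            break
        r_prev = r
    # step 210: coefficients at support positions, zeros elsewhere
    x_full = np.zeros(N)
    x_full[T] = np.linalg.lstsq(Dt[:, T], y, rcond=None)[0]
    return best_j, x_full
```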
The invention has the following beneficial effects:
the sparse representation classification method based on the perturbation compressed sensing provided by the embodiment comprises the following steps: obtaining a sparse representation model, obtaining a disturbance dictionary corresponding to the test sample of each category, solving an optimization problem to obtain a sparse vector, obtaining a residual error corresponding to the test sample of each category, and obtaining the category with the minimum residual error.
In this embodiment, a better dictionary matrix and sparse coefficient vector are obtained by solving the perturbed reconstruction problem, thereby improving classification accuracy. Even when the training samples contain perturbations, the technical scheme provided by this embodiment achieves excellent performance, good robustness and a wide application range.
Drawings
Fig. 1 is a schematic diagram illustrating the relationship between the classification accuracy of the CDPGOMP algorithm and the sample length according to an embodiment of the present invention.
Fig. 2 is a classification accuracy chart comparing the CDPSP algorithm with the OMP and SP algorithms according to an embodiment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following describes in detail a sparse representation classification method based on perturbation compressive sensing provided by the present invention with reference to the accompanying drawings.
Example one
Since the technical solution provided by this embodiment is designed on the basis of perturbed compressed sensing, the relevant background is introduced first. Compressed sensing is a milestone theory in signal processing that allows compression to be performed while the signal is acquired. Specifically, compressed sensing obtains compressed low-dimensional data by projecting the signal with a random measurement matrix, while guaranteeing that the original signal can be correctly reconstructed from the low-dimensional compressed data.
In a sense, the compressed sensing theory samples information rather than data: under the premise that the original signal can be reconstructed, the dimension of a compressed sample is determined by the structure and content of the information in the signal. In practical applications, however, compressed sensing faces the following problem. Because it uses special hardware to acquire compressed data directly, and hardware systems often contain unknown non-ideal factors, the signal path actually experienced by the compressed data often differs slightly from the original design of the system.
That is, there is often an inconsistency between the measurement matrix preset by the system and the unknown measurement matrix actually in effect, which is the measurement-matrix perturbation problem. The compressed sensing problem under such perturbations is called the perturbed compressed sensing problem.
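The effect of such a mismatch can be demonstrated numerically. The sketch below (synthetic data, assumed 5% perturbation level) shows that recovering a signal with the nominal matrix, when the measurements were actually taken with a perturbed one, leaves a much larger error than using the true matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)           # nominal measurement matrix
dA = 0.05 * rng.standard_normal((m, n)) / np.sqrt(m)   # unknown hardware perturbation

x = np.zeros(n)
supp = rng.choice(n, k, replace=False)
x[supp] = rng.standard_normal(k)                       # k-sparse original signal

y = (A + dA) @ x                                       # data actually measured with A + dA

# least-squares recovery on the true support, nominal vs. true matrix
x_nom, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
x_tru, *_ = np.linalg.lstsq((A + dA)[:, supp], y, rcond=None)

err_nom = np.linalg.norm(x_nom - x[supp])              # error with the preset matrix
err_tru = np.linalg.norm(x_tru - x[supp])              # error with the actual matrix
```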
The theoretical basis of the classification scheme based on sparse representation is that there is a high degree of similarity between samples belonging to the same class. Therefore, a test sample should be linearly representable by training samples belonging to the same class. Suppose a given training set D = [D_0, D_1, ..., D_j, ..., D_{L−1}] has L classes of data, and the training data set of each class, D_j = [d_{j,1}, d_{j,2}, ..., d_{j,Q}], includes the Q training samples of that class.
Conventional sparse representation classification schemes attempt to linearly (sparsely) represent the test sample by selecting a combination of K training samples, drawn from the training dictionary D formed by all training samples of all classes. The class label of the test sample y is then given by the class of the sub-dictionary D_j that yields the lowest reconstruction error (residual).
In the ideal case, if the class of a test sample y is known to be j, y can be approximately represented as
y ≈ D_j α_j
where α_j is the coefficient vector over the class-j training samples D_j. In a real scenario, the correct label of the test sample y is unknown. Therefore, for a test sample y of unknown class, the traditional sparse representation classification idea is to represent it using all training samples in the dictionary D, i.e.
y ≈ Dα (1)
where α = [α_0^T, α_1^T, ..., α_{L−1}^T]^T and α_j is the coefficient vector corresponding to the class-j samples. Ideally, the coefficient vector α should be sparse: all entries are zero except those at the positions of the training samples belonging to the same class as the test sample. Thus the sparse solution α can be obtained by solving the following optimization problem:
min ||α||_0  subject to  y ≈ Dα (2)
where the L0 norm is defined as the number of non-zero elements of the vector α.
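Problem (2) is combinatorial in general; greedy algorithms such as orthogonal matching pursuit (mentioned below as a standard solver) approximate it. A minimal OMP sketch:

```python
import numpy as np

def omp(D, y, K):
    """Orthogonal matching pursuit: greedy approximation of min ||a||_0 s.t. y ~ D a."""
    N = D.shape[1]
    support, r = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(D.T @ r))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef                      # orthogonalized residual
    a = np.zeros(N)
    a[support] = coef
    return a
```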
However, in practical scenarios the sparse vector α obtained by the system often contains non-zero elements belonging to training samples of several different classes. The sparse representation classification system therefore constructs, for each class, a corresponding sparse vector containing only the non-zero elements of that class's training samples. That is, for class j, only the elements of the H ≤ Q training samples belonging to that class are retained, and the corresponding sparse coefficient vector is
α_j = {α_{j,1}, α_{j,2}, ..., α_{j,H}}
the coefficient vector corresponding to the class-j samples, whose corresponding training samples are D_j = [d_1, d_2, ..., d_H]. The residual corresponding to class j is then defined as
r_j(y) = ||y − D_j α_j||_2, j = 0, 1, 2, ..., L − 1
Finally, the class of the test sample y is determined as the class with the smallest residual, i.e.
label(y) = arg min_j {||r_j(y)||_2}
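The per-class residual rule above can be sketched as follows (a minimal illustration; the helper name src_label is not from the patent):

```python
import numpy as np

def src_label(y, D, labels, alpha):
    """Decide the class by the smallest per-class representation residual."""
    classes = sorted(set(labels))
    residuals = []
    for j in classes:
        aj = np.where(np.array(labels) == j, alpha, 0.0)  # keep only class-j coefficients
        residuals.append(np.linalg.norm(y - D @ aj))      # r_j(y) = ||y - D_j a_j||_2
    return classes[int(np.argmin(residuals))]
```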
It can be seen that the core of the sparse representation classification scheme lies in the sparse representation model (formula (1)), the corresponding optimization problem (formula (2)) and the solving algorithm. Various methods exist to find an approximate solution to this problem, for example reconstruction algorithms based on orthogonal matching pursuit, generalized orthogonal matching pursuit, and subspace pursuit.
Conventional sparse representation classification schemes assume that the test sample lies in the subspace spanned by the training samples of the same class, i.e. any test sample can be linearly represented by training samples of its own class. In practical applications, however, due to non-ideal factors such as noise in the samples, most test samples cannot be exactly linearly represented by the training samples. This reduces the correlation between the sparse coefficient vector α and the test sample y, and in turn degrades the classification performance of the sparse representation classification scheme.
To address this problem, the present embodiment designs a more optimal dictionary matrix D̃ for representing the test sample. The dictionary matrix D̃ used in this embodiment is obtained by optimizing the traditional dictionary on the basis of the perturbed-compressed-sensing technique and can represent the test sample better, so a high correlation is maintained between the resulting sparse coefficient vector and the test sample. Accordingly, this embodiment improves the sparse representation model in the sparse representation classification system and its corresponding optimization problem, and provides several concrete solving algorithms.
To represent the test sample more accurately, this embodiment uses the optimized dictionary matrix D̃ instead of the dictionary matrix used in conventional sparse representation classification. On the basis of the matrix D formed by the training samples, D̃ introduces an unknown perturbation matrix E for correction, yielding a more accurate representation model and thereby improving classification accuracy. The key technical problem of this embodiment is therefore to solve, based on the perturbed-compressed-sensing technique, for the optimized dictionary matrix D̃ (i.e., the unknown perturbation matrix E) and its corresponding sparse coefficient vector.
The embodiment provides a sparse representation classification method based on perturbation compressed sensing, which comprises the following steps: obtaining a sparse representation model, obtaining a disturbance dictionary corresponding to the test sample of each category, solving an optimization problem to obtain a sparse vector, obtaining a residual error corresponding to the test sample of each category, and obtaining the category with the minimum residual error.
Most existing sparse representation methods assume that the training samples can linearly represent the test sample, but this assumption often fails in practical systems. To address this problem, sparse representation classification is modeled as a perturbed compressed sensing model, and a better dictionary matrix and sparse coefficient vector are obtained by solving a perturbed reconstruction problem, thereby improving classification accuracy.
Since in practical applications most test samples cannot be exactly linearly represented by the training samples, a more accurate model than formula (1) is
y = Dα + y_e
where y_e denotes the error between the test sample y and its linear representation Dα.
Since the matrix D cannot represent the test samples exactly, this problem necessarily affects classification. To address this, the present embodiment uses a more optimal dictionary matrix D̃ = D + E to linearly represent the test sample y, where D is the known matrix formed by all training samples and the unknown matrix E represents the error between the known matrix D and the ideal dictionary matrix D̃. If a more ideal dictionary matrix D̃ can be constructed, a more accurate linear relationship between the test sample and the training samples can be established. Since y_e in general also depends on the sparse vector α, the representation model y = Dα + y_e can be transformed into
y = (D + E)α (3)
Solving for the sparse representation vector α based on this model can effectively improve classification accuracy. The model above is exactly the model of perturbed compressed sensing; therefore, this embodiment designs a classification framework and corresponding sparse recovery algorithms based on perturbed-compressed-sensing theory. For consistency with perturbed compressed sensing, the matrix E is hereafter called the perturbation matrix.
The dictionary D̃ = D + E consists of two parts, of which only the matrix D formed by the training samples is known; the other part, the matrix E, is unknown. In addition, since different classes of training data tend to be generated independently of each other, the corresponding perturbations can be assumed to differ from class to class. This embodiment therefore designs a different perturbation dictionary for each class of test sample, i.e.
H_l: y_l = (D + E_l)α_l (4)
From this point of view, to obtain the matrix E_l and the sparse coefficient vector α_l, this embodiment solves the following optimization problem:
min_{α_l, E_l} ||y_l − (D + E_l)α_l||_2^2 + λ||E_l||_F^2  subject to ||α_l||_0 ≤ K
where λ is the regularization parameter. A better dictionary matrix and sparse coefficient vector are obtained by solving this perturbed reconstruction problem, improving classification accuracy. The present embodiment thus provides the framework of the sparse representation classification method based on perturbed compressed sensing shown in Table 1 below.
Table 1: sparse representation classification framework based on disturbance compressed sensing
[Table 1 is presented only as an image in the original document.]
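With the sparse vector α_l held fixed, an objective of the form ||y − (D + E)α||² + λ||E||_F² is quadratic in E and has the closed-form minimizer E = rαᵀ/(λ + ||α||²) with r = y − Dα. The sketch below illustrates this update; note it is derived from the reconstructed objective and is not necessarily the patent's exact correction formula, which appears only as an image:

```python
import numpy as np

def perturbation_update(y, D, alpha, lam):
    """Closed-form minimizer of ||y - (D+E)a||^2 + lam*||E||_F^2 over E (a fixed)."""
    r = y - D @ alpha                        # representation error with the current dictionary
    return np.outer(r, alpha) / (lam + alpha @ alpha)
```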
For the above classification framework, this embodiment proposes several solving algorithms. First, based on the Perturbed Orthogonal Matching Pursuit (POMP) algorithm, this embodiment proposes a Class-based Perturbed Orthogonal Matching Pursuit (CDPOMP) algorithm, which finds a suitable candidate support set in each loop and optimizes the dictionary matrix of each class independently, as shown in Table 2. Next, the dictionary optimization mechanism in the CDPOMP algorithm is explained in detail.
According to the model given in equation (4), the corresponding perturbation for each class should be different. To achieve this, the present embodiment designs a per-class dictionary optimization mechanism based on the dictionary correction mechanism of the POMP algorithm.
The original mechanism uses a complete support set consisting of training samples of multiple classes. In the mechanism improved by this embodiment, the correction parameter of each class is calculated based only on the training samples in the support set that belong to that class. Since the training samples of each class differ from those of the other classes, the optimized dictionaries computed for different classes also differ. The improved dictionary correction mechanism requires a separate computation for each possible class, generating a different correction dictionary for each class in each iteration.
To reduce the computational complexity, this embodiment further optimizes the mechanism: in each iteration, the correction parameters are generated only for the training samples and residual corresponding to the class to which the test sample most likely belongs (i.e. steps 3-6 in Table 2 below). The proposed algorithm thus guarantees that the optimized dictionaries of the classes differ from each other, and improves classification performance at lower complexity.
Table 2: disturbance orthogonal matching tracking algorithm based on category
Figure RE-GDA0002844977060000101
Further, based on the framework of Table 1 and the Generalized Orthogonal Matching Pursuit (GOMP) algorithm, this embodiment proposes the better-performing Class-based Perturbed Generalized Orthogonal Matching Pursuit (CDPGOMP) algorithm, as shown in Table 3. In this algorithm the support-set selection mechanism is optimized, the speed of selecting correct non-zero positions is increased, and the classification performance is further improved. The performance of the CDPOMP algorithm described in Table 2 is limited by errors in the dictionary-optimization computations: only one non-zero position is added to the support set in each loop of CDPOMP, so all training samples in the support set may belong to wrong classes, in which case the dictionary correction mechanism inevitably works in the wrong direction. In the improved CDPGOMP algorithm, the support set is expanded significantly faster and contains more training samples. There is therefore a higher probability that training samples of the correct class are present in the support set, and a higher probability that the dictionary correction mechanism operates on the correct class. That is, this embodiment effectively avoids the dictionary correction mechanism operating in an erroneous way, thereby reducing the misclassification rate.
Table 3: disturbance generalized orthogonal matching tracking algorithm based on category
Figure RE-GDA0002844977060000111
Finally, based on the Subspace Pursuit (SP) algorithm and the framework of Table 1, this embodiment proposes a Class-based Perturbed Subspace Pursuit (CDPSP) algorithm, as shown in Table 4. The CDPSP algorithm likewise selects, in each loop, a number of training samples with the largest correlation magnitude and adds them to the support set (step 2 in Table 4). A support-set refinement mechanism is then introduced, which trims the support set by the corresponding projection coefficients (step 3 in Table 4). This mechanism effectively removes part of the training samples belonging to wrong classes from the support set, further reducing the chance that the dictionary correction mechanism operates incorrectly. Meanwhile, to reduce complexity, the CDPSP algorithm optimizes the per-class residual computation (step 4 in Table 4) by reusing the projection coefficients from step 3, avoiding repeated pseudo-inverse operations and lowering the complexity of the algorithm.
Table 4: disturbance subspace tracking algorithm based on category
Figure RE-GDA0002844977060000121
The sparse representation classification scheme provided by this embodiment is evaluated against the traditional sparse representation classification method (using the OMP/SP algorithm to solve the optimization problem) as a baseline. Fig. 1 is a schematic diagram illustrating the relationship between the classification accuracy of the CDPGOMP algorithm and the sample length; Fig. 2 compares the classification accuracy of the CDPSP algorithm with the OMP and SP algorithms. As shown in Figs. 1 and 2, the sample dataset of the experiment is the MNIST handwritten digit dataset, comprising 70000 handwritten digit images of 28 × 28 pixels. In the experiment, the training set contains 60000 samples and the test set contains 10000 samples. The experimental results are averaged over 1000 runs; in each run, the raw data are compressed into compressed samples of length M by a randomly generated measurement matrix, and 500 training samples and 100 test samples are randomly selected from each class.
Experiments show that the performance of the CDPSP scheme provided by this embodiment is far better than that of the OMP and SP schemes. The sparse representation classification method based on perturbed compressed sensing thus obtains a better dictionary matrix and sparse coefficient vector by solving the perturbed reconstruction problem, improving classification accuracy.
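The compression step of the experiment can be sketched as follows, with a synthetic 28 × 28 array standing in for an MNIST digit (dataset loading omitted; a Gaussian measurement matrix is an assumption, since the patent only says the matrix is randomly generated):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((28, 28))                         # stand-in for one MNIST digit
x = image.reshape(-1)                                # length-784 raw sample

M = 100                                              # compressed sample length
Phi = rng.standard_normal((M, x.size)) / np.sqrt(M)  # random measurement matrix
y = Phi @ x                                          # compressed sample fed to the classifier
```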
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (4)

1. A sparse representation classification method based on perturbation compressed sensing is characterized by comprising the following steps:
obtaining a sparse representation model according to a test sample input by a system, wherein a calculation formula of the sparse representation model is as follows:
y=(D+E)α
Figure RE-FDA0002844977050000011
wherein E is a disturbance matrix representing the known matrix D and the ideal dictionary matrix
Figure RE-FDA0002844977050000012
The error between;
obtaining a disturbance dictionary corresponding to the test sample of each category, wherein the calculation formula is as follows:
Hl:yl=(D+Ell
wherein, ylFor class I test specimens, ElDisturbance dictionary, α, for class I test sampleslSparse vectors corresponding to the class I test samples;
solving the following optimization problem to obtain sparse vectors:
Figure RE-FDA0002844977050000013
wherein the content of the first and second substances,
Figure RE-FDA0002844977050000014
λ is a regularization parameter;
obtaining the residual corresponding to the test samples of each class, wherein the calculation formula is:

r_j(y) = y − (D + E_j)α_j

and obtaining the class with the minimum residual, identity(y) = argmin_j ‖r_j(y)‖_2.
2. The sparse representation classification method based on perturbation compressed sensing of claim 1, further comprising:
step 101: initialization r0=y,
Figure RE-FDA0002844977050000016
k=1;
step 102: obtaining the training sample most correlated with the residual, u_i = argmax_i |⟨d_i, r^{k−1}⟩|, and adding it into the candidate support set T = T ∪ {u_i};
step 103: for each candidate class j, selecting all training samples T_j in the support set belonging to class j and the corresponding coefficients x_j = D[T_j]^† y, and calculating the residual corresponding to class j, r_j = y − D[T_j]x_j;
step 104: obtaining the class with the smallest residual, ĵ = argmin_j ‖r_j‖_2;
step 105: obtaining the correction parameter corresponding to the class with the minimum residual, wherein the calculation formula is:

E_ĵ = r_ĵ x_ĵ^T / (λ + ‖x_ĵ‖_2^2);
step 106: obtaining the optimized dictionary of the class with the minimum residual, wherein the calculation formula is:

D̂[T_ĵ] = D[T_ĵ] + E_ĵ,  x_ĵ = D̂[T_ĵ]^† y;
step 107: updating the residual, r^k = y − D̂[T_ĵ] x_ĵ;
step 108: if k = K, executing step 109; otherwise, setting k = k + 1 and returning to step 102;
step 109: obtaining the coefficients at the support set positions, x[T] = D̂[T]^† y, and setting the remaining positions to zero, x[T^c] = 0.
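The loop in claim 2 can be sketched as follows. This is a hedged reconstruction of the CDPGOMP iteration (maximum-correlation atom selection, per-class least squares, a rank-one correction of the winning class's atoms, and a residual update): the function name, the use of `lstsq` for the pseudoinverse, and the regularized rank-one correction formula are assumptions consistent with the optimization problem in claim 1, not a verbatim transcription of the patent's figures.

```python
import numpy as np

def cdpgomp(D, y, labels, K, lam=0.1):
    """Greedy perturbation-OMP classification sketch (assumed form).

    D      : (M, N) dictionary of training samples (columns).
    labels : (N,) class label of each column.
    K      : sparsity level / number of iterations (claim 2, step 108).
    """
    M, N = D.shape
    D = D.copy()                      # holds the progressively corrected dictionary
    r = y.copy()
    T = []                            # candidate support set
    for _ in range(K):
        # step 102: atom most correlated with the current residual
        i = int(np.argmax(np.abs(D.T @ r)))
        if i not in T:
            T.append(i)
        # steps 103-104: per-class least squares, keep the minimal-residual class
        best = None
        for j in np.unique(labels[T]):
            Tj = [t for t in T if labels[t] == j]
            xj, *_ = np.linalg.lstsq(D[:, Tj], y, rcond=None)
            rj = y - D[:, Tj] @ xj
            if best is None or np.linalg.norm(rj) < np.linalg.norm(best[3]):
                best = (j, Tj, xj, rj)
        j_hat, Tj, xj, rj = best
        # steps 105-106: rank-one correction of the winning class's atoms (assumed form)
        D[:, Tj] = D[:, Tj] + np.outer(rj, xj) / (lam + xj @ xj)
        # step 107: residual update with the corrected dictionary
        r = y - D[:, Tj] @ xj
    # step 109: coefficients on the support, zeros elsewhere
    x = np.zeros(N)
    x[T], *_ = np.linalg.lstsq(D[:, T], y, rcond=None)
    return int(j_hat), x

# Toy usage: 3 classes of 10 unit-norm atoms; the sample is a noisy atom of class 0.
rng = np.random.default_rng(2)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(3), 10)
y = D[:, 4] + 0.05 * rng.standard_normal(20)
pred, x = cdpgomp(D, y, labels, K=3)
```

Because the correction only ever touches the columns of the currently winning class, the dictionary of the true class is pulled toward the test sample over the iterations, which is what drives the accuracy gain reported for CDPGOMP.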
3. The sparse representation classification method based on perturbation compressed sensing of claim 2, wherein the step 102 comprises:
obtaining the S training samples most correlated with the residual, u_i (i = 1, …, S), and adding them into the candidate support set T = T ∪ {u_1, …, u_S}.
4. The sparse representation classification method based on perturbation compressed sensing of claim 1, further comprising:
step 201: initialization r0=y,
Figure RE-FDA0002844977050000029
step 202: obtaining the K training samples most correlated with the residual and adding them into the candidate support set, T̃ = T ∪ {u_1, …, u_K};
step 203: calculating the coefficients x = D[T̃]^† y and selecting the training samples with the K largest coefficient magnitudes to form the candidate support set T;
step 204: for each candidate class j, selecting all training samples T_j in the support set belonging to class j and the corresponding coefficients x_j = D[T_j]^† y, and calculating the residual corresponding to class j, r_j = y − D[T_j]x_j;
step 205: obtaining the class with the smallest residual, ĵ = argmin_j ‖r_j‖_2;
step 206: calculating the correction parameter corresponding to the class ĵ with the minimum residual, wherein the calculation formula is:

E_ĵ = r_ĵ x_ĵ^T / (λ + ‖x_ĵ‖_2^2);
step 207: calculating the optimized dictionary of class ĵ, wherein the calculation formula is:

D̂[T_ĵ] = D[T_ĵ] + E_ĵ,  x_ĵ = D̂[T_ĵ]^† y;
step 208: updating the residual, r^k = y − D̂[T_ĵ] x_ĵ;
step 209: if ‖r^k‖_2 ≥ ‖r^{k−1}‖_2, executing step 210; otherwise, returning to step 202;
step 210: calculating the coefficients at the support set positions, x[T] = D̂[T]^† y, and setting the remaining positions to zero, x[T^c] = 0.
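The subspace-pursuit variant in claim 4 differs from the greedy loop mainly in steps 202-203 (merge the K most correlated atoms into the support, solve least squares on the merged support, keep the K largest coefficients) and in the stopping rule of step 209. The sketch below is a hedged reconstruction under the same assumptions as before: the function name, `max_iter` safeguard, `lstsq`-based pseudoinverse, and the regularized rank-one correction are illustrative, not taken verbatim from the patent.

```python
import numpy as np

def cdpsp(D, y, labels, K, lam=0.1, max_iter=20):
    """Subspace-pursuit-style perturbation classifier sketch (assumed form)."""
    M, N = D.shape
    D = D.copy()                          # progressively corrected dictionary
    r = y.copy()
    T = np.array([], dtype=int)
    prev_norm = np.inf
    for _ in range(max_iter):             # max_iter is a safety cap, not in the claim
        # step 202: merge support with the K atoms most correlated with r
        picks = np.argsort(np.abs(D.T @ r))[-K:]
        T_tilde = np.union1d(T, picks)
        # step 203: least squares on the merged support, keep the K largest coefficients
        x_t, *_ = np.linalg.lstsq(D[:, T_tilde], y, rcond=None)
        T = T_tilde[np.argsort(np.abs(x_t))[-K:]]
        # steps 204-205: per-class residuals on the pruned support
        best = None
        for j in np.unique(labels[T]):
            Tj = T[labels[T] == j]
            xj, *_ = np.linalg.lstsq(D[:, Tj], y, rcond=None)
            rj = y - D[:, Tj] @ xj
            if best is None or np.linalg.norm(rj) < np.linalg.norm(best[3]):
                best = (j, Tj, xj, rj)
        j_hat, Tj, xj, rj = best
        # steps 206-208: rank-one correction (assumed form) and residual update
        D[:, Tj] = D[:, Tj] + np.outer(rj, xj) / (lam + xj @ xj)
        r = y - D[:, Tj] @ xj
        # step 209: stop once the residual no longer decreases
        if np.linalg.norm(r) >= prev_norm:
            break
        prev_norm = np.linalg.norm(r)
    # step 210: coefficients on the final support, zeros elsewhere
    x = np.zeros(N)
    x[T], *_ = np.linalg.lstsq(D[:, T], y, rcond=None)
    return int(j_hat), x

# Toy usage: 3 classes of 10 unit-norm atoms; the sample is a noisy atom of class 1.
rng = np.random.default_rng(3)
D = rng.standard_normal((20, 30))
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(3), 10)
y = D[:, 15] + 0.05 * rng.standard_normal(20)
pred, x = cdpsp(D, y, labels, K=3)
```

Unlike the greedy variant, the support here can both grow and shed atoms each iteration, and the data-driven stopping rule of step 209 replaces the fixed iteration count of step 108.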
CN202010890870.6A 2020-08-29 2020-08-29 Sparse representation classification method based on disturbance compressed sensing Active CN112257739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010890870.6A CN112257739B (en) 2020-08-29 2020-08-29 Sparse representation classification method based on disturbance compressed sensing

Publications (2)

Publication Number Publication Date
CN112257739A true CN112257739A (en) 2021-01-22
CN112257739B CN112257739B (en) 2023-12-22

Family

ID=74223828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010890870.6A Active CN112257739B (en) 2020-08-29 2020-08-29 Sparse representation classification method based on disturbance compressed sensing

Country Status (1)

Country Link
CN (1) CN112257739B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115790815A (en) * 2023-01-17 2023-03-14 常熟理工学院 Method and system for rapidly identifying disturbance of distributed optical fiber sensing system
CN116582132A (en) * 2023-07-06 2023-08-11 广东工业大学 Compressed sensing reconstruction method and system based on improved structured disturbance model

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103345621A (en) * 2013-07-09 2013-10-09 东南大学 Face classification method based on sparse concentration index
WO2015048232A1 (en) * 2013-09-26 2015-04-02 Tokitae Llc Systems, devices, and methods for classification and sensor identification using enhanced sparsity
CN104951787A (en) * 2015-06-17 2015-09-30 江苏大学 Power quality disturbance identification method for distinguishing dictionary learning under SRC framework
CN105574475A (en) * 2014-11-05 2016-05-11 华东师范大学 Common vector dictionary based sparse representation classification method


Non-Patent Citations (1)

Title
Shen Yue; Zhang Hanwen; Liu Guohai; Liu Hui; Chen Zhaoling: "Power quality disturbance identification method based on discriminative dictionary learning", Chinese Journal of Scientific Instrument (仪器仪表学报), no. 10 *


Also Published As

Publication number Publication date
CN112257739B (en) 2023-12-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant