CN112257739B - Sparse representation classification method based on disturbance compressed sensing - Google Patents


Info

Publication number: CN112257739B
Application number: CN202010890870.6A
Authority: CN (China)
Prior art keywords: class, disturbance, obtaining, sparse representation, samples
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN112257739A
Inventors: 曹坤, 徐文波, 崔宇鹏, 许文俊, 何新辉, 田克冈
Assignees: Longwen Huafeng Beijing Technology Co ltd; Beijing University of Posts and Telecommunications (the listed assignees may be inaccurate)
Application filed by Longwen Huafeng Beijing Technology Co ltd and Beijing University of Posts and Telecommunications

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F18/00 Pattern recognition
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2136 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods; based on sparsity criteria, e.g. with an overcomplete basis
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/28 Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries


Abstract

The invention discloses a sparse representation classification method based on disturbance compressed sensing, which comprises the following steps: obtaining a sparse representation model; obtaining a disturbance dictionary corresponding to each class of test samples; solving an optimization problem to obtain sparse vectors; obtaining the residual corresponding to each class of test samples; and obtaining the class with the minimum residual. Most existing sparse representation methods assume that the training samples can linearly represent the test samples; however, this assumption often fails in practical systems. To address this problem, the invention models sparse representation classification as a disturbance compressed sensing model and obtains a better dictionary matrix and sparse coefficient vector by solving a disturbance reconstruction problem, thereby improving classification accuracy.

Description

Sparse representation classification method based on disturbance compressed sensing
Technical Field
The invention relates to the technical field of sensing, in particular to a sparse representation classification method based on disturbance compressed sensing.
Background
The basic idea of sparse representation classification (Sparse Representation Classification, SRC) is to represent a test sample by the training samples, with the sparsity of the representation constrained by the L0 pseudo-norm or its L1 convex relaxation. Current sparse representation classification schemes are widely applied in scenarios such as handwriting recognition and face recognition, and have been extended to machine learning techniques such as semi-supervised classification and dimensionality reduction. The mathematical model of the sparse representation classification method is
y = Dα
where y ∈ R^M is the test sample, D ∈ R^{M×N} is the matrix of training samples, and α ∈ R^N is the K-sparse representation vector, i.e., α has at most K non-zero entries.
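As a numeric illustration of this model, the following sketch builds a K-sparse vector α and a test sample y = Dα. This is a minimal sketch only; the dimensions, the random dictionary, and all names are illustrative and not taken from the patent.

```python
import numpy as np

# Sketch of the SRC model y = D·α: D stacks training samples
# column-wise, and the ideal coefficient vector α is K-sparse
# (non-zero only at positions of one class's training samples).
rng = np.random.default_rng(0)
M, N, K = 16, 40, 3                 # sample length, training count, sparsity
D = rng.standard_normal((M, N))     # illustrative training-sample matrix
alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
alpha[support] = rng.standard_normal(K)
y = D @ alpha                       # noiseless sparse representation
```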
In practical sparse representation classification systems, the training and test samples used by the system are inevitably affected by various non-ideal factors, such as the Gaussian noise inevitably contained in the samples, lighting, dust and occlusion in image samples, and even slight distortion of the samples caused by hardware problems. If the influence of these factors is not considered in the design of the sparse representation classification system, its accuracy in practical applications decreases.
Disclosure of Invention
In order to solve the limitations and defects existing in the prior art, the invention provides a sparse representation classification method based on disturbance compressed sensing, which comprises the following steps:
according to a test sample input by a system, a sparse representation model is obtained, and the calculation formula of the sparse representation model is as follows:
y=(D+E)α
where E is a disturbance matrix representing the error between the known matrix D and the ideal dictionary matrix D̃ = D + E;
obtaining the disturbance dictionary corresponding to each class of test samples, with the calculation formula
H_l: y_l = (D + E_l)α_l
where y_l is the test sample of class l, E_l is the disturbance dictionary corresponding to the class-l test sample, and α_l is the sparse vector corresponding to the class-l test sample;
solving the following optimization problem to obtain the sparse vectors:
where λ is a regularization parameter;
obtaining the residual corresponding to each class of test samples, with the following calculation formula:
obtaining the class with the minimum residual, label(y) = arg min_j ||r_j(y)||_2.
Specifically, the corresponding solving steps include:
Step 101: initializing r_0 = y, k = 1;
Step 102: obtaining the training sample i with the highest correlation to the residual, and adding it to the candidate support set T = T ∪ {i};
Step 103: for each candidate class j, selecting all training samples T_j belonging to the j-th class in the support set and the corresponding coefficients x_j, and calculating the residual corresponding to the j-th class, r_j = y − D[T_j]x_j;
Step 104: obtaining the class with the minimum residual;
Step 105: obtaining the correction parameter corresponding to the class with the minimum residual, with the following calculation formula:
Step 106: obtaining the optimized dictionary of the class with the minimum residual, with the following calculation formula:
Step 107: updating the residual;
Step 108: if k = K, going to step 109; otherwise setting k = k + 1 and returning to step 102;
Step 109: obtaining the coefficients at the support-set positions and setting the remaining positions to zero, x[T^c] = 0.
Optionally, the step 102 includes:
obtaining the set I of the S training samples with the highest correlation, and adding these training samples to the candidate support set T = T ∪ I.
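Steps 101-109 above can be sketched as the following OMP-style loop. This is a hedged illustration only: the dictionary-correction updates of steps 105-106 (whose closed-form expressions appear in the original tables and are not reproduced here) are omitted, and the function name and dimensions are illustrative.

```python
import numpy as np

def cdp_omp_sketch(y, D, labels, K):
    """OMP-style support growth plus per-class minimum-residual decision."""
    r = y.copy()                        # step 101: r_0 = y
    support = []
    for _ in range(K):                  # steps 102-108, K iterations
        corr = np.abs(D.T @ r)
        corr[support] = -np.inf         # never re-pick a chosen atom
        i = int(np.argmax(corr))        # step 102: most correlated atom
        support.append(i)
        x, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)  # step 103
        r = y - D[:, support] @ x       # step 107: update the residual
    best_class, best_res = None, np.inf
    for j in sorted(set(labels[i] for i in support)):
        Tj = [i for i in support if labels[i] == j]
        xj, *_ = np.linalg.lstsq(D[:, Tj], y, rcond=None)
        res = np.linalg.norm(y - D[:, Tj] @ xj)
        if res < best_res:              # step 104: minimum residual wins
            best_class, best_res = j, res
    return best_class
```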
Optionally, the method further comprises:
Step 201: initializing r_0 = y;
Step 202: obtaining the set of the K training samples with the highest correlation and adding this training sample set to the candidate support set;
Step 203: calculating the coefficients and selecting the training samples with the K largest coefficients to form the candidate support set;
Step 204: for each candidate class j, selecting all training samples T_j belonging to the j-th class in the support set and the coefficients x_j corresponding to T_j, and calculating the residual corresponding to T_j, r_j = y − D[T_j]x_j;
Step 205: obtaining the class with the minimum residual;
Step 206: calculating the correction parameters corresponding to the class-j training samples T_j, with the following calculation formula:
Step 207: calculating the optimized dictionary of the class-j training samples T_j, with the following calculation formula:
Step 208: updating the residual;
Step 209: if ||r_k||_2 ≥ ||r_{k−1}||_2, executing step 210; otherwise returning to step 202;
Step 210: calculating the coefficients at the support-set positions and setting the remaining positions to zero, x[T^c] = 0.
The invention has the following beneficial effects:
the sparse representation classification method based on disturbance compressed sensing provided by the embodiment comprises the following steps: obtaining a sparse representation model, obtaining a disturbance dictionary corresponding to each class of test samples, solving an optimization problem to obtain sparse vectors, obtaining residual errors corresponding to each class of test samples, and obtaining a class with the minimum residual error.
According to the embodiment, the disturbance reconstruction problem is solved to obtain a better dictionary matrix and a sparse coefficient vector, so that the classification accuracy is improved. Even if disturbance exists in the used training samples, the technical scheme provided by the embodiment can obtain excellent performance, and has good robustness and wide application range.
Drawings
Fig. 1 is a schematic diagram of a relationship between classification accuracy and sample length of CDPGOMP according to a first embodiment of the present invention.
Fig. 2 is a classification accuracy chart of the CDPSP algorithm compared with OMP algorithm and SP algorithm according to the first embodiment of the present invention.
Detailed Description
In order to enable those skilled in the art to better understand the technical scheme of the invention, the sparse representation classification method based on disturbance compressed sensing provided by the invention is described in detail below with reference to the accompanying drawings.
Example 1
Since the technical scheme provided by the embodiment is designed based on the disturbance compressed sensing technology, the related background of disturbance compressed sensing needs to be introduced. Compressed sensing is a milestone theory applied to signal processing that allows compression to be accomplished while the signal is being acquired. Specifically, the compressed sensing technology obtains compressed low-dimensional data by using a random measurement matrix to project signals, and ensures that the original signals can be correctly reconstructed from the low-dimensional compressed data.
In a sense, compressed sensing theory samples information rather than data: provided the original signal can be reconstructed, the dimension of the compressed sample is determined by the structure and content of the information in the signal. In practical applications, however, compressed sensing faces the following problem. Because compressed sensing uses special hardware to directly acquire compressed data, and a hardware system often contains unknown non-ideal factors, there is a small difference between the signal process actually undergone by the compressed data and the original design of the system.
That is, there is often an inconsistency between the measurement matrix preset by the system and the unknown measurement matrix actually in effect; this is the disturbance problem of the measurement matrix. The compressed sensing problem under disturbance is also called the disturbance compressed sensing problem.
The theoretical basis of the sparse representation classification scheme is that samples belonging to the same class have high similarity; thus, a test sample should be linearly representable by the training samples of its own class. Suppose that a given training set D = [D_0, D_1, ..., D_j, ..., D_{L−1}] contains data of L classes, and that the training set D_j of each class comprises the Q training samples of that class.
Traditional sparse representation classification schemes attempt to represent a test sample linearly (sparsely) by selecting a combination of K training samples, drawn from the training dictionary D composed of all training samples of the different classes. The class label of the test sample y is given by the class of the sample group D_j that produces the lowest reconstruction error (residual).
In the ideal case, if the class of the test sample y is known to be j, y can be approximated as
y ≈ D_j α_j
where α_j is the coefficient vector of the j-th class training samples D_j. In a practical scenario, the correct label of the test sample y is not known. Thus, for a test sample y of unknown class, the traditional sparse representation classification design represents it with all training samples in the dictionary D, i.e.
y≈Dα (1)
where α_j is the coefficient vector corresponding to the class-j samples. Ideally, the coefficient vector α should be sparse, i.e., zero everywhere except at the positions corresponding to the training samples belonging to the same class as the test sample. Thus, the sparse solution α in the formula can be obtained by solving the following optimization problem:
min_α ||α||_0 subject to y ≈ Dα (2)
where the L0 norm is defined as the number of non-zero elements in the vector α.
However, in a practical scenario, the sparse vector α obtained by the system often contains non-zero elements from training samples of multiple different classes. A sparse representation classification system therefore constructs, for each class, a corresponding sparse vector that contains only the non-zero elements of that class's training samples. That is, for the j-th class, only the elements of the H ≤ Q training samples belonging to that class are retained. Denoting by α_j = {α_{j,1}, α_{j,2}, ..., α_{j,H}} the coefficient vector of the j-th class and by D_j = [d_1, d_2, ..., d_H] the corresponding training samples, the residual of class j is defined as
r_j(y) = ||y − D_j α_j||_2, j = 0, 1, 2, ..., L−1
Finally, the class of the test sample y is determined as the class with the smallest residual, i.e.
label(y) = arg min_j ||r_j(y)||_2
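The minimum-residual decision rule above can be sketched in a few lines. This is a minimal illustration; the helper name and the per-class inputs are assumptions, not from the patent.

```python
import numpy as np

def src_label(y, class_dicts, class_coeffs):
    """Assign y to the class with the smallest residual r_j = ||y - D_j a_j||_2."""
    residuals = [np.linalg.norm(y - Dj @ aj)
                 for Dj, aj in zip(class_dicts, class_coeffs)]
    return int(np.argmin(residuals))
```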
It can be seen that the core of the sparse representation classification scheme is a sparse representation model formed by the formula (1) and a corresponding optimization problem (formula (2)) and a solving algorithm thereof. Various methods are available to find an approximate solution to the problem, such as reconstruction algorithms based on orthogonal matching pursuit algorithms, generalized orthogonal matching pursuit algorithms, subspace pursuit algorithms, etc.
Traditional sparse representation classification schemes assume that test samples are located in a subspace consisting of training samples belonging to the same class, i.e., any test sample can be represented linearly by training samples in the same class. However, in practical applications, most test samples cannot be represented by training samples completely linearly due to the influence of non-ideal factors such as noise in the samples, which results in a decrease in correlation between the sparse coefficient vector α and the test samples y, and thus in poor classification performance of the sparse representation classification scheme.
It is to address this problem that the present embodiment is designed to use a better dictionary matrix D̃ for representing the test samples. The dictionary matrix D̃ used in this embodiment is obtained by optimizing the traditional dictionary based on the disturbance compressed sensing technique and can better represent the test sample, so that high correlation is maintained between the resulting sparse coefficient vector and the test sample. To this end, this embodiment improves the sparse representation model and its corresponding optimization problem in the sparse representation classification system, and provides several specific derived algorithms.
To represent the test samples more accurately, this embodiment uses an optimized dictionary matrix D̃ = D + E instead of the dictionary matrix used in traditional sparse representation classification. On the basis of the matrix D formed by the training samples, an unknown disturbance matrix E is introduced for correction to obtain a more accurate representation model, thereby improving classification accuracy. The key technical problem of this embodiment is therefore to solve, based on the disturbance compressed sensing technique, for the optimized dictionary matrix D̃ (i.e., the unknown disturbance matrix E) and its corresponding sparse coefficient vector.
The embodiment provides a sparse representation classification method based on disturbance compressed sensing, which comprises the following steps: obtaining a sparse representation model, obtaining a disturbance dictionary corresponding to each class of test samples, solving an optimization problem to obtain sparse vectors, obtaining residual errors corresponding to each class of test samples, and obtaining a class with the minimum residual error.
Most of the existing sparse representation methods assume that the training samples can represent the test samples linearly, however, this assumption is often not true in practical systems. Aiming at the problem, the embodiment models sparse representation classification as a disturbance compressed sensing model, and obtains a better dictionary matrix and sparse coefficient vectors by solving a disturbance reconstruction problem, thereby improving classification accuracy.
Since in practical applications most test samples cannot in fact be represented completely linearly by the training samples, a more accurate model than equation (1) is
y = Dα + y_e
where y_e represents the error between the test sample y and its linear representation Dα.
Because the matrix D cannot fully represent the test sample linearly, this error inevitably affects classification. In this regard, the present embodiment uses a better dictionary matrix D̃ = D + E to linearly represent the test sample y, where D is the known matrix composed of all training samples and the unknown matrix E represents the error between the known matrix D and the ideal dictionary matrix D̃. If a more ideal dictionary matrix D̃ can be constructed, a more accurate linear relation between the test sample and the training samples can be established. Since y_e generally also depends on the sparse vector α, the representation model y = Dα + y_e can be converted into
y=(D+E)α (3)
Based on the model, the sparse representation vector alpha is solved, so that the classification accuracy can be effectively improved. It can be found that the model in the above equation is completely consistent with the model of disturbance compressed sensing. Therefore, the embodiment designs a classification framework and a corresponding sparse solving algorithm based on the disturbance compressed sensing theory. In order to be consistent with the disturbance compressed sensing, the matrix E will be referred to as disturbance matrix hereinafter.
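A quick numeric check of model (3): when the test sample is generated under an unknown disturbance E, the known dictionary D alone leaves a non-zero representation error, while the corrected dictionary D + E represents y exactly. The sizes and the small Gaussian disturbance are illustrative assumptions.

```python
import numpy as np

# Illustration of y = (D + E)·α: D is known, E is an unknown
# (here small Gaussian) disturbance of the dictionary.
rng = np.random.default_rng(1)
M, N = 12, 30
D = rng.standard_normal((M, N))
E = 0.05 * rng.standard_normal((M, N))     # unknown perturbation
alpha = np.zeros(N)
alpha[[2, 7, 19]] = 1.0                    # 3-sparse coefficients
y = (D + E) @ alpha
err_D = np.linalg.norm(y - D @ alpha)      # residual using D only
err_DE = np.linalg.norm(y - (D + E) @ alpha)  # residual using D + E
```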
The dictionary D̃ = D + E consists of two parts: the matrix D composed of the training samples is known, while the matrix E is unknown. In addition, since the training data of different classes are usually generated independently, the disturbances corresponding to the different classes may be assumed to differ from one another. This embodiment therefore designs a different disturbance dictionary for each class of test samples, i.e.
H_l: y_l = (D + E_l)α_l (4)
Based on the above, in order to obtain the matrix E_l and the sparse coefficient vector α_l, this embodiment solves the following optimization problem:
where λ is a regularization parameter. By solving this disturbance reconstruction problem, a better dictionary matrix and sparse coefficient vector are obtained, thereby improving classification accuracy. This embodiment thus provides the sparse representation classification framework based on disturbance compressed sensing shown in Table 1 below.
Table 1: sparse representation classification framework based on disturbance compressed sensing
For the above classification framework, this embodiment proposes several solving algorithms. First, based on the Perturbed Orthogonal Matching Pursuit (POMP) algorithm, this embodiment proposes a Class-Dependent Perturbed Orthogonal Matching Pursuit (CDPOMP) algorithm, which finds an appropriate candidate support set in each cycle and optimizes the dictionary matrix of each class independently, as shown in Table 2. Next, the dictionary optimization mechanism in the CDPOMP algorithm is explained in detail.
According to the model given in equation (4), the disturbance corresponding to each class should be different. To achieve this, the present embodiment designs the dictionary optimization mechanism of the algorithm on the basis of the dictionary correction mechanism in the POMP algorithm.
The original mechanism used a complete support set consisting of multiple classes of training samples. In the mechanism of the present embodiment improvement, correction parameters in each class are calculated based only on training samples belonging to the corresponding class in the support set. Because the training samples of each category are different from the training samples of the other categories, the optimized dictionary calculated for the different categories is also different. The improved dictionary correction mechanism requires separate computations for each possible class to generate a different correction dictionary for each class in each iteration.
In order to reduce the computational complexity, the present embodiment further optimizes the mechanism. In each iteration, the corresponding correction parameters are generated only for the training samples and residuals corresponding to the class to which the test sample most likely belongs (i.e. steps 3-6 in Table 2 below). Therefore, the algorithm provided by the embodiment can ensure that the optimized dictionaries in each category are different from each other, and the classification performance is improved with lower complexity.
Table 2: class-based disturbance orthogonal matching pursuit algorithm
Further, following the framework of Table 1 and based on the Generalized Orthogonal Matching Pursuit (GOMP) algorithm, this embodiment proposes a better-performing Class-Dependent Perturbed Generalized Orthogonal Matching Pursuit (CDPGOMP) algorithm, as shown in Table 3. In this algorithm, the support-set selection mechanism is optimized so that correct non-zero positions are selected faster, which further improves classification performance. The performance of the CDPOMP algorithm of Table 2 is limited by errors in the dictionary optimization mechanism: since only one non-zero position is added to the support set in each cycle, all training samples in the support set may fall into the wrong class, in which case the dictionary correction mechanism inevitably works in the wrong direction. In the improved CDPGOMP algorithm, the support set grows markedly faster and contains more training samples. There is therefore a higher likelihood that training samples of the correct class are present in the support set and that the dictionary correction mechanism works on the correct class. That is, this embodiment effectively avoids the dictionary correction mechanism operating in an erroneous way, thereby reducing the misclassification rate.
Table 3: class-based disturbance generalized orthogonal matching pursuit algorithm
Finally, following the framework of Table 1 and based on the Subspace Pursuit (SP) algorithm, this embodiment proposes a Class-Dependent Perturbed Subspace Pursuit (CDPSP) algorithm, as shown in Table 4. The CDPSP algorithm likewise selects multiple training samples with the largest correlation magnitudes and adds them to the support set in one cycle (step 2 in Table 4). It then introduces a support-set refinement mechanism that prunes the support set according to the corresponding projection coefficients (step 3 in Table 4). This mechanism effectively excludes some training samples of the wrong class from the support set, further reducing the chance that the dictionary correction mechanism operates incorrectly. Meanwhile, to reduce complexity, the CDPSP algorithm optimizes the per-class residual computation (step 4 in Table 4) by reusing the projection coefficients of step 3, avoiding repeated pseudo-inverse operations.
Table 4: class-based disturbance subspace tracking algorithm
In this embodiment, the proposed sparse representation classification scheme is evaluated against the traditional sparse representation classification method (which solves the optimization problem with the OMP/SP algorithm). Fig. 1 is a schematic diagram of the relationship between the classification accuracy of CDPGOMP and the sample length according to the first embodiment of the present invention, and Fig. 2 compares the classification accuracy of the CDPSP algorithm with the OMP and SP algorithms. As shown in Figs. 1 and 2, the experimental data set is the MNIST handwritten digit data set, comprising 70000 handwritten digit images of 28 x 28 pixels; the training set contains 60000 samples and the test set 10000 samples. The experimental results are averaged over 1000 runs; in each run, the raw data are compressed into samples of length M by a standard randomly generated measurement matrix, and 500 training samples and 100 test samples are randomly selected for each class.
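The compression step described for the experiments can be sketched as follows. The Gaussian choice of measurement matrix is an assumption, since the text only specifies a "standard randomly generated measurement matrix"; the stand-in image is likewise illustrative.

```python
import numpy as np

# Project a flattened 28x28 image (length 784) to a length-M
# compressed sample with a random measurement matrix Phi.
rng = np.random.default_rng(42)
M, n = 64, 28 * 28
Phi = rng.standard_normal((M, n)) / np.sqrt(M)  # assumed Gaussian Phi
image = rng.random(n)            # stand-in for a flattened MNIST digit
compressed = Phi @ image         # compressed sample of length M
```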
Experiments show that the CDPSP scheme provided by the embodiment has far better performance than the OMP scheme and the SP scheme, so that the sparse representation classification method based on disturbance compressed sensing provided by the embodiment obtains a better dictionary matrix and a better sparse coefficient vector by solving a disturbance reconstruction problem, and classification accuracy is improved.
It is to be understood that the above embodiments are merely illustrative of the application of the principles of the present invention, but not in limitation thereof. Various modifications and improvements may be made by those skilled in the art without departing from the spirit and substance of the invention, and are also considered to be within the scope of the invention.

Claims (4)

1. A sparse representation classification method based on disturbance compressed sensing is characterized by comprising the following steps:
according to a test sample input by the system, obtaining a sparse representation model, where the test samples come from the MNIST handwritten digit data set, which comprises 70000 handwritten digit images of 28 x 28 pixels, with a training set of 60000 samples and a test set of 10000 samples; the calculation formula of the sparse representation model is:
y=(D+E)α
where E is a disturbance matrix representing the error between the known matrix D and the ideal dictionary matrix D̃ = D + E;
obtaining the disturbance dictionary corresponding to each class of test samples, with the calculation formula
H_l: y_l = (D + E_l)α_l
where y_l is the test sample of class l, E_l is the disturbance dictionary corresponding to the class-l test sample, and α_l is the sparse vector corresponding to the class-l test sample;
solving the following optimization problem to obtain the sparse vectors:
where λ is a regularization parameter;
obtaining the residual corresponding to each class of test samples, with the following calculation formula:
obtaining the class with the minimum residual, label(y) = arg min_j ||r_j(y)||_2.
2. The sparse representation classification method based on disturbance compressed sensing of claim 1, further comprising:
step 101: initializing r_0 = y, k = 1;
step 102: obtaining the training sample i with the highest correlation to the residual, and adding it to the candidate support set T = T ∪ {i};
step 103: for each candidate class j, selecting all training samples T_j belonging to the j-th class in the support set and the corresponding coefficients x_j, and calculating the residual corresponding to the j-th class, r_j = y − D[T_j]x_j;
step 104: obtaining the class with the minimum residual;
step 105: obtaining the correction parameter corresponding to the class with the minimum residual, with the following calculation formula:
step 106: obtaining the optimized dictionary of the class with the minimum residual, with the following calculation formula:
step 107: updating the residual;
step 108: if k = K, going to step 109; otherwise setting k = k + 1 and returning to step 102;
step 109: obtaining the coefficients at the support-set positions and setting the remaining positions to zero, x[T^c] = 0.
3. The sparse representation classification method based on disturbance compressed sensing of claim 2, wherein the step 102 comprises:
obtaining the set I of the S training samples with the highest correlation, and adding these training samples to the candidate support set T = T ∪ I.
4. The sparse representation classification method based on disturbance compressed sensing of claim 1, further comprising:
step 201: initializing r_0 = y,
step 202: obtaining the K training samples having the highest correlation with the current residual, and adding them to the candidate support set;
step 203: calculating the coefficients, and selecting the training samples with the K largest coefficients to form the candidate support set;
step 204: for each candidate class j, selecting all training samples T_j in the support set that belong to class j and the coefficients x_j corresponding to T_j, and calculating the residual corresponding to T_j, r_j = y − D[T_j]·x_j;
step 205: obtaining the class with the minimum residual, argmin_j ||r_j||_2;
step 206: calculating the correction parameters corresponding to the class-j training samples T_j, wherein the calculation formula is as follows:
step 207: calculating the optimized dictionary corresponding to the class-j training samples T_j, wherein the calculation formula is as follows:
step 208: updating the residual;
step 209: if ||r_k||_2 ≥ ||r_{k−1}||_2, proceeding to step 210; otherwise returning to step 202;
step 210: calculating the coefficients at the support-set positions, and setting the remaining positions to zero, x[T^c] = 0.
CN202010890870.6A 2020-08-29 2020-08-29 Sparse representation classification method based on disturbance compressed sensing Active CN112257739B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010890870.6A CN112257739B (en) 2020-08-29 2020-08-29 Sparse representation classification method based on disturbance compressed sensing

Publications (2)

Publication Number Publication Date
CN112257739A CN112257739A (en) 2021-01-22
CN112257739B true CN112257739B (en) 2023-12-22

Family

ID=74223828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010890870.6A Active CN112257739B (en) 2020-08-29 2020-08-29 Sparse representation classification method based on disturbance compressed sensing

Country Status (1)

Country Link
CN (1) CN112257739B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115790815B (en) * 2023-01-17 2023-05-16 常熟理工学院 Disturbance quick identification method and system for distributed optical fiber sensing system
CN116582132B (en) * 2023-07-06 2023-10-13 广东工业大学 Compressed sensing reconstruction method and system based on improved structured disturbance model

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345621A (en) * 2013-07-09 2013-10-09 东南大学 Face classification method based on sparse concentration index
WO2015048232A1 (en) * 2013-09-26 2015-04-02 Tokitae Llc Systems, devices, and methods for classification and sensor identification using enhanced sparsity
CN104951787A (en) * 2015-06-17 2015-09-30 江苏大学 Power quality disturbance identification method for distinguishing dictionary learning under SRC framework
CN105574475A (en) * 2014-11-05 2016-05-11 华东师范大学 Common vector dictionary based sparse representation classification method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shen Yue; Zhang Hanwen; Liu Guohai; Liu Hui; Chen Zhaoling. Power quality disturbance identification method based on discriminative dictionary learning. Chinese Journal of Scientific Instrument, 2015, No. 10 (full text). *

Also Published As

Publication number Publication date
CN112257739A (en) 2021-01-22

Similar Documents

Publication Publication Date Title
Aljundi et al. Landmarks-based kernelized subspace alignment for unsupervised domain adaptation
Wang et al. Correntropy matching pursuit with application to robust digit and face recognition
CN112257739B (en) Sparse representation classification method based on disturbance compressed sensing
US20210117733A1 (en) Pattern recognition apparatus, pattern recognition method, and computer-readable recording medium
CN115331088B (en) Robust learning method based on class labels with noise and imbalance
CN110188827B (en) Scene recognition method based on convolutional neural network and recursive automatic encoder model
CN110008844B (en) KCF long-term gesture tracking method fused with SLIC algorithm
CN105528620B (en) method and system for combined robust principal component feature learning and visual classification
He et al. Infrared target tracking based on robust low-rank sparse learning
US20200104635A1 (en) Invertible text embedding for lexicon-free offline handwriting recognition
CN116127953B (en) Chinese spelling error correction method, device and medium based on contrast learning
CN115690152A (en) Target tracking method based on attention mechanism
Lu et al. Incremental Dictionary Learning for Unsupervised Domain Adaptation.
CN114911958A (en) Semantic preference-based rapid image retrieval method
CN116468991A (en) Class-incremental unsupervised domain-adaptive image recognition method based on progressive calibration
CN107292855B (en) Image denoising method combining self-adaptive non-local sample and low rank
Zou et al. Quaternion block sparse representation for signal recovery and classification
CN109657693B (en) Classification method based on correlation entropy and transfer learning
CN117437426A (en) Semi-supervised semantic segmentation method for high-density representative prototype guidance
CN113761845A (en) Text generation method and device, storage medium and electronic equipment
CN105389560B (en) Graph-optimized dimensionality reduction method based on local constraints
CN115937862A (en) End-to-end container number identification method and system
Lv et al. A robust mixed error coding method based on nonconvex sparse representation
CN114529908A (en) Offline handwritten chemical reaction type image recognition technology
Zhang et al. Nonlinear dictionary learning based deep neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant