CN105701506A - Improved method based on extreme learning machine (ELM) and sparse representation classification - Google Patents

Improved method based on extreme learning machine (ELM) and sparse representation classification

Info

Publication number
CN105701506A
CN105701506A
Authority
CN
China
Prior art keywords
output
hidden node
hat
classification
sparse representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610018444.7A
Other languages
Chinese (zh)
Other versions
CN105701506B (en)
Inventor
曹九稳
郝娇平
张凯
曾焕强
赖晓平
赵雁飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201610018444.7A priority Critical patent/CN105701506B/en
Publication of CN105701506A publication Critical patent/CN105701506A/en
Application granted granted Critical
Publication of CN105701506B publication Critical patent/CN105701506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2155 Generating training patterns; Bootstrap methods, e.g. bagging or boosting, characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an improved classification method based on the extreme learning machine (ELM) and sparse representation classification (SRC). The steps of the method are as follows: 1. randomly generate the hidden-node parameters; 2. compute the hidden-node output matrix; 3. according to the size relation between L and N, compute the output weights connecting the hidden nodes and the output neurons using the corresponding formula; 4. compute the output vector of the query image y; 5. compare the largest value o_f and the second-largest value o_s of the ELM output vector o; if their difference is greater than a preset threshold, the index of the largest value in the output vector is the class of the query image; otherwise go to step 6; 6. using the training samples corresponding to the k largest values of the output vector o, construct a sub-dictionary, compute the linear representation coefficients of the image y with a sparse reconstruction algorithm, compute the residuals, and assign the query image to the class with the smallest residual. The amount of computation of the present invention is greatly reduced, a higher recognition rate is achieved, and the computational complexity is substantially lowered.

Description

An improved method based on the extreme learning machine and sparse representation classification
Technical field
The invention belongs to the field of image classification, and in particular relates to an improved method based on the extreme learning machine and sparse representation classification.
Background technology
Image classification, i.e., automatically assigning an input image to a specific category, has attracted increasingly wide attention, especially because of its applications in many fields such as security systems, medical diagnosis, and human-computer interaction. In the past several years, techniques developed in machine learning have also had a great impact on the field of image classification. In fact, almost every method proposed in the past has its merits and drawbacks. One unavoidable problem is the trade-off between computational complexity and classification accuracy; in other words, no single method can be designed to be optimal in both efficiency and recognition rate across all applications. To address this problem, hybrid systems have emerged, which integrate the advantages of several distinct methods to form a more effective one.
One vital factor of a successful image classification system is the classifier. A well-designed classifier should not be affected by other factors, for instance the choice of feature extraction method. Over the past few decades, artificial neural networks have benefited greatly from the ability to set the input parameters arbitrarily: not only is their learning speed fast, they also achieve good generalization performance. Among them, the extreme learning machine (ELM) has been widely studied. The ELM is popular because of its fast learning speed, its real-time processing capability, and the scalability of neural networks. Besides the ELM, another approach that has drawn wide research attention is sparse representation based classification (SRC). SRC originated from the study of the sparse behavior of human visual neurons, and was later found to perform well in face recognition, machine vision, direction estimation, and other areas. SRC tries to discover the relationship between samples of the same class and builds the sparse representation coefficients of the query image by linear regression. Although ELM and SRC each have prominent advantages, they still have drawbacks that limit their development in practical applications. Experiments show that the learning speed of the ELM is very fast but it cannot handle noise well, whereas SRC can handle noise well but at a large computational cost. It should also be noted that a well-designed classifier needs not only a high recognition rate but also fast recognition. Since the ELM and SRC each have advantages, designing a hybrid classifier is reasonable. Experiments show that ELM-SRC outperforms the ELM in recognition rate and has lower computational complexity than SRC, but because it uses the complete dictionary, the computational complexity of ELM-SRC remains rather high.
Artificial neural networks (ANNs), also called neural networks (NNs), are algorithmic mathematical models that imitate the behavioral characteristics of animal neural networks and perform distributed parallel information processing. Such a network depends on the complexity of the system and achieves information processing by adjusting the interconnections among a large number of internal nodes. An artificial neural network is a mathematical model for information processing whose structure resembles the synaptic connections of the brain; in engineering and academia it is often simply called a neural network. Each neuron of a feedforward network accepts the input of the previous stage and outputs to the next stage, with no feedback, so the network can be represented by a directed acyclic graph. The nodes of the graph fall into two classes: input nodes and computing units. Each computing unit can have any number of inputs but only one output, which may be connected to the inputs of any number of other nodes. A feedforward neural network is usually divided into layers, the input of the i-th layer being connected to the output of the (i-1)-th layer. Since the input and output nodes can be connected to, and are directly affected by, the outside world, they are called the visible layers, while the intermediate layers are called hidden layers. A single-hidden-layer feedforward neural network (SLFN) is, as the name suggests, a feedforward neural network with only one hidden layer. For N arbitrary distinct samples $(x_i, t_i)$, where $x_i=[x_{i1},x_{i2},\ldots,x_{in}]^T$ and $t_i=[t_{i1},t_{i2},\ldots,t_{im}]^T$, a standard SLFN with L hidden nodes has the mathematical model $\sum_{i=1}^{L}\beta_i g(w_i\cdot x_j+b_i)=o_j$, $j=1,\ldots,N$, where $w_i$ is the weight vector connecting the input nodes and the i-th hidden node, $\beta_i$ is the weight vector between the i-th hidden node and the output nodes, $b_i$ is the bias of the i-th hidden node, and $g(x)$ is the activation function.
An SLFN with L hidden nodes can approximate these N samples with zero error, meaning that $\sum_{j=1}^{N}\lVert o_j-t_j\rVert=0$, i.e., there exist $\beta_i$, $w_i$ and $b_i$ such that $\sum_{i=1}^{L}\beta_i g(w_i\cdot x_j+b_i)=t_j$, $j=1,\ldots,N$. The N equations above can be written compactly as $H\beta=T$, where

$$H(w_1,\ldots,w_L,x_1,\ldots,x_N,b_1,\ldots,b_L)=\begin{bmatrix}g(w_1\cdot x_1+b_1)&\cdots&g(w_L\cdot x_1+b_L)\\\vdots&\ddots&\vdots\\g(w_1\cdot x_N+b_1)&\cdots&g(w_L\cdot x_N+b_L)\end{bmatrix},$$

$$\beta=\begin{bmatrix}\beta_1^T\\\vdots\\\beta_L^T\end{bmatrix},\qquad T=\begin{bmatrix}t_1^T\\\vdots\\t_N^T\end{bmatrix}.$$

H is the hidden-layer output matrix. Experiments show that the input weights and hidden-layer biases can be chosen at random and the N distinct observations can still be fitted exactly. It turns out that SLFNs trained in this way not only learn fast but also have good generalization performance.
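As a concrete illustration of this construction, the following minimal sketch (Python with NumPy is our choice here, not something the patent prescribes, and names such as `elm_train` are illustrative) draws the hidden-node parameters at random, builds H with a sigmoid activation, and solves Hβ = T by least squares:

```python
import numpy as np

def elm_train(X, T, L, seed=0):
    """X: (N, n) training inputs; T: (N, m) one-hot targets; L: hidden nodes."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.standard_normal((L, n))           # random input weights w_i
    b = rng.standard_normal(L)                # random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))  # sigmoid activation g(w_i . x_j + b_i)
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # beta = H^+ T (Moore-Penrose)
    return W, b, beta

def elm_output(x, W, b, beta):
    """Output vector o = h(x) beta for a single input x of shape (n,)."""
    return (1.0 / (1.0 + np.exp(-(x @ W.T + b)))) @ beta
```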
Summary of the invention
It is an object of the present invention to address the problems in existing methods by providing an improved method based on the extreme learning machine and sparse representation classification. The method is an adaptive algorithm, Extreme Learning Machine and Adaptive Sparse Representation based Classification (EA-SRC), that improves on the Extreme Learning Machine - Sparse Representation based Classification (ELM-SRC). To achieve this goal, the present invention adopts the following scheme.
The technical solution adopted by the present invention to solve the technical problem comprises the following steps:
Step 1. Randomly generate the hidden-node parameters $(w_i, b_i)$, $i=1,2,\ldots,L$, where $w_i$ is the input weight connecting the input neurons and the i-th hidden node, $b_i$ is the bias of the i-th hidden node, and L is the number of hidden nodes.
Step 2. Compute the hidden-node output matrix $H(w_1,\ldots,w_L,x_1,\ldots,x_N,b_1,\ldots,b_L)$,

$$H(w_1,\ldots,w_L,x_1,\ldots,x_N,b_1,\ldots,b_L)=\begin{bmatrix}g(w_1\cdot x_1+b_1)&\cdots&g(w_L\cdot x_1+b_L)\\\vdots&\ddots&\vdots\\g(w_1\cdot x_N+b_1)&\cdots&g(w_L\cdot x_N+b_L)\end{bmatrix}$$

where w is the input weight connecting the input neurons and the hidden nodes, x is the training sample input, N is the number of training samples, $b_i$ is the bias of the i-th hidden node, and $g(\cdot)$ denotes the activation function.
Step 3. According to the size relation between L and N, compute the output weights $\hat{\beta}$ connecting the hidden nodes and the output neurons, using a different formula for each case.
Step 4. Compute the output vector o of the query image y, i.e., $o=h(y)\hat{\beta}$, where $h(y)$ is the hidden-node output row vector of y.
Step 5. Compare the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector o, and determine the class of the query image accordingly.
If the difference between the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector o is greater than a preset threshold σ, i.e., $o_f-o_s>\sigma$, the neural network already trained by the extreme learning machine (ELM) is used directly, and the index of the largest value in the output vector is the class of the query image.
If the difference between the largest value $o_f$ and the second-largest value $o_s$ is smaller than the preset threshold, i.e., $o_f-o_s<\sigma$, the image is considered to contain strong noise, and the sparse representation classification algorithm is used for classification.
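The following small sketch captures this adaptive routing rule (illustrative Python; it assumes the output vector o comes from the trained ELM and that `src_fallback` is the sparse-representation stage described later):

```python
import numpy as np

def route_query(o, sigma, src_fallback):
    """Return the predicted class: ELM decision if confident, else SRC."""
    idx = np.argsort(o)
    o_f, o_s = o[idx[-1]], o[idx[-2]]  # largest and second-largest outputs
    if o_f - o_s > sigma:              # well separated: trust the ELM label
        return int(idx[-1])
    return src_fallback()              # ambiguous (noisy): defer to the SRC stage
```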
In step 3, if L ≤ N, that is, the number of hidden nodes is less than or equal to the number of training samples, then, to improve computational efficiency, a singular value decomposition of the hidden-node output matrix is performed, specifically:
3-1. Compute the singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\ldots,d_i,\ldots\}$ is the diagonal matrix of singular values and $d_i$ is the i-th singular value of H; then $H^TH=VD^2V^T$. Here U and V are unitary matrices satisfying $UU^T=U^TU=I$ and $VV^T=V^TV=I$.
3-2. Set an upper bound $\lambda_{max}$ and a lower bound $\lambda_{min}$ for the regularization parameter λ. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized form $\mathrm{HAT}_r$ of the orthogonal projection (hat) matrix HAT, $\mathrm{HAT}_r=HV(D^2+\lambda_iI)^{-1}V^TH^T$, where $\mathrm{HAT}=HH^+=H(H^TH)^{-1}H^T$.
3-3. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the corresponding PRESS mean squared error $\mathrm{MSE}_i^{PRESS}$:

$$\mathrm{MSE}_i^{PRESS}=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-(\mathrm{HAT}_r)_{jj}}\right)^2;$$
where $t_j$ is the desired output and $o_j$ is the actual output.
3-4. Take the λ corresponding to the minimum PRESS mean squared error as $\lambda_{opt}$. This choice gives good generalization performance and also maximizes the classification margin.
3-5. Compute the output weights $\hat{\beta}$ connecting the hidden nodes and the output neurons:

$$\hat{\beta}=(H^TH+\lambda_{opt}I)^{-1}H^TT=V(D^2+\lambda_{opt}I)^{-1}V^TH^TT.$$
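A compact sketch of steps 3-1 through 3-5 follows (illustrative Python; with an economy-size SVD, the diagonal of HAT_r and the PRESS error can be evaluated without ever forming an N x N matrix, and `press_beta` is our own name):

```python
import numpy as np

def press_beta(H, T, lambdas):
    """Pick lambda by the PRESS criterion and return (beta_hat, lambda_opt).

    H: (N, L) hidden-layer output matrix; T: (N, m) target matrix;
    lambdas: iterable of candidate regularization values (all > 0).
    """
    U, d, Vt = np.linalg.svd(H, full_matrices=False)  # H = U D V^T (economy size)
    best_mse, lam_opt = np.inf, None
    for lam in lambdas:
        f = d**2 / (d**2 + lam)            # filter factors of D^2 (D^2 + lam I)^-1
        hat_diag = (U**2 * f).sum(axis=1)  # diagonal of HAT_r, no N x N matrix
        O = U @ (f[:, None] * (U.T @ T))   # fitted outputs HAT_r T
        mse = np.mean(((T - O) / (1.0 - hat_diag[:, None]))**2)  # PRESS MSE
        if mse < best_mse:
            best_mse, lam_opt = mse, lam
    # beta_hat = V (D^2 + lam_opt I)^-1 D U^T T, equal to (H^T H + lam I)^-1 H^T T
    beta_hat = Vt.T @ ((d / (d**2 + lam_opt))[:, None] * (U.T @ T))
    return beta_hat, lam_opt
```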
In step 3, if L > N, that is, the number of hidden nodes is greater than the number of training samples, then, to improve computational efficiency, a singular value decomposition of the hidden-node output matrix is likewise performed, specifically:
3-6. Compute the singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\ldots,d_i,\ldots,d_N\}$ is the diagonal matrix of singular values and $d_i$ is the i-th singular value of H; then $HH^T=UD^2U^T$, with $UU^T=U^TU=I$ and $VV^T=V^TV=I$.
3-7. Set the upper bound $\lambda_{max}$ and lower bound $\lambda_{min}$ of the regularization parameter λ. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute $\mathrm{HAT}_r=HH^TU(D^2+\lambda_iI)^{-1}U^T$, where $\mathrm{HAT}=HH^+=H(H^TH)^{-1}H^T$.
3-8. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the corresponding PRESS mean squared error $\mathrm{MSE}_i^{PRESS}$:

$$\mathrm{MSE}_i^{PRESS}=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-(\mathrm{HAT}_r)_{jj}}\right)^2;$$
where $t_j$ is the desired output and $o_j$ is the actual output.
3-9. Take the λ corresponding to the minimum PRESS mean squared error as $\lambda_{opt}$. This choice gives good generalization performance and also maximizes the classification margin.
3-10. Compute the output weights $\hat{\beta}$ connecting the hidden nodes and the output neurons:

$$\hat{\beta}=H^TU(D^2+\lambda_{opt}I)^{-1}U^TT.$$
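Note that the `press_beta` sketch above also covers this L > N branch unchanged when an economy-size SVD is used: substituting $H=UDV^T$ gives $H^TU(D^2+\lambda_{opt}I)^{-1}U^TT=VD(D^2+\lambda_{opt}I)^{-1}U^TT$, which is exactly the filter-factor expression the code evaluates.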
The sparse representation classification algorithm of step 5 first establishes an adaptive sub-dictionary from the k largest values of the output vector o, then reconstructs the sparse representation coefficients over the corresponding training samples, obtains the residual of each candidate class, and finally takes the index of the minimum residual as the class of the image, specifically:
First find the classes corresponding to the k largest values of the output vector o, then build the adaptive sub-dictionary $A=[A_{m(1)},A_{m(2)},\ldots,A_{m(k)}]$ from the training samples of those classes, where $m(i)\in\{1,2,\ldots,m\}$.
Then reconstruct the sparse representation coefficients, $\hat{x}=\arg\min_x\lVert Ax-y\rVert_2^2+\tau\lVert x\rVert_1$, where τ is the regularization coefficient.
Finally compute the corresponding residuals $r_d(y)=\lVert y-A_d\delta_d(\hat{x})\rVert_2^2$, $1\le d<m$, where $A_d$ is the matrix of training samples of class d, and $\delta_d(\hat{x})$ is the coefficient vector that keeps the entries of $\hat{x}$ associated with class d and sets all others to zero.
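A sketch of this adaptive sub-dictionary stage follows (illustrative Python; the patent calls for an l1-regularized reconstruction but does not fix a solver, so the small ISTA loop here is one possible stand-in, and names like `src_classify` are ours):

```python
import numpy as np

def _soft(v, t):
    """Soft-thresholding operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def src_classify(y, o, class_dicts, k=3, tau=0.01, iters=200):
    """y: query vector (dim,); o: ELM output vector (m,);
    class_dicts[d]: (dim, n_d) matrix of class-d training samples."""
    top = np.argsort(o)[-k:]                           # classes of the k largest outputs
    A = np.hstack([class_dicts[int(d)] for d in top])  # adaptive sub-dictionary
    sizes = [class_dicts[int(d)].shape[1] for d in top]
    # ISTA for min_x ||A x - y||_2^2 + tau * ||x||_1
    lip = 2.0 * np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = _soft(x - 2.0 * (A.T @ (A @ x - y)) / lip, tau / lip)
    # per-class residuals, keeping only that class's coefficients
    residuals, start = [], 0
    for sz in sizes:
        delta = np.zeros_like(x)
        delta[start:start + sz] = x[start:start + sz]
        residuals.append(np.sum((y - A @ delta) ** 2))
        start += sz
    return int(top[int(np.argmin(residuals))])         # class with minimum residual
```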
The present invention has the following beneficial effects:
Compared with ELM-SRC, the improved adaptive algorithm EA-SRC based on the extreme learning machine and sparse representation classification not only achieves a higher recognition rate but also learns faster, greatly reducing the computational complexity.
Description of the drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic diagram of the single-hidden-layer feedforward neural network used by the present invention.
Detailed description of the invention
The improvements of the present invention are verified below with a concrete experiment. The description is given only by way of demonstration and explanation and does not limit the present invention in any form.
As shown in Fig. 1 and Fig. 2, choose any database and first generate L hidden-node parameters $(w_i,b_i)$, $i=1,2,\ldots,L$, with a random function, where $w_i$ is the input weight connecting the input neurons and the i-th hidden node, $b_i$ is the bias of the i-th hidden node, and L is the number of hidden nodes. Compute the hidden-layer output matrix

$$H(w_1,\ldots,w_L,x_1,\ldots,x_N,b_1,\ldots,b_L)=\begin{bmatrix}g(w_1\cdot x_1+b_1)&\cdots&g(w_L\cdot x_1+b_L)\\\vdots&\ddots&\vdots\\g(w_1\cdot x_N+b_1)&\cdots&g(w_L\cdot x_N+b_L)\end{bmatrix}.$$
(1) If the number of hidden nodes is less than or equal to the number of samples, i.e., L ≤ N: to improve computational efficiency, perform a singular value decomposition of the hidden-layer output matrix, $H=UDV^T$, where D is the diagonal matrix of the singular values $d_i$ of H; then $H^TH=VD^2V^T$ and $VV^T=V^TV=I$. Set in advance the upper bound $\lambda_{max}$ and lower bound $\lambda_{min}$ of the regularization parameter λ. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute $\mathrm{HAT}_r=HV(D^2+\lambda_iI)^{-1}V^TH^T$, where $\mathrm{HAT}=HH^+=H(H^TH)^{-1}H^T$, and the PRESS mean squared error $\mathrm{MSE}_i^{PRESS}=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-(\mathrm{HAT}_r)_{jj}}\right)^2$, where $t_j$ is the desired output and $o_j$ the actual output. Return the λ corresponding to the minimum mean squared error as $\lambda_{opt}$; this yields good generalization performance and also maximizes the classification margin. Then compute the output weights connecting the hidden nodes and output neurons, $\hat{\beta}=(H^TH+\lambda_{opt}I)^{-1}H^TT=V(D^2+\lambda_{opt}I)^{-1}V^TH^TT$.
(2) If the number of hidden nodes is greater than the number of samples, i.e., L > N: perform a singular value decomposition of the hidden-layer output matrix, $H=UDV^T$, where D is the diagonal matrix of the singular values $d_i$ of H; then $HH^T=UD^2U^T$ and $UU^T=U^TU=I$. Set in advance the upper bound $\lambda_{max}$ and lower bound $\lambda_{min}$ of the regularization parameter λ. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute $\mathrm{HAT}_r=HH^TU(D^2+\lambda_iI)^{-1}U^T$, where $\mathrm{HAT}=HH^+=H(H^TH)^{-1}H^T$, and then the PRESS mean squared error for each candidate λ, $\mathrm{MSE}_i^{PRESS}=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-(\mathrm{HAT}_r)_{jj}}\right)^2$, where $t_j$ is the desired output and $o_j$ the actual output. Return the λ corresponding to the minimum mean squared error as $\lambda_{opt}$; this yields good generalization performance and also maximizes the classification margin. Compute the output weights connecting the hidden nodes and output neurons, $\hat{\beta}=H^TU(D^2+\lambda_{opt}I)^{-1}U^TT$.
Then compute the output vector $o=h(y)\hat{\beta}$ of the query image y. If the difference between the largest value and the second-largest value of the output vector o of the query image is greater than the preset threshold, i.e., $o_f-o_s>\sigma$, the neural network already trained by the extreme learning machine (ELM) is used directly, and the index of the largest value in the output vector is the class of the query image. If the difference is smaller than the preset threshold, i.e., $o_f-o_s<\sigma$, first find the indices of the k largest values of the output vector o, where each index $d\in\{1,2,\ldots,m\}$, then build the adaptive sub-dictionary $A=[A_{m(1)},\ldots,A_{m(k)}]$ from the training samples corresponding to those k values, and reconstruct the sparse representation coefficients $\hat{x}=\arg\min_x\lVert Ax-y\rVert_2^2+\tau\lVert x\rVert_1$, where τ is the regularization coefficient. For each d, $1\le d<m$, find the corresponding $A_d$ and $\delta_d(\hat{x})$ and compute the residual $r_d(y)=\lVert y-A_d\delta_d(\hat{x})\rVert_2^2$. The class corresponding to the minimum residual is the class to which the query image belongs.
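Putting the sketches above together, a hypothetical end-to-end run on random stand-in data (an actual experiment would use an image database) might look like this; `press_beta`, `route_query`, and `src_classify` are the illustrative helpers defined earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, L, m = 200, 64, 100, 5                  # samples, input dim, hidden nodes, classes
X = rng.standard_normal((N, n))               # stand-in "images"
labels = rng.integers(0, m, size=N)
T = np.eye(m)[labels]                         # one-hot target matrix

W = rng.standard_normal((L, n))               # random input weights
b = rng.standard_normal(L)                    # random hidden biases
H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))      # hidden-layer output matrix
beta, lam_opt = press_beta(H, T, np.logspace(-6, 2, 25))

y = rng.standard_normal(n)                    # query "image"
o = (1.0 / (1.0 + np.exp(-(y @ W.T + b)))) @ beta
dicts = {d: X[labels == d].T for d in range(m)}   # per-class training columns
pred = route_query(o, sigma=0.2,
                   src_fallback=lambda: src_classify(y, o, dicts, k=3))
```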

Claims (4)

1. An improved method based on the extreme learning machine and sparse representation classification, characterized in that it comprises the following steps:
Step 1. Randomly generate the hidden-node parameters $(w_i,b_i)$, $i=1,2,\ldots,L$, where $w_i$ is the input weight connecting the input neurons and the i-th hidden node, $b_i$ is the bias of the i-th hidden node, and L is the number of hidden nodes;
Step 2. Compute the hidden-node output matrix $H(w_1,\ldots,w_L,x_1,\ldots,x_N,b_1,\ldots,b_L)$,

$$H(w_1,\ldots,w_L,x_1,\ldots,x_N,b_1,\ldots,b_L)=\begin{bmatrix}g(w_1\cdot x_1+b_1)&\cdots&g(w_L\cdot x_1+b_L)\\\vdots&\ddots&\vdots\\g(w_1\cdot x_N+b_1)&\cdots&g(w_L\cdot x_N+b_L)\end{bmatrix}$$

where w is the input weight connecting the input neurons and the hidden nodes, x is the training sample input, N is the number of training samples, $b_i$ is the bias of the i-th hidden node, and $g(\cdot)$ denotes the activation function;
Step 3. According to the size relation between L and N, compute the output weights $\hat{\beta}$ connecting the hidden nodes and the output neurons with the corresponding formula;
Step 4. Compute the output vector o of the query image y;
Step 5. Compare the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector o, and determine the class of the query image accordingly;
if the difference between the largest value $o_f$ and the second-largest value $o_s$ of the ELM output vector o is greater than a preset threshold σ, i.e., $o_f-o_s>\sigma$, the neural network already trained by the extreme learning machine (ELM) is used directly, and the index of the largest value in the output vector is the class of the query image;
if the difference between the largest value $o_f$ and the second-largest value $o_s$ is smaller than the preset threshold, i.e., $o_f-o_s<\sigma$, the image is considered to contain strong noise, and the sparse representation classification algorithm is used for classification.
2. The improved method based on the extreme learning machine and sparse representation classification according to claim 1, characterized in that, in step 3, if L ≤ N, that is, the number of hidden nodes is less than or equal to the number of samples, then, to improve computational efficiency, a singular value decomposition of the hidden-node output matrix is performed, specifically:
3-1. Compute the singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\ldots,d_i,\ldots\}$ is the diagonal matrix of singular values and $d_i$ is the i-th singular value of H; then $H^TH=VD^2V^T$, where U and V are unitary matrices satisfying $UU^T=U^TU=I$ and $VV^T=V^TV=I$;
3-2. Set the upper bound $\lambda_{max}$ and lower bound $\lambda_{min}$ of the regularization parameter λ; for each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the regularized form $\mathrm{HAT}_r$ of the orthogonal projection matrix HAT, $\mathrm{HAT}_r=HV(D^2+\lambda_iI)^{-1}V^TH^T$, where $\mathrm{HAT}=HH^+=H(H^TH)^{-1}H^T$;
3-3. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the corresponding PRESS mean squared error $\mathrm{MSE}_i^{PRESS}$:

$$\mathrm{MSE}_i^{PRESS}=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-(\mathrm{HAT}_r)_{jj}}\right)^2;$$

where $t_j$ is the desired output and $o_j$ is the actual output;
3-4. Take the λ corresponding to the minimum PRESS mean squared error as $\lambda_{opt}$; this yields good generalization performance and also maximizes the classification margin;
3-5. Compute the output weights $\hat{\beta}$ connecting the hidden nodes and the output neurons:

$$\hat{\beta}=(H^TH+\lambda_{opt}I)^{-1}H^TT=V(D^2+\lambda_{opt}I)^{-1}V^TH^TT.$$
3. The improved method based on the extreme learning machine and sparse representation classification according to claim 1, characterized in that, in step 3, if L > N, that is, the number of hidden nodes is greater than the number of samples, then, to improve computational efficiency, a singular value decomposition of the hidden-node output matrix is performed, specifically:
3-6. Compute the singular value decomposition $H=UDV^T$, where $D=\mathrm{diag}\{d_1,\ldots,d_i,\ldots,d_N\}$ is the diagonal matrix of singular values and $d_i$ is the i-th singular value of H; then $HH^T=UD^2U^T$, with $UU^T=U^TU=I$ and $VV^T=V^TV=I$;
3-7. Set the upper bound $\lambda_{max}$ and lower bound $\lambda_{min}$ of the regularization parameter λ; for each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute $\mathrm{HAT}_r=HH^TU(D^2+\lambda_iI)^{-1}U^T$, where $\mathrm{HAT}=HH^+=H(H^TH)^{-1}H^T$;
3-8. For each $\lambda_i\in[\lambda_{min},\lambda_{max}]$, compute the corresponding PRESS mean squared error $\mathrm{MSE}_i^{PRESS}$:

$$\mathrm{MSE}_i^{PRESS}=\frac{1}{N}\sum_{j=1}^{N}\left(\frac{t_j-o_j}{1-(\mathrm{HAT}_r)_{jj}}\right)^2;$$

where $t_j$ is the desired output and $o_j$ is the actual output;
3-9. Take the λ corresponding to the minimum PRESS mean squared error as $\lambda_{opt}$; this yields good generalization performance and also maximizes the classification margin;
3-10. Compute the output weights $\hat{\beta}$ connecting the hidden nodes and the output neurons:

$$\hat{\beta}=H^TU(D^2+\lambda_{opt}I)^{-1}U^TT.$$
4. The improved method based on the extreme learning machine and sparse representation classification according to claim 1, characterized in that the sparse representation classification algorithm of step 5 first establishes an adaptive sub-dictionary from the k largest values of the output vector o, then reconstructs the sparse representation coefficients over the corresponding training samples, obtains the residuals, and finally takes the index of the minimum residual as the class of the image, specifically:
First find the classes corresponding to the k largest values of the output vector o, then build the adaptive sub-dictionary $A=[A_{m(1)},A_{m(2)},\ldots,A_{m(k)}]$ from the training samples of those classes,
where $m(i)\in\{1,2,\ldots,m\}$;
Then reconstruct the sparse representation coefficients, $\hat{x}=\arg\min_x\lVert Ax-y\rVert_2^2+\tau\lVert x\rVert_1$, where τ is the regularization coefficient;
Finally compute the corresponding residuals $r_d(y)=\lVert y-A_d\delta_d(\hat{x})\rVert_2^2$, $1\le d<m$;
where $A_d$ is the matrix of training samples of class d, and $\delta_d(\hat{x})$ is the sparse representation coefficient vector corresponding to the class-d samples.
CN201610018444.7A 2016-01-12 2016-01-12 Improved method based on the extreme learning machine and sparse representation classification Active CN105701506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610018444.7A CN105701506B (en) 2016-01-12 2016-01-12 Improved method based on the extreme learning machine and sparse representation classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610018444.7A CN105701506B (en) 2016-01-12 2016-01-12 Improved method based on the extreme learning machine and sparse representation classification

Publications (2)

Publication Number Publication Date
CN105701506A 2016-06-22
CN105701506B CN105701506B (en) 2019-01-18

Family

ID=56226315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610018444.7A Active CN105701506B (en) 2016-01-12 2016-01-12 A kind of improved method based on transfinite learning machine and rarefaction representation classification

Country Status (1)

Country Link
CN (1) CN105701506B (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140055609A1 (en) * 2012-08-22 2014-02-27 International Business Machines Corporation Determining foregroundness of an object in surveillance video data
CN104980442A (en) * 2015-06-26 2015-10-14 四川长虹电器股份有限公司 Network intrusion detection method based on meta-sample sparse representation
CN104992165A (en) * 2015-07-24 2015-10-21 天津大学 Extreme learning machine based traffic sign recognition method

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326677A (en) * 2016-09-12 2017-01-11 北京化工大学 Soft-sensing method for the acetic acid consumption of a PTA plant
CN106326677B (en) * 2016-09-12 2018-12-25 北京化工大学 Soft-sensing method for the acetic acid consumption of a PTA plant
CN106779091A (en) * 2016-12-23 2017-05-31 杭州电子科技大学 Periodic vibration signal localization method based on the extreme learning machine and arrival distance
CN106779091B (en) * 2016-12-23 2019-02-12 杭州电子科技大学 Periodic vibration signal localization method based on the extreme learning machine and arrival distance
CN108460458A (en) * 2017-01-06 2018-08-28 谷歌有限责任公司 Executing computation graphs on graphics processing units
CN106897737B (en) * 2017-01-24 2019-10-11 北京理工大学 Hyperspectral remote sensing image terrain classification method based on the extreme learning machine
CN106897737A (en) * 2017-01-24 2017-06-27 北京理工大学 Hyperspectral remote sensing image terrain classification method based on the extreme learning machine
CN107369147A (en) * 2017-07-06 2017-11-21 江苏师范大学 Image fusion method based on self-supervised learning
CN107369147B (en) * 2017-07-06 2020-12-25 江苏师范大学 Image fusion method based on self-supervised learning
CN107820093A (en) * 2017-11-15 2018-03-20 深圳大学 Information detection method and apparatus based on packet energy differences, and receiving device
CN107820093B (en) * 2017-11-15 2019-09-03 深圳大学 Information detection method and apparatus based on packet energy differences, and receiving device
CN108470337A (en) * 2018-04-02 2018-08-31 江门市中心医院 Quantitative analysis method and system for subsolid lung nodules based on deep image features
CN109902644A (en) * 2019-03-07 2019-06-18 北京海益同展信息科技有限公司 Face recognition method, apparatus, device, and computer-readable medium
CN109934295A (en) * 2019-03-18 2019-06-25 重庆邮电大学 Image classification and reconstruction method based on the extreme hidden feature learning model
CN109934304A (en) * 2019-03-25 2019-06-25 重庆邮电大学 Blind-domain image sample classification method based on the extreme hidden feature model
CN110533101A (en) * 2019-08-29 2019-12-03 西安宏规电子科技有限公司 Image classification method based on deep neural network subspace coding

Also Published As

Publication number Publication date
CN105701506B (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN105701506A (en) Improved method based on extreme learning machine (ELM) and sparse representation classification
Zhou et al. Self-attention feature fusion network for semantic segmentation
US11170256B2 (en) Multi-scale text filter conditioned generative adversarial networks
CN107563422A Polarimetric SAR classification method based on semi-supervised convolutional neural networks
CN106228512A Image super-resolution reconstruction method based on learning-rate-adaptive convolutional neural networks
CN110334580A Equipment fault classification method using variable-weight combination based on incremental ensembles
CN105512680A Multi-view SAR image target recognition method based on deep neural networks
US11551076B2 (en) Event-driven temporal convolution for asynchronous pulse-modulated sampled signals
Zhang et al. Denoising Laplacian multi-layer extreme learning machine
CN112464004A (en) Multi-view depth generation image clustering method
CN103218617B (en) A kind of feature extracting method of polyteny Large space
CN104899921A Single-view video human body pose recovery method based on a multimodal autoencoder model
Liu et al. NAS-SCAM: Neural architecture search-based spatial and channel joint attention module for nuclei semantic segmentation and classification
CN107563430A Convolutional neural network optimization method based on sparse autoencoders and gray-scale correlation fractal dimension
CN103268484A Design method of a classifier for high-precision face recognition
CN112149645A (en) Human body posture key point identification method based on generation of confrontation learning and graph neural network
Shen et al. Multi-dimensional, multi-functional and multi-level attention in YOLO for underwater object detection
Lin et al. Research on Small Target Detection Technology Based on the MPH‐SSD Algorithm
CN110532545A Data information extraction method based on complex neural network modeling
CN103761567A Wavelet neural network weight initialization method based on Bayesian estimation
Zand et al. Flow-based Spatio-Temporal Structured Prediction of Dynamics
Zhang et al. The Role of Knowledge Creation‐Oriented Convolutional Neural Network in Learning Interaction
Cheng et al. Solving monocular sensors depth prediction using MLP-based architecture and multi-scale inverse attention
CN103903003A (en) Method for using Widrow-Hoff learning algorithm
Wang et al. Analysis and Research of Artificial Intelligence Algorithms in GPS Data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant