CN107085704A - Fast facial expression recognition method based on ELM autoencoder algorithm - Google Patents

Fast facial expression recognition method based on ELM autoencoder algorithm

Info

Publication number
CN107085704A
CN107085704A (application CN201710188162.6A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710188162.6A
Other languages
Chinese (zh)
Inventor
陆晗
曹九稳
朱心怡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201710188162.6A
Publication of CN107085704A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast facial expression recognition method based on the ELM autoencoder algorithm, comprising the following steps. Step 1: train a face region detection classifier based on Adaboost and perform face detection. Step 2: preprocess the detected face region, including cropping, size normalization, and histogram equalization. Step 3: extract features from the preprocessed facial expression image using the ELM-AE algorithm, which combines an autoencoder with the extreme learning machine. Step 4: build a facial expression classifier based on the extreme learning machine; the extracted feature vector is input into the classifier, and the output is the emotion of the face. The invention extracts principal information and reduces dimensionality quickly and efficiently. For expression classification, the ELM needs only one parameter, the number of hidden neurons, to be tuned; recognition runtime is short and accuracy is high, so the method is efficient and fast to learn.

Description

Fast facial expression recognition method based on the ELM autoencoder algorithm
Technical field
The invention belongs to the field of image processing and describes the complete process of emotion recognition from facial expressions, in particular a fast facial expression recognition method based on the ELM autoencoder algorithm.
Background technology
Emotion recognition from facial expressions, that is, identifying faces in pictures or video and performing further emotion analysis, has become a focus of the biometric feature recognition field over the past few decades. Emotion recognition is essentially about giving computers the ability to "read" moods, improving today's rather stiff and immature human-computer interaction environment.
Emotion analysis of facial expressions mainly includes the following aspects: face detection and localization, image preprocessing, expression feature extraction, expression classification, and emotion analysis. Face detection means determining whether an image contains a face and, if so, locating its position and determining its size. This invention uses the face detection method based on Adaboost, an iterative algorithm: for a training set, different training subsets Si are obtained by changing the distribution probability of each sample; a weak classifier Hi is trained for each Si, and these weak classifiers are then combined according to different weights to obtain a strong classifier. The detected facial expression images often suffer from defects such as noise and insufficient contrast, typically caused by factors such as illumination intensity and equipment quality. Preprocessing is therefore a very important step in facial emotion recognition, and effective preprocessing helps improve the expression recognition rate.
After preprocessing, a feature extraction algorithm is applied to the face image to extract the features of different expressions. This patent uses the ELM-AE algorithm, which combines an autoencoder with the extreme learning machine, as the feature extraction method. It is a highly efficient autoencoding algorithm: the sample data are encoded and decoded, and if the reconstruction error is sufficiently small (within a prescribed range), the code is taken as an effective representation of the input sample, i.e., as the descriptive feature vector of the facial expression image. Finally, expression classification and further emotion analysis are realized: a recognition model and training feature library are built from the feature vectors extracted from facial expression images, and the measured target is assigned a class label among happy, sad, surprised, afraid, angry, disgusted, and neutral.
Facial feature extraction is the most important part of the whole expression recognition system. Traditional methods such as LBP, geometric-feature-based methods, and template-based methods have certain defects: LBP feature extraction has difficulty handling high-dimensional data and runs slowly, while geometric-feature-based methods are not sufficiently applicable and lose part of the information.
The essence of expression recognition is to design an efficient classifier that, from the feature vectors extracted in the previous stage, assigns the target expression to one of the six basic expression classes or to the neutral class. The design of the classifier therefore directly affects the final result of expression recognition and emotion analysis. Because the amount of data after feature extraction is large, methods such as traditional artificial neural networks, template matching, and support vector machines are not fast enough, require too many training samples, and take too long to train, so they cannot meet real-time requirements. This invention therefore uses a classifier based on the extreme learning machine (ELM) algorithm for fast expression classification. ELM is an easy-to-use and effective learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). It has the following advantages for expression classification: (1) ELM uses random weights between the input layer and the hidden layer, so the same data set can be trained repeatedly, giving different output spaces with different classification precision. (2) ELM is a simpler learning algorithm for feedforward neural networks. Traditional neural network learning algorithms (such as the BP algorithm) require a large number of network training parameters to be set manually, which very easily produces locally optimal solutions.
In determining the network parameters, ELM only needs the number of hidden nodes to be set; during execution it does not adjust the input weights or hidden-unit biases of the network, and it produces a unique optimal solution. ELM therefore learns faster than traditional artificial neural networks and generalizes better, so it can realize expression classification and emotion recognition very quickly.
The ELM output is f_L(x) = Σ_{i=1}^{L} β_i G(a_i, b_i, x) = h(x)β, where β_i is the weight between the i-th hidden node and the output node, and G(a_i, b_i, x) is the hidden-layer output function. h(x) = [G(a_1, b_1, x), ..., G(a_L, b_L, x)]^T is the hidden-layer output vector with respect to input x. The key of ELM is to minimize both the training error ||Hβ − T|| and the output weight norm ||β||.
The ELM algorithm is summarized as follows. Given a training set {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, 2, ..., N}, a hidden-node output function g(w, b, x), and a number of hidden nodes L:
(1) Randomly assign the hidden-node parameters (w_i, b_i), i = 1, 2, ..., L.
(2) Compute the hidden-layer output matrix H.
(3) Compute the output weights between the hidden and output nodes: β = H⁺T.
H⁺ is the Moore-Penrose generalized inverse of the hidden-layer output matrix H; it can be computed by methods such as orthogonal projection, orthogonalization, or singular value decomposition.
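The three steps above can be sketched in NumPy as follows (a minimal illustration: the helper names are not from the patent, a sigmoid is assumed as an example activation g, and NumPy's `pinv` stands in for the Moore-Penrose inverse H⁺):

```python
import numpy as np

def train_elm(X, T, L, seed=0):
    """Basic ELM training: random hidden layer, analytic output weights.
    X: (N, n) inputs; T: (N, m) targets; L: number of hidden nodes."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))   # (1) random input weights w_i
    b = rng.standard_normal(L)                 # (1) random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # (2) hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T               # (3) beta = H^+ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    """f(x) = h(x) beta for each row of X."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

With more hidden nodes than training samples, H generically has full row rank, so the minimum-norm solution fits the training targets essentially exactly; the number of hidden nodes L is the only parameter to tune.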
Summary of the invention
In view of the problems of existing expression recognition algorithms, the purpose of the present invention is to provide a fast facial expression recognition method based on the ELM autoencoder algorithm: a faster and more efficient expression recognition method.
The technical scheme of the present invention mainly comprises the following steps:
Step 1. Train the face region detection classifier
1-1. Given a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n), where x_i denotes the i-th sample, y_i = 0 indicates a negative sample (non-face), y_i = 1 indicates a positive sample (face), and n is the total number of training samples.
1-2. Initialize the weights D_1(i) = 1/n and normalize them; D_t(i) is the error weight of the i-th sample in the t-th round, t = 1, ..., T.
1-3. For each feature f, train a weak classifier h(x, f, p, θ); compute the weighted error rate of the weak classifier corresponding to each feature: ξ_t = Σ_i D_t(i) |h(x_i, f, p, θ) − y_i|.
1-4. Update the weights of all samples: D_{t+1}(i) = D_t(i) β_t^{1−e_i}, where β_t = ξ_t/(1 − ξ_t), e_i = 0 if x_i is classified correctly, and e_i = 1 if x_i is classified incorrectly.
1-5. The strong classifier obtained after training can be used for face detection and recognition: if an image contains a face, its position and size can also be located. The strong classifier is H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and H(x) = 0 otherwise, where α_t = log(1/β_t) = log((1 − ξ_t)/ξ_t).
Here h_t is the weak classifier with the minimal error rate ξ_t during training.
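The weight update and strong-classifier vote described above can be sketched as follows (a minimal illustration assuming labels and weak-classifier outputs in {0, 1}; the function names are not from the patent):

```python
import numpy as np

def adaboost_round(D, preds, y):
    """One boosting round: normalize weights, measure the weighted error
    of the chosen weak classifier, and re-weight the samples.
    D: (n,) sample weights; preds, y: (n,) arrays in {0, 1}."""
    D = D / D.sum()                        # weight normalization (step 1-2)
    e = (preds != y).astype(float)         # e_i = 1 iff x_i misclassified
    xi = float(np.sum(D * e))              # weighted error rate xi_t
    beta = xi / (1.0 - xi)                 # beta_t = xi_t / (1 - xi_t)
    D = D * beta ** (1.0 - e)              # shrink weights of correct samples
    alpha = np.log(1.0 / beta)             # alpha_t = log((1 - xi_t) / xi_t)
    return D, alpha

def strong_classify(weak_outputs, alphas):
    """H(x) = 1 if sum_t alpha_t h_t(x) >= (1/2) sum_t alpha_t, else 0."""
    score = float(np.dot(alphas, weak_outputs))
    return 1 if score >= 0.5 * float(np.sum(alphas)) else 0
```

Misclassified samples keep their weight while correctly classified samples are shrunk by β_t, so later rounds focus on the hard examples.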
Step 2. Face region preprocessing
2-1. Crop the detected face region to the region of interest (ROI), then normalize the pixel size of the image: reduce or enlarge the picture to a suitable fixed pixel size.
2-2. Apply histogram equalization to the normalized image to enhance contrast. For a discrete image, the gray-level probability is P_r(r_k) = n_k/N, 0 ≤ r_k ≤ 1, k = 0, 1, ..., L−1, where N is the total number of pixels, n_k is the number of pixels with gray value r_k, and L is the number of gray levels (for an 8-bit grayscale image, L = 2^8 = 256). The equalization transform function is s_k = T(r_k) = Σ_{j=0}^{k} n_j/N, where n_j is the total number of pixels with gray value j.
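The equalization transform of step 2-2 can be sketched for an 8-bit grayscale image as follows (a minimal NumPy version; the lookup-table formulation is an implementation choice, not specified by the patent):

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization for an 8-bit grayscale image.
    Maps gray level r_k to s_k = sum_{j<=k} n_j / N, rescaled to [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=256)  # n_j for each gray value j
    cdf = np.cumsum(hist) / img.size                # cumulative sum n_j / N
    lut = np.round(255.0 * cdf).astype(np.uint8)    # transform function s_k
    return lut[img]                                 # apply lookup table
```

Gray levels are spread so that the cumulative distribution becomes approximately linear, which raises the contrast of under-exposed face images.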
Step 3. Facial expression image feature extraction
3-1. Given training samples X = [x_1, x_2, ..., x_N], the input and output matrix of ELM-AE.
3-2. Randomly generate the hidden-layer input weight matrix a = [a_1, ..., a_L] and the orthogonalized bias vector b = [b_1, ..., b_L], and map the input data to an equal- or different-dimensional space: H = g(a·x + b), with a^T a = I and b^T b = 1, where g(·) denotes the activation function.
3-3. Solve for the ELM-AE output weight matrix β.
Assume the number of neurons in the input and output layers is d and the number of hidden-layer neurons is L.
If d < L or d > L, i.e., for sparse or compressed feature representation, the output weights satisfy Hβ = X and can be computed as β = H⁺X.
If d = L, i.e., for equal-dimension feature mapping, β = H⁻¹X and β^T β = I,
where H = [h_1, ..., h_N] denotes the ELM-AE hidden-layer output matrix.
3-4. Input the preprocessed facial expression image into the trained ELM-AE system; the resulting hidden-layer output matrix H is the texture feature vector of the whole face image.
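Steps 3-1 through 3-4 can be sketched as follows (a simplified illustration assuming d ≥ L so the random input weights can be orthogonalized by QR decomposition, and using β = H⁺X for the d ≠ L case as in the ELM solution above; names are illustrative):

```python
import numpy as np

def elm_ae(X, L, seed=0):
    """ELM-AE sketch: random orthogonal hidden mapping, output weights
    that reconstruct the input. X: (N, d) flattened images; L: hidden size."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A, _ = np.linalg.qr(rng.standard_normal((d, L)))  # a^T a = I (needs d >= L)
    b = rng.standard_normal(L)
    b = b / np.linalg.norm(b)                         # b^T b = 1
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))            # H = g(a x + b)
    beta = np.linalg.pinv(H) @ X                      # solve H beta ~ X
    return H, beta                                    # rows of H = features
```

The rows of H serve as the texture feature vectors of the face images, and the reconstruction error ||Hβ − X|| indicates how faithful the compressed representation is.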
Step 4. Build the facial expression classifier
4-1. Given training samples {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, 2, ..., N}, hidden-layer output function g(w, b, x), number of hidden nodes L, and test sample y.
4-2. Randomly generate the hidden-node parameters (w_i, b_i), i = 1, 2, ..., L.
4-3. Compute the hidden-node output matrix H(w_1, ..., w_L, x_1, ..., x_N, b_1, ..., b_L), whose (i, j) entry is g(w_j · x_i + b_j), and ensure that H is of full rank, where w denotes the input weights connecting the hidden nodes and the input neurons, x is the training sample input, N is the number of training samples, b_i is the bias of the i-th hidden node, and g(·) denotes the activation function.
4-4. Compute the optimal output weights β = H⁺T.
4-5. Compute the output corresponding to test sample y: o = H(w_1, ..., w_L, x_1, ..., x_N, b_1, ..., b_L)β.
4-6. Perform expression classification on the test sample: the class corresponding to the largest entry of the ELM output vector o is the emotion of the face, i.e. label(y) = arg max_j o_j.
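Step 4-6 reduces to an argmax over the ELM output vector; a minimal sketch (the ordering of the seven emotion classes in the output vector is an assumption for illustration):

```python
import numpy as np

# Class order is assumed for illustration only; the patent lists these
# seven emotions but does not fix their position in the output vector.
EMOTIONS = ["happy", "sad", "surprised", "afraid", "angry", "disgusted", "neutral"]

def classify_emotion(o):
    """Return the emotion whose output-node activation is largest."""
    return EMOTIONS[int(np.argmax(o))]
```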
The beneficial effects of the present invention are as follows:
The present invention uses the deep extreme learning machine autoencoder (ELM-AE) algorithm for facial expression feature extraction. This algorithm is a more efficient autoencoding algorithm than the common AE: it can quickly process high-dimensional input data, extract its principal information, and realize higher-dimensional, equal-dimensional, or lower-dimensional feature representations of the original data.
The present invention has a faster recognition speed. Compared with slow gradient-descent algorithms, feature extraction with ELM-AE extracts principal information and reduces dimensionality more quickly and efficiently. For expression classification, the ELM needs only one parameter, the number of hidden neurons, to be tuned; recognition runtime is short and accuracy is high, so it is an efficient algorithm with fast learning speed.
The present invention can reduce the dimensionality of the data while representing the principal components of the original information (the facial expression image). Compared with other feature extraction algorithms, it can rapidly extract the basic building blocks of an image and handle very high-dimensional input data. Meanwhile, the expression classification algorithm based on the extreme learning machine (ELM) has faster learning and recognition speed. The combination of the two algorithms greatly improves both the speed and the accuracy of expression recognition.
Brief description of the drawings
Fig. 1 is a flow diagram of the present invention;
Fig. 2 shows the Japanese JAFFE facial expression image database;
Fig. 3 shows a preprocessed facial expression image;
Fig. 4 shows the ELM-AE network structure;
Fig. 5 is a schematic diagram of a single-hidden-layer feedforward neural network.
Embodiment
As shown in Fig. 1, the face region detection classifier is first trained with the Adaboost algorithm: the weak classifiers obtained from each training round are combined according to certain weights to obtain a strong classifier that can detect face regions. The picture to be detected is then input into the trained face detection classifier, and the detected face region is cropped, pixel-size normalized, and histogram equalized. The processed facial expression picture is input into the trained ELM-AE feature extraction neural network, and the resulting hidden-layer output matrix H is the texture feature vector of the whole face image. Finally, this feature vector is used as the input of the trained ELM expression classifier, which outputs the corresponding expression class.
The invention provides a fast facial expression recognition method based on the ELM autoencoder algorithm: facial expression features are extracted with the ELM-AE algorithm and used as the input of the ELM expression classifier. The combination of the two both improves running speed and achieves high accuracy.
The concrete implementation is as follows:
Step 1: train the face region detection classifier. For a training set, different training subsets Si are obtained by changing the distribution probability of each sample; a weak classifier Hi is trained for each Si, and these weak classifiers are then combined according to different weights to obtain the strong classifier.
(1-1) As shown in Fig. 2, the Japanese JAFFE facial expression database is used as the training set. Initialize the weight of each sample and normalize the weights: D_1(i) = 1/n; D_t(i) is the error weight of the i-th sample in the t-th round, t = 1, ..., T.
(1-2) For each feature f, train a weak classifier h(x, f, p, θ); compute the weighted error rate of the weak classifier corresponding to all features, ξ_t = Σ_i D_t(i) |h(x_i, f, p, θ) − y_i|, and update the weights of all samples: D_{t+1}(i) = D_t(i) β_t^{1−e_i}, where β_t = ξ_t/(1 − ξ_t), e_i = 0 if x_i is classified correctly, and e_i = 1 if x_i is classified incorrectly.
(1-3) The strong classifier obtained at the end of training can be used for face detection and recognition: if an image contains a face, the center position and size of the face can be determined. The strong classifier is H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and H(x) = 0 otherwise, where α_t = log(1/β_t) = log((1 − ξ_t)/ξ_t), and h_t is the weak classifier with the minimal error rate ξ_t during training.
Step 2: face region preprocessing. As shown in Fig. 3, the detected face region is cropped to the region of interest (ROI), the pixel size is normalized (the picture is reduced or enlarged to a suitable fixed pixel size), and histogram equalization is applied.
(2-1) After ROI segmentation of a picture in which a face region has been detected, normalize the pixel size and output a facial expression image of fixed size.
(2-2) Apply histogram equalization to the processed image; the equalization transform function is s_k = T(r_k) = Σ_{j=0}^{k} n_j/N, where n_j is the total number of pixels with gray value j.
Step 3: facial expression image feature extraction. Train the ELM-AE network structure (Fig. 4), computing different output weight matrices β according to the feature representation dimension. The trained ELM-AE network can then be used for expression image feature extraction.
(3-1) Given training samples X = [x_1, x_2, ..., x_N], the input and output matrix of ELM-AE.
(3-2) Randomly generate the hidden-layer input weight matrix a = [a_1, ..., a_L] and the orthogonalized bias vector b = [b_1, ..., b_L].
(3-3) Map the input data to an equal- or different-dimensional space: H = g(a·x + b), with a^T a = I and b^T b = 1, where g(·) denotes the activation function.
(3-4) Solve for the ELM-AE output weight matrix β.
Assume the number of neurons in the input and output layers is d and the number of hidden-layer neurons is L.
If d < L or d > L, i.e., for sparse or compressed feature representation, the output weights satisfy Hβ = X and can be computed as β = H⁺X.
If d = L, i.e., for equal-dimension feature mapping, β = H⁻¹X and β^T β = I,
where H = [h_1, ..., h_N] denotes the ELM-AE hidden-layer output matrix.
(3-5) Input the preprocessed facial expression image into the trained ELM-AE system; the resulting hidden-layer output matrix H is the texture feature vector of the whole face image.
Step 4: build the facial expression classifier. As shown in Fig. 5, the expression classifier based on the extreme learning machine is built: the hidden-node parameters are generated at random, and the single adjustable parameter β is optimized during training.
(4-1) Given training samples {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, 2, ..., N}, hidden-layer output function g(w, b, x), number of hidden nodes L, and test sample y.
(4-2) Randomly generate the hidden-node parameters (w_i, b_i), i = 1, 2, ..., L.
(4-3) Compute the hidden-node output matrix H(w_1, ..., w_L, x_1, ..., x_N, b_1, ..., b_L), whose (i, j) entry is g(w_j · x_i + b_j), and ensure that H is of full rank, where w denotes the input weights connecting the hidden nodes and the input neurons, x is the training sample input, N is the number of training samples, b_i is the bias of the i-th hidden node, and g(·) denotes the activation function.
(4-4) Compute the optimal output weights β = H⁺T.
H⁺ is the Moore-Penrose generalized inverse of the hidden-layer output matrix H; it can be computed by methods such as orthogonal projection, orthogonalization, or singular value decomposition.
(4-5) Compute the output corresponding to test sample y: o = H(w_1, ..., w_L, x_1, ..., x_N, b_1, ..., b_L)β.
(4-6) Perform expression classification on the test sample: the class corresponding to the largest entry of the ELM output vector o is the emotion of the face, i.e. label(y) = arg max_j o_j.

Claims (1)

1. A fast facial expression recognition method based on the ELM autoencoder algorithm, characterized by comprising the following steps:
Step 1, training face region detection grader
1-1. Given a series of training samples (x_1, y_1), (x_2, y_2), ..., (x_i, y_i), ..., (x_n, y_n), where x_i denotes the i-th sample, y_i = 0 indicates a negative sample (non-face), y_i = 1 indicates a positive sample (face), and n is the total number of training samples;
1-2. Initialize the weights D_1(i) = 1/n and normalize them; D_t(i) is the error weight of the i-th sample in the t-th round, t = 1, ..., T;
1-3. For each feature f, train a weak classifier h(x, f, p, θ); compute the weighted error rate of the weak classifier corresponding to each feature: ξ_t = Σ_i D_t(i) |h(x_i, f, p, θ) − y_i|;
1-4. Update the weights of all samples: D_{t+1}(i) = D_t(i) β_t^{1−e_i}, where β_t = ξ_t/(1 − ξ_t), e_i = 0 if x_i is classified correctly, and e_i = 1 if x_i is classified incorrectly;
1-5. The strong classifier obtained after training can be used for face detection and recognition: if an image contains a face, its position and size can also be located; the strong classifier is
H(x) = 1 if Σ_{t=1}^{T} α_t h_t(x) ≥ (1/2) Σ_{t=1}^{T} α_t, and H(x) = 0 otherwise, where α_t = log(1/β_t) = log((1 − ξ_t)/ξ_t),
and h_t is the weak classifier with the minimal error rate ξ_t during training;
Step 2, face region preprocessing;
2-1. Crop the detected face region to the region of interest (ROI), then normalize the pixel size of the image: reduce or enlarge the picture to a suitable fixed pixel size;
2-2. Apply histogram equalization to the normalized image to enhance contrast; for a discrete image, the gray-level probability is P_r(r_k) = n_k/N, 0 ≤ r_k ≤ 1, k = 0, 1, ..., L−1, where N is the total number of pixels, n_k is the number of pixels with gray value r_k, and L is the number of gray levels (for an 8-bit grayscale image, L = 2^8 = 256); the equalization transform function is s_k = T(r_k) = Σ_{j=0}^{k} n_j/N, where n_j is the total number of pixels with gray value j;
Step 3, facial expression image feature extraction;
3-1. Given training samples X = [x_1, x_2, ..., x_N], the input and output matrix of ELM-AE;
3-2. Randomly generate the hidden-layer input weight matrix a = [a_1, ..., a_L] and the orthogonalized bias vector b = [b_1, ..., b_L], and map the input data to an equal- or different-dimensional space: H = g(a·x + b), with a^T a = I and b^T b = 1, where g(·) denotes the activation function;
3-3. Solve for the ELM-AE output weight matrix β;
Assume the number of neurons in the input and output layers is d and the number of hidden-layer neurons is L;
If d < L or d > L, i.e., for sparse or compressed feature representation, the output weights satisfy Hβ = X and can be computed as β = H⁺X;
If d = L, i.e., for equal-dimension feature mapping, β = H⁻¹X and β^T β = I,
where H = [h_1, ..., h_N] denotes the ELM-AE hidden-layer output matrix;
3-4. Input the preprocessed facial expression image into the trained ELM-AE system; the resulting hidden-layer output matrix H is the texture feature vector of the whole face image;
Step 4, build the facial expression classifier;
4-1. Given training samples {(x_i, t_i) | x_i ∈ R^n, t_i ∈ R^m, i = 1, 2, ..., N}, hidden-layer output function g(w, b, x), number of hidden nodes L, and test sample y;
4-2. Randomly generate the hidden-node parameters (w_i, b_i), i = 1, 2, ..., L;
4-3. Compute the hidden-node output matrix
H(w_1, ..., w_L, x_1, ..., x_N, b_1, ..., b_L) =
[ g(w_1·x_1 + b_1)  ...  g(w_L·x_1 + b_L) ]
[        ...        ...         ...       ]
[ g(w_1·x_N + b_1)  ...  g(w_L·x_N + b_L) ]
and ensure that H is of full rank, where w denotes the input weights connecting the hidden nodes and the input neurons, x is the training sample input, N is the number of training samples, b_i is the bias of the i-th hidden node, and g(·) denotes the activation function;
4-4. Compute the optimal output weights β = H⁺T, where H⁺ is the Moore-Penrose generalized inverse of H;
4-5. Compute the output corresponding to test sample y: o = H(w_1, ..., w_L, x_1, ..., x_N, b_1, ..., b_L)β;
4-6. Perform expression classification on the test sample: the class corresponding to the largest entry of the ELM output vector o is the emotion of the face, i.e. label(y) = arg max_j o_j.
CN201710188162.6A 2017-03-27 2017-03-27 Fast face expression recognition method based on ELM own coding algorithms Pending CN107085704A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710188162.6A CN107085704A (en) 2017-03-27 2017-03-27 Fast face expression recognition method based on ELM own coding algorithms

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710188162.6A CN107085704A (en) 2017-03-27 2017-03-27 Fast face expression recognition method based on ELM own coding algorithms

Publications (1)

Publication Number Publication Date
CN107085704A true CN107085704A (en) 2017-08-22

Family

ID=59614561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710188162.6A Pending CN107085704A (en) 2017-03-27 2017-03-27 Fast face expression recognition method based on ELM own coding algorithms

Country Status (1)

Country Link
CN (1) CN107085704A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610692A (en) * 2017-09-22 2018-01-19 杭州电子科技大学 The sound identification method of self-encoding encoder multiple features fusion is stacked based on neutral net
CN107832787A (en) * 2017-10-31 2018-03-23 杭州电子科技大学 Recognition Method of Radar Emitters based on bispectrum own coding feature
CN108021947A (en) * 2017-12-25 2018-05-11 北京航空航天大学 A kind of layering extreme learning machine target identification method of view-based access control model
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108460324A (en) * 2018-01-04 2018-08-28 上海孩子通信息科技有限公司 A method of child's mood for identification
CN108509941A (en) * 2018-04-20 2018-09-07 北京京东金融科技控股有限公司 Emotional information generation method and device
CN108764064A (en) * 2018-05-07 2018-11-06 西北工业大学 SAR Target Recognition Algorithms based on Steerable filter device and self-encoding encoder
CN109145963A (en) * 2018-08-01 2019-01-04 上海宝尊电子商务有限公司 A kind of expression packet screening technique
CN109165684A (en) * 2018-08-20 2019-01-08 集美大学 A kind of unmanned boat sea major class target visual image-recognizing method
CN109858509A (en) * 2018-11-05 2019-06-07 杭州电子科技大学 Based on multilayer stochastic neural net single classifier method for detecting abnormality
CN109934304A (en) * 2019-03-25 2019-06-25 重庆邮电大学 A kind of blind field image pattern classification method based on the hidden characteristic model that transfinites
CN109934107A (en) * 2019-01-31 2019-06-25 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109934295A (en) * 2019-03-18 2019-06-25 重庆邮电大学 A kind of image classification and method for reconstructing based on the hidden feature learning model that transfinites
CN110287983A (en) * 2019-05-10 2019-09-27 杭州电子科技大学 Based on maximal correlation entropy deep neural network single classifier method for detecting abnormality
CN110364141A (en) * 2019-06-04 2019-10-22 杭州电子科技大学 Elevator typical case's abnormal sound alarm method based on depth single classifier
CN110633516A (en) * 2019-08-30 2019-12-31 电子科技大学 Method for predicting performance degradation trend of electronic device
CN111062478A (en) * 2019-12-18 2020-04-24 天地伟业技术有限公司 Feature compression algorithm based on neural network
CN111126297A (en) * 2019-12-25 2020-05-08 淮南师范学院 Experience analysis method based on learner expression
CN111259689A (en) * 2018-11-30 2020-06-09 百度在线网络技术(北京)有限公司 Method and apparatus for transmitting information
CN112528764A (en) * 2020-11-25 2021-03-19 杭州欣禾圣世科技有限公司 Facial expression recognition method, system and device and readable storage medium
CN113469257A (en) * 2021-07-07 2021-10-01 云南大学 Distribution transformer fault detection method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6778705B2 (en) * 2001-02-27 2004-08-17 Koninklijke Philips Electronics N.V. Classification of objects through model ensembles
CN104318221A (en) * 2014-11-05 2015-01-28 中南大学 Facial expression recognition method based on ELM
CN104799852A (en) * 2015-05-19 2015-07-29 北京工业大学 Method for extracting motor imagery EEG features based on extreme learning machine self-encoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
万川: "Research on Theory and Methods of Facial Expression Recognition *** Based on Dynamic Image Sequences" (in Chinese), China Doctoral Dissertations Full-text Database, Information Science and Technology *
陈阔: "Research on a Facial Expression Recognition Method Combining 3D Points and 2D Images" (in Chinese), China Master's Theses Full-text Database, Information Science and Technology *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610692A (en) * 2017-09-22 2018-01-19 杭州电子科技大学 Voice recognition method based on neural network stacked self-encoder multi-feature fusion
CN107610692B (en) * 2017-09-22 2020-07-21 杭州电子科技大学 Voice recognition method based on neural network stacking self-encoder multi-feature fusion
CN107832787A (en) * 2017-10-31 2018-03-23 杭州电子科技大学 Radar emitter identification method based on bispectrum self-encoding features
CN107832787B (en) * 2017-10-31 2020-09-22 杭州电子科技大学 Radar radiation source identification method based on bispectrum self-coding characteristics
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 Facial expression recognition method based on improved CNN
CN108021947A (en) * 2017-12-25 2018-05-11 北京航空航天大学 Vision-based hierarchical extreme learning machine target recognition method
CN108460324A (en) * 2018-01-04 2018-08-28 上海孩子通信息科技有限公司 Method for identifying children's emotions
CN108509941A (en) * 2018-04-20 2018-09-07 北京京东金融科技控股有限公司 Emotional information generation method and device
CN108764064A (en) * 2018-05-07 2018-11-06 西北工业大学 SAR target recognition algorithm based on steerable filters and autoencoders
CN109145963A (en) * 2018-08-01 2019-01-04 上海宝尊电子商务有限公司 Emoticon pack screening method
CN109165684A (en) * 2018-08-20 2019-01-08 集美大学 Visual image recognition method for major maritime target classes for unmanned surface vessels
CN109858509A (en) * 2018-11-05 2019-06-07 杭州电子科技大学 Single-classifier anomaly detection method based on multilayer random neural networks
CN111259689B (en) * 2018-11-30 2023-04-25 百度在线网络技术(北京)有限公司 Method and device for transmitting information
CN111259689A (en) * 2018-11-30 2020-06-09 百度在线网络技术(北京)有限公司 Method and apparatus for transmitting information
CN109934107A (en) * 2019-01-31 2019-06-25 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109934107B (en) * 2019-01-31 2022-03-01 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN109934295A (en) * 2019-03-18 2019-06-25 重庆邮电大学 Image classification and reconstruction method based on an extreme latent feature learning model
CN109934304A (en) * 2019-03-25 2019-06-25 重庆邮电大学 Blind-domain image sample classification method based on an extreme latent feature model
CN110287983A (en) * 2019-05-10 2019-09-27 杭州电子科技大学 Single-classifier anomaly detection method based on maximum correlation entropy deep neural network
CN110287983B (en) * 2019-05-10 2021-07-09 杭州电子科技大学 Single-classifier anomaly detection method based on maximum correlation entropy deep neural network
CN110364141A (en) * 2019-06-04 2019-10-22 杭州电子科技大学 Elevator typical abnormal sound alarm method based on a depth single classifier
CN110364141B (en) * 2019-06-04 2021-09-28 杭州电子科技大学 Elevator typical abnormal sound alarm method based on depth single classifier
CN110633516B (en) * 2019-08-30 2022-06-14 电子科技大学 Method for predicting performance degradation trend of electronic device
CN110633516A (en) * 2019-08-30 2019-12-31 电子科技大学 Method for predicting performance degradation trend of electronic device
CN111062478A (en) * 2019-12-18 2020-04-24 天地伟业技术有限公司 Feature compression algorithm based on neural network
CN111126297A (en) * 2019-12-25 2020-05-08 淮南师范学院 Experience analysis method based on learner expression
CN111126297B (en) * 2019-12-25 2023-10-31 淮南师范学院 Experience analysis method based on learner expression
CN112528764A (en) * 2020-11-25 2021-03-19 杭州欣禾圣世科技有限公司 Facial expression recognition method, system and device and readable storage medium
CN113469257A (en) * 2021-07-07 2021-10-01 云南大学 Distribution transformer fault detection method and system

Similar Documents

Publication Publication Date Title
CN107085704A (en) Fast face expression recognition method based on ELM own coding algorithms
CN108717568B (en) Image feature extraction and training method based on 3D convolutional neural networks
CN108537743B (en) Face image enhancement method based on generative adversarial networks
CN106599797B (en) Infrared face recognition method based on local parallel neural networks
CN112308158A (en) Multi-source domain adaptation model and method based on partial feature alignment
CN108304826A (en) Facial expression recognition method based on convolutional neural networks
CN104268593A (en) Multiple-sparse-representation face recognition method for solving small sample size problem
CN112818764B (en) Low-resolution image facial expression recognition method based on feature reconstruction model
CN110399821A (en) Customer satisfaction acquisition method based on facial expression recognition
CN106326843B (en) Face recognition method
CN105718889A (en) Face identity recognition method based on GB(2D)2PCANet deep convolution model
CN111861945B (en) Text-guided image restoration method and system
CN105956570B (en) Smiling face recognition method based on lip features and deep learning
Tereikovskyi et al. The method of semantic image segmentation using neural networks
CN107066951A (en) Spontaneous facial expression recognition method and system
CN110175248A (en) Face image retrieval method and device based on deep learning and hash coding
CN115966010A (en) Expression recognition method based on attention and multi-scale feature fusion
CN117079098A (en) Space small target detection method based on position coding
Huang et al. Design and Application of Face Recognition Algorithm Based on Improved Backpropagation Neural Network.
CN116628605A (en) Method and device for electricity-theft classification based on ResNet and DSC attention mechanism
CN111126155(en) Pedestrian re-identification method based on semantic-constrained generative adversarial networks
CN110070068B(en) Human body action recognition method
CN110991554A (en) Improved PCA (principal component analysis)-based deep network image classification method
CN107133579A (en) Face recognition method based on CSGF(2D)2PCANet convolutional networks
CN105389573B (en) Face recognition method based on hierarchical local ternary patterns

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170822