CN108268838A - Facial expression recognizing method and facial expression recognition system - Google Patents


Info

Publication number
CN108268838A
CN108268838A (application CN201810001358.4A; granted publication CN108268838B)
Authority
CN
China
Prior art keywords
face
expression
feature
facial
carried out
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810001358.4A
Other languages
Chinese (zh)
Other versions
CN108268838B (en)
Inventor
付璐斯
周盛宗
于志刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujian Institute of Research on the Structure of Matter of CAS
Original Assignee
Fujian Institute of Research on the Structure of Matter of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Institute of Research on the Structure of Matter of CAS filed Critical Fujian Institute of Research on the Structure of Matter of CAS
Priority to CN201810001358.4A priority Critical patent/CN108268838B/en
Publication of CN108268838A publication Critical patent/CN108268838A/en
Application granted granted Critical
Publication of CN108268838B publication Critical patent/CN108268838B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

This application discloses a facial expression recognition method comprising: detecting a face in an original image; performing face alignment and feature point localization on the detected face; extracting facial feature information from the face image; and performing expression classification on the extracted feature data to recognize the facial expression. Through face detection, feature point localization, feature extraction, and expression classification, the application predicts the most likely facial expression, ensures the accuracy of expression recognition, and has broad application prospects.

Description

Facial expression recognition method and facial expression recognition system
Technical field
This application relates to a facial expression recognition method and a facial expression recognition system, and belongs to the technical field of facial expression recognition.
Background technology
The generation of human emotion is a very complex psychological process, and emotion is expressed through multiple modalities. The three modalities most commonly studied by computer researchers are facial expression, voice, and action. Among these three, facial expression contributes up to 55% of the emotional information conveyed. With the increasingly wide application of human-computer interaction technology, facial expression recognition is of great significance in the field of human-computer interaction. As one of the main research directions of pattern recognition and machine learning, a large number of facial expression recognition algorithms have already been proposed.
However, facial expression recognition technology also has its weaknesses: 1. Variation across individuals: the same expression differs according to each person's manner of expressing it; 2. Variation within one individual: the expression of the same person changes in real time in actual life; 3. External conditions, such as background, illumination, angle, and distance, strongly affect expression recognition. All of the above can affect the accuracy of facial expression recognition.
Summary of the invention
In view of the technical problem that facial expression recognition in the prior art is inaccurate and its accuracy is easily affected, the purpose of this application is to provide a facial expression recognition method and system that can recognize expressions accurately.
To achieve the above object, the present invention provides a facial expression recognition method.
The facial expression recognition method is characterized by comprising:
detecting a face in an original image;
performing face alignment and feature point localization on the detected face;
extracting facial feature information from the face image;
performing expression classification on the acquired feature data to recognize the facial expression.
Face detection: detecting the presence of a face in an original image of an arbitrary scene and precisely separating out the face region.
Further, detecting a face in an original image includes:
scanning the original image line by line with the local binary pattern operator to obtain a response image;
performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face;
performing eye detection with the AdaBoost algorithm to separate out the face region.
Optionally, during detection with the AdaBoost algorithm, multi-scale detection is carried out at scales from 1.25 down to 0.9.
Further, performing face alignment and feature point localization on the detected face includes:
annotating facial feature points using the constrained local model.
Optionally, after the facial feature points are annotated with the constrained local model and the feature point coordinates are obtained, the regions that best discriminate between expression classes are selected, and two types of features are extracted: deformation-based expression features and motion-based expression features;
recursive feature elimination with a linear support vector machine is then used for feature evaluation, further refining the selected features.
Further, extracting facial feature information from the face image includes:
selecting the regions that discriminate between expression classes, and extracting two types of features: deformation-based expression features and motion-based expression features;
using recursive feature elimination with a linear support vector machine for feature evaluation, further refining the selected features.
Optionally, the regions that discriminate between expression classes include the eyes, nose, mouth corners, eyebrows, and the contour points of the facial components.
Further, extracting facial feature information from the face image also includes: performing feature selection on the extracted facial feature information to obtain a facial feature subset, and saving the facial feature information for expression recognition.
Further, performing expression classification on the acquired feature data to recognize the facial expression includes:
selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample carrying its corresponding expression label;
performing expression classification with the trained classifier under the least-squares rule.
Further, performing expression classification on the acquired feature data also includes:
building a basis vector space from the expression features of known labels; the expression to be recognized is classified by projecting its features into this space, thereby achieving facial expression recognition.
As a specific embodiment, the facial expression recognition method comprises the following steps: (1) detecting a face in an original image; (2) performing face alignment and feature point localization on the detected face; (3) extracting facial feature information from the face image; (4) performing expression classification on the acquired feature data to recognize the facial expression.
Step (1) further comprises: (11) scanning the original image line by line with the local binary pattern operator to obtain a response image; (12) performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face; (13) performing eye detection with the AdaBoost algorithm to separate out the face region.
Further, multi-scale detection at scales from 1.25 down to 0.9 is carried out during face detection or eye detection with the AdaBoost algorithm.
Step (2) further comprises: annotating facial feature points using the constrained local model.
Step (3) further comprises: (31) selecting the mouth, eyebrow, and eye regions, the three main regions that discriminate between expression classes, and extracting two types of features: deformation-based expression features and motion-based expression features; (32) using recursive feature elimination with a linear support vector machine for feature evaluation, further refining the selected features.
Further, feature selection is performed on the extracted facial feature information to obtain a facial feature subset, and the facial feature information is saved for expression recognition.
Step (4) further comprises: (41) selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample carrying its corresponding expression label; (42) performing expression classification with the trained classifier under the least-squares rule.
Further, a basis vector space is built from the expression features of known labels; the expression to be recognized is classified by projecting its features into this space, thereby achieving facial expression recognition.
Another aspect of the application provides a facial expression recognition system, characterized in that the system comprises: a face detection module, a feature point localization module, a feature extraction module, and a facial expression recognition module;
the face detection module is configured to detect a face in an original image;
the feature point localization module is connected with the face detection module and configured to perform face alignment and feature point localization on the detected face;
the feature extraction module is connected with the feature point localization module and configured to extract facial feature information from the face image;
the facial expression recognition module is connected with the feature extraction module and configured to feed the facial feature data to be recognized, according to the extracted facial feature information, through the trained expression classifier, perform a maximum-likelihood prediction, and output the most probable expression class, thereby recognizing the facial expression.
Optionally, the face detection module scans the original image line by line with the local binary pattern operator to obtain a response image;
performs face detection on the response image with the AdaBoost algorithm to detect the presence of a face;
and performs eye detection with the AdaBoost algorithm to separate out the face region.
Optionally, during detection with the AdaBoost algorithm, multi-scale detection is carried out at scales from 1.25 down to 0.9.
Optionally, the feature point localization module annotates facial feature points using the constrained local model.
Optionally, the feature extraction module selects the regions that discriminate between expression classes, and extracts two types of features: deformation-based expression features and motion-based expression features;
recursive feature elimination with a linear support vector machine is then used for feature evaluation, further refining the selected features.
Optionally, the regions that discriminate between expression classes include at least one of the mouth, eyebrows, eyes, and nose.
Optionally, the feature extraction module performs feature selection on the extracted facial feature information to obtain a facial feature subset, and saves the facial feature information for expression recognition.
Optionally, the facial expression recognition module performs expression classification on the acquired feature data to recognize the facial expression as follows: samples are selected according to the extracted facial feature information and an expression classifier is trained with prior knowledge, each sample carrying its corresponding expression label;
expression classification is then performed with the trained classifier under the least-squares rule.
Optionally, the facial expression recognition module builds a basis vector space from the expression features of known labels; the expression to be recognized is classified by projecting its features into this space, thereby achieving facial expression recognition.
The beneficial effects of this application include:
through face detection, feature point localization, feature extraction, and expression classification, the application predicts the most likely facial expression, ensures the accuracy of expression recognition, and has broad application prospects.
Description of the drawings
Fig. 1 is a flow diagram of the facial expression recognition method described herein.
Fig. 2 is an architecture diagram of the facial expression recognition system described herein.
Detailed description of the embodiments
The application is described in detail below with reference to the embodiments, but the application is not limited to these embodiments.
Embodiment 1
The facial expression recognition method and system provided by the invention are described in detail below with reference to the accompanying drawings.
Referring to Fig. 1, a flow diagram of the facial expression recognition method of the present invention. The method comprises the following steps: S11: detecting a face in the original image; S12: performing face alignment and feature point localization on the detected face; S13: extracting facial feature information from the face image; S14: performing expression classification on the acquired feature data to recognize the facial expression. These steps are described in detail below with reference to the drawings.
S11: detecting a face in the original image.
Face detection: detecting the presence of a face in an original image of an arbitrary scene and precisely separating out the face region. As a preferred embodiment, step S11 may further be completed with the following steps: 11) scanning the original image line by line with the local binary pattern operator to obtain a response image; 12) performing face detection on the response image with the AdaBoost algorithm to detect the presence of a face; 13) performing eye detection with the AdaBoost algorithm to separate out the face region.
The local binary pattern (LBP) is an effective texture descriptor with a remarkable ability to describe local texture features of an image. Applying the LBP operator is similar to a template operation in filtering: the original image is scanned line by line; for each pixel, its gray value is taken as a threshold, and the 8 surrounding pixels of its 3 × 3 neighborhood are binarized against it; the binarization results are assembled into an 8-bit binary number in a fixed order, and the value of this binary number (0 to 255) is taken as the response of that pixel.
Table 1 shows the gray values of an original image patch in one embodiment. For the center point of the 3 × 3 region in Table 1, its gray value 88 is taken as the threshold and its 8 neighbors are binarized; reading clockwise from the upper-left point (the order can be arbitrary, but must be fixed), the binarization results form the binary number 10001011, i.e. the decimal value 139, which becomes the response of the center pixel. After the entire line-by-line scan, an LBP response image is obtained, which can serve as the feature for the subsequent steps; the gray values of the resulting response image are shown in Table 2.
180   52    5
213   88   79
158   84  156
Table 1. Gray values of the original image patch in one embodiment (3 × 3 region; center value 88).
1    0   0
1  139   0
1    0   1
Table 2. Binarization results of the neighborhood, with the center replaced by the LBP response value 139.
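The LBP response computation described above can be sketched in a few lines. The following is an illustrative NumPy sketch, not the patent's implementation; it reproduces the worked example of Tables 1 and 2, reading the neighbors clockwise from the upper-left point. Leaving the border pixels at zero is an assumption the text does not specify.

```python
import numpy as np

def lbp_response(image):
    """Basic 3x3 LBP response image (border pixels left at 0).

    Neighbors are read clockwise from the top-left corner, matching the
    worked example in the text (the ordering is arbitrary but fixed).
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # clockwise offsets starting at the upper-left neighbor
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = image[y, x]
            code = 0
            for dy, dx in offsets:
                # binarize each neighbor against the center gray value
                code = (code << 1) | int(image[y + dy, x + dx] >= center)
            out[y, x] = code
    return out

# The 3x3 patch from Table 1: thresholding against the center value 88
patch = np.array([[180, 52, 5],
                  [213, 88, 79],
                  [158, 84, 156]], dtype=np.uint8)
print(lbp_response(patch)[1, 1])  # -> 139 (binary 10001011)
```

On a full image, the same function produces the response image that the subsequent AdaBoost stage consumes.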
The AdaBoost algorithm was proposed by Freund and Schapire based on the online allocation algorithm. AdaBoost allows the designer to keep adding new weak classifiers until some predetermined, sufficiently small error rate is reached. In AdaBoost, each training sample is assigned a weight indicating its probability of being selected into the training set of a component classifier. If a sample is classified accurately, its probability of being selected in the next training set is lowered; conversely, if a sample is not classified correctly, its weight is raised. Through T rounds of such training, AdaBoost can focus on the harder samples and combine the weak classifiers into a strong classifier for target detection.
The AdaBoost algorithm is described as follows:
1) Given a calibrated training sample set (x_1, y_1), (x_2, y_2), ..., (x_L, y_L), where g_j(x_i) denotes the j-th Haar-like feature of the i-th training image, x_i ∈ X is an input training sample, and y_i ∈ Y = {1, -1} marks true and false samples respectively.
2) Initialize the weights w_{1,i} = 1/(2m) for true samples and w_{1,i} = 1/(2n) for false samples, where m and n are the numbers of true and false samples respectively, and the total number of samples is L = m + n.
3) Run T rounds of training, for t = 1, 2, ..., T:
Normalize the weights of all samples: w_{t,i} ← w_{t,i} / Σ_{j=1}^{L} w_{t,j}.
For the j-th Haar-like feature, a simple classifier is obtained by determining the threshold θ_j and bias p_j that minimize the weighted error ε_j:
ε_j = Σ_i w_{t,i} · [h_j(x_i) ≠ y_i], where h_j(x) = 1 if p_j · g_j(x) < p_j · θ_j, and h_j(x) = -1 otherwise.
The bias p_j determines the direction of the inequality and takes only the two values ±1.
Among the simple classifiers so determined, select the weak classifier h_t with the minimum error ε_t.
4) Update the weights of all samples:
w_{t+1,i} = w_{t,i} · β_t^{1 - e_i},
where β_t = ε_t / (1 - ε_t), e_i = 0 if x_i is classified correctly by h_t, and e_i = 1 otherwise.
5) The strong classifier finally obtained is:
H(x) = sign(Σ_{t=1}^{T} α_t h_t(x)),
where α_t = ln(1/β_t) weighs h_t according to its prediction error.
At this point, a face can be detected through the above steps. During detection, multi-scale detection can be carried out at scales from 1.25 down to 0.9, and the detection windows are finally merged to output the result.
On the basis of the detected face, the AdaBoost algorithm is then applied to eye detection. The basic principle of eye detection is the same as that of face detection and is not repeated here. During eye detection, multi-scale detection can be carried out at scales from 1.25 down to 0.9, and a rejection mechanism can be established (for example, based on features such as the position and size of the eyes).
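The training loop of steps 1) to 5) can be illustrated with a minimal AdaBoost over single-threshold "stumps". This is a hedged sketch under simplifying assumptions: a real Viola-Jones detector boosts Haar-like features over image windows, whereas the toy feature matrix and the stump learner here are illustrative stand-ins. The weight update follows the scheme in the text: β_t = ε_t/(1 - ε_t), with correctly classified samples scaled down by β_t.

```python
import numpy as np

def train_adaboost(X, y, T):
    """Minimal AdaBoost over threshold stumps (stand-ins for Haar-like
    weak classifiers).  Correctly classified samples (e_i = 0) are scaled
    down by beta_t; misclassified ones (e_i = 1) keep their weight."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []                              # (feature, threshold, polarity, alpha)
    for _ in range(T):
        w /= w.sum()                         # normalize sample weights
        best = None
        for j in range(d):                   # search all candidate stumps
            for thr in np.unique(X[:, j]):
                for p in (1, -1):            # p plays the role of the bias p_j
                    pred = np.where(p * (X[:, j] - thr) >= 0, 1, -1)
                    eps = w[pred != y].sum()         # weighted error
                    if best is None or eps < best[0]:
                        best = (eps, j, thr, p, pred)
        eps, j, thr, p, pred = best
        eps = max(eps, 1e-10)                # guard against eps = 0
        beta = eps / (1 - eps)
        w *= np.where(pred == y, beta, 1.0)  # weight update of step 4)
        stumps.append((j, thr, p, np.log(1 / beta)))   # alpha_t = ln(1/beta_t)
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(p * (X[:, j] - thr) >= 0, 1, -1)
                for j, thr, p, a in stumps)
    return np.where(score >= 0, 1, -1)       # sign of the weighted vote

# Toy data, separable by one stump on feature 0 (threshold near 0.6),
# so the first round already reaches zero weighted error.
X = np.array([[0.3, 0.2], [0.1, 0.8], [0.4, 0.9], [0.2, 0.1],
              [0.7, 0.3], [0.9, 0.6], [0.8, 0.9], [0.6, 0.2]])
y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
model = train_adaboost(X, y, T=5)
print((predict(model, X) == y).all())  # -> True
```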
S12: performing face alignment and feature point localization on the detected face.
Feature point localization: automatically locating the key facial feature points of the input face image, such as the eyes, nose, mouth corners, eyebrows, and the contour points of the facial components. As a preferred embodiment, step S12 may further be completed with the following step: annotating facial feature points using the constrained local model.
The constrained local model (CLM) detects facial feature points by initializing the position of a mean face and then letting each feature point of the mean face search for a match in its neighborhood. The whole process has two stages: a model construction stage and a fitting stage. The model construction stage itself comprises two parts: shape model construction and patch model construction. Shape model construction models the shape of the face model and describes the rules that shape variation follows. Patch model construction models the neighborhood around each feature point and establishes a feature point matching criterion for judging the best match of a feature point.
The constrained local model (CLM) algorithm is described as follows:
1) Shape model construction
Compute the mean shape over all aligned face samples in the training set. Suppose there are M images with N feature points each, the coordinates of each feature point being (x_i, y_i); the N feature points of one image form the vector x = [x_1 y_1 x_2 y_2 ... x_N y_N]^T, and the mean face over all images is obtained as
x̄ = (1/M) Σ_{i=1}^{M} x_i.
Subtracting the mean face from the shape vector of each sample image yields a zero-mean shape variation matrix X:
X = [x_1 - x̄, x_2 - x̄, ..., x_M - x̄].
Applying PCA to the matrix X gives the principal components of face shape variation, i.e. the principal eigenvalues λ_i and the corresponding eigenvectors p_i. Because the eigenvectors belonging to the larger eigenvalues generally carry the main information of the samples, the eigenvectors of the k largest eigenvalues are selected to form the orthogonal matrix P = (p_1, p_2, ..., p_k).
The weight vector of the shape variation is b = (b_1, b_2, ..., b_k)^T, each component of b giving the magnitude along its corresponding eigenvector:
b = P^T (x - x̄).
For an arbitrary face test image, the sample shape vector can then be expressed as
x ≈ x̄ + P b.
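The shape model construction above (mean shape, zero-mean variation matrix, PCA modes, and weight vector b) can be sketched as follows; the four two-point "shapes" are illustrative toy data, not from the patent.

```python
import numpy as np

def build_shape_model(shapes, k):
    """PCA shape model: mean shape plus the top-k variation modes.

    `shapes` is an (M, 2N) array of aligned landmark vectors
    [x1 y1 ... xN yN]; returns (mean, P) with P of shape (2N, k).
    Any aligned shape is then approximated as x ~ mean + P @ b.
    """
    mean = shapes.mean(axis=0)
    X = shapes - mean                        # zero-mean variation matrix
    cov = X.T @ X / len(shapes)              # covariance for PCA
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]       # k largest eigenvalues
    P = vecs[:, order]
    return mean, P

def project(shape, mean, P):
    b = P.T @ (shape - mean)                 # weight vector of the modes
    return mean + P @ b                      # reconstruction x ~ mean + P b

# Toy set of four two-point "faces" varying along a single direction
shapes = np.array([[0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.2, 0.0],
                   [0.0, 0.0, 0.8, 0.0],
                   [0.0, 0.0, 1.1, 0.0]])
mean, P = build_shape_model(shapes, k=1)
rec = project(shapes[1], mean, P)
print(np.allclose(rec, shapes[1]))  # one mode captures all the variation
```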
2) Patch model construction
Suppose there are M face images in the training sample and N key facial feature points are selected on each image. A patch region of fixed size is chosen around each feature point, and patches containing the feature point are labeled as positive samples; patches of the same size intercepted in non-feature-point regions are labeled as negative samples.
Suppose each feature point has a total of r patches, forming a vector (x^(1), x^(2), ..., x^(r))^T for every image in the sample set. The output then contains only positive and negative samples, i.e. feature-point patches and non-feature-point patches: y^(i) ∈ {-1, 1}, i = 1, 2, ..., r, where y^(i) = 1 labels a positive sample and y^(i) = -1 a negative sample. The trained linear SVM gives the weight vector
w = Σ_{i=1}^{M_s} α_i x_i,
where the x_i are the subspace vectors of the sample set, i.e. the support vectors, the α_i are the weight coefficients, and M_s is the number of support vectors of each feature point. The response of a patch is then
y^(i) = w^T · x^(i) + θ,
where w^T = [w_1 w_2 ... w_n] collects the weight coefficients and θ is the offset. In this way a patch model is built for each feature point.
3) Point fitting
A local search is carried out in the constrained region around the currently estimated position of each feature point, producing for each feature point a similarity response map, denoted R(X, Y).
A quadratic function is then fitted to the response map. Suppose R(X, Y) attains its maximum at (x_0, y_0) within the neighborhood; a function is fitted at this position so that its maximum coincides with the maximum of R(X, Y). The quadratic function can be written as
r(x, y) = a(x - x_0)^2 + b(y - y_0)^2 + c,
where a, b, and c are the coefficients of the quadratic function, solved by minimizing the error between r(x, y) and R(X, Y), i.e. by a least-squares computation:
min_{a,b,c} Σ_{x,y} (R(x, y) - r(x, y))^2.
With the parameters a, b, and c, r(x, y) is an objective cost function of the feature point position; adding the deformation-constraint cost function to it constitutes the objective function of the feature point search. Each optimization of this objective function yields a new feature point position, which is updated iteratively until convergence to the maximum, completing the fitting of the face points.
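The least-squares fit of r(x, y) = a(x - x_0)^2 + b(y - y_0)^2 + c to a response map can be sketched as follows. Taking (x_0, y_0) as the location of the map's maximum is an assumption consistent with the text, and the synthetic paraboloid is illustrative.

```python
import numpy as np

def fit_quadratic(R):
    """Fit r(x, y) = a(x-x0)^2 + b(y-y0)^2 + c to a response map R by
    linear least squares, with (x0, y0) the location of the maximum."""
    ys, xs = np.indices(R.shape)
    y0, x0 = np.unravel_index(np.argmax(R), R.shape)
    # With (x0, y0) fixed, the model is linear in (a, b, c)
    A = np.column_stack([(xs - x0).ravel() ** 2,
                         (ys - y0).ravel() ** 2,
                         np.ones(R.size)])
    coeffs, *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)
    return coeffs, (int(x0), int(y0))

# Synthetic response map: an exact paraboloid peaked at (x0, y0) = (3, 2)
ys, xs = np.indices((5, 7))
R = -0.5 * (xs - 3) ** 2 - 0.25 * (ys - 2) ** 2 + 1.0
(a, b, c), (x0, y0) = fit_quadratic(R)
print(x0, y0, float(round(a, 3)), float(round(b, 3)), float(round(c, 3)))
# -> 3 2 -0.5 -0.25 1.0
```

In a full CLM fit, the deformation-constraint term would be added to this cost before each iterative position update.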
S13: extracting facial feature information from the face image.
Feature extraction: extracting representative facial feature information from the normalized face image. As a preferred embodiment, step S13 may further be completed with the following steps: (31) selecting the mouth, eyebrow, and eye regions, the three main regions that discriminate between expression classes, and extracting two types of features: deformation-based expression features and motion-based expression features; (32) using recursive feature elimination with a linear support vector machine for feature evaluation, further refining the selected features.
With the facial feature points annotated by the constrained local model and the feature point coordinates obtained, the shape features of the three main regions (mouth, eyebrows, eyes) are selected and the slope information between the key points within these regions is computed, yielding the deformation-based expression features. At the same time, the key points of the three regions are tracked and the corresponding displacement information is extracted; the distances between specific feature points of the expression image are computed and differenced against the corresponding distances of a calm image, and the resulting distance changes yield the motion-based expression features.
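The motion-based features (distance changes between specific feature points of an expression image and the corresponding distances in a calm image) can be sketched as follows; the landmark coordinates and point pairs are illustrative, not the patent's.

```python
import numpy as np

def motion_features(calm_pts, expr_pts, pairs):
    """Motion-based features: change of the distances between selected
    landmark pairs, expression frame minus calm frame."""
    def dists(P):
        return np.array([np.linalg.norm(P[i] - P[j]) for i, j in pairs])
    return dists(expr_pts) - dists(calm_pts)

# Illustrative landmarks: two mouth corners and an upper-lip point
calm  = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 1.0]])
smile = np.array([[-0.5, 0.0], [4.5, 0.0], [2.0, 1.0]])
pairs = [(0, 1), (0, 2)]            # corner-to-corner, corner-to-lip
f = motion_features(calm, smile, pairs)
print(float(round(f[0], 3)))  # -> 1.0 (the mouth widens by one unit)
```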
Recursive feature elimination with a linear support vector machine is used for feature evaluation, with the magnitudes of the weights computed by the support vector machine serving as the ranking criterion, to further denoise the selected features.
Feature selecting algorithm is described as follows:
Input:Training sample setL is classification number
Output:Feature ordering collection R
1) initialization primitive character set S={ 1,2 ..., D }, feature ordering collection R=[]
2) (l (l-1))/2 training samples are generated:
In training sampleIn find out different classes of combination of two and obtain training sample to the end:
Process is recycled until S=[]:
3) it obtains with l trained subsample Xj(j=1,2 ..., (l (l-1))/2);
X is used respectivelyjTraining Support Vector Machines respectively obtain wj(j=1,2 ..., l);
Calculate ranking criteria score
Find out the feature of ranking criteria score minimum
Update feature set R={ p } ∪ R;
This feature S=S/p. is removed in S
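The ranking loop above can be sketched as follows. As a hedged simplification, an ordinary least-squares linear fit stands in for the linear SVM when computing the weight vector; the ranking criterion (the feature with the smallest squared weight is eliminated first and prepended to R) follows the text, and the two-class toy data are illustrative.

```python
import numpy as np

def rfe_rank(X, y):
    """Recursive feature elimination, ranking features by weight size.

    A least-squares linear fit stands in for the linear SVM of the text;
    each pass drops the feature with the smallest squared weight and
    prepends it to R, so the most useful features end up first."""
    S = list(range(X.shape[1]))              # surviving feature indices
    R = []                                   # feature ranking set
    while S:
        Xs = X[:, S]
        w, *_ = np.linalg.lstsq(Xs, y, rcond=None)   # linear fit weights
        p = int(np.argmin(w ** 2))           # least useful surviving feature
        R.insert(0, S.pop(p))                # eliminated last = ranked first
    return R

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
# Only feature 2 carries the label signal; the others are noise
y = np.sign(2.0 * X[:, 2] + 0.1 * rng.normal(size=200))
print(rfe_rank(X, y)[0])  # -> 2
```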
S14: performing expression classification on the acquired feature data to recognize the facial expression.
Classification: human expressions are roughly divided into seven classes: happy, angry, sad, disgusted, surprised, afraid, and neutral. As a preferred embodiment, step S14 may further be completed with the following steps: (41) selecting samples according to the extracted facial feature information and training an expression classifier with prior knowledge, each sample carrying its corresponding expression label; (42) performing expression classification with the trained classifier under the least-squares rule.
Training the expression classifier: the extracted facial features are trained with the support vector machine algorithm; when training is complete, an expression classifier is obtained.
The support vector machine (SVM) algorithm is described as follows:
Input the training set {(x_i, y_i)}, i = 1, ..., N, where x_i ∈ R^D and y_i ∈ {+1, -1}; x_i is the i-th sample, N is the sample size, and D is the number of sample features. The SVM seeks the optimal separating hyperplane w · x + b = 0.
The optimization problem the SVM needs to solve is:
min_{w,b,ξ} (1/2)‖w‖^2 + C Σ_{i=1}^{N} ξ_i
s.t. y_i(w · x_i + b) ≥ 1 - ξ_i, i = 1, 2, ..., N,
ξ_i ≥ 0, i = 1, 2, ..., N.
The primal problem can be converted into its dual problem:
max_α Σ_{i=1}^{N} α_i - (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} α_i α_j y_i y_j (x_i · x_j)
s.t. Σ_{i=1}^{N} α_i y_i = 0, 0 ≤ α_i ≤ C, i = 1, 2, ..., N,
where the α_i are the Lagrange multipliers.
The solution for w is finally:
w = Σ_{i=1}^{N} α_i y_i x_i.
The discriminant function of the SVM is:
f(x) = sgn(w · x + b) = sgn(Σ_{i=1}^{N} α_i y_i (x_i · x) + b).
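A tiny trainer for the discriminant function above can be sketched with subgradient descent on the hinge loss. This is a hedged stand-in for the dual quadratic program of the text: it produces the same kind of linear decision function sign(w · x + b), and the toy "expression feature" data are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM trained by subgradient descent on the hinge loss.

    Returns (w, b) for the decision function sign(w.x + b); a sketch,
    not the dual QP solver described in the text."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                     # hinge-loss subgradient step
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                              # only the regularizer acts
                w -= lr * lam * w
    return w, b

# Separable toy "expression features": the class follows feature 0
X = np.array([[2.0, 0.3], [1.5, -0.2], [2.2, 0.1],
              [-1.8, 0.4], [-2.1, -0.3], [-1.6, 0.2]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).all())  # -> True
```

A multi-class expression classifier of the kind described (seven classes) would combine several such binary discriminants, e.g. pairwise as in the feature selection stage.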
Expression classification: the extracted facial feature information is fed into the trained classifier, which outputs an expression prediction value. The least-squares rule is used, i.e. the optimal function fit to the data is found by minimizing the sum of squared errors. At this point, one complete expression recognition flow is finished.
Referring to Fig. 2, the architecture diagram of the facial expression recognition system of the present invention. The system comprises: a face detection module 21, a feature point localization module 22, a feature extraction module 23, and a facial expression recognition module 24.
The face detection module 21 is configured to detect a face in an original image. The face detection module 21 may scan the original image line by line with the local binary pattern operator to obtain a response image; perform face detection on the response image with the AdaBoost algorithm to detect the presence of a face; and then perform eye detection with the AdaBoost algorithm to separate out the face region. For the specific implementation of face detection, refer to the method flow above; it is not repeated here.
The positioning feature point module 22 is connected with the face detection module 21, for carrying out face to the face of detection Alignment and positioning feature point.Face feature point is labeled using local restriction model, orients facial key feature points, such as Eyes, nose, corners of the mouth point, eyebrow and each component outline point of face.Positioning feature point specific implementation is with reference to preceding method Flow, details are not described herein again.
The characteristic extracting module 23 is connected with the positioning feature point module 22, appears for being extracted from facial image Portion's characteristic information.The characteristic extracting module 23 can be by choosing difference between three mouth, eyebrow, eyes all kinds of expressions of embodiment The main region of property extracts the two kinds of feature of expressive features and based drive expressive features based on deformation;Later It is eliminated using recursive feature and linear vector machine does feature evaluation, feature selecting is further carried out to the feature of selection.Feature carries The stage is taken to carry out feature selecting to the face feature information of extraction, obtains facial characteristics subset, face feature information is preserved, is used for Expression Recognition.Specific implementation is with reference to preceding method flow, and details are not described herein again.
The face recognition module 24 is connected with the characteristic extracting module 23, for the characteristic according to acquisition, into Row expression classification realizes facial expression recognition.The characteristic extracting module 24 can be chosen according to the face feature information of extraction Sample trains expression classifier using priori, and each sample corresponds to corresponding expression label;Pass through expression classification later Device using least square rule, realizes expression classification.The assorting process is exactly to manufacture basal orientation with the expressive features of known label Quantity space, expression to be measured carry out facial expression recognition by the way that its Projection Character to this space is judged expression classification.It is specific Realization method is with reference to preceding method flow, and details are not described herein again.
Embodiment 2: Facial expression recognition method
The facial expression recognition method in this embodiment comprises the following steps:
Step 11: detecting a face from an original image;
In this step, one specific embodiment comprises step 101, step 102 and step 103.
Step 101: scanning the original image row by row based on local binary patterns to obtain a response image.
Step 102: performing face detection on the response image with the AdaBoost algorithm to detect whether a face is present.
Step 103: performing eye detection with the AdaBoost algorithm to segment the face region.
In one specific mode, during face detection or eye detection with the AdaBoost algorithm, multi-scale detection is performed according to 1.25-0.9.
Step 12: performing face alignment and feature point localization on the detected face;
In this step, one specific embodiment is: annotating the facial feature points with a constrained local model.
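As a small illustration of what feature point localization yields, the sketch below assumes a constrained-local-model fitter (not shown) that returns a 68-point landmark array in the widely used iBUG-style layout, and splits it into the key regions named above; the layout indices are an assumption for illustration, not taken from the disclosure:

```python
# Sketch: indexing key facial feature points out of a 68-point landmark set.
import numpy as np

# Assumed iBUG-style 68-point index ranges.
REGIONS = {
    "jaw": range(0, 17), "eyebrows": range(17, 27), "nose": range(27, 36),
    "eyes": range(36, 48), "mouth": range(48, 68),
}

def key_points(landmarks):
    """Split a (68, 2) landmark array into named facial regions."""
    landmarks = np.asarray(landmarks)
    return {name: landmarks[list(idx)] for name, idx in REGIONS.items()}

# Demo with a placeholder landmark array, e.g. as a CLM fitter would return.
demo = key_points(np.zeros((68, 2)))
print({k: v.shape for k, v in demo.items()})
```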
Step 13: extracting facial feature information from the face image;
In this step, one specific embodiment comprises step 301 and step 302.
Step 301: selecting the three main regions that best reflect the differences between expression classes, namely the mouth, eyebrows and eyes, and extracting two types of features: deformation-based expression features and motion-based expression features;
In this step, another specific embodiment is: selecting the eyes, nose, mouth corners, eyebrows and the contour points of each facial component as the regions reflecting the differences between expression classes, and extracting two types of features: deformation-based expression features and motion-based expression features;
Step 302: using recursive feature elimination with a linear support vector machine for feature evaluation, further performing feature selection on the selected features.
In one specific embodiment, feature selection is performed on the extracted facial feature information to obtain a facial feature subset, and the facial feature information is stored for expression recognition.
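A hedged sketch of step 302 with scikit-learn: recursive feature elimination (RFE) wrapped around a linear SVM ranks the extracted features and keeps a subset; the data and the choice of 10 retained features are placeholders:

```python
# Sketch: recursive feature elimination with a linear SVM as the evaluator,
# reducing the extracted facial features to a facial feature subset.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))        # 100 samples, 40 candidate features
y = rng.integers(0, 2, size=100)      # placeholder labels

# Drop one feature per iteration until 10 remain.
selector = RFE(LinearSVC(), n_features_to_select=10, step=1)
selector.fit(X, y)

X_subset = X[:, selector.support_]    # the retained facial feature subset
print(X_subset.shape)                 # (100, 10)
```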
Step 14: performing expression classification according to the acquired feature data, realizing facial expression recognition.
In this step, one specific embodiment comprises step 401 and step 402.
Step 401: selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, each sample carrying a corresponding expression label;
Step 402: performing expression classification with the expression classifier under the least-squares rule.
In one specific embodiment, a basis vector space is built from expression features with known labels, and an expression to be recognized is classified by projecting its features onto this space, thereby realizing facial expression recognition.
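The projection-based classification in this embodiment can be sketched as follows (one interpretation of the least-squares rule, with synthetic data): each class's labeled expression features form the columns of a basis matrix, a test feature vector is fitted to each basis by least squares, and the class with the smallest projection residual is chosen:

```python
# Sketch: classifying by least-squares projection onto per-class subspaces
# built from labeled expression features. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
# Basis vector space: columns are labeled feature vectors, one matrix per class.
bases = {c: rng.normal(size=(32, 5)) for c in range(7)}  # 32-dim, 5 samples/class

def classify(feature):
    best_class, best_residual = None, np.inf
    for c, B in bases.items():
        coeff, *_ = np.linalg.lstsq(B, feature, rcond=None)  # least-squares fit
        residual = np.linalg.norm(feature - B @ coeff)       # projection error
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class

x = bases[3] @ rng.normal(size=5)  # lies in class 3's subspace by construction
print(classify(x))                 # → 3 (near-zero residual in its own subspace)
```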
The algorithms involved in this embodiment are the same as in Embodiment 1.
Embodiment 3: Facial expression recognition system
The facial expression recognition system in this embodiment comprises: a face detection module, a feature point localization module, a feature extraction module and a facial expression recognition module;
the face detection module is configured to detect a face from an original image;
in one specific embodiment, the face detection module scans the original image row by row based on local binary patterns to obtain a response image;
performs face detection on the response image with the AdaBoost algorithm to detect whether a face is present;
and performs eye detection with the AdaBoost algorithm to segment the face region.
In one specific embodiment, multi-scale detection is performed according to 1.25-0.9 during the detection with the AdaBoost algorithm.
The feature point localization module is connected to the face detection module and is configured to perform face alignment and feature point localization on the detected face;
in one specific embodiment, the feature point localization module annotates the facial feature points with a constrained local model.
The feature extraction module is connected to the feature point localization module and is configured to extract facial feature information from the face image;
in one specific embodiment, the feature extraction module selects regions reflecting the differences between expression classes and extracts two types of features: deformation-based expression features and motion-based expression features;
recursive feature elimination with a linear support vector machine is used for feature evaluation, further performing feature selection on the selected features.
In one specific embodiment, the selected regions reflecting the differences between expression classes include the eyes, nose, mouth corners, eyebrows and the contour points of each facial component.
The facial expression recognition module is connected to the feature extraction module and is configured to, according to the extracted facial feature information, pass the facial expression data to be recognized through the trained expression classifier for maximum-likelihood prediction, find the most probable expression class, and realize facial expression recognition;
In one specific embodiment, the facial expression recognition module performing expression classification according to the acquired feature data to realize facial expression recognition comprises: selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, each sample carrying a corresponding expression label;
performing expression classification with the expression classifier under the least-squares rule.
The facial expression recognition module builds a basis vector space from expression features with known labels, and classifies an expression to be recognized by projecting its features onto this space, realizing facial expression recognition.
The algorithms involved in this embodiment are the same as in Embodiment 1.
The above are only several embodiments of the present application and do not limit the application in any form. Although the application is disclosed above by way of preferred embodiments, these are not intended to limit it; any person skilled in the art may, without departing from the scope of the technical solution of the present application, make slight variations or modifications using the technical content disclosed above, and such equivalent implementation cases fall within the scope of the technical solution.

Claims (10)

1. A facial expression recognition method, characterized by comprising:
detecting a face from an original image;
performing face alignment and feature point localization on the detected face;
extracting facial feature information from the face image;
performing expression classification according to the acquired feature data, realizing facial expression recognition.
2. The method according to claim 1, characterized in that said detecting a face from an original image comprises:
scanning the original image row by row based on local binary patterns to obtain a response image;
performing face detection on the response image with the AdaBoost algorithm to detect whether a face is present;
performing eye detection with the AdaBoost algorithm to segment the face region;
preferably, during the detection with the AdaBoost algorithm, multi-scale detection is performed according to 1.25-0.9.
3. The method according to claim 1, characterized in that said performing face alignment and feature point localization on the detected face comprises:
annotating the facial feature points with a constrained local model.
4. The method according to claim 1, characterized in that said extracting facial feature information from the face image comprises:
selecting regions reflecting the differences between expression classes, and extracting two types of features: deformation-based expression features and motion-based expression features;
using recursive feature elimination with a linear support vector machine for feature evaluation, further performing feature selection on the selected features;
preferably, the regions reflecting the differences between expression classes include the eyes, nose, mouth corners, eyebrows and the contour points of each facial component;
preferably, said extracting facial feature information from the face image further comprises: performing feature selection on the extracted facial feature information to obtain a facial feature subset, and storing the facial feature information for expression recognition.
5. The method according to claim 1, characterized in that said performing expression classification according to the acquired feature data to realize facial expression recognition comprises:
selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, each sample carrying a corresponding expression label;
performing expression classification with the expression classifier under the least-squares rule;
preferably, said performing expression classification according to the acquired feature data to realize facial expression recognition further comprises:
building a basis vector space from expression features with known labels, and classifying an expression to be recognized by projecting its features onto this space, thereby realizing facial expression recognition.
6. A facial expression recognition system, characterized in that the system comprises: a face detection module, a feature point localization module, a feature extraction module and a facial expression recognition module;
the face detection module is configured to detect a face from an original image;
the feature point localization module is connected to the face detection module and is configured to perform face alignment and feature point localization on the detected face;
the feature extraction module is connected to the feature point localization module and is configured to extract facial feature information from the face image;
the facial expression recognition module is connected to the feature extraction module and is configured to, according to the extracted facial feature information, pass the facial expression data to be recognized through the trained expression classifier for maximum-likelihood prediction, find the most probable expression class, and realize facial expression recognition.
7. The system according to claim 6, characterized in that the face detection module scans the original image row by row based on local binary patterns to obtain a response image;
performs face detection on the response image with the AdaBoost algorithm to detect whether a face is present;
and performs eye detection with the AdaBoost algorithm to segment the face region.
8. The system according to claim 6, characterized in that the feature point localization module annotates the facial feature points with a constrained local model.
9. The system according to claim 6, characterized in that the feature extraction module selects regions reflecting the differences between expression classes and extracts two types of features: deformation-based expression features and motion-based expression features;
uses recursive feature elimination with a linear support vector machine for feature evaluation, further performing feature selection on the selected features;
preferably, the regions reflecting the differences between expression classes include the eyes, nose, mouth corners, eyebrows and the contour points of each facial component;
preferably, the feature extraction module performs feature selection on the extracted facial feature information to obtain a facial feature subset, and stores the facial feature information for expression recognition.
10. The system according to claim 6, characterized in that the facial expression recognition module performing expression classification according to the acquired feature data to realize facial expression recognition comprises: selecting samples according to the extracted facial feature information and training an expression classifier using prior knowledge, each sample carrying a corresponding expression label;
performing expression classification with the expression classifier under the least-squares rule;
preferably, the facial expression recognition module builds a basis vector space from expression features with known labels, and classifies an expression to be recognized by projecting its features onto this space, realizing facial expression recognition.
CN201810001358.4A 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system Active CN108268838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810001358.4A CN108268838B (en) 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system


Publications (2)

Publication Number Publication Date
CN108268838A true CN108268838A (en) 2018-07-10
CN108268838B CN108268838B (en) 2020-12-29

Family

ID=62773093

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810001358.4A Active CN108268838B (en) 2018-01-02 2018-01-02 Facial expression recognition method and facial expression recognition system

Country Status (1)

Country Link
CN (1) CN108268838B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409273A (en) * 2018-10-17 2019-03-01 中联云动力(北京)科技有限公司 A kind of motion state detection appraisal procedure and system based on machine vision
CN109712144A (en) * 2018-10-29 2019-05-03 百度在线网络技术(北京)有限公司 Processing method, training method, equipment and the storage medium of face-image
CN109948672A (en) * 2019-03-05 2019-06-28 张智军 A kind of wheelchair control method and system
CN109948541A (en) * 2019-03-19 2019-06-28 西京学院 A kind of facial emotion recognition methods and system
CN110020638A (en) * 2019-04-17 2019-07-16 唐晓颖 Facial expression recognizing method, device, equipment and medium
CN110059650A (en) * 2019-04-24 2019-07-26 京东方科技集团股份有限公司 Information processing method, device, computer storage medium and electronic equipment
CN110166836A (en) * 2019-04-12 2019-08-23 深圳壹账通智能科技有限公司 A kind of TV program switching method, device, readable storage medium storing program for executing and terminal device
CN110334643A (en) * 2019-06-28 2019-10-15 广东奥园奥买家电子商务有限公司 A kind of feature evaluation method and device based on recognition of face
CN110348899A (en) * 2019-06-28 2019-10-18 广东奥园奥买家电子商务有限公司 A kind of commodity information recommendation method and device
CN110941993A (en) * 2019-10-30 2020-03-31 东北大学 Dynamic personnel classification and storage method based on face recognition
CN111144374A (en) * 2019-12-31 2020-05-12 泰康保险集团股份有限公司 Facial expression recognition method and device, storage medium and electronic equipment
WO2020125386A1 (en) * 2018-12-18 2020-06-25 深圳壹账通智能科技有限公司 Expression recognition method and apparatus, computer device, and storage medium
WO2020133072A1 (en) * 2018-12-27 2020-07-02 Zhejiang Dahua Technology Co., Ltd. Systems and methods for target region evaluation and feature point evaluation
CN112132117A (en) * 2020-11-16 2020-12-25 黑龙江大学 Fusion identity authentication system assisting coercion detection
CN112307942A (en) * 2020-10-29 2021-02-02 广东富利盛仿生机器人股份有限公司 Facial expression quantitative representation method, system and medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1794265A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN1996344A (en) * 2006-12-22 2007-07-11 北京航空航天大学 Method for extracting and processing human facial expression information
US20130163829A1 (en) * 2011-12-21 2013-06-27 Electronics And Telecommunications Research Institute System for recognizing disguised face using gabor feature and svm classifier and method thereof
CN104021384A (en) * 2014-06-30 2014-09-03 深圳市创冠智能网络技术有限公司 Face recognition method and device
CN104268580A (en) * 2014-10-15 2015-01-07 南京大学 Class cartoon layout image management method based on scene classification
CN104951743A (en) * 2015-03-04 2015-09-30 苏州大学 Active-shape-model-algorithm-based method for analyzing face expression
CN105069447A (en) * 2015-09-23 2015-11-18 河北工业大学 Facial expression identification method
CN106022391A (en) * 2016-05-31 2016-10-12 哈尔滨工业大学深圳研究生院 Hyperspectral image characteristic parallel extraction and classification method
CN106407958A (en) * 2016-10-28 2017-02-15 南京理工大学 Double-layer-cascade-based facial feature detection method
US20170132408A1 (en) * 2015-11-11 2017-05-11 Samsung Electronics Co., Ltd. Methods and apparatuses for adaptively updating enrollment database for user authentication
CN106919884A (en) * 2015-12-24 2017-07-04 北京汉王智远科技有限公司 Human facial expression recognition method and device
CN106934375A (en) * 2017-03-15 2017-07-07 中南林业科技大学 The facial expression recognizing method of distinguished point based movement locus description
US20170301121A1 (en) * 2013-05-02 2017-10-19 Emotient, Inc. Anonymization of facial images


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KHAN, MASOOD MEHMOOD ET AL: "Automated Facial Expression Classification and affect interpretation using infrared measurement of facial skin temperature variations", 《TRANSACTIONS ON AUTONOMOUS AND ADAPTIVE SYSTEMS》 *
PAUL VIOLA ET AL: "Rapid Object Detection using a Boosted Cascade of Simple Features", 《PROCEEDINGS OF THE 2001 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
SHI YATING ET AL: "Facial feature point localization algorithm based on mouth-state constraints", 《JOURNAL OF INTELLIGENT SYSTEMS (智能系统学报)》 *
JIANG ZHENG: "Research and implementation of feature extraction algorithms in face recognition", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *
MA FEI: "Research on expression recognition based on geometric features", 《CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *


Also Published As

Publication number Publication date
CN108268838B (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN108268838A (en) Facial expression recognizing method and facial expression recognition system
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN108052896B (en) Human body behavior identification method based on convolutional neural network and support vector machine
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN106295124B (en) The method of a variety of image detecting technique comprehensive analysis gene subgraph likelihood probability amounts
JP2004206656A (en) Detection device and detection method
CN109389074A (en) A kind of expression recognition method extracted based on human face characteristic point
CN111126482A (en) Remote sensing image automatic classification method based on multi-classifier cascade model
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN112766218B (en) Cross-domain pedestrian re-recognition method and device based on asymmetric combined teaching network
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
CN112381047B (en) Enhanced recognition method for facial expression image
CN108898623A (en) Method for tracking target and equipment
CN110717385A (en) Dynamic gesture recognition method
CN111783885A (en) Millimeter wave image quality classification model construction method based on local enhancement
CN111339932B (en) Palm print image preprocessing method and system
Ashfaq et al. Classification of hand gestures using Gabor filter with Bayesian and naïve Bayes classifier
Li et al. Recognizing hand gestures using the weighted elastic graph matching (WEGM) method
JP2005351814A (en) Detector and detecting method
CN108288276B (en) Interference filtering method in touch mode in projection interaction system
Wang et al. Lip segmentation with the presence of beards
CN107679467A (en) A kind of pedestrian's weight recognizer implementation method based on HSV and SDALF
US11521427B1 (en) Ear detection method with deep learning pairwise model based on contextual information
Boruah et al. Different face regions detection based facial expression recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant