CN111444860A - Expression recognition method and system - Google Patents
- Publication number
- CN111444860A (application CN202010238235.XA)
- Authority
- CN
- China
- Prior art keywords
- face
- key points
- feature
- features
- mouth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24143—Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06F18/24155—Bayesian classification
Abstract
The invention relates to a facial expression recognition method and system comprising the following steps: recognizing the face region of an image; extracting the 68 key points of each face with Dlib and deriving the geometric features of the face, the geometric features including the mouth height, the mouth width, the eye height, the sum of the eyebrow heights, the sum of the eyebrow widths, the minimum circumscribed rectangle area of the mouth, the distance between the eyebrows and the eyes, the distance between the eyebrows, the height between the lip bead and the mouth corners, and the minimum circumscribed rectangle area of the eyes; feeding the geometric features of the face into a naive Bayes classifier to determine the expression category; and, during Bayesian decision, applying K-nearest-neighbor secondary learning to improve the classification effect. The scheme of the invention overcomes the drawbacks caused by the high dimensionality of existing facial features, namely low classification efficiency, high computation cost, and a degraded classification effect.
Description
Technical Field
The invention belongs to the fields of artificial intelligence and psychology, and particularly relates to facial recognition technology.
Background
Facial expression is an important way for humans to convey emotion, and facial expression recognition technology can be widely applied in human-computer interaction, computer vision, medical assistance, fatigue-driving detection and other fields.
Existing facial expression feature extraction techniques mainly comprise traditional image feature extraction methods such as Gabor filters, scale-invariant feature transform (SIFT), histograms of oriented gradients (HOG), linear discriminant analysis (LDA) and local binary patterns (LBP), as well as the more popular deep-learning feature extraction methods. Classification is required after feature extraction; existing expression classification techniques mainly classify expressions with support vector machines, decision trees and classifiers based on convolutional neural networks.
Although feature extraction and classification based on deep learning can achieve a good recognition effect, the features extracted by deep learning are high-dimensional, the computation is heavy, and the hardware requirements are high. In many real-life settings, the hardware that deep learning demands is unavailable, so support for facial expression technology is difficult to realize there.
Disclosure of Invention
The invention aims to provide a facial expression recognition method.
In order to achieve the above object, one technical solution of the present invention is to provide an expression recognition method, which is characterized by comprising the following steps:
S1. Identify the face region in an image;
S2. Calculate key point information in the face region and compute the facial expression geometric feature vector, comprising the following steps:
S201. Among the 68 facial feature key points, the mouth height feature, the mouth width feature, the minimum circumscribed rectangle area feature of the mouth, and the lip bead and mouth corner height feature are obtained from the key points corresponding to the mouth, wherein: the mouth height feature is expressed by key points 62 and 66, whose ordinates are Y62 and Y66; Y66 − Y62 is the mouth height feature. The mouth width feature is expressed by key points 48 and 54, whose abscissas are X48 and X54; X54 − X48 is the mouth width feature. The minimum circumscribed rectangle area feature of the mouth is expressed by key points 48 to 67: the four corner coordinates of the minimum circumscribed rectangle of the mouth points are computed with OpenCV, the length and width of the rectangle are derived, and the resulting rectangle area is the minimum circumscribed rectangle area feature of the mouth. The lip bead and mouth corner height feature is expressed by key points 48, 54 and 51, whose ordinates are Y48, Y54 and Y51; (Y54 − Y51) + (Y48 − Y51) is the lip bead and mouth corner height feature;
S202. Among the 68 facial feature key points, the eye height feature and the minimum circumscribed rectangle area feature of the two eyes are obtained from the key points corresponding to the eyes, wherein: the eye height feature is expressed by key points 37, 38, 40, 41, 43, 44, 46 and 47, whose ordinates are Y37, Y38, Y40, Y41, Y43, Y44, Y46 and Y47; (Y41 − Y37) + (Y40 − Y38) + (Y47 − Y43) + (Y46 − Y44) is the eye height feature. The minimum circumscribed rectangle area feature of the two eyes is expressed by key points 36 to 47: the four corner coordinates of the minimum circumscribed rectangle of the left-eye points are computed with OpenCV, the length and width of the rectangle are derived to give its area, the minimum rectangle area of the right eye is computed in the same way, and the sum of the two areas is the minimum circumscribed rectangle area feature of the two eyes;
S203. Among the 68 facial feature key points, the eyebrow height sum feature, the eyebrow width sum feature, the brow-to-eye distance feature and the inter-eyebrow distance feature are obtained from the key points corresponding to the eyebrows, wherein: the eyebrow height sum feature is expressed by key points 17 to 26, whose ordinates Y17 to Y26 are the distances to the top edge of the face frame (which serves as the X axis); the sum of Y17 to Y26 is therefore the eyebrow height sum feature. The eyebrow width sum feature is expressed by key points 17 to 26, whose abscissas are X17 to X26; (X22 − X17) + (X23 − X18) + (X24 − X19) + (X25 − X20) + (X26 − X21) is the eyebrow width sum feature. The brow-to-eye distance feature is expressed by key points 21, 39, 22 and 42, whose coordinates are (X21, Y21), (X39, Y39), (X22, Y22) and (X42, Y42); then √((X21 − X39)² + (Y21 − Y39)²) + √((X22 − X42)² + (Y22 − Y42)²) is the brow-to-eye distance feature. The inter-eyebrow distance feature is expressed by key points 21 and 22, whose abscissas are X21 and X22; X22 − X21 is the inter-eyebrow distance feature;
S3. Obtain the facial expression geometric feature vector data and, through modeling, train a weighted naive Bayes model to obtain the posterior probabilities of m categories;
S4. Form m-dimensional probability feature vectors from the posterior probabilities of the m categories and use them as training data to train a K-nearest-neighbor classification model;
S5. Predict the facial expression of test data by extracting its facial features and classifying them with the trained models.
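As a sketch of steps S4 and S5, the m-dimensional posterior-probability vectors produced by the Bayes stage become the training data for a K-nearest-neighbor vote. The helper below is illustrative only; the function name and the default `k` are assumptions, not from the patent:

```python
import numpy as np
from collections import Counter

def knn_predict(train_probs, train_labels, probs, k=5):
    """Classify one unknown m-dimensional posterior vector `probs` by majority
    vote among its k nearest training vectors (Euclidean distance)."""
    train_probs = np.asarray(train_probs, dtype=float)
    # Distance from the query vector to every stored probability vector.
    d = np.linalg.norm(train_probs - np.asarray(probs, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]
    # Majority vote over the labels of the k nearest neighbors.
    return Counter(np.asarray(train_labels)[nearest].tolist()).most_common(1)[0][0]
```

In this two-stage scheme, the KNN stage only ever sees m-dimensional probability vectors, so its distance computations stay cheap regardless of how many geometric features feed the Bayes stage.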
Preferably, the step S1 specifically comprises the following sub-steps:
S101. Perform grayscale processing on the captured image;
S102. Detect and acquire the face region from the image through Haar features and the AdaBoost algorithm;
S103. Identify the 68 facial feature key points in the face region using the dlib key point detection algorithm and generate a face detection frame containing the face region; the geometric features of the face are extracted from these facial feature key points;
S104. Define a coordinate system in the face detection frame, with the top-left corner of the face detection frame as the origin, the horizontal edge of the face detection frame as the X axis and the vertical edge as the Y axis;
S105. Determine the coordinates of each facial feature key point according to the defined coordinate system.
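Sub-steps S101 to S105 could be wired together roughly as follows. This is a hypothetical sketch, not the patent's code: it assumes OpenCV and dlib are available, the caller supplies a face detector and a dlib 68-landmark shape predictor, and `detect_landmarks`/`to_frame_coords` are illustrative names:

```python
import numpy as np

def to_frame_coords(pts, rect_left, rect_top):
    """S104/S105: express landmark coordinates in the face-detection-frame
    coordinate system, whose origin is the frame's top-left corner
    (X axis along the horizontal edge, Y axis along the vertical edge)."""
    pts = np.asarray(pts, dtype=float)
    return pts - np.array([rect_left, rect_top], dtype=float)

def detect_landmarks(image_bgr, detector, predictor):
    """S101-S103: grayscale the image, find a face, locate the 68 key points.
    `detector` is a face detector (e.g. a dlib frontal-face detector; a
    Haar/AdaBoost cascade would also fit the patent) and `predictor` is a
    dlib shape predictor, both supplied by the caller."""
    import cv2  # deferred so the pure-geometry helper above works without OpenCV
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)    # S101: grayscale
    faces = detector(gray, 1)                             # S102: face regions
    if len(faces) == 0:
        return None
    face = faces[0]
    shape = predictor(gray, face)                         # S103: 68 key points
    pts = [(p.x, p.y) for p in shape.parts()]
    return to_frame_coords(pts, face.left(), face.top())  # S104/S105
```

A dlib detector returns `dlib.rectangle` objects, which provide the `left()`/`top()` accessors used above.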
Preferably, in the step S3, the weighted naive bayes model construction specifically includes the following steps:
S301. Acquire facial expression geometric feature vector data for training the weighted naive Bayes model as sample data by executing step S2, and randomly select 80% of the sample data set as the training data set D and 20% as the test data set T;
S302. Let the training set be D = {(N1, Ci), (N2, Ci), …, (Nn, Ci)}, where Nn denotes the n training samples, C is the facial expression category, and i ∈ {1, 2, …, m};
S303. According to the Bayes formula P(Ci|X) = P(X|Ci)P(Ci)/P(X), where X denotes an unknown facial expression feature vector and P(X), the probability of that vector, is a constant, computing the posterior probability P(Ci|X) only requires computing P(X|Ci)P(Ci). Here P(Ci) denotes the prior probability, P(Ci) = NCi/N, where N is the number of training samples and NCi is the number of class-Ci samples in the training set. In naive Bayes, precisely because of the assumption of independence between attributes, P(X|Ci) is computed as P(X|Ci) = ∏j P(xj|Ci), where X = [x1, x2, …, xn] denotes the n-dimensional feature vector and xj is a single feature value of the unknown facial expression feature vector. When an attribute is continuous, it is assumed to follow a normal distribution, and the conditional probability is computed as P(xj|Ci) = (1/(√(2π)σci)) exp(−(xj − μci)²/(2σci²)), where μci and σci are respectively the mean and standard deviation of class Ci, obtained by training on the training data.
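For reference, the quantities in S303 (class priors NCi/N, the product of per-attribute likelihoods, and the per-class normal densities) can be sketched as a plain, unweighted Gaussian naive Bayes. The attribute weighting of the patent's model is omitted here, and the class name is an assumption:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: priors P(Ci) = N_Ci / N and per-feature
    normal densities N(x_j; mu_ci, sigma_ci) learned from the training data."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.prior_ = np.array([(y == c).mean() for c in self.classes_])
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        # Small floor keeps a zero-variance feature from dividing by zero.
        self.sigma_ = np.array([X[y == c].std(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict_proba(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        # log N(x_j; mu_ci, sigma_ci) for every sample, class, and feature.
        log_like = (-0.5 * (((X[:, None, :] - self.mu_) / self.sigma_) ** 2)
                    - np.log(self.sigma_ * np.sqrt(2 * np.pi)))
        # log P(Ci) + sum_j log P(x_j | Ci), then normalize to posteriors.
        log_post = np.log(self.prior_) + log_like.sum(axis=2)
        log_post -= log_post.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(log_post)
        return p / p.sum(axis=1, keepdims=True)
```

Working in log space avoids underflow when many per-attribute likelihoods are multiplied, which matters once the feature vector grows beyond a handful of dimensions.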
Another technical solution of the present invention is to provide a facial expression recognition system, which is characterized by comprising:
a face region recognition module, which recognizes a face region in the image by using the step S1 in the expression recognition method;
a facial geometric feature acquisition module, which calculates facial expression geometric feature vectors in the facial area obtained by the facial area recognition module by adopting the step S2 in the above expression recognition method;
the weighted naive Bayes model is used for inputting the facial expression geometric feature vector obtained by the facial geometric feature obtaining module into the trained weighted naive Bayes model to obtain the posterior probabilities of m classes;
and the facial expression recognition module is used for weighting the posterior probabilities of m categories output by the naive Bayes model to form an m-dimensional probability feature vector, inputting the m-dimensional probability feature vector into a K-neighbor classification model of the facial expression recognition module, and classifying the m-dimensional probability feature vector by the K-neighbor classification model so as to predict the facial expression.
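The four modules above form a linear pipeline, which could be composed as follows. This is only an architectural sketch: each argument stands in for one module as a callable, and all names are illustrative:

```python
def recognize_expression(image, get_face_region, get_features,
                         bayes_posteriors, knn_classify):
    """Compose the four modules: face region -> geometric features ->
    m-dimensional posterior vector -> KNN expression label."""
    region = get_face_region(image)          # face region recognition module
    if region is None:                       # no face found in the image
        return None
    feats = get_features(region)             # facial geometric feature module
    probs = bayes_posteriors(feats)          # weighted naive Bayes module
    return knn_classify(probs)               # facial expression recognition module
```

Keeping the stages as separate callables mirrors the module boundaries of the claimed system and lets each stage be tested in isolation.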
The beneficial effects of the invention are as follows. On one hand, the method overcomes the drawback that the high dimensionality of traditional image features hurts subsequent classification efficiency: Dlib is used to extract the 68 key points of the facial expression, and the facial geometric features are computed from the 68 key point coordinates, exploiting the fact that different expressions differ in these geometric features. On the algorithm side, to address the shortcomings of the naive Bayes algorithm, a method combining weighted naive Bayes with K nearest neighbors is proposed: when weighted naive Bayes makes its decision, K-nearest-neighbor secondary learning is applied to improve the classification effect. On the other hand, the invention requires far fewer computing resources than the convolutional neural networks used in facial expression recognition and places low performance demands on hardware, so it can conveniently be applied to low-performance mobile devices, mall self-service kiosks, coin-operated game machines, children's toys and other low-end devices, realizing facial expression recognition in these settings at low cost.
Drawings
FIG. 1 is a flow chart of a facial expression recognition method in an embodiment;
FIG. 2 is a schematic diagram of a face region and face feature key points identified in an embodiment;
fig. 3 is a block diagram of the facial expression recognition method in the embodiment.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
In this embodiment, as shown in fig. 1, the method for recognizing a facial expression of a captured image includes the following steps:
S1. Identify the face region in an image;
S2. Calculate key point information in the face region and compute the facial expression geometric feature vector;
S3. Obtain the facial expression geometric feature vector data and, through modeling, train a weighted naive Bayes model to obtain the posterior probabilities of m categories;
S4. Form m-dimensional probability feature vectors from the posterior probabilities of the m categories and use them as training data to train a K-nearest-neighbor classification model;
S5. Predict the facial expression of test data by extracting its facial features and classifying them with the trained models.
In the present embodiment, the above steps S1 and S2 are performed using the dlib library. The step S1 specifically comprises the following sub-steps:
S101. Perform grayscale processing on the captured image;
S102. Detect and acquire the face region from the image through Haar features and the AdaBoost algorithm;
S103. Identify the 68 facial feature key points in the face region using the dlib key point detection algorithm and generate a face detection frame containing the face region; the geometric features of the face are extracted from these facial feature key points;
S104. Define a coordinate system in the face detection frame, with the top-left corner of the face detection frame as the origin, the horizontal edge of the face detection frame as the X axis and the vertical edge as the Y axis;
S105. Determine the coordinates of each facial feature key point according to the defined coordinate system.
By executing step S101, the original color image can be converted into a grayscale image, so that the information amount of the image is reduced, which is beneficial to the post-processing of the image. Of course, if the image itself is grayscale, step S101 need not be performed. Since all the steps of the facial expression recognition method performed in the present embodiment do not depend on the color of the image, graying the image does not affect the recognition effect of the facial expression recognition method.
By executing the face detection algorithm in the dlib algorithm, in the case that a face region exists in the image, a rectangular face detection frame is generated, and the face detection frame includes the face region in the image.
By performing the keypoint detection in the dlib algorithm, 68 key points of the face features in the face region can be obtained, and the effect is shown in fig. 2.
In this embodiment, a coordinate is assigned to each facial feature key point identified by the dlib key point detection algorithm. The established coordinate system is a plane rectangular coordinate system, with the top-left corner of the face detection frame as the origin, the horizontal edge of the face detection frame as the X axis and the vertical edge as the Y axis.
After coordinates are assigned to the identified facial feature key points, step S2 may be executed to quantitatively calculate the geometric features of the face. Step S2 specifically comprises the following steps:
S201. Among the 68 facial feature key points, the mouth height feature, the mouth width feature, the minimum circumscribed rectangle area feature of the mouth, and the lip bead and mouth corner height feature are obtained from the key points corresponding to the mouth, wherein: the mouth height feature is expressed by key points 62 and 66, whose ordinates are Y62 and Y66; Y66 − Y62 is the mouth height feature. The mouth width feature is expressed by key points 48 and 54, whose abscissas are X48 and X54; X54 − X48 is the mouth width feature. The minimum circumscribed rectangle area feature of the mouth is expressed by key points 48 to 67: the four corner coordinates of the minimum circumscribed rectangle of the mouth points are computed with OpenCV, the length and width of the rectangle are derived, and the resulting rectangle area is the minimum circumscribed rectangle area feature of the mouth. The lip bead and mouth corner height feature is expressed by key points 48, 54 and 51, whose ordinates are Y48, Y54 and Y51; (Y54 − Y51) + (Y48 − Y51) is the lip bead and mouth corner height feature;
S202. Among the 68 facial feature key points, the eye height feature and the minimum circumscribed rectangle area feature of the two eyes are obtained from the key points corresponding to the eyes, wherein: the eye height feature is expressed by key points 37, 38, 40, 41, 43, 44, 46 and 47, whose ordinates are Y37, Y38, Y40, Y41, Y43, Y44, Y46 and Y47; (Y41 − Y37) + (Y40 − Y38) + (Y47 − Y43) + (Y46 − Y44) is the eye height feature. The minimum circumscribed rectangle area feature of the two eyes is expressed by key points 36 to 47: the four corner coordinates of the minimum circumscribed rectangle of the left-eye points are computed with OpenCV, the length and width of the rectangle are derived to give its area, the minimum rectangle area of the right eye is computed in the same way, and the sum of the two areas is the minimum circumscribed rectangle area feature of the two eyes;
S203. Among the 68 facial feature key points, the eyebrow height sum feature, the eyebrow width sum feature, the brow-to-eye distance feature and the inter-eyebrow distance feature are obtained from the key points corresponding to the eyebrows, wherein: the eyebrow height sum feature is expressed by key points 17 to 26, whose ordinates Y17 to Y26 are the distances to the top edge of the face frame (which serves as the X axis); the sum of Y17 to Y26 is therefore the eyebrow height sum feature. The eyebrow width sum feature is expressed by key points 17 to 26, whose abscissas are X17 to X26; (X22 − X17) + (X23 − X18) + (X24 − X19) + (X25 − X20) + (X26 − X21) is the eyebrow width sum feature. The brow-to-eye distance feature is expressed by key points 21, 39, 22 and 42, whose coordinates are (X21, Y21), (X39, Y39), (X22, Y22) and (X42, Y42); then √((X21 − X39)² + (Y21 − Y39)²) + √((X22 − X42)² + (Y22 − Y42)²) is the brow-to-eye distance feature. The inter-eyebrow distance feature is expressed by key points 21 and 22, whose abscissas are X21 and X22; X22 − X21 is the inter-eyebrow distance feature.
By executing the steps S201 to S203, the geometric feature vector of the facial expression can be obtained, and the training data can be obtained.
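The ten features of steps S201 to S203 could, for illustration, be computed from a (68, 2) array of key-point coordinates as follows. This is a sketch, not the patent's code: the function and variable names are assumptions, and the minimum circumscribed rectangles are approximated by axis-aligned bounding boxes in the detection-frame coordinate system rather than computed with OpenCV:

```python
import numpy as np

def geometric_features(pts):
    """Compute the ten S201-S203 geometric features from `pts`, a (68, 2)
    array of (x, y) coordinates in the dlib 68-point numbering."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]

    def bbox_area(p):
        # Axis-aligned bounding-box area as a stand-in for the minimum rectangle.
        return (p[:, 0].max() - p[:, 0].min()) * (p[:, 1].max() - p[:, 1].min())

    # S201: mouth features
    mouth_height = y[66] - y[62]
    mouth_width = x[54] - x[48]
    mouth_rect_area = bbox_area(pts[48:68])
    lip_corner_height = (y[54] - y[51]) + (y[48] - y[51])

    # S202: eye features
    eye_height = (y[41] - y[37]) + (y[40] - y[38]) + (y[47] - y[43]) + (y[46] - y[44])
    eyes_rect_area = bbox_area(pts[36:42]) + bbox_area(pts[42:48])  # left + right

    # S203: eyebrow features (face-frame top edge is the X axis, so ordinates
    # are already distances to it)
    brow_height_sum = y[17:27].sum()
    bx = x[17:27]
    brow_width_sum = sum(bx[i + 5] - bx[i] for i in range(5))
    brow_eye_dist = (np.hypot(x[21] - x[39], y[21] - y[39])
                     + np.hypot(x[22] - x[42], y[22] - y[42]))
    brow_gap = x[22] - x[21]

    return np.array([mouth_height, mouth_width, mouth_rect_area, lip_corner_height,
                     eye_height, eyes_rect_area, brow_height_sum, brow_width_sum,
                     brow_eye_dist, brow_gap])
```

The resulting 10-dimensional vector is what gets fed to the weighted naive Bayes model in step S3.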
In the step S3, the weighted naive bayes model construction specifically includes the following steps:
S301. Acquire facial expression geometric feature vector data for training the weighted naive Bayes model as sample data by executing step S2, and randomly select 80% of the sample data set as the training data set D and 20% as the test data set T;
S302. Let the training set be D = {(N1, Ci), (N2, Ci), …, (Nn, Ci)}, where Nn denotes the n training samples, C is the facial expression category, and i ∈ {1, 2, …, m};
S303. According to the Bayes formula P(Ci|X) = P(X|Ci)P(Ci)/P(X), where X denotes an unknown facial expression feature vector and P(X), the probability of that vector, is a constant, computing the posterior probability P(Ci|X) only requires computing P(X|Ci)P(Ci). Here P(Ci) denotes the prior probability, P(Ci) = NCi/N, where N is the number of training samples and NCi is the number of class-Ci samples in the training set. In naive Bayes, precisely because of the assumption of independence between attributes, P(X|Ci) is computed as P(X|Ci) = ∏j P(xj|Ci), where X = [x1, x2, …, xn] denotes the n-dimensional feature vector and xj is a single feature value of the unknown facial expression feature vector. When an attribute is continuous, it is assumed to follow a normal distribution, and the conditional probability is computed as P(xj|Ci) = (1/(√(2π)σci)) exp(−(xj − μci)²/(2σci²)), where μci and σci are respectively the mean and standard deviation of class Ci, obtained by training on the training data.
Claims (4)
1. An expression recognition method is characterized by comprising the following steps:
s1, identifying a face area in an image;
s2, calculating key point information in the face area and calculating a face expression geometric feature vector, and the method comprises the following steps:
s201. in 68 key points of the characteristics of the individual face, all the key points corresponding to the mouth are selectedThe method comprises the following steps of obtaining mouth height characteristics, mouth width characteristics, minimum external rectangular area characteristics of a mouth, lip beads and mouth corner height characteristics from human face characteristic key points, wherein: the mouth height features are expressed by the key points of the face features with the numbers 62 and 66, and the ordinate of the key points is Y62And Y66,Y66-Y62I.e., the mouth height feature; the mouth width feature is expressed by the key points of the face features with the numbers 54 and 48, and the abscissa of the key points is X54And X48,X54-X48I.e., the mouth width feature; the minimum circumscribed rectangle area characteristic of the mouth is expressed by the human face characteristic key points with the numbers of 48 to 67, the coordinates of four points of the minimum circumscribed rectangle of the mouth characteristic points are calculated through OpenCV, the length and the width of the rectangle are calculated, and the rectangular area is obtained through calculation and is the minimum circumscribed rectangle area characteristic of the mouth; the lip bead and corner height features are expressed by the numbers 48, 54, 51, the ordinate Y of which48、Y54、Y62、Y63,(Y54-Y51)+(Y48-Y51) Lip bead and mouth corner height features;
s202, in 68 personal face feature key points, obtaining eye height features and minimum circumscribed rectangle area features of two eyes from all the face feature key points corresponding to the eyes, wherein: the eye height features are expressed by the key points of the human face features with the numbers 37, 38, 40, 41, 43, 44, 46 and 47, and the ordinate of the key points is Y37、Y38、Y40、Y41、Y43、Y44、Y46、Y47,(Y41-Y37)+(Y40-Y38)+(Y47-Y43)+(Y46-Y44) I.e., the eye height feature; the minimum external rectangle area characteristics of the two eyes are expressed by human face characteristic key points with the numbers from 36 to 47, the eye key points calculate the coordinates of four points of the minimum external rectangle of the left eye characteristic points through OpenCV, and the length and the width of the rectangle so as to calculate the area of the rectangle, the minimum rectangle area of the right eye is calculated by the same method, and the sum of the minimum rectangle areas of the left eye and the right eye is the minimum external rectangle area characteristic of the two eyes;
s203, in 68 individual face feature key points, all the persons corresponding to the eyebrowsObtain eyebrow height sum characteristic, eyebrow width sum characteristic, eyebrow and eyelids distance and eyebrow distance characteristic among the face characteristic key point, wherein: the eyebrow height sum characteristic is expressed by the key points of the face characteristics with the numbers from 17 to 26, and the ordinate of the key points is Y17To Y26The distance between the ordinate and the face frame, and Y since the face frame is the X coordinate axis17To Y26The sum of the coordinates is the feature of the sum of the heights of the eyebrows; the sum of the eyebrow width features is expressed by the key points of the face features with the numbers from 17 to 26, and the abscissa of the key points is X17To X26,(X22-X17)+(X23-X18)+(X24-X19)+(X25-X20)+(X26-X21) The sum of the width of the eyebrows is the characteristic; the eyebrow and the head distance features are expressed by the key points of the face features with the numbers 21, 39, 22 and 42, and the horizontal and vertical coordinates of the key points are (X)21,Y21)、(X39,Y39)、(X22,Y22)、(X42,Y42) Then, thenAndthe sum is the distance characteristic of the eyebrows and the heads of the eyes; the eyebrow distance feature is expressed by the key points of the face features with the numbers 21 and 22, and the abscissa of the key points is X21And X22,(X22-X21) Is an eyebrow distance feature;
s3, obtaining the facial expression geometric feature vector data, and training a weighted naive Bayes model to obtain the posterior probabilities of the m categories;
s4, forming m-dimensional probability feature vectors from the posterior probabilities of the m categories and using them as training data to train a K-nearest-neighbor classification model;
and s5, performing face feature extraction and model classification on the test data to predict the facial expression.
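The chaining of steps S4 and S5 — feeding the m class posteriors into a K-nearest-neighbor vote — can be sketched with a minimal, library-free KNN (the data and the function name are illustrative, not taken from the patent):

```python
import numpy as np

def knn_predict(train_probs, train_labels, probs, k=3):
    """Classify an m-dimensional posterior-probability vector (steps S4/S5)
    by majority vote among its k nearest training vectors (Euclidean)."""
    d = np.linalg.norm(train_probs - probs, axis=1)   # distance to each training vector
    nearest = train_labels[np.argsort(d)[:k]]          # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]                     # majority vote
```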
2. The expression recognition method according to claim 1, wherein the step S1 specifically includes the following substeps:
s101, performing grayscale processing on the captured image;
s102, detecting and acquiring the face region in the image through Haar features and the AdaBoost algorithm;
s103, identifying the 68 facial feature key points in the face region by using the dlib key point detection algorithm and generating a face detection frame containing the face region, the geometric features of the face being extracted from these key points;
s104, defining a coordinate system in the face detection frame, the coordinate system taking the upper-left corner of the face detection frame as the origin, the horizontal edge of the face detection frame as the X axis and the vertical edge of the face detection frame as the Y axis;
and s105, determining the coordinates of each facial feature key point according to the defined coordinate system.
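Substeps S101–S103 would typically use `cv2.cvtColor`, a Haar `cv2.CascadeClassifier` and dlib's 68-point shape predictor (whose trained model file is outside this document); the coordinate convention of S104–S105 can be shown self-contained, assuming a hypothetical `(left, top, right, bottom)` detection frame:

```python
import numpy as np

def to_frame_coords(pts, frame_rect):
    """Map landmark image coordinates into the face-detection-frame
    coordinate system of steps S104-S105: origin at the frame's upper-left
    corner, X along its horizontal edge, Y along its vertical edge.
    pts: (n, 2) array; frame_rect: (left, top, right, bottom)."""
    left, top, _, _ = frame_rect
    return pts - np.array([left, top])
```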
3. The expression recognition method according to claim 1, wherein in step S3 the construction of the weighted naive Bayes model specifically comprises the following steps:
s301, acquiring the facial expression geometric feature vector data for training the weighted naive Bayes model as sample data by executing step S2, and randomly selecting 80% of the sample data set as the training data set D and 20% as the test data set T;
s302, setting the training set D = {(N1, Ci), (N2, Ci), …, (Nn, Ci)}, where N1 to Nn denote the n training data, C is the facial expression category, and i ∈ {1, 2, …, m};
s303, according to the Bayes formula P(Ci|X) = P(X|Ci)P(Ci) / P(X), where X denotes an unknown facial expression feature vector, P(X) is the probability of that vector, and P(Ci|X) is the posterior probability to be calculated; since P(X) is identical for every class, only P(X|Ci)P(Ci) needs to be computed. P(Ci) denotes the prior probability, P(Ci) = N_Ci / N, where N is the number of training samples and N_Ci is the number of class-Ci samples among them. In naive Bayes, because of the assumed independence between attributes, P(X|Ci) is computed as P(X|Ci) = ∏ⱼ P(xj|Ci), where X = [x1, x2, …, xn] denotes the n-dimensional feature vector and xj denotes a single feature value of the unknown facial expression feature vector. When an attribute is continuous, the conditional probability P(xj|Ci) is calculated under the assumption that the attribute follows a normal distribution, namely: P(xj|Ci) = (1 / (√(2π) σCi)) · exp(−(xj − μCi)² / (2σCi²)), where μCi and σCi are respectively the mean and standard deviation of class Ci, obtained from the training data.
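The Gaussian naive Bayes computation of step S303 can be sketched as follows (an unweighted sketch — the patent's attribute weighting scheme is not detailed in this excerpt; function names are illustrative):

```python
import numpy as np

def fit_gnb(X, yc):
    """Fit per-class priors and Gaussian parameters (step S303):
    P(Ci) = N_Ci / N, plus mu_Ci and sigma_Ci per feature."""
    classes = np.unique(yc)
    prior = {c: np.mean(yc == c) for c in classes}
    mu = {c: X[yc == c].mean(axis=0) for c in classes}
    sigma = {c: X[yc == c].std(axis=0) + 1e-9 for c in classes}  # avoid div-by-zero
    return classes, prior, mu, sigma

def posteriors(x, classes, prior, mu, sigma):
    """P(Ci|X) proportional to P(Ci) * prod_j P(xj|Ci) with Gaussian
    likelihoods, normalized so the m posteriors sum to 1."""
    scores = []
    for c in classes:
        pdf = (np.exp(-(x - mu[c]) ** 2 / (2 * sigma[c] ** 2))
               / (np.sqrt(2 * np.pi) * sigma[c]))
        scores.append(prior[c] * np.prod(pdf))
    scores = np.array(scores)
    return scores / scores.sum()
```

The normalized output vector is exactly the m-dimensional probability feature vector that step S4 feeds to the K-nearest-neighbor classifier.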
4. A system for facial expression recognition, comprising:
a face region recognition module, configured to recognize the face region in an image by using step S1 of the expression recognition method according to claim 1;
a facial geometric feature acquisition module, configured to calculate the facial expression geometric feature vector in the face region obtained by the face region recognition module by using step S2 of the expression recognition method according to claim 1;
a weighted naive Bayes model, into which the facial expression geometric feature vector obtained by the facial geometric feature acquisition module is input after training, so as to obtain the posterior probabilities of the m categories;
and a facial expression recognition module, configured to form an m-dimensional probability feature vector from the posterior probabilities of the m categories output by the weighted naive Bayes model and input it into the K-nearest-neighbor classification model of the facial expression recognition module, the K-nearest-neighbor classification model classifying the vector so as to predict the facial expression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010238235.XA CN111444860A (en) | 2020-03-30 | 2020-03-30 | Expression recognition method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111444860A true CN111444860A (en) | 2020-07-24 |
Family
ID=71650934
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010238235.XA Pending CN111444860A (en) | 2020-03-30 | 2020-03-30 | Expression recognition method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111444860A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112565806A (en) * | 2020-12-02 | 2021-03-26 | 广州繁星互娱信息科技有限公司 | Virtual gift presenting method, device, computer equipment and medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107729835A (en) * | 2017-10-10 | 2018-02-23 | 浙江大学 | A kind of expression recognition method based on face key point region traditional characteristic and face global depth Fusion Features |
CN110502989A (en) * | 2019-07-16 | 2019-11-26 | 山东师范大学 | A kind of small sample EO-1 hyperion face identification method and system |
WO2019232866A1 (en) * | 2018-06-08 | 2019-12-12 | 平安科技(深圳)有限公司 | Human eye model training method, human eye recognition method, apparatus, device and medium |
CN110705467A (en) * | 2019-09-30 | 2020-01-17 | 广州海昇计算机科技有限公司 | Facial expression recognition method, system, device and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200724 |