CN111444860A - Expression recognition method and system - Google Patents

Expression recognition method and system

Info

Publication number
CN111444860A
CN111444860A
Authority
CN
China
Prior art keywords
face
key points
feature
features
mouth
Prior art date
Legal status
Pending
Application number
CN202010238235.XA
Other languages
Chinese (zh)
Inventor
丁童心
禹素萍
Current Assignee
Donghua University
Original Assignee
Donghua University
Priority date
Filing date
Publication date
Application filed by Donghua University
Priority to CN202010238235.XA
Publication of CN111444860A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 - Distances to prototypes
    • G06F18/24143 - Distances to neighbourhood prototypes, e.g. restricted Coulomb energy networks [RCEN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 - Bayesian classification

Abstract

The invention relates to a facial expression recognition method and system comprising the following steps: recognizing the face region of an image; extracting 68 key points from each face with Dlib and deriving geometric features of the face from them, the geometric features comprising: mouth height, mouth width, eye height, eyebrow height sum, eyebrow width sum, the minimum bounding rectangle area of the mouth, the eyebrow-to-eye distance, the inter-eyebrow distance, the lip bead and mouth corner height, and the minimum bounding rectangle area of the eyes; feeding the geometric features into a naive Bayes classifier to determine the expression category; and, during Bayesian decision, applying K-nearest-neighbor secondary learning to improve the classification effect. The scheme of the invention overcomes the drawbacks of existing high-dimensional face features: low classification efficiency, high computation cost, and degraded classifier performance.

Description

Expression recognition method and system
Technical Field
The invention belongs to the fields of artificial intelligence and psychology, and particularly relates to facial expression recognition technology.
Background
The human facial expression is an important way for human to convey emotion, and the human facial expression recognition technology can be widely applied to the fields of human-computer interaction, computer vision, medical assistance, fatigue driving detection and the like.
Existing facial expression feature extraction techniques mainly comprise traditional image feature extraction methods such as Gabor filters, Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradients (HOG), Linear Discriminant Analysis (LDA) and Local Binary Patterns (LBP), as well as the more popular deep-learning feature extraction methods. Classification is required after feature extraction; existing expression classification techniques mainly use support vector machines, decision trees and classifiers based on convolutional neural networks.
Although feature extraction and classification based on deep learning can achieve good recognition results, the features extracted by deep learning are high-dimensional, the computation is heavy, and the hardware requirements are high. Many settings in real life cannot meet the hardware demands of deep learning, making facial expression technology difficult to deploy there.
Disclosure of Invention
The invention aims to provide a facial expression recognition method.
In order to achieve the above object, one technical solution of the present invention is to provide an expression recognition method, which is characterized by comprising the following steps:
s1, identifying a face area in an image;
S2, calculating key point information in the face area and computing the facial expression geometric feature vector, which comprises the following steps:
S201, among the 68 facial feature key points, obtaining the mouth height feature, the mouth width feature, the minimum bounding rectangle area feature of the mouth, and the lip bead and mouth corner height feature from all the key points corresponding to the mouth, wherein: the mouth height feature is expressed by key points 62 and 66, whose ordinates are Y62 and Y66; Y66-Y62 is the mouth height feature. The mouth width feature is expressed by key points 54 and 48, whose abscissas are X54 and X48; X54-X48 is the mouth width feature. The minimum bounding rectangle area feature of the mouth is expressed by key points 48 to 67: the four corner coordinates of the minimum bounding rectangle of the mouth points are computed with OpenCV, the length and width of the rectangle are determined, and the resulting rectangle area is the minimum bounding rectangle area feature of the mouth. The lip bead and mouth corner height feature is expressed by key points 48, 54 and 51, whose ordinates are Y48, Y54 and Y51; (Y54-Y51)+(Y48-Y51) is the lip bead and mouth corner height feature;
S202, among the 68 facial feature key points, obtaining the eye height feature and the minimum bounding rectangle area feature of the two eyes from all the key points corresponding to the eyes, wherein: the eye height feature is expressed by key points 37, 38, 40, 41, 43, 44, 46 and 47, whose ordinates are Y37, Y38, Y40, Y41, Y43, Y44, Y46 and Y47; (Y41-Y37)+(Y40-Y38)+(Y47-Y43)+(Y46-Y44) is the eye height feature. The minimum bounding rectangle area feature of the two eyes is expressed by key points 36 to 47: the four corner coordinates of the minimum bounding rectangle of the left-eye points, and hence the rectangle's length, width and area, are computed with OpenCV; the minimum rectangle area of the right eye is computed in the same way, and the sum of the two minimum rectangle areas is the minimum bounding rectangle area feature of the two eyes;
S203, among the 68 facial feature key points, obtaining the eyebrow height sum feature, the eyebrow width sum feature, the eyebrow-to-eye distance feature and the inter-eyebrow distance feature from all the key points corresponding to the eyebrows, wherein: the eyebrow height sum feature is expressed by key points 17 to 26, whose ordinates Y17 to Y26 measure their distances to the top edge of the face frame; since that edge is the X coordinate axis, the sum of Y17 to Y26 is the eyebrow height sum feature. The eyebrow width sum feature is expressed by key points 17 to 26, whose abscissas are X17 to X26; (X22-X17)+(X23-X18)+(X24-X19)+(X25-X20)+(X26-X21) is the eyebrow width sum feature. The eyebrow-to-eye distance feature is expressed by key points 21, 39, 22 and 42, whose coordinates are (X21, Y21), (X39, Y39), (X22, Y22) and (X42, Y42); the sum of

√((X21-X39)² + (Y21-Y39)²)

and

√((X22-X42)² + (Y22-Y42)²)

is the eyebrow-to-eye distance feature. The inter-eyebrow distance feature is expressed by key points 21 and 22, whose abscissas are X21 and X22; X22-X21 is the inter-eyebrow distance feature;
S3, obtaining the facial expression geometric feature vector data and training a weighted naive Bayes model to obtain the posterior probabilities of m categories;
S4, forming m-dimensional probability feature vectors from the posterior probabilities of the m categories and using them as training data to train a K-nearest-neighbor classification model;
and S5, extracting features from the test data and classifying them with the trained models to predict the facial expression.
Preferably, the step S1 specifically includes the following sub-steps:
s101, carrying out gray processing on the shot image;
s102, detecting and acquiring a face region from the image through Haar features and an AdaBoost algorithm;
S103, identifying the 68 facial feature key points in the face region using the dlib key point detection algorithm and generating a face detection frame containing the face region; the geometric features of the face are extracted from these facial feature key points;
S104, defining a coordinate system in the face detection frame, with the top-left corner of the face detection frame as the origin, the horizontal edge of the frame as the X axis and the vertical edge of the frame as the Y axis;
and S105, determining the coordinates of each face feature key point according to the defined coordinate system.
Preferably, in step S3, the weighted naive Bayes model construction specifically comprises the following steps:
S301, acquiring the facial expression geometric feature vector data for training the weighted naive Bayes model as sample data by executing step S2, and randomly selecting 80% of the sample data set as the training data set D and 20% as the test data set T;
S302, setting the training set D = {(N1, Ci), (N2, Ci), …, (Nn, Ci)}, where Nn denotes the n training samples, C is the facial expression category, and i ∈ {1, 2, …, m};
S303, applying the Bayesian formula

P(Ci|X) = P(X|Ci)P(Ci) / P(X)

where X denotes an unknown facial expression feature vector and P(X) is its probability, which is constant; therefore, to compute the posterior probability P(Ci|X), only P(X|Ci)P(Ci) needs to be computed. P(Ci) denotes the prior probability:

P(Ci) = N_Ci / N

where N is the number of training samples and N_Ci is the number of samples of class Ci in the training set. In naive Bayes, precisely because of the assumed independence between attributes, P(X|Ci) is computed as:

P(X|Ci) = ∏_{j=1..n} P(x_j|Ci)

where X = [x_1, x_2, …, x_n] denotes the n-dimensional feature vector and x_j a single feature value of the unknown facial expression feature vector. When an attribute is continuous, it is assumed to follow a normal distribution, and the conditional probability P(x_j|Ci) of the feature attribute is:

P(x_j|Ci) = (1 / (√(2π) σ_Ci)) · exp(-(x_j - μ_Ci)² / (2σ_Ci²))

where μ_Ci and σ_Ci are respectively the mean and standard deviation of class Ci, obtained from the training data.
Another technical solution of the present invention is to provide a facial expression recognition system, which is characterized by comprising:
a face region recognition module, which recognizes a face region in the image by using the step S1 in the expression recognition method;
a facial geometric feature acquisition module, which calculates facial expression geometric feature vectors in the facial area obtained by the facial area recognition module by adopting the step S2 in the above expression recognition method;
a weighted naive Bayes module, which inputs the facial expression geometric feature vector obtained by the facial geometric feature acquisition module into the trained weighted naive Bayes model to obtain the posterior probabilities of m categories;
and a facial expression recognition module, which forms an m-dimensional probability feature vector from the posterior probabilities of the m categories output by the weighted naive Bayes module, inputs it into its K-nearest-neighbor classification model, and classifies the vector with that model to predict the facial expression.
The invention has the following beneficial effects. On one hand, it overcomes the drawback that traditional image features are high-dimensional and hurt subsequent classification efficiency: Dlib is used to extract the 68 key points of the facial expression, and the facial geometric features are derived from the 68 key point coordinates, exploiting the differences between expressions in these geometric features. On the algorithm side, addressing the shortcomings of the naive Bayes algorithm, a method combining weighted naive Bayes with K nearest neighbors is proposed: when weighted naive Bayes makes its decision, K-nearest-neighbor secondary learning improves the classification effect. On the other hand, the invention requires far fewer computing resources than convolutional neural networks used in facial expression recognition and places low performance demands on hardware, so it is easily applied to low-performance mobile devices, mall self-service kiosks, coin-operated game machines, children's toys and other low-end devices, bringing facial expression recognition to these fields at low cost.
Drawings
FIG. 1 is a flow chart of a facial expression recognition method in an embodiment;
FIG. 2 is a schematic diagram of a face region and face feature key points identified in an embodiment;
FIG. 3 is a block diagram of the facial expression recognition method in the embodiment.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
In this embodiment, as shown in FIG. 1, the method for recognizing the facial expression of a captured image includes the following steps:
s1, identifying a face area in an image;
S2, calculating key point information in the face area and computing the facial expression geometric feature vector;
S3, obtaining the facial expression geometric feature vector data and training a weighted naive Bayes model to obtain the posterior probabilities of m categories;
S4, forming m-dimensional probability feature vectors from the posterior probabilities of the m categories and using them as training data to train a K-nearest-neighbor classification model;
and S5, extracting features from the test data and classifying them with the trained models to predict the facial expression.
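To make the two-stage flow of steps S3 to S5 concrete, the following is a minimal sketch, assuming scikit-learn is available and substituting its unweighted GaussianNB for the weighted naive Bayes of the invention; the feature matrix here is a random stand-in for the 10-dimensional geometric feature vectors produced by step S2.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 10))      # stand-in: 300 samples x 10 geometric features
y_train = rng.integers(0, 7, size=300)    # stand-in labels for m = 7 expression categories

nb = GaussianNB().fit(X_train, y_train)   # S3: train the Bayes model
P_train = nb.predict_proba(X_train)       # posterior probabilities, shape (300, m)
knn = KNeighborsClassifier(n_neighbors=5).fit(P_train, y_train)  # S4: secondary learner

def predict_expression(features):
    """S5: geometric features -> m-dimensional posterior vector -> KNN prediction."""
    return knn.predict(nb.predict_proba([features]))[0]
```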
In the present embodiment, the above steps S1 and S2 are performed using the dlib algorithm. The step S1 specifically includes the following sub-steps:
s101, carrying out gray processing on the shot image;
s102, detecting and acquiring a face region from the image through Haar features and an AdaBoost algorithm;
S103, identifying the 68 facial feature key points in the face region using the dlib key point detection algorithm and generating a face detection frame containing the face region; the geometric features of the face are extracted from these facial feature key points;
S104, defining a coordinate system in the face detection frame, with the top-left corner of the face detection frame as the origin, the horizontal edge of the frame as the X axis and the vertical edge of the frame as the Y axis;
and S105, determining the coordinates of each face feature key point according to the defined coordinate system.
By executing step S101, the original color image is converted into a grayscale image, reducing the amount of image information and easing subsequent processing. Of course, if the image is already grayscale, step S101 need not be performed. Since none of the steps of the facial expression recognition method in this embodiment depends on image color, graying the image does not affect the recognition result.
By executing the face detection algorithm, a rectangular face detection frame is generated whenever a face region exists in the image; this frame contains the face region.
By performing the key point detection of the dlib algorithm, the 68 facial feature key points in the face region are obtained; the effect is shown in FIG. 2.
In this embodiment, each facial feature key point identified by the dlib key point detection algorithm is assigned a coordinate. The coordinate system established is a plane rectangular coordinate system with the top-left corner of the face detection frame as the origin, the horizontal edge of the frame as the X axis and the vertical edge of the frame as the Y axis.
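As an illustration, the detection and key point steps S101 to S105 might look like the following minimal sketch, assuming OpenCV's bundled frontal-face Haar cascade (trained with AdaBoost) and a separately downloaded dlib 68-point shape predictor file; the image path is a placeholder.

```python
import cv2
import dlib

img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                        # S101: grayscale
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)  # S102

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
for (x, y, w, h) in faces:
    shape = predictor(gray, dlib.rectangle(x, y, x + w, y + h))     # S103: 68 key points
    # S104/S105: coordinates relative to the top-left corner of the detection frame
    pts = [(shape.part(i).x - x, shape.part(i).y - y) for i in range(68)]
```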
After the coordinates are assigned to the identified key points of the face features, step S2 may be executed to quantitatively calculate the geometric features of the face, where step S2 specifically includes the following steps:
S201, among the 68 facial feature key points, obtaining the mouth height feature, the mouth width feature, the minimum bounding rectangle area feature of the mouth, and the lip bead and mouth corner height feature from all the key points corresponding to the mouth, wherein: the mouth height feature is expressed by key points 62 and 66, whose ordinates are Y62 and Y66; Y66-Y62 is the mouth height feature. The mouth width feature is expressed by key points 54 and 48, whose abscissas are X54 and X48; X54-X48 is the mouth width feature. The minimum bounding rectangle area feature of the mouth is expressed by key points 48 to 67: the four corner coordinates of the minimum bounding rectangle of the mouth points are computed with OpenCV, the length and width of the rectangle are determined, and the resulting rectangle area is the minimum bounding rectangle area feature of the mouth. The lip bead and mouth corner height feature is expressed by key points 48, 54 and 51, whose ordinates are Y48, Y54 and Y51; (Y54-Y51)+(Y48-Y51) is the lip bead and mouth corner height feature;
S202, among the 68 facial feature key points, obtaining the eye height feature and the minimum bounding rectangle area feature of the two eyes from all the key points corresponding to the eyes, wherein: the eye height feature is expressed by key points 37, 38, 40, 41, 43, 44, 46 and 47, whose ordinates are Y37, Y38, Y40, Y41, Y43, Y44, Y46 and Y47; (Y41-Y37)+(Y40-Y38)+(Y47-Y43)+(Y46-Y44) is the eye height feature. The minimum bounding rectangle area feature of the two eyes is expressed by key points 36 to 47: the four corner coordinates of the minimum bounding rectangle of the left-eye points, and hence the rectangle's length, width and area, are computed with OpenCV; the minimum rectangle area of the right eye is computed in the same way, and the sum of the two minimum rectangle areas is the minimum bounding rectangle area feature of the two eyes;
S203, among the 68 facial feature key points, obtaining the eyebrow height sum feature, the eyebrow width sum feature, the eyebrow-to-eye distance feature and the inter-eyebrow distance feature from all the key points corresponding to the eyebrows, wherein: the eyebrow height sum feature is expressed by key points 17 to 26, whose ordinates Y17 to Y26 measure their distances to the top edge of the face frame; since that edge is the X coordinate axis, the sum of Y17 to Y26 is the eyebrow height sum feature. The eyebrow width sum feature is expressed by key points 17 to 26, whose abscissas are X17 to X26; (X22-X17)+(X23-X18)+(X24-X19)+(X25-X20)+(X26-X21) is the eyebrow width sum feature. The eyebrow-to-eye distance feature is expressed by key points 21, 39, 22 and 42, whose coordinates are (X21, Y21), (X39, Y39), (X22, Y22) and (X42, Y42); the sum of

√((X21-X39)² + (Y21-Y39)²)

and

√((X22-X42)² + (Y22-Y42)²)

is the eyebrow-to-eye distance feature. The inter-eyebrow distance feature is expressed by key points 21 and 22, whose abscissas are X21 and X22; X22-X21 is the inter-eyebrow distance feature.
By executing steps S201 to S203, the facial expression geometric feature vector is obtained, and with it the training data.
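The following is a minimal sketch of the ten features of S201 to S203, assuming the landmarks are supplied as a list `pts` of 68 (x, y) tuples in the standard dlib ordering used by the numbering above; the text computes the minimum bounding rectangles with OpenCV without naming the call, so cv2.minAreaRect is an assumption, and the function names are illustrative.

```python
import math
import numpy as np
import cv2

def rect_area(indices, pts):
    # Minimum (rotated) bounding rectangle via OpenCV; area = length x width.
    box = np.array([pts[i] for i in indices], dtype=np.float32)
    (_, (w, h), _) = cv2.minAreaRect(box)
    return w * h

def geometric_features(pts):
    X = [p[0] for p in pts]
    Y = [p[1] for p in pts]
    dist = lambda a, b: math.hypot(X[a] - X[b], Y[a] - Y[b])
    return [
        Y[66] - Y[62],                                    # mouth height
        X[54] - X[48],                                    # mouth width
        (Y[41]-Y[37]) + (Y[40]-Y[38]) + (Y[47]-Y[43]) + (Y[46]-Y[44]),  # eye height
        sum(Y[17:27]),                                    # eyebrow height sum (frame top is the X axis)
        sum(X[i+5] - X[i] for i in range(17, 22)),        # eyebrow width sum
        rect_area(range(48, 68), pts),                    # mouth bounding-rectangle area
        dist(21, 39) + dist(22, 42),                      # eyebrow-to-eye distance
        X[22] - X[21],                                    # inter-eyebrow distance
        (Y[54]-Y[51]) + (Y[48]-Y[51]),                    # lip bead and mouth corner height
        rect_area(range(36, 42), pts) + rect_area(range(42, 48), pts),  # two-eye areas
    ]
```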
In step S3, the weighted naive Bayes model construction specifically comprises the following steps:
S301, acquiring the facial expression geometric feature vector data for training the weighted naive Bayes model as sample data by executing step S2, and randomly selecting 80% of the sample data set as the training data set D and 20% as the test data set T;
S302, setting the training set D = {(N1, Ci), (N2, Ci), …, (Nn, Ci)}, where Nn denotes the n training samples, C is the facial expression category, and i ∈ {1, 2, …, m};
S303, applying the Bayesian formula

P(Ci|X) = P(X|Ci)P(Ci) / P(X)

where X denotes an unknown facial expression feature vector and P(X) is its probability, which is constant; therefore, to compute the posterior probability P(Ci|X), only P(X|Ci)P(Ci) needs to be computed. P(Ci) denotes the prior probability:

P(Ci) = N_Ci / N

where N is the number of training samples and N_Ci is the number of samples of class Ci in the training set. In naive Bayes, precisely because of the assumed independence between attributes, P(X|Ci) is computed as:

P(X|Ci) = ∏_{j=1..n} P(x_j|Ci)

where X = [x_1, x_2, …, x_n] denotes the n-dimensional feature vector and x_j a single feature value of the unknown facial expression feature vector. When an attribute is continuous, it is assumed to follow a normal distribution, and the conditional probability P(x_j|Ci) of the feature attribute is:

P(x_j|Ci) = (1 / (√(2π) σ_Ci)) · exp(-(x_j - μ_Ci)² / (2σ_Ci²))

where μ_Ci and σ_Ci are respectively the mean and standard deviation of class Ci, obtained from the training data.
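The following is a worked sketch of the S303 formulas. The per-class means mu and standard deviations sigma are assumed to have been estimated from the training set D; the per-feature weights w are an assumption, since the text names the weighted variant without spelling the weights out, and uniform weights (all 1.0) reduce the sketch to plain naive Bayes.

```python
import math

def posterior_scores(x, classes, prior, mu, sigma, w):
    """Return log(P(X|Ci) * P(Ci)) per class, proportional to log P(Ci|X)."""
    scores = {}
    for c in classes:
        log_p = math.log(prior[c])                     # log P(Ci)
        for j, xj in enumerate(x):                     # attribute-independence assumption
            gauss = (math.exp(-(xj - mu[c][j]) ** 2 / (2 * sigma[c][j] ** 2))
                     / (math.sqrt(2 * math.pi) * sigma[c][j]))
            log_p += w[j] * math.log(gauss)            # weighted log-likelihood
        scores[c] = log_p
    return scores
```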

Claims (4)

1. An expression recognition method is characterized by comprising the following steps:
s1, identifying a face area in an image;
S2, calculating key point information in the face area and computing the facial expression geometric feature vector, which comprises the following steps:
S201, among the 68 facial feature key points, obtaining the mouth height feature, the mouth width feature, the minimum bounding rectangle area feature of the mouth, and the lip bead and mouth corner height feature from all the key points corresponding to the mouth, wherein: the mouth height feature is expressed by key points 62 and 66, whose ordinates are Y62 and Y66; Y66-Y62 is the mouth height feature. The mouth width feature is expressed by key points 54 and 48, whose abscissas are X54 and X48; X54-X48 is the mouth width feature. The minimum bounding rectangle area feature of the mouth is expressed by key points 48 to 67: the four corner coordinates of the minimum bounding rectangle of the mouth points are computed with OpenCV, the length and width of the rectangle are determined, and the resulting rectangle area is the minimum bounding rectangle area feature of the mouth. The lip bead and mouth corner height feature is expressed by key points 48, 54 and 51, whose ordinates are Y48, Y54 and Y51; (Y54-Y51)+(Y48-Y51) is the lip bead and mouth corner height feature;
S202, among the 68 facial feature key points, obtaining the eye height feature and the minimum bounding rectangle area feature of the two eyes from all the key points corresponding to the eyes, wherein: the eye height feature is expressed by key points 37, 38, 40, 41, 43, 44, 46 and 47, whose ordinates are Y37, Y38, Y40, Y41, Y43, Y44, Y46 and Y47; (Y41-Y37)+(Y40-Y38)+(Y47-Y43)+(Y46-Y44) is the eye height feature. The minimum bounding rectangle area feature of the two eyes is expressed by key points 36 to 47: the four corner coordinates of the minimum bounding rectangle of the left-eye points, and hence the rectangle's length, width and area, are computed with OpenCV; the minimum rectangle area of the right eye is computed in the same way, and the sum of the two minimum rectangle areas is the minimum bounding rectangle area feature of the two eyes;
S203, among the 68 facial feature key points, obtaining the eyebrow height sum feature, the eyebrow width sum feature, the eyebrow-to-eye distance feature and the inter-eyebrow distance feature from all the key points corresponding to the eyebrows, wherein: the eyebrow height sum feature is expressed by key points 17 to 26, whose ordinates Y17 to Y26 measure their distances to the top edge of the face frame; since that edge is the X coordinate axis, the sum of Y17 to Y26 is the eyebrow height sum feature. The eyebrow width sum feature is expressed by key points 17 to 26, whose abscissas are X17 to X26; (X22-X17)+(X23-X18)+(X24-X19)+(X25-X20)+(X26-X21) is the eyebrow width sum feature. The eyebrow-to-eye distance feature is expressed by key points 21, 39, 22 and 42, whose coordinates are (X21, Y21), (X39, Y39), (X22, Y22) and (X42, Y42); the sum of

√((X21-X39)² + (Y21-Y39)²)

and

√((X22-X42)² + (Y22-Y42)²)

is the eyebrow-to-eye distance feature. The inter-eyebrow distance feature is expressed by key points 21 and 22, whose abscissas are X21 and X22; X22-X21 is the inter-eyebrow distance feature;
S3, obtaining the facial expression geometric feature vector data and training a weighted naive Bayes model to obtain the posterior probabilities of m categories;
S4, forming m-dimensional probability feature vectors from the posterior probabilities of the m categories and using them as training data to train a K-nearest-neighbor classification model;
and S5, extracting features from the test data and classifying them with the trained models to predict the facial expression.
2. The expression recognition method according to claim 1, wherein the step S1 specifically includes the following substeps:
s101, carrying out gray processing on the shot image;
s102, detecting and acquiring a face region from the image through Haar features and an AdaBoost algorithm;
S103, identifying the 68 facial feature key points in the face region using the dlib key point detection algorithm and generating a face detection frame containing the face region; the geometric features of the face are extracted from these facial feature key points;
S104, defining a coordinate system in the face detection frame, with the top-left corner of the face detection frame as the origin, the horizontal edge of the frame as the X axis and the vertical edge of the frame as the Y axis;
and S105, determining the coordinates of each face feature key point according to the defined coordinate system.
3. The expression recognition method of claim 1, wherein in step S3 the weighted naive Bayes model construction specifically comprises the following steps:
S301, acquiring the facial expression geometric feature vector data for training the weighted naive Bayes model as sample data by executing step S2, and randomly selecting 80% of the sample data set as the training data set D and 20% as the test data set T;
S302, setting the training set D = {(N1, Ci), (N2, Ci), …, (Nn, Ci)}, where Nn denotes the n training samples, C is the facial expression category, and i ∈ {1, 2, …, m};
S303, applying the Bayesian formula

P(Ci|X) = P(X|Ci)P(Ci) / P(X)

where X denotes an unknown facial expression feature vector and P(X) is its probability, which is constant; therefore, to compute the posterior probability P(Ci|X), only P(X|Ci)P(Ci) needs to be computed. P(Ci) denotes the prior probability:

P(Ci) = N_Ci / N

where N is the number of training samples and N_Ci is the number of samples of class Ci in the training set; in naive Bayes, precisely because of the assumed independence between attributes, P(X|Ci) is computed as:

P(X|Ci) = ∏_{j=1..n} P(x_j|Ci)

where X = [x_1, x_2, …, x_n] denotes the n-dimensional feature vector and x_j a single feature value of the unknown facial expression feature vector; when an attribute is continuous, it is assumed to follow a normal distribution, and the conditional probability P(x_j|Ci) of the feature attribute is:

P(x_j|Ci) = (1 / (√(2π) σ_Ci)) · exp(-(x_j - μ_Ci)² / (2σ_Ci²))

where μ_Ci and σ_Ci are respectively the mean and standard deviation of class Ci, obtained from the training data.
4. A system for facial expression recognition, comprising:
a face region recognition module for recognizing a face region in an image by using the step S1 in the expression recognition method according to claim 1;
a facial geometric feature acquisition module, which calculates facial expression geometric feature vectors in the facial area obtained by the facial area recognition module by using the step S2 in the expression recognition method of claim 1;
a weighted naive Bayes module, which inputs the facial expression geometric feature vector obtained by the facial geometric feature acquisition module into the trained weighted naive Bayes model to obtain the posterior probabilities of m categories;
and a facial expression recognition module, which forms an m-dimensional probability feature vector from the posterior probabilities of the m categories output by the weighted naive Bayes module, inputs it into its K-nearest-neighbor classification model, and classifies the vector with that model to predict the facial expression.
CN202010238235.XA 2020-03-30 2020-03-30 Expression recognition method and system Pending CN111444860A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010238235.XA CN111444860A (en) 2020-03-30 2020-03-30 Expression recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010238235.XA CN111444860A (en) 2020-03-30 2020-03-30 Expression recognition method and system

Publications (1)

Publication Number Publication Date
CN111444860A (en) 2020-07-24

Family

ID=71650934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010238235.XA Pending CN111444860A (en) 2020-03-30 2020-03-30 Expression recognition method and system

Country Status (1)

Country Link
CN (1) CN111444860A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107729835A (en) * 2017-10-10 2018-02-23 浙江大学 A kind of expression recognition method based on face key point region traditional characteristic and face global depth Fusion Features
WO2019232866A1 (en) * 2018-06-08 2019-12-12 平安科技(深圳)有限公司 Human eye model training method, human eye recognition method, apparatus, device and medium
CN110502989A (en) * 2019-07-16 2019-11-26 山东师范大学 A kind of small sample EO-1 hyperion face identification method and system
CN110705467A (en) * 2019-09-30 2020-01-17 广州海昇计算机科技有限公司 Facial expression recognition method, system, device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565806A (en) * 2020-12-02 2021-03-26 广州繁星互娱信息科技有限公司 Virtual gift presenting method, device, computer equipment and medium
CN112565806B (en) * 2020-12-02 2023-08-29 广州繁星互娱信息科技有限公司 Virtual gift giving method, device, computer equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200724)