CN115331292B - Face image-based emotion recognition method and device and computer storage medium - Google Patents

Face image-based emotion recognition method and device and computer storage medium

Info

Publication number
CN115331292B
CN115331292B (application CN202210986516.2A; published as CN115331292A)
Authority
CN
China
Prior art keywords
image
face
facial
feature
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210986516.2A
Other languages
Chinese (zh)
Other versions
CN115331292A (en)
Inventor
李雪
曾昉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Yuanzidong Technology Co ltd
Original Assignee
Wuhan Yuanzidong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Yuanzidong Technology Co ltd
Priority to CN202210986516.2A
Publication of CN115331292A
Application granted
Publication of CN115331292B
Active legal status (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/176Dynamic expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides an emotion recognition method based on a facial image, comprising the following steps: S1, acquiring a first feature image of the user in an expressionless state; the user then begins to narrate an event or converse, and the expression changes over time; S2, acquiring a second feature image capturing the change of the facial features in the video image as the face transitions from the expressionless state; S3, determining displacement parameters of the feature points of the facial organs between the first feature image and the second feature image, the displacement parameters comprising at least the displacement amount and the displacement direction of each feature point; transferring the displacement parameters of the feature points to the feature points at corresponding positions on an expressionless template face image, performing the corresponding displacement processing, and identifying the expression type from the processed test sample image. The invention transfers expression changes onto the same template face, so that only the facial features of the template face image need to be recognized, which reduces recognition error and improves the accuracy of expression recognition.

Description

Facial image-based emotion recognition method and device and computer storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a face image-based emotion recognition method and device and a computer storage medium.
Background
Artificial-intelligence behavioral big-data testing uses intelligent video equipment to record the ecological behavior of a subject in a specific environment, learns psychological and semantic features from the collected data, and, based on a psychological-feature prediction model obtained by machine-learning training on the behavioral big data, achieves non-contact measurement of the psychology of the persons concerned, enabling timely tracking, analysis and early warning; it plays a key role in psychological services. Currently, in mental-health diagnosis, practitioners generally have the user recount recent experiences or converse with the user, and observe subtle changes in the user's facial expression to gauge the psychological state.
At present, when an intelligent terminal recognizes facial expressions, it must be trained on a huge number of pictures of different expression states to ensure recognition accuracy for different users. However, the faces of different users differ, so the acquired training images introduce certain errors into subsequent expression recognition; service personnel therefore cannot accurately obtain the user's facial expression, which affects the judgment of mental health.
Disclosure of Invention
To solve these problems, the invention provides an emotion recognition method and device based on a facial image, and a computer storage medium. For the expression changes of different user groups, expression changes are transferred onto the same template face, so that in the end only the facial features of the test sample image obtained by applying the changes to the template face image need to be analyzed and recognized; this reduces recognition error and improves expression recognition accuracy.
The technical scheme of the invention is realized as follows:
A face image-based emotion recognition method includes the following steps:
S1, acquiring a first feature image of the user in an expressionless state; the user then begins to narrate an event or converse, and the expression changes over time;
S2, acquiring a second feature image capturing the change of the facial features in the video image as the face transitions from the expressionless state;
S3, determining displacement parameters of the feature points of the facial organs between the first feature image and the second feature image, the displacement parameters comprising at least the displacement amount and the displacement direction of each feature point; transferring the displacement parameters of the feature points to the feature points at corresponding positions on an expressionless template face image, performing the corresponding displacement processing, and identifying the expression type from the processed test sample image.
Preferably, step S1 is preceded by step S0, a face-correction reminder, which comprises the following steps:
S01, determining, in the face image in the current state, the line l₁ from the glabellar point to the nasion point, the line l₂ from the nasion point to the nose tip, the angle α between l₁ and l₂, and the angle β between l₁ and a vertical normal l drawn downward from the glabellar point;
S02, reminding the user, according to the positional and angular relations of l₁, l₂ and l, to adjust the facial position until the three lines l₁, l₂ and l coincide, i.e. the face image is adjusted to a frontal face; the adjustment method is:
if l₂ is on the right side of l₁, the face is judged to be deflected to the right, and the user is reminded to rotate left by the angle α;
at this time, if l₁ is on the right side of l, the face is judged to be in a head-up state, and the user is reminded to lower the head by the angle β; if l₁ is on the left side of l, the face is judged to be in a head-down state, and the user is reminded to raise the head by the angle β;
conversely,
if l₂ is on the left side of l₁, the face is judged to be deflected to the left, and the user is reminded to rotate right by the angle α;
at this time, if l₁ is on the left side of l, the face is judged to be in a head-up state, and the user is reminded to lower the head by the angle β; if l₁ is on the right side of l, the face is judged to be in a head-down state, and the user is reminded to raise the head by the angle β.
Preferably, in step S2, the change of the facial features in the video image is judged according to whether the change in the coordinates of the feature points of the facial organs exceeds a threshold: when, compared with the first feature image, the coordinate displacement of the feature points of the facial organs in a given frame is smaller than the threshold, the facial expression is judged to be unchanged; when it is greater than the threshold, the facial expression is judged to have changed.
Preferably, step S3 comprises the steps of:
S31, creating an expressionless template face image, setting standard facial-organ feature points at standard positions on the template face image, and constructing classification models of the information features of expressions under different emotions from the template face image;
S32, normalizing the first feature image and the second feature image to be detected so that they finally have the same size as the template face image;
S33, obtaining the displacement parameters of the feature points of the facial organs whose displacement values change in the extracted second feature image, and applying the corresponding displacement to the corresponding points of those feature points in the template face image;
S34, after all displaced feature points of the facial organs have been displaced, automatically identifying the facial expression of the processed test sample image according to the expression classification model.
Preferably, in step S33, according to the proportional position of a displaced feature point within its facial organ in the first feature image, the feature point to be displaced at the corresponding proportional position is adaptively found in the corresponding facial organ of the template face image; the feature point to be displaced is then displaced correspondingly in the same displacement direction.
Preferably, in step S31, the variation of the displacement parameters of each feature point of each facial organ is recorded under the different expression classifications constructed from the template face image.
Preferably, in step S34, the obtained displacement parameters of the feature points of the facial organs in the test sample image are matched against the displacement parameters of the feature points of each facial organ under the different expression classifications of step S31, so as to identify the facial expression.
Preferably, the method further comprises the following steps:
S35, displaying the first feature image, the second feature image, the test sample image and the expressions corresponding to the images on a display panel, so that the change in the facial emotion can be observed intuitively and calibration can be performed against the images and expression states.
An emotion recognition apparatus based on a face image, the apparatus comprising:
the vision camera is used for acquiring a facial feature image of a face of a user;
the voice input equipment is used for acquiring voice data of a user;
a memory, a processor, and an emotion recognition program for facial images stored on the memory and executable on the processor, the program implementing the steps of the method described above when executed by the processor;
and the display panel is used for displaying the acquired image and the image and expression state processed by the processor.
A computer storage medium, on which a computer program is stored which, when executed by a processor, implements the method described above.
The invention has the following effects:
(1) It solves the problem that, when the captured face is in a non-frontal pose, an excessive side-face angle causes a large loss of facial feature data and thus inaccurate recognition. In a specific identification mode, the line l₁ from the glabellar point to the nasion point in the current-state image, the line l₂ from the nasion point to the nose tip, the angle α between them, and the angle β between l₁ and the vertical normal l drawn downward from the glabellar point are determined; from the positional relations among these, the user is reminded of the specific direction in which to move the face, ensuring that the finally obtained facial image is a feature image in a frontal or near-frontal pose, so that the obtained facial features are more accurate and the accuracy of expression recognition is improved.
(2) When acquiring the facial-feature data to be detected, only the feature points of the particular facial organ or organs that are displaced are extracted, based on the node feature images before and during the expression change; these feature points are mapped in equal proportion, according to their displacement parameters (displacement direction, displacement value, etc.), onto the corresponding facial feature points of the template face image, and the expression is identified directly by combining and matching the variations of the displacement parameters, so that more accurate expression types are obtained under the same conditions. This avoids the differences caused by training on images of different people, which would otherwise affect the result of expression recognition.
(3) Different expression types are classified on the frontal template face image, and the different classified expressions can be determined rapidly from the variation of the displacement parameters of the displaced facial feature points on the template face image, without a network training model or other complex methods. The first feature image, the second feature image, the test sample image and the expressions corresponding to the images are subsequently displayed, so that the recognition results can be read and the expressions calibrated manually.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an identification process according to the present invention.
FIG. 2 is a schematic diagram of l₁, l₂, l, α and β in an embodiment.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort fall within the protection scope of the present invention.
As shown in fig. 1, a method for emotion recognition based on a face image includes the following steps:
step S0, face correction reminding, which comprises the following steps:
s01, determining a line l from the middle eyebrow point to the nose root point in the face image in the current state 1 Line l connecting the nasal root point to the nasal tip 2 Angle alpha, line from glabellar point to nasion point l 1 The specific position relationship of the included angle β with the longitudinal normal l drawn downward from the glabellar point is shown in fig. 2.
S02, according to l 1 、l 2 L's position and included angle relationship reminds user to adjust facial position to l 1 、l 2 And l, three lines are overlapped, namely the face image is adjusted to the front face, and the adjusting method comprises the following steps:
if l 2 At l 1 If the face is on the right side, the face is judged to deflect towards the right, and the user is reminded to rotate towards the left by an angle alpha;
at this time, if 1 If the face is positioned on the right side of the l, judging that the face is in a head-up state, and reminding a user to face downwards to lower the head by an angle beta; if l 1 If the face is positioned at the left side of the l, the face is judged to be in a head lowering state, and a user is reminded of raising the head by an angle beta;
on the contrary, the method can be used for carrying out the following steps,
a plurality of 2 At l 1 If the left side of the face is not the right side, the face is judged to deflect towards the left, and a user is reminded to rotate the angle alpha towards the right;
at this time, if 1 If the face is positioned at the left side of the l, judging that the face is in a head-up state, and reminding a user of lowering the head by an angle beta; if l 1 And if the face is positioned on the right side of the l, judging that the face is in a low head state, and reminding a user of raising the head by an angle beta upwards.
Of course, in actual processing the corrected face image is not perfectly frontal; when the detected α and β are not 0° but lie within a set threshold, the face image is judged to be a frontal image and the next operation is performed. To further ensure detection accuracy, affine transformation may be performed on the acquired node image to transform a non-frontal-pose feature image into a frontal feature image; this is a well-known technique and is not described in detail here.
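The geometric check of step S0 can be sketched in a few lines of Python. This is only an illustration, not part of the patent: the source of the three landmarks, the function names and the 3° threshold are assumptions, and the up/down prompt implements the right-deflection case of the mapping described above.

```python
import math

def angle_deg(v1, v2):
    # Angle in degrees between two 2-D vectors, clamped for numerical safety.
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    denom = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / denom))))

def cross_z(v1, v2):
    # z-component of the 2-D cross product; its sign encodes left/right.
    return v1[0] * v2[1] - v1[1] * v2[0]

def correction_prompts(glabella, nasion, nose_tip, thresh_deg=3.0):
    # Points are (x, y) pixel coordinates with y growing downward.
    l1 = (nasion[0] - glabella[0], nasion[1] - glabella[1])   # glabella -> nasion
    l2 = (nose_tip[0] - nasion[0], nose_tip[1] - nasion[1])   # nasion -> nose tip
    l = (0.0, 1.0)                                            # vertical normal l, downward
    alpha = angle_deg(l1, l2)
    beta = angle_deg(l1, l)
    prompts = []
    if alpha > thresh_deg:
        # With y growing downward, a negative cross product places l2 right of l1.
        if cross_z(l1, l2) < 0:
            prompts.append(f"Face deflected right: rotate left by {alpha:.1f} deg")
        else:
            prompts.append(f"Face deflected left: rotate right by {alpha:.1f} deg")
    if beta > thresh_deg:
        # Right-deflection case of the mapping: l1 right of l (positive x) => head-up.
        action = "lower" if l1[0] > 0 else "raise"
        prompts.append(f"Head not level: {action} the head by {beta:.1f} deg")
    return prompts or ["Face is frontal within the threshold"]
```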
S1, acquiring a first feature image of the user in an expressionless state; the user then begins to narrate an event or converse, and the expression changes over time.
S2, acquiring a second feature image capturing the change of the facial features in the video image as the face transitions from the expressionless state.
Here, the change of the facial features in the video image is judged according to whether the change in the coordinates of the feature points of the facial organs exceeds a threshold: when, compared with the first feature image, the coordinate displacement of the feature points of the facial organs in a given frame is smaller than the threshold, the facial expression is judged to be unchanged; when it is greater than the threshold, the facial expression is judged to have changed. According to actual needs, the first feature image and the second feature image each comprise at least one image; in particular, the second feature images used as the basis for judging the expression may be several images selected randomly or at intervals, so that the user's changes can be identified more accurately.
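A minimal sketch of this frame-change test follows; the NumPy representation, the function name and the 2-pixel threshold are assumptions for illustration only.

```python
import numpy as np

def expression_changed(first_pts, frame_pts, thresh_px=2.0):
    # Step-S2 test: compare a frame's facial-organ feature points with those of
    # the first (expressionless) feature image; report a change when any point
    # has moved farther than the threshold.
    first = np.asarray(first_pts, dtype=float)   # shape (N, 2): N feature points
    frame = np.asarray(frame_pts, dtype=float)
    displacement = np.linalg.norm(frame - first, axis=1)
    return bool((displacement > thresh_px).any())
```

Frames for which the test returns True would then be kept as second feature images.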
S3, identifying the expression type. A conventional training model could be used for expression recognition, but it requires too many sample images to be collected and consumes considerable memory resources. Here an expression recognition mode based on a new idea is adopted:
according to the displacement parameters of the feature points of the facial organs between the first feature image and the second feature image, the displacement parameters comprising at least the displacement amount and the displacement direction of each feature point, the displacement parameters of the feature points are transferred to the feature points at the corresponding positions on the expressionless template face image and the corresponding displacement processing is performed; the expression type is then identified from the processed test sample image.
The method specifically comprises the following steps:
S31, creating an expressionless template face image, setting standard facial-organ feature points at standard positions on the template face image, constructing classification models of the information features of expressions under different emotions from the template face image, and recording the variation of the displacement parameters of each feature point of the facial organs under each expression.
S32, normalizing the first feature image and the second feature image to be detected so that they finally have the same size as the template face image. Because the obtained feature images and the template face image then have the same size, the test sample image produced after the matched feature points are transferred corresponds more accurately to the actual feature image, avoiding excessive differences in expression recognition caused by transfer errors.
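The normalization itself can be as simple as a resize to the template's dimensions; the sketch below uses OpenCV as an assumed choice (the patent names no library), with a hypothetical template size and input path.

```python
import cv2

TEMPLATE_W, TEMPLATE_H = 256, 256              # hypothetical template face size

# Step-S32 sketch: resize a feature image to the template face image's size so
# that feature-point coordinates become directly comparable across images.
feature = cv2.imread("first_feature.png")      # hypothetical input path
normalized = cv2.resize(feature, (TEMPLATE_W, TEMPLATE_H))
```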
S33, obtaining the displacement parameters of the feature points of the facial organs whose displacement values change in the extracted second feature image, and applying the corresponding displacement to the corresponding points of those feature points in the template face image.
According to the proportional position of a displaced feature point within its facial organ in the first feature image, the feature point to be displaced at the corresponding proportional position is adaptively found in the corresponding facial organ of the template face image; the feature point to be displaced on the template face image is then displaced to match, ensuring that it moves in the same displacement direction.
Since the feature images and the template face image were normalized in step S32, the differences between the respective facial organs are already small, and the feature point to be displaced is generally moved by the same displacement amount in the same displacement direction. However, since organ sizes differ somewhat between people, the displacement may be adjusted by the ratio of the organ's length in the first feature image to its length in the template face image to ensure accuracy. For example, if a feature point at the mouth corner is displaced upward by 10 mm in the second feature image relative to the first feature image, the mouth length in the first feature image is 5 cm, and the mouth length in the template face image is 6 cm, then the mouth ratio of the template face image to the first feature image is 1.2, i.e. the feature point at the corresponding mouth corner of the template face image is displaced upward by 12 mm.
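This ratio-scaled transfer is easy to make concrete. In the sketch below the data layout and function name are illustrative assumptions; the worked example reproduces the 10 mm × 1.2 = 12 mm case above.

```python
import numpy as np

def transfer_displacement(template_pts, organ_len_first, organ_len_template,
                          point_idx, displacement):
    # Step-S33 sketch: move the template feature point matched to a displaced
    # point, scaling the displacement by the organ-length ratio.
    pts = np.asarray(template_pts, dtype=float).copy()    # (N, 2) template landmarks
    ratio = organ_len_template / organ_len_first          # e.g. 6 cm / 5 cm = 1.2
    pts[point_idx] += ratio * np.asarray(displacement, dtype=float)
    return pts

# Mouth-corner point moves up 10 units; the mouth is 50 units long in the first
# feature image and 60 units on the template (y grows downward, so up is -y).
template = [[100.0, 200.0]]
moved = transfer_displacement(template, 50.0, 60.0, 0, [0.0, -10.0])
print(moved)   # [[100. 188.]] -> displaced 12 units upward
```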
S34, after all displaced feature points of the facial organs have been displaced, automatically identifying the facial expression of the processed test sample image according to the expression classification model.
During identification, since the expressions of the template face image are all constructed artificially, the displacement parameters of all feature points of each facial organ can be obtained directly and unambiguously from the constructed expressions; that is, under each expression it is clear which organs are displaced and what the displaced feature points and their displacement parameters are. Comparing the displaced feature points and displacement parameters in the test sample image with those under the constructed expression classifications yields the expression state of the test sample image at that moment.
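A sketch of this comparison appears below. The patent only specifies matching against the recorded per-class displacement parameters; the nearest-neighbour rule, the names and the sample prototypes are assumptions.

```python
import numpy as np

def classify_expression(test_disp, class_disps):
    # Step-S34 sketch: match the test sample's feature-point displacements
    # against the displacements recorded for each constructed expression class
    # and return the closest class.
    test = np.asarray(test_disp, dtype=float)            # (N, 2) displacements
    best_label, best_cost = None, float("inf")
    for label, proto in class_disps.items():             # proto: (N, 2) per class
        cost = np.linalg.norm(test - np.asarray(proto, dtype=float))
        if cost < best_cost:
            best_label, best_cost = label, cost
    return best_label

# Hypothetical displacement prototypes for two constructed expressions:
classes = {"smile": [[0, -12], [0, -12]], "neutral": [[0, 0], [0, 0]]}
print(classify_expression([[0, -11], [0, -13]], classes))   # -> smile
```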
S35, displaying the first feature image, the second feature image, the test sample image and the expressions corresponding to the images on a display panel, so that the change in the facial emotion can be observed intuitively and calibration can be performed against the images and expression states.
The present invention also provides a face image-based emotion recognition apparatus, the apparatus including:
the vision camera is used for acquiring a facial feature image of a face of a user;
the voice input equipment is used for acquiring voice data of a user;
a memory, a processor and an emotion recognition program for facial images stored on said memory and executable on said processor, said program implementing the steps of the method described above when executed by said processor;
and the display panel is used for displaying the acquired image and the image and expression state processed by the processor.
The present invention also provides a computer storage medium having stored thereon a computer program which, when executed by a processor, performs the method described above.
The present invention may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, optical storage and the like) having program code embodied therein. Computer-readable storage media, which include permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A face image-based emotion recognition method is characterized by comprising the following steps:
step S0, a face-correction reminder, which comprises the following steps:
S01, determining, in the face image in the current state, the line l₁ from the glabellar point to the nasion point, the line l₂ from the nasion point to the nose tip, the angle α between l₁ and l₂, and the angle β between l₁ and a vertical normal l drawn downward from the glabellar point;
S02, reminding the user, according to the positional and angular relations of l₁, l₂ and l, to adjust the facial position until the three lines l₁, l₂ and l coincide, i.e. the face image is adjusted to a frontal face, the adjustment method being:
if l₂ is on the right side of l₁, judging that the face is deflected to the right, and reminding the user to rotate left by the angle α;
at this time, if l₁ is on the right side of l, judging that the face is in a head-up state, and reminding the user to lower the head by the angle β; if l₁ is on the left side of l, judging that the face is in a head-down state, and reminding the user to raise the head by the angle β;
conversely,
if l₂ is on the left side of l₁, judging that the face is deflected to the left, and reminding the user to rotate right by the angle α;
at this time, if l₁ is on the left side of l, judging that the face is in a head-up state, and reminding the user to lower the head by the angle β; if l₁ is on the right side of l, judging that the face is in a head-down state, and reminding the user to raise the head by the angle β;
S1, acquiring a first feature image of the user in an expressionless state; the user then begins to narrate an event or converse, and the expression changes over time;
S2, acquiring a second feature image capturing the change of the facial features in the video image as the face transitions from the expressionless state;
S3, determining displacement parameters of the feature points of the facial organs between the first feature image and the second feature image, the displacement parameters comprising at least the displacement amount and the displacement direction of each feature point; transferring the displacement parameters of the feature points to the feature points at corresponding positions on an expressionless template face image, performing the corresponding displacement processing, and identifying the expression type from the processed test sample image.
2. The emotion recognition method based on a face image according to claim 1, wherein in step S2 the change of the facial features in the video image is judged according to whether the change in the coordinates of the feature points of the facial organs exceeds a threshold: when, compared with the first feature image, the coordinate displacement of the feature points of the facial organs in a given frame is smaller than the threshold, the facial expression is judged to be unchanged; when it is greater than the threshold, the facial expression is judged to have changed.
3. The emotion recognition method based on a face image as claimed in claim 1, wherein step S3 includes the steps of:
S31, creating an expressionless template face image, setting standard facial-organ feature points at standard positions on the template face image, and constructing classification models of the information features of expressions under different emotions from the template face image;
S32, normalizing the first feature image and the second feature image to be detected so that they finally have the same size as the template face image;
S33, obtaining the displacement parameters of the feature points of the facial organs whose displacement values change in the extracted second feature image, and applying the corresponding displacement to the corresponding points of those feature points in the template face image;
S34, after all displaced feature points of the facial organs have been displaced, automatically identifying the facial expression of the processed test sample image according to the expression classification model.
4. The emotion recognition method based on a face image according to claim 3, wherein in step S33, according to the proportional position of a displaced feature point within its facial organ in the first feature image, the feature point to be displaced at the corresponding proportional position is adaptively found in the corresponding facial organ of the template face image; and the feature point to be displaced is displaced correspondingly in the same displacement direction.
5. The emotion recognition method based on a face image according to claim 3, wherein in step S31 the variation of the displacement parameters of each feature point of each facial organ is recorded under the different expression classifications constructed from the template face image.
6. The emotion recognition method based on a face image according to claim 5, wherein in step S34 the obtained displacement parameters of the feature points of the facial organs in the test sample image are matched against the displacement parameters of the feature points of each facial organ under the different expression classifications of step S31, so as to identify the facial expression.
7. The emotion recognition method based on a face image as recited in claim 3, further comprising:
S35, displaying the first feature image, the second feature image, the test sample image and the expressions corresponding to the images on a display panel, so that the change in the facial emotion can be observed intuitively and calibration can be performed against the images and expression states.
8. An emotion recognition apparatus based on a face image, characterized in that the apparatus comprises:
the vision camera is used for acquiring a facial feature image of a face of a user;
the voice input equipment is used for acquiring voice data of a user;
a memory, a processor and an emotion recognition program for facial images stored on the memory and executable on the processor, the program implementing the steps of the method of any one of claims 1 to 7 when executed by the processor;
and the display panel is used for displaying the acquired image and the image and expression state processed by the processor.
9. A computer storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 7.
CN202210986516.2A 2022-08-17 2022-08-17 Face image-based emotion recognition method and device and computer storage medium Active CN115331292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210986516.2A CN115331292B (en) 2022-08-17 2022-08-17 Face image-based emotion recognition method and device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210986516.2A CN115331292B (en) 2022-08-17 2022-08-17 Face image-based emotion recognition method and device and computer storage medium

Publications (2)

Publication Number Publication Date
CN115331292A CN115331292A (en) 2022-11-11
CN115331292B true CN115331292B (en) 2023-04-14

Family

ID=83923396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210986516.2A Active CN115331292B (en) 2022-08-17 2022-08-17 Face image-based emotion recognition method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN115331292B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015106252A (en) * 2013-11-29 2015-06-08 シャープ株式会社 Face direction detection device and three-dimensional measurement device
CN103996032A (en) * 2014-05-27 2014-08-20 厦门瑞为信息技术有限公司 Face angle determining method based on cranium image coincidence theory
CN107341784A (en) * 2016-04-29 2017-11-10 掌赢信息科技(上海)有限公司 A kind of expression moving method and electronic equipment
CN109993021A (en) * 2017-12-29 2019-07-09 浙江宇视科技有限公司 The positive face detecting method of face, device and electronic equipment
CN112836680A (en) * 2021-03-03 2021-05-25 郑州航空工业管理学院 Visual sense-based facial expression recognition method

Also Published As

Publication number Publication date
CN115331292A (en) 2022-11-11

Similar Documents

Publication Publication Date Title
CN110678875B (en) System and method for guiding a user to take a self-photograph
CN105516280B (en) A kind of Multimodal Learning process state information packed record method
WO2021077382A1 (en) Method and apparatus for determining learning state, and intelligent robot
CN106897658A (en) The discrimination method and device of face live body
CN109034099B (en) Expression recognition method and device
CN109063587B (en) Data processing method, storage medium and electronic device
CN109801105A (en) Service methods of marking, device, equipment and storage medium based on artificial intelligence
CN111796681A (en) Self-adaptive sight estimation method and medium based on differential convolution in man-machine interaction
CN111080624B (en) Sperm movement state classification method, device, medium and electronic equipment
CN114372701A (en) Method and device for evaluating customer service quality, storage medium and equipment
CN109697421A (en) Evaluation method, device, computer equipment and storage medium based on micro- expression
Cowie et al. Recognition of emotional states in natural human-computer interaction
EP3035233A1 (en) Assessment method for facial expressions
CN109858379A (en) Smile's sincerity degree detection method, device, storage medium and electronic equipment
CN115331292B (en) Face image-based emotion recognition method and device and computer storage medium
CN110598607B (en) Non-contact and contact cooperative real-time emotion intelligent monitoring system
KR102247481B1 (en) Device and method for generating job image having face to which age transformation is applied
CN111723688A (en) Human body action recognition result evaluation method and device and electronic equipment
US12002291B2 (en) Method and system for confidence level detection from eye features
RU2768797C1 (en) Method and system for determining synthetically modified face images on video
KR20230007250A (en) UBT system using face contour recognition AI and method thereof
KR20230013236A (en) Online Test System using face contour recognition AI to prevent the cheating behaviour by using speech recognition and method thereof
CN115100560A (en) Method, device and equipment for monitoring bad state of user and computer storage medium
KR20230007249A (en) UBT system using face contour recognition AI to prevent the cheating behaviour and method thereof
CN114998440A (en) Multi-mode-based evaluation method, device, medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant