CN110555401B - Self-adaptive emotion expression system and method based on expression recognition - Google Patents

Self-adaptive emotion expression system and method based on expression recognition

Info

Publication number
CN110555401B
CN110555401B (application CN201910790582.0A)
Authority
CN
China
Prior art keywords
module
expression
user
behavior
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910790582.0A
Other languages
Chinese (zh)
Other versions
CN110555401A (en)
Inventor
孙凌云
周子洪
周志斌
张于扬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU
Priority to CN201910790582.0A
Publication of CN110555401A
Application granted
Publication of CN110555401B
Legal status: Active
Anticipated expiration

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 - Manipulators not otherwise provided for
    • B25J11/0005 - Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 - Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an adaptive emotion expression system and method based on expression recognition. The system comprises an image acquisition module, an image processing module, an expression recognition module, an emotion-behavior mapping module, a motion expression module, a light expression module and a storage module. In the method, a face image is collected by the image acquisition module; the image processing module performs face detection, image cropping and other preprocessing; an improved convolutional neural network recognizes the facial expression, and the user's emotional state is derived from the recognition result. The emotion-behavior mapping module maps the user's emotion to a high-level behavior carrying emotional information, and the motion expression module and the light expression module parse and express the corresponding high-level behavior as the response given to the user. The storage module records the user's emotional state at that moment and feeds it back to the emotion-behavior mapping module. The method and system can improve the social and emotional interaction capability of service robots.

Description

Self-adaptive emotion expression system and method based on expression recognition
Technical Field
The invention belongs to the field of artificial intelligence and human-computer interaction, and particularly relates to an expression recognition-based adaptive emotion expression system and method.
Background
Against the background of the rapid development of the service and retail industries, emotion robots with emotion recognition, interaction and companionship functions have broad commercial value in fields such as smart homes and public services.
Recognizing and analyzing user emotion is an essential capability for an emotion robot; however, the emotion recognition capability of existing robots remains very limited.
For example, Chinese patent application No. 201310694112.7 discloses a face detection and emotion recognition system and method for a robot, which includes: a facial expression library acquisition module that collects a large number of facial expression color image frames with a video capture device; an original expression library construction module that extracts expression features from the facial expression library to form an original expression feature library; a feature library reconstruction module that reorganizes the original expression feature library into a structured hash table using a distance-hashing algorithm; an on-site expression feature extraction module; and an expression recognition module that recognizes facial expressions with a k-nearest-neighbor classifier.
Chinese patent application No. 201310413648.7 discloses a human-computer emotion interaction system and method based on expression recognition, which includes an image acquisition module, an image processing module, an expression recognition module, a human-computer interaction module, and a statistics and storage module.
The expression recognition algorithms in these two systems do not use neural network training; instead, model training and prediction rely on hand-crafted facial features.
In addition, a robot's capacity for emotional expression directly affects the user's experience of emotional interaction. How to express a specific emotional state through the robot's various external forms, and how to adapt that expression according to user feedback, remains a difficulty in current robot emotion expression. At present, the emotion expression of service robots across industries is mostly limited to speech, intonation and facial expressions simulated on a display screen, and their expressive capability is still relatively lacking.
Disclosure of Invention
The invention provides an adaptive emotion expression system and method based on expression recognition, which analyze a person's emotional state through facial expression recognition in natural environments and respond to the user's emotional state through the system's motion expression module and light expression module.
An adaptive emotion expression method based on expression recognition comprises the following steps:
(1) acquiring a face image of the user in real time, preprocessing the acquired image, and extracting face key points and histogram of oriented gradients (HOG) features to obtain a preprocessed face image and partial face feature information;
(2) inputting the preprocessed face image and the partial face feature information into an improved convolutional neural network to obtain the user's facial expression recognition result, and deriving the user's emotional state from that result;
in the improved convolutional neural network, the face key points and HOG features are added as a second, separate input to the network: these two kinds of information are fed directly into a fully connected layer, combined with the face feature information extracted by the convolutional layers, and passed together to a softmax layer that computes the prediction probability of each facial expression (an illustrative code sketch of this two-input arrangement is given after the step notes below);
(3) constructing a mapping model between emotion and behavior, inputting the obtained user emotional state into the mapping model, and mapping it to a high-level behavior carrying emotional information, characterized by a behavior type and a behavior intensity;
(4) transmitting the mapped high-level behavior to an interaction module of the intelligent system; the interaction module expresses the corresponding emotional action by parsing the action information contained in the high-level behavior, thereby delivering the intelligent system's response to the user.
In step (1), the preprocessing comprises grayscale binarization, face detection, image cropping, affine transformation and histogram equalization. The partial face feature information comprises facial organ features, face contour features and HOG features.
In step (2), the user's emotional states comprise sadness, anger, surprise, neutrality, fear and happiness.
In step (3), the behavior types comprise comfort, relaxation, curiosity, joy, encouragement and excitement, corresponding respectively to the six user emotional states of sadness, anger, surprise, neutrality, fear and happiness; each behavior type is paired with a value of the behavior intensity.
In step (4), the interaction module comprises a motion expression module and a light expression module. For the motion expression module, the behavior type and behavior intensity correspond to the motion form and motion speed respectively; for the light expression module, they correspond to the light color and flicker frequency respectively.
In step (4), after the response is given to the user, the user's face image is acquired again, the user's emotional state is recognized, whether it has changed is analyzed, and the feedback data and analysis result are recorded and stored; the analysis result is fed back as an input to the mapping model and, combined with the emotional state fed back by the user in the next frame, determines the behavior intensity of the next interaction.
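For illustration only, the following is a minimal sketch of the two-input network described in step (2), written with the Keras functional API. The layer sizes, the 48x48 grayscale input resolution and AUX_DIM (the length of the key point plus HOG vector) are assumptions made for this sketch; the patent does not specify them.

```python
# Illustrative sketch only: a two-input expression-recognition network in the spirit of step (2).
# Layer sizes, the 48x48 grayscale input and AUX_DIM are assumptions, not values from the patent.
from tensorflow.keras import layers, models

NUM_EXPRESSIONS = 6        # sadness, anger, surprise, neutral, fear, happiness
AUX_DIM = 68 * 2 + 900     # assumed length of the face key point + HOG feature vector

# Branch 1: the preprocessed grayscale face image passes through convolutional layers.
image_in = layers.Input(shape=(48, 48, 1), name="face_image")
x = layers.Conv2D(32, 3, activation="relu", padding="same")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Branch 2: face key points and HOG features enter directly at the fully connected stage.
aux_in = layers.Input(shape=(AUX_DIM,), name="keypoints_and_hog")

# Merge both feature sets and classify with a softmax over the six expressions.
merged = layers.concatenate([x, aux_in])
hidden = layers.Dense(128, activation="relu")(merged)
probs = layers.Dense(NUM_EXPRESSIONS, activation="softmax", name="expression_probs")(hidden)

model = models.Model(inputs=[image_in, aux_in], outputs=probs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```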
The invention also provides a self-adaptive emotion expression system based on expression recognition, which comprises the following components:
the image acquisition module is used for acquiring a face image in real time and transmitting the face image to the image processing module;
the image processing module is used for processing the face image to obtain a preprocessed face image and partial face characteristic information;
the expression recognition module is used for recognizing the expression of the face image from the preprocessed face image and the partial face feature information, using an expression recognition method based on an improved convolutional neural network, and analyzing the recognition result to obtain the user's emotional state;
the emotion-behavior mapping module is used for mapping the emotional state into the high-level behaviors containing certain emotional information through a mapping model, and the high-level behaviors comprise behavior types and behavior intensities;
the motion expression module is used for parsing the high-level behavior carrying certain emotional information into motion instructions recognizable by the intelligent system, the instructions comprising two motion control parameters, behavior type and behavior intensity; by executing these instructions, different motion forms and motion speeds are expressed as the intelligent system's response to the user;
the light expression module is used for parsing the high-level behavior carrying certain emotional information into light instructions recognizable by the intelligent system, the instructions likewise comprising the behavior type and behavior intensity;
and the storage module is used for recording the emotional state of the user, feeding the emotional state back to the mapping module, and determining the behavior intensity of the next interactive action by combining the emotional state fed back by the user in the next frame.
The system of the invention collects a face image through the image acquisition module; the image processing module performs face detection, image cropping and other processing on the image; and the expression recognition method based on the improved convolutional neural network recognizes the facial expression and derives the emotional state from it. The emotion-behavior mapping module maps the user's emotion to a high-level behavior carrying certain emotional information, which is parsed and expressed by the motion expression module and the light expression module as the response given to the user; the storage module records the user's emotional state at that moment and feeds it back to the emotion-behavior mapping module. The method and system can improve the social and emotional interaction capability of service robots.
The image processing module receives and processes, in real time, the face image captured by the image acquisition module; the processing comprises grayscale binarization, face detection, image cropping, affine transformation, histogram equalization, face key point extraction and HOG feature extraction, so that the same face photographed under different imaging conditions (differences in illumination intensity, direction, angle, distance, posture and the like) yields consistent results.
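A minimal sketch of such a preprocessing pipeline, using OpenCV and scikit-image, is given below. The Haar-cascade detector, the 48x48 crop size and the HOG parameters are illustrative assumptions; face key point extraction (e.g. with a landmark predictor) and affine alignment are omitted for brevity.

```python
# Illustrative sketch only: a minimal preprocessing pipeline with OpenCV and scikit-image.
# The Haar-cascade detector, the 48x48 crop size and the HOG parameters are assumptions;
# key point extraction and affine alignment are omitted for brevity.
import cv2
import numpy as np
from skimage.feature import hog

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(frame_bgr):
    """Return (normalized 48x48 face crop, HOG feature vector), or None if no face is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]                                  # use the first detected face
    face = cv2.resize(gray[y:y + h, x:x + w], (48, 48))    # crop and rescale
    face = cv2.equalizeHist(face)                          # histogram equalization
    hog_vec = hog(face, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))                  # HOG feature vector
    return face.astype(np.float32) / 255.0, hog_vec
```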
The expression recognition module inputs the preprocessed face image and the partial face feature information into a pre-trained convolutional neural network: the network's convolutional layers extract feature information from the preprocessed face, this is combined with the partial face feature information obtained from face key point extraction and HOG feature extraction, and the combination is used to recognize the expression of the face image; the recognition result is then analyzed to obtain the user's current emotional state.
The emotion-behavior mapping module is used for mapping the emotional state into the high-level behaviors containing certain emotional information through the mapping model, and the high-level behaviors comprise behavior types and behavior intensities.
The motion expression module expresses the emotion of the robot system mainly through the motion of its mechanical structure, in coordination with the light expression module.
The light expression module expresses the emotion of the robot system mainly through the color and flicker frequency of its LEDs.
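As an illustration of how such a light instruction might be rendered, the sketch below maps a behavior type to an LED color and a behavior intensity to a blink frequency. The color table and the frequency scaling are invented for the example and are not specified by the patent.

```python
# Illustrative sketch only: rendering a light instruction as an LED color and blink rate.
# The color table and the intensity-to-frequency scaling are invented for this example.
BEHAVIOR_COLORS = {
    1: (0, 120, 255),    # comfort        -> soft blue
    2: (0, 255, 120),    # relaxation     -> green
    3: (255, 200, 0),    # curiosity      -> amber
    4: (255, 120, 200),  # joy            -> pink
    5: (255, 80, 0),     # encouragement  -> orange
    6: (255, 0, 0),      # excitement     -> red
}

def light_instruction(behavior_type, behavior_intensity, max_intensity=10):
    """Map (behavior type, behavior intensity) to (RGB color, blink frequency in Hz)."""
    rgb = BEHAVIOR_COLORS[behavior_type]
    # Scale an intensity in [-max_intensity, max_intensity] to roughly 0.5-5 Hz.
    blink_hz = 0.5 + 4.5 * (behavior_intensity + max_intensity) / (2 * max_intensity)
    return rgb, blink_hz
```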
After the response is given to the user, the user's face image is acquired again, the user's emotional state is recognized, and whether it has changed is analyzed; the storage module records and stores the feedback data and the analysis result. The analysis result is fed back as an input to the mapping model and, combined with the emotional state fed back by the user in the next frame, determines the behavior intensity of the next interaction.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention improves the convolutional-neural-network-based expression recognition algorithm: the face key points and HOG features are added as a second, separate input to the network, fed directly into a fully connected layer, and combined with the feature information extracted by the convolutional layers before being passed to a softmax layer that computes the prediction probabilities of the six expressions. In this way, the recognition accuracy of the convolutional neural network model for facial expressions in natural environments (against complex backgrounds) can be effectively improved.
2. The invention provides a mapping model which determines, from the user's emotional state, the behavior type and behavior intensity of the emotion-bearing high-level behavior that the intelligent system feeds back to the user.
3. The response of the intelligent emotion expression system to the user's emotion is not fixed and invariable but is adaptive. After the response is given to the user, the user's face image is acquired again, the user's emotional state is recognized, whether it has changed is analyzed, and the feedback data and analysis result are recorded and stored; the user-feedback emotional state collected by the storage module is the other input of the mapping model. This iterative approach provides a better human-computer interaction experience.
Drawings
FIG. 1 is a schematic diagram of the overall structure of the system of the present invention;
FIG. 2 is a schematic workflow diagram of an embodiment of the present invention;
FIG. 3 is a diagram of the neural network structure according to an embodiment of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
The system can be installed in emotion robots in fields such as smart homes and public services. As shown in FIG. 1, the system comprises an image acquisition module, an image processing module, an expression recognition module, an emotion-behavior mapping module, a motion expression module, a light expression module and a storage module, specifically:
the image acquisition module is used for acquiring a face image in real time and transmitting the face image to the image processing module;
the image processing module is used for processing the face image to obtain a preprocessed face image and partial face characteristic information;
the expression recognition module is used for recognizing the expression of the face image from the preprocessed grayscale face image and the partial face feature information (face key points and HOG features), using an expression recognition method based on an improved convolutional neural network, and analyzing the recognition result to obtain the user's emotional state;
and the emotion-behavior mapping module is used for mapping the emotional state into the high-level behaviors containing certain emotional information through a mapping model, and the high-level behaviors comprise behavior types and behavior intensities.
The motion expression module is used for parsing the high-level behavior carrying certain emotional information into motion instructions recognizable by the intelligent system, the instructions comprising two motion control parameters, behavior type and behavior intensity; by executing these instructions, different motion forms and motion speeds are expressed as the intelligent system's response to the user;
the light expression module is used for parsing the high-level behavior carrying certain emotional information into light instructions recognizable by the intelligent system, the instructions likewise comprising the behavior type and behavior intensity;
and the storage module is used for recording the emotional state of the user, feeding the emotional state back to the mapping module, and determining the behavior intensity of the next interactive action by combining the emotional state fed back by the user in the next frame.
As shown in FIG. 2, the overall workflow of the system of the invention is as follows:
Firstly, the image acquisition module acquires the user's face image in real time and transmits the image information to the image processing module.
Secondly, the image processing module processes the user's face image using image processing techniques, namely grayscale binarization, face detection, image cropping, affine transformation, histogram equalization, face key point extraction and HOG feature extraction, to obtain a preprocessed grayscale face image and partial face feature information, i.e., the face key points and the HOG feature information.
Thirdly, as shown in FIG. 3, the information passed from the image processing module to the input layer of the expression recognition module includes, in addition to the grayscale face image, two kinds of feature information added as separate inputs: the face key points and the HOG features. These two kinds of features are fed directly into a fully connected layer of the pre-trained convolutional neural network and combined with the image features extracted by the convolutional layers to jointly compute the prediction probabilities of the six expressions, yielding the user's facial expression recognition result, from which the user's emotional state is derived.
Fourthly, the user's emotional state is input into the mapping model, which maps it to a high-level behavior carrying certain emotional information; according to the user's emotion, the model determines the emotion-bearing high-level behavior P that the intelligent system feeds back to the user. P is determined by a behavior type S and a behavior intensity T, and is defined as:
P = αS + βT, with α >> β,
where α is the behavior type coefficient, β is the behavior intensity coefficient, and α is much larger than β.
Fifthly, the motion expression module and the light expression module parse the high-level behavior information from the mapping model and express the corresponding emotional actions according to the parsed instructions, as the intelligent system's response to the user.
Sixthly, after the response is given to the user, the user's face image is acquired again, the user's emotional state is recognized, and whether it has changed is analyzed.
Seventhly, the storage module records and stores the feedback data and the analysis result; at the same time, the analysis result is fed back to the mapping model and, combined with the next frame's preprocessed face image and partial face feature information, determines the model's next mapping result, i.e., the next high-level behavior carrying certain emotional information. An illustrative end-to-end sketch of this loop is given below.
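For orientation only, the steps above can be summarized as a single loop. Every name in the sketch (capture, preprocess, recognize, mapping_model, motion, light, storage) is a hypothetical stand-in for the corresponding module and is not an interface defined by the patent.

```python
# Illustrative sketch only: the end-to-end loop of the workflow in FIG. 2.  The callables
# passed in (capture, preprocess, recognize, mapping_model, motion, light) and the storage
# object are hypothetical stand-ins for the modules described above, not APIs from the patent.
def interaction_loop(capture, preprocess, recognize, mapping_model, motion, light, storage):
    last_feedback = None
    while True:
        face, features = preprocess(capture())             # steps 1-2: acquire and preprocess
        emotion = recognize(face, features)                 # step 3: expression recognition
        behavior = mapping_model(emotion, last_feedback)    # step 4: high-level behavior P
        motion(behavior)                                    # step 5: motion expression
        light(behavior)                                     # step 5: light expression
        face2, features2 = preprocess(capture())            # step 6: observe the user's response
        last_feedback = recognize(face2, features2)
        storage.append((emotion, behavior, last_feedback))  # step 7: record and feed back
```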
The mechanism of the mapping model mentioned above is explained in further detail below.
First, the mapping model analyzes the user's emotional state, obtains the behavior type S from a preset "emotion-behavior" mapping table, and encodes S numerically. A typical mapping is shown in Table 1.
TABLE 1
User emotional state    Behavior type S
Sadness                 Comfort
Anger                   Relaxation
Surprise                Curiosity
Neutral                 Joy
Fear                    Encouragement
Happiness               Excitement
When the behavior type S is digitally encoded, the numerical value assigned to S is chosen to be meaningful, taking into account how different behavior types relate to behavior intensity. A typical behavior type encoding is shown in Table 2, and a combined lookup sketch follows the table.
TABLE 2
Behavior type S    Digital code
Comfort            S = 1
Relaxation         S = 2
Curiosity          S = 3
Joy                S = 4
Encouragement      S = 5
Excitement         S = 6
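Taken together, Tables 1 and 2 amount to a lookup from the recognized emotion to a numerically coded behavior type; a minimal sketch of that lookup is shown below (the emotion labels are the English equivalents used in this text).

```python
# Lookup corresponding to Tables 1 and 2: recognized emotion -> (behavior type, code S).
EMOTION_TO_BEHAVIOR = {
    "sadness":   ("comfort",       1),
    "anger":     ("relaxation",    2),
    "surprise":  ("curiosity",     3),
    "neutral":   ("joy",           4),
    "fear":      ("encouragement", 5),
    "happiness": ("excitement",    6),
}

def behavior_type_code(emotion):
    """Return the digital code S of the behavior type mapped to the user's emotion."""
    return EMOTION_TO_BEHAVIOR[emotion][1]
```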
In addition, the user-feedback emotional state collected by the storage module is the other input of the mapping model. The mapping model determines the value of the behavior intensity T from the emotional state fed back by the user.
The behavior intensity T is defined here as:
T = N,          when T + S + R > N
T = T + S + R,  when -N < T + S + R < N
T = -N,         when T + S + R < -N
wherein, R is a feedback coefficient, and N is a behavior intensity upper limit coefficient.
The emotional state fed back by the user influences the behavior intensity T through a feedback coefficient R. The definition rules are shown in table 3:
TABLE 3
User-feedback emotional state    Feedback coefficient R
Anger                            -6
Fear                             -4
Sadness                          -2
Happiness                         2
Surprise                          4
Neutral                           6
In this formula, the behavior intensity upper-limit coefficient N sets the upper limit of interaction intensity (for example, the maximum action frequency or the maximum light flicker frequency) during the intelligent system's interaction with the user; the feedback coefficient R represents the user emotional state collected by the intelligent system in the previous round and its influence on the current decision (reflected in the current behavior intensity); and the behavior type S represents the influence of the behavior type adopted by the intelligent system on the behavior intensity.
The formula has three cases. When T + S + R > N, i.e. the behavior intensity has reached the specified upper limit N after repeated feedback, T is set to N. When -N < T + S + R < N, i.e. the behavior intensity stays within the limits during feedback, T is set to T + S + R, so the new intensity is determined by the previous intensity, the current behavior type and the feedback coefficient. When T + S + R < -N, i.e. the behavior intensity has reached the specified lower limit -N after repeated feedback, T is set to -N.
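A direct transcription of this update rule, using the feedback coefficients of Table 3, might look as follows; the value N = 10 is an illustrative assumption, as the patent does not fix the upper-limit coefficient.

```python
# Behavior intensity update described above: the candidate value T + S + R is clamped
# to [-N, N].  Feedback coefficients follow Table 3; N = 10 is an illustrative choice.
FEEDBACK_R = {"anger": -6, "fear": -4, "sadness": -2,
              "happiness": 2, "surprise": 4, "neutral": 6}

def update_intensity(prev_T, S, feedback_emotion, N=10):
    """Return the next behavior intensity from the previous intensity, behavior type and feedback."""
    R = FEEDBACK_R[feedback_emotion]
    return max(-N, min(N, prev_T + S + R))
```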
Further, the mapping model transmits the high-level behavior value P to the motion expression module and the light expression module, which parse P with the following formulas to recover the behavior type S and the behavior intensity T:
S = ⌊P / α⌋
T = ⌊(P - αS) / β⌋
For the above formulas, the behavior type S equals P divided by α, rounded down, and the behavior intensity T equals (P - αS) divided by β, rounded down.
After the values of the behavior type S and the behavior intensity T are solved, the motion expression module and the light expression module express corresponding actions according to the values of S and T.
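Because α is much larger than β, P can be packed and unpacked as sketched below. The values α = 100 and β = 1 are illustrative assumptions; note that the floor-based recovery stated above reproduces S and T exactly only while 0 ≤ βT < α, i.e. for non-negative intensities at these values.

```python
# Packing and unpacking the high-level behavior P = alpha*S + beta*T as described above.
# alpha = 100 and beta = 1 are illustrative; the floor-based decoding recovers S and T
# exactly only while 0 <= beta*T < alpha (i.e. non-negative intensities at these values).
import math

ALPHA, BETA = 100, 1

def encode_behavior(S, T):
    """Combine behavior type S and behavior intensity T into the high-level behavior P."""
    return ALPHA * S + BETA * T

def decode_behavior(P):
    """Recover S = floor(P / alpha) and T = floor((P - alpha*S) / beta)."""
    S = math.floor(P / ALPHA)
    T = math.floor((P - ALPHA * S) / BETA)
    return S, T

# Example: encode_behavior(5, 7) == 507, and decode_behavior(507) == (5, 7).
```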
The interactive actions of the adaptive emotion expression system based on expression recognition are defined in Table 4, where each interactive action is determined by a behavior type and a behavior intensity.
TABLE 4
(Table 4 is provided as drawings in the original publication; its contents are not reproduced in this text.)
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (1)

1. An adaptive emotion expression method based on expression recognition is characterized by comprising the following steps:
(1) acquiring a face image of the user in real time, preprocessing the acquired image, and extracting face key points and histogram of oriented gradients (HOG) features to obtain a preprocessed face image and partial face feature information;
the preprocessing comprises grayscale binarization, face detection, image cropping, affine transformation and histogram equalization; the partial face feature information comprises facial organ features, face contour features and HOG features;
(2) inputting the preprocessed face image and the partial face feature information into an improved convolutional neural network to obtain the user's facial expression recognition result, and deriving the user's emotional state from that result; the user's emotional state comprises sadness, anger, surprise, neutrality, fear and happiness;
in the improved convolutional neural network, the face key points and HOG features are added as a second, separate input to the network: these two kinds of information are fed directly into a fully connected layer, combined with the face feature information extracted by the convolutional layers, and passed together to a softmax layer that computes the prediction probability of each facial expression;
(3) constructing a mapping model between emotion and behavior, inputting the obtained user emotional state into the mapping model, and mapping it to a high-level behavior carrying emotional information, characterized by a behavior type and a behavior intensity;
the behavior types comprise six types: comfort, relaxation, curiosity, joy, encouragement and excitement, corresponding respectively to the six user emotional states of sadness, anger, surprise, neutrality, fear and happiness; each behavior type carries its own behavior intensity;
(4) transmitting the mapped high-level behavior to an interaction module of an intelligent system; the intelligent system expresses the corresponding emotional action by parsing the action information contained in the high-level behavior and gives its response to the user;
the interaction module comprises a motion expression module and a light expression module; for the motion expression module, the behavior type and behavior intensity correspond to the motion form and motion speed respectively, and for the light expression module, to the light color and flicker frequency respectively;
after the response is given to the user, the method further comprises: acquiring the user's face image again, recognizing the user's emotional state, analyzing whether it has changed, and recording and storing the feedback data and the analysis result; the analysis result is fed back as an input to the mapping model and, combined with the emotional state fed back by the user in the next frame, determines the behavior intensity of the next interaction.
CN201910790582.0A 2019-08-26 2019-08-26 Self-adaptive emotion expression system and method based on expression recognition Active CN110555401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910790582.0A CN110555401B (en) 2019-08-26 2019-08-26 Self-adaptive emotion expression system and method based on expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910790582.0A CN110555401B (en) 2019-08-26 2019-08-26 Self-adaptive emotion expression system and method based on expression recognition

Publications (2)

Publication Number Publication Date
CN110555401A (en) 2019-12-10
CN110555401B (en) 2022-05-03

Family

ID=68738359

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910790582.0A Active CN110555401B (en) 2019-08-26 2019-08-26 Self-adaptive emotion expression system and method based on expression recognition

Country Status (1)

Country Link
CN (1) CN110555401B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428666A (en) * 2020-03-31 2020-07-17 齐鲁工业大学 Intelligent family accompanying robot system and method based on rapid face detection

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN107491726A (en) * 2017-07-04 2017-12-19 重庆邮电大学 A kind of real-time expression recognition method based on multi-channel parallel convolutional neural networks
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108960114A (en) * 2018-06-27 2018-12-07 腾讯科技(深圳)有限公司 Human body recognition method and device, computer readable storage medium and electronic equipment
CN109684911A (en) * 2018-10-30 2019-04-26 百度在线网络技术(北京)有限公司 Expression recognition method, device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679203B (en) * 2013-12-18 2015-06-17 江苏久祥汽车电器集团有限公司 Robot system and method for detecting human face and recognizing emotion
CN105050247B (en) * 2015-06-24 2017-06-23 河北工业大学 Light intelligent regulating system and its method based on expression Model Identification
CN107273845B (en) * 2017-06-12 2020-10-02 大连海事大学 Facial expression recognition method based on confidence region and multi-feature weighted fusion
KR102570279B1 (en) * 2018-01-05 2023-08-24 삼성전자주식회사 Learning method of emotion recognition, method and apparatus of recognizing emotion
CN109344693B (en) * 2018-08-13 2021-10-26 华南理工大学 Deep learning-based face multi-region fusion expression recognition method
CN109683709A (en) * 2018-12-17 2019-04-26 苏州思必驰信息科技有限公司 Man-machine interaction method and system based on Emotion identification

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN107491726A (en) * 2017-07-04 2017-12-19 重庆邮电大学 A kind of real-time expression recognition method based on multi-channel parallel convolutional neural networks
CN108108677A (en) * 2017-12-12 2018-06-01 重庆邮电大学 One kind is based on improved CNN facial expression recognizing methods
CN108960114A (en) * 2018-06-27 2018-12-07 腾讯科技(深圳)有限公司 Human body recognition method and device, computer readable storage medium and electronic equipment
CN109684911A (en) * 2018-10-30 2019-04-26 百度在线网络技术(北京)有限公司 Expression recognition method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110555401A (en) 2019-12-10


Legal Events

Code    Description
PB01    Publication
SE01    Entry into force of request for substantive examination
GR01    Patent grant