CN109359599A - Facial expression recognition method based on jointly learning identity and emotion information - Google Patents

Facial expression recognition method based on jointly learning identity and emotion information

Info

Publication number
CN109359599A
Authority
CN
China
Prior art keywords
facial expression
network
training
information
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811220643.1A
Other languages
Chinese (zh)
Inventor
李明 (Li Ming)
邹小兵 (Zou Xiaobing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Duke Kunshan University
Third Affiliated Hospital Sun Yat Sen University
Original Assignee
Duke Kunshan University
Third Affiliated Hospital Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Duke Kunshan University, Third Affiliated Hospital Sun Yat Sen University filed Critical Duke Kunshan University
Priority to CN201811220643.1A priority Critical patent/CN109359599A/en
Publication of CN109359599A publication Critical patent/CN109359599A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a facial expression recognition method based on jointly learning identity information and emotion information, involving a face recognition image database and a facial expression image database. The face recognition image database is used to independently train a facial identity branch network; after training, the final fully connected layer is removed so that the neural network extracts an identity feature vector from an input image. The facial expression image database is used to train a facial expression branch network; after its fully connected layer is removed, the neural network extracts an emotion feature vector from an input image. The identity feature vector and the emotion feature vector are concatenated to obtain a concatenated facial feature representation. This concatenated facial expression feature, which fuses identity and expression information, is fed to a fully connected layer, and subsequent training uses only the facial expression image database to jointly learn and optimize the merged network. The invention improves the robustness of facial expression recognition to differences between individual subjects.

Description

Facial expression recognition method based on jointly learning identity and emotion information
Technical field
The present invention relates to the fields of computer vision and affective computing, and more particularly to a facial expression recognition method based on jointly learning identity information and emotion information.
Background technique
With the rapid development of computer technology, artificial intelligence, and related disciplines, the degree of automation throughout society is continuously improving, and the demand for human-computer interaction that resembles person-to-person communication is growing. Facial expression is the most direct and effective mode of emotion recognition, with applications in many areas of human-computer interaction. If computers and robots could understand and display emotion the way humans do, the relationship between people and computers would fundamentally change, enabling computers to serve humans better. Facial expression recognition is the foundation of affective understanding, a prerequisite for computers to understand human emotion, and an effective way for people to explore and understand intelligence.
Facial expression recognition is the task of identifying the emotional attribute expressed by a person's face (such as neutral, sad, contempt, happy, surprised, angry, fearful, or disgusted) from a static image or a video sequence. Although much recent work has focused on facial expression recognition from video or image sequences, expression recognition from static images remains a challenging problem, and static images are the object of study of the present invention.
Surveying the history of facial expression recognition research, it has followed the development of face recognition, and methods that perform well in face recognition can often be applied to expression recognition as well. In early face recognition research, much of the work was based on hand-designed features. These methods generally comprise two separate stages: front-end feature extraction and back-end classifier training. In the feature extraction stage, many useful features were designed using expert prior knowledge, such as local binary patterns, Gabor wavelet features, scale-invariant features, and Gaussian faces. On this basis, supervised classifiers, such as support vector machines, feedforward neural networks, and extreme learning machines, were used for subsequent modeling.
In recent years, with the development of deep learning, methods based on deep neural networks have achieved excellent performance on classical face-related tasks. In face recognition, deep convolutional neural networks (CNNs) outperform traditional hand-designed features, and CNN models have also been widely used in facial expression recognition. However, in facial expression recognition, the lack of large-scale labeled training data, inconsistent and unreliable emotion labels, and inter-subject variability all limit the performance of CNNs, leaving room for further improvement.
Facial expression recognition faces two main challenges. First, the differences between some facial expressions can be inherently subtle, making accurate classification difficult in some cases. Second, because of differences between individual subjects, such as face shape, different subjects may express the same facial expression in different ways. That is, even for the same expression attribute, the expressed appearance can differ considerably between individuals.
Summary of the invention
In view of the above technical problems, the purpose of the present invention is to provide a facial expression recognition method based on jointly learning identity information and emotion information. The present invention uses the facial identity information in additional face recognition data to assist facial expression recognition, thereby improving the robustness of the method to differences between individual subjects and ultimately improving the performance of the facial expression recognition system. More specifically, facial expression databases usually contain very little data and also suffer from unreliable labels and differences in how individual subjects express emotion. The present invention learns facial identity information from existing massive face recognition databases and jointly optimizes it together with emotion information, thereby breaking through the performance bottleneck caused by scarce data and enhancing the robustness of the system to individual differences. During model training, the present invention can effectively use additional face recognition training data to jointly learn identity and emotion information.
To achieve the above object, the present invention is realized according to the following technical scheme:
A facial expression recognition method based on jointly learning identity information and emotion information, characterized by comprising the following steps:
jointly training and optimizing a neural network using a face recognition image database and a facial expression image database;
the face recognition image database is used to independently train and optimize a facial identity branch network; after training, the final face identity output layer is removed, and the network is used only to extract the identity feature vector corresponding to an input image;
the facial expression image database is used to independently train and optimize a facial expression branch network; after training, the final facial expression output layer is removed, and the network is used only to extract the emotion feature vector corresponding to an input image;
the identity feature vector and the emotion feature vector are concatenated to obtain a concatenated facial feature representation; finally, the concatenated facial expression feature, which fuses identity and expression information, is fed to a subsequent facial expression output layer;
in the subsequent network training process, only the facial expression image database is used to jointly learn and optimize the merged network, and the facial expression recognition result is finally predicted.
In the above technical scheme, because the two branches differ in network structure and training data, the identity feature vector and the emotion feature vector are each standardized by batch normalization (BN), and the two features are then concatenated to form the concatenated facial expression feature.
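As an illustrative sketch, not part of the patent's disclosure, the normalize-then-concatenate fusion described above can be written as follows. The 160- and 64-dimensional sizes follow the embodiments; the batch-statistics-only normalization (no learnable scale and shift) and all variable names are simplifying assumptions.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Standardize each feature dimension over the batch (batch statistics
    only; the learnable scale/shift of full BN is omitted for brevity)."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def fuse_features(identity_feats, emotion_feats):
    """Standardize each branch's features, then concatenate them."""
    return np.concatenate(
        [batch_norm(identity_feats), batch_norm(emotion_feats)], axis=1)

# A batch of 8 images: 160-dim identity vectors and 64-dim emotion vectors,
# deliberately on very different scales to show why BN is applied first.
rng = np.random.default_rng(0)
identity = rng.normal(5.0, 3.0, size=(8, 160))
emotion = rng.normal(0.0, 0.1, size=(8, 64))
fused = fuse_features(identity, emotion)
print(fused.shape)  # (8, 224)
```

After normalization, both halves of the 224-dimensional vector have comparable scale, so neither branch dominates the subsequent fully connected layer.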
Compared with the prior art, the invention has the following advantages:
A facial expression recognition system trained with the method of the present invention is more robust to differences between individual subjects. The final system performance is significantly better than that of a system trained on a single facial expression database alone.
Detailed description of the invention
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the invention;
Fig. 2 is the ResNet-12 baseline network structure designed for the CK+ facial expression test database;
Fig. 3 is the facial expression system of the present invention that jointly learns identity and emotion information, designed for the CK+ facial expression test database;
Fig. 4 is the ResNet-18 baseline network structure designed for the FER+ facial expression test database;
Fig. 5 is the facial expression system of the present invention that jointly learns identity and emotion information, designed for the FER+ facial expression test database.
Specific embodiment
In order to make the objects, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention.
Fig. 1 is a flow diagram of the invention. A facial expression recognition method of the present invention based on jointly learning identity information and emotion information comprises:
jointly training and optimizing a neural network using a face recognition image database and a facial expression image database;
using the face recognition image database to independently train and optimize a facial identity branch network, removing the final face identity output layer after training, and extracting only the identity feature vector corresponding to an input image;
using the facial expression image database to independently train and optimize a facial expression branch network, removing the final facial expression output layer after training, and extracting only the emotion feature vector corresponding to an input image;
concatenating the identity feature vector and the emotion feature vector to obtain a concatenated facial feature representation, and finally feeding the concatenated facial expression feature, which fuses identity and expression information, to a subsequent facial expression output layer;
in the subsequent network training process, using only the facial expression image database to jointly learn and optimize the merged network, and finally predicting the facial expression recognition result.
In the present invention, because the two branches differ in network structure and training data, the learned identity feature vector and emotion feature vector are sometimes not on the same scale. Therefore, the identity feature vector and the emotion feature vector are each standardized by batch normalization, and the two features are then concatenated to form the concatenated facial expression feature.
Specifically, given any input image, the network designed by the present invention processes it in two branches: the left branch learns identity feature information, and the right branch learns emotion feature information. Both branches are network structures composed of many convolutional (Conv) layers. After the two sub-networks are trained with face recognition data and facial expression data respectively, they are combined to obtain the final concatenated facial feature representation. The concatenated facial expression feature, which fuses identity and expression information, is finally fed to a subsequent facial expression output layer. In the subsequent network training process, only the facial expression image database is used to jointly learn and optimize the merged network, and the facial expression recognition result is finally predicted.
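The two-branch flow just described can be sketched in a few lines. This is an illustration only: the random linear maps below stand in for the two trained convolutional branches (with their output layers already removed), and the 64x64 input size, variable names, and batch size are all assumptions not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two trained branches: each maps a flattened 64x64
# grayscale face to a feature vector (weights are random purely for
# illustration; in the patent these are deep convolutional networks).
W_identity = rng.normal(size=(64 * 64, 160)) * 0.01  # identity branch -> 160-d
W_emotion = rng.normal(size=(64 * 64, 64)) * 0.01    # emotion branch  -> 64-d
W_output = rng.normal(size=(224, 8)) * 0.01          # fused expression output layer

def forward(images):
    """Two-branch forward pass: extract both feature vectors, concatenate
    them into a 224-d representation, and score 8 expression classes."""
    x = images.reshape(len(images), -1)
    identity_feat = x @ W_identity                    # (batch, 160)
    emotion_feat = x @ W_emotion                      # (batch, 64)
    fused = np.concatenate([identity_feat, emotion_feat], axis=1)
    return fused @ W_output                           # (batch, 8) logits

batch = rng.random(size=(4, 64, 64))
print(forward(batch).shape)  # (4, 8)
```

The key design choice shown here is that the two branches never share weights; only their outputs meet, at the concatenation, before the shared expression output layer.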
Fig. 2 is the ResNet-12 baseline network structure designed for the CK+ facial expression test database. This baseline system is trained only on the CK+ dataset. The network structure comprises a 16-channel convolutional layer, 3 residual block structures, and a global pooling layer. The final prediction is given by the facial expression output layer.
Fig. 3 is the facial expression system of the present invention that jointly learns identity and emotion information, designed for the CK+ facial expression test database. On the basis of Fig. 2, a face recognition branch network is added to learn a face identity feature vector, which is concatenated with the original face emotion feature vector to obtain the final facial feature representation fusing identity and emotion information. Finally, the CK+ facial expression training data are used to jointly train and tune the merged network.
Fig. 4 is the ResNet-18 baseline network structure designed for the FER+ facial expression test database. This baseline system is trained only on the FER+ dataset. The network structure comprises a 16-channel convolutional layer, 4 residual block structures, and a global pooling layer. The final prediction is given by the facial expression output layer.
Fig. 5 is the facial expression system of the present invention that jointly learns identity and emotion information, designed for the FER+ facial expression test database. On the basis of Fig. 4, a face recognition branch network is added to learn a face identity feature vector, which is concatenated with the original face emotion feature vector to obtain the final concatenated facial feature representation fusing identity and emotion information. Finally, the FER+ facial expression training data are used to jointly train and tune the merged network.
Embodiment 1: the present invention is evaluated on the Extended Cohn-Kanade (CK+) dataset.
Step 1: the CASIA-WebFace face recognition database is first used to train the sub-network that extracts face identity information. CASIA-WebFace contains 494,414 images of 10,757 people in total. Face recognition accuracy is evaluated on the Labeled Faces in the Wild (LFW) dataset. The structure of this sub-network contains multiple convolutional and pooling layers, and it ultimately extracts a 160-dimensional face identity feature vector. After training and tuning, the network reaches 91% accuracy on the LFW dataset. Since our final goal is not face verification, we did not optimize face verification excessively.
Step 2: the CK+ facial expression database is used to train the sub-network that extracts facial expression information. The CK+ database contains 327 picture sequences annotated with facial expression attributes. For each sequence, only the last frame carries a valid annotation. To collect more images for training the neural network, the last 3 images of each sequence are chosen as training data. In addition, the first frame of every sequence is treated as the "neutral" attribute. In total, 1,308 images with 8 expression attributes are obtained for training. At test time, the system is evaluated with ten-fold cross-validation. With these training data, we design a 12-layer residual network (ResNet) structure. The network contains 1 convolutional layer, 3 residual block structures, and a final global pooling layer, and it ultimately extracts a 64-dimensional facial expression feature vector.
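The frame-selection arithmetic of Step 2 can be checked with a short sketch. Only the counts come from the description (327 sequences, the last 3 frames per sequence with the annotated label, plus one "neutral" first frame per sequence); the sequence lengths and all names below are invented just to make the sketch run.

```python
# Each CK+ sequence contributes its last 3 frames with the annotated
# expression label, plus its first frame labeled "neutral".
NUM_SEQUENCES = 327

def collect_training_frames(sequence_lengths):
    samples = []
    for seq_id, length in enumerate(sequence_lengths):
        samples.append((seq_id, 0, "neutral"))      # first frame, neutral
        for i in range(length - 3, length):         # last 3 frames, annotated
            samples.append((seq_id, i, "annotated"))
    return samples

# Invented lengths (10..20 frames per sequence) purely to run the sketch.
lengths = [10 + (k % 11) for k in range(NUM_SEQUENCES)]
samples = collect_training_frames(lengths)
print(len(samples))  # 327 * (1 + 3) = 1308
```

This reproduces the 1,308 training images stated in the description: 327 neutral frames plus 981 annotated frames.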
Step 3: the two trained sub-networks are merged into one joint network. The 160-dimensional face identity vector and the 64-dimensional facial expression vector are concatenated into a final 224-dimensional facial expression feature. This feature vector is then fed to the subsequent fully connected layer. In the subsequent training process, only the facial expression database is used to jointly learn and optimize this new merged network.
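The joint fine-tuning of Step 3 can be sketched as a softmax classifier on the fused 224-dimensional feature, trained by gradient descent. This is a toy stand-in under stated assumptions: real training would also back-propagate into both branches, and the random features, batch size, learning rate, and step count below are all illustrative, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Fused 224-d features for a toy batch, with 8 expression classes.
features = rng.normal(size=(32, 224))
labels = rng.integers(0, 8, size=32)
W = np.zeros((224, 8))                     # fused expression output layer

losses = []
for step in range(200):
    probs = softmax(features @ W)
    losses.append(cross_entropy(probs, labels))
    grad_logits = probs.copy()             # d(loss)/d(logits) for softmax-CE
    grad_logits[np.arange(len(labels)), labels] -= 1.0
    W -= 0.05 * (features.T @ grad_logits) / len(labels)

# First value is ln(8) ~ 2.079 (uniform prediction); the second is smaller.
print(round(losses[0], 3), round(losses[-1], 3))
```

Because the problem is convex in the output-layer weights, the loss decreases monotonically here; in the full method the same objective is optimized end to end over the merged network.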
Step 4: the trained network is tested. On the CK+ test set, the baseline method shown in Fig. 2 achieves a system performance of 97.56%, while the joint optimization method of the present invention shown in Fig. 3 finally reaches 99.31%.
Embodiment 2: the present technique is evaluated on the FER+ dataset.
Step 1 is the same as in Embodiment 1: the CASIA-WebFace face recognition database is first used to train the sub-network that extracts face identity information. CASIA-WebFace contains 494,414 images of 10,757 people in total. Face recognition accuracy is evaluated on the Labeled Faces in the Wild (LFW) dataset. The sub-network contains multiple convolutional and pooling layers and ultimately extracts a 160-dimensional face identity feature vector. After training and tuning, the network reaches 91% accuracy on the LFW dataset. Since our final goal is not face verification, we did not optimize face verification excessively.
Step 2: since the FER+ dataset is noticeably larger than the CK+ dataset, an 18-layer ResNet structure is used here in place of the original 12-layer ResNet. The network contains 1 convolutional layer, 4 residual block structures, and a final global pooling layer, and it ultimately extracts a 64-dimensional facial expression feature vector.
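The residual blocks used in both embodiments follow the standard ResNet pattern y = x + F(x). A minimal sketch, with convolutions replaced by small dense layers purely for illustration (layer sizes and names are assumptions, not the patent's exact block):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Standard residual block: output = input + F(input), where F is a
    small two-layer transform. The skip connection means a zero-weighted
    F leaves the input unchanged, which eases optimization of deep nets."""
    return x + relu(x @ W1) @ W2

rng = np.random.default_rng(3)
x = rng.normal(size=(4, 64))
W1 = rng.normal(size=(64, 64)) * 0.01
W2 = np.zeros((64, 64))                 # zero-initialized residual branch
out = residual_block(x, W1, W2)
print(np.allclose(out, x))  # True: the identity path passes x through
```

Stacking 3 such blocks (ResNet-12, CK+) or 4 (ResNet-18, FER+) between the initial convolution and the global pooling layer gives the branch structures described above.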
Step 3 is the same as in Embodiment 1: the two trained sub-networks are merged into one joint network. The 160-dimensional face identity vector and the 64-dimensional facial expression vector are concatenated into a final 224-dimensional facial expression feature, which is then fed to the subsequent fully connected layer. In the subsequent training process, only the facial expression database is used to jointly learn and optimize this new merged network.
Step 4: tested on the FER+ dataset, the baseline method shown in Fig. 4 achieves a system performance of 83.1%, while the joint optimization method of the present invention shown in Fig. 5 finally reaches 84.3%.
A facial expression recognition system trained with the method of the present invention is more robust to differences between individual subjects. The final system performance is significantly better than that of a system trained on a single facial expression database alone.
Although the present invention has been described in detail above with general explanations and specific embodiments, some modifications or improvements can be made on the basis of the invention, as will be apparent to those skilled in the art. Therefore, such modifications or improvements made without departing from the spirit of the invention fall within the scope of the claimed invention.

Claims (2)

1. A facial expression recognition method based on jointly learning identity information and emotion information, characterized by comprising the following steps:
jointly training and optimizing a neural network using a face recognition image database and a facial expression image database;
using the face recognition image database to independently train and optimize a facial identity branch network, removing the final face identity output layer after training, and extracting only the identity feature vector corresponding to an input image;
using the facial expression image database to independently train and optimize a facial expression branch network, removing the final facial expression output layer after training, and extracting only the emotion feature vector corresponding to an input image;
concatenating the identity feature vector and the emotion feature vector to obtain a concatenated facial feature representation, and finally feeding the concatenated facial expression feature, which fuses identity and expression information, to a subsequent facial expression output layer;
in the subsequent network training process, using only the facial expression image database to jointly learn and optimize the merged network, and finally predicting the facial expression recognition result.
2. The facial expression recognition method based on jointly learning identity information and emotion information according to claim 1, wherein, because the two branches differ in network structure and training data, the identity feature vector and the emotion feature vector are each standardized by batch normalization, and the two features are then concatenated to form the concatenated facial expression feature.
CN201811220643.1A 2018-10-19 2018-10-19 Human facial expression recognition method based on combination learning identity and emotion information Pending CN109359599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811220643.1A CN109359599A (en) 2018-10-19 2018-10-19 Human facial expression recognition method based on combination learning identity and emotion information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811220643.1A CN109359599A (en) 2018-10-19 2018-10-19 Human facial expression recognition method based on combination learning identity and emotion information

Publications (1)

Publication Number Publication Date
CN109359599A true CN109359599A (en) 2019-02-19

Family

ID=65345979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811220643.1A Pending CN109359599A (en) 2018-10-19 2018-10-19 Human facial expression recognition method based on combination learning identity and emotion information

Country Status (1)

Country Link
CN (1) CN109359599A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407958A (en) * 2016-10-28 2017-02-15 南京理工大学 Double-layer-cascade-based facial feature detection method
US20170160813A1 (en) * 2015-12-07 2017-06-08 Sri International Vpa with integrated object recognition and facial expression recognition
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN107729835A (en) * 2017-10-10 2018-02-23 浙江大学 A kind of expression recognition method based on face key point region traditional characteristic and face global depth Fusion Features
CN107766850A (en) * 2017-11-30 2018-03-06 电子科技大学 Based on the face identification method for combining face character information
CN107808146A (en) * 2017-11-17 2018-03-16 北京师范大学 A kind of multi-modal emotion recognition sorting technique
CN108615010A (en) * 2018-04-24 2018-10-02 重庆邮电大学 Facial expression recognizing method based on the fusion of parallel convolutional neural networks characteristic pattern


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HEECHUL JUNG et al.: "Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition", 2015 IEEE International Conference on Computer Vision *
ZIBO MENG et al.: "Identity-Aware Convolutional Neural Network for Facial Expression Recognition", 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition *
YANG YUNONG (杨雨浓): "Research on facial expression recognition methods based on deep learning", China Doctoral Dissertations Full-text Database, Information Science and Technology *
MA LI (马丽): "Facial common-information recognition *** based on a social service robot", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135251A (en) * 2019-04-09 2019-08-16 上海电力学院 Group image emotion recognition method based on attention mechanism and hybrid network
CN110135251B (en) * 2019-04-09 2023-08-08 上海电力学院 Group image emotion recognition method based on attention mechanism and hybrid network
CN110276403A (en) * 2019-06-25 2019-09-24 北京百度网讯科技有限公司 Model building method and device
CN110348387A (en) * 2019-07-12 2019-10-18 腾讯科技(深圳)有限公司 Image processing method, device and computer readable storage medium
CN110348387B (en) * 2019-07-12 2023-06-27 腾讯科技(深圳)有限公司 Image data processing method, device and computer readable storage medium
CN110705490A (en) * 2019-10-09 2020-01-17 中国科学技术大学 Visual emotion recognition method
CN110705490B (en) * 2019-10-09 2022-09-02 中国科学技术大学 Visual emotion recognition method
CN111553311A (en) * 2020-05-13 2020-08-18 吉林工程技术师范学院 Micro-expression recognition robot and control method thereof
CN114882566A (en) * 2022-05-23 2022-08-09 支付宝(杭州)信息技术有限公司 Method, device and equipment for training expression recognition model

Similar Documents

Publication Publication Date Title
CN109359599A (en) Facial expression recognition method based on jointly learning identity and emotion information
US20200311798A1 (en) Search engine use of neural network regressor for multi-modal item recommendations based on visual semantic embeddings
Poux et al. Dynamic facial expression recognition under partial occlusion with optical flow reconstruction
CN109710744A (en) Data matching method, device, equipment and storage medium
Malinowski et al. A pooling approach to modelling spatial relations for image retrieval and annotation
CN114787883A (en) Automatic emotion recognition method, system, computing device and computer-readable storage medium
CN111915618B (en) Peak response enhancement-based instance segmentation algorithm and computing device
Meena et al. Sentiment analysis on images using convolutional neural networks based Inception-V3 transfer learning approach
CN112418166A (en) Emotion distribution learning method based on multi-modal information
CN116402066A (en) Attribute-level text emotion joint extraction method and system for multi-network feature fusion
CN112668486A (en) Facial expression recognition method, device and carrier based on a pre-activation residual depthwise separable convolutional network
Leong Deep learning of facial embeddings and facial landmark points for the detection of academic emotions
Chauhan et al. Analysis of Intelligent movie recommender system from facial expression
Mazhar et al. Movie reviews classification through facial image recognition and emotion detection using machine learning methods
Kortum et al. Dissection of AI job advertisements: A text mining-based analysis of employee skills in the disciplines computer vision and natural language processing
Chang et al. DualLabel: secondary labels for challenging image annotation
Maji et al. Part annotations via pairwise correspondence
Pinto et al. A Systematic Review of Facial Expression Detection Methods
Javaid et al. Manual and non-manual sign language recognition framework using hybrid deep learning techniques
CN114238587A (en) Reading comprehension method and device, storage medium and computer equipment
Rehman et al. A Real-Time Approach for Finger Spelling Interpretation Based on American Sign Language Using Neural Networks
CN113821610A (en) Information matching method, device, equipment and storage medium
Chang et al. Adversarially-enriched acoustic code vector learned from out-of-context affective corpus for robust emotion recognition
Vimal et al. Facial Emotion Recognition Using Deep Learning
Dubost et al. Hands-free segmentation of medical volumes via binary inputs

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190219