CN112784776A - BPD facial emotion recognition method based on improved residual error network - Google Patents

BPD facial emotion recognition method based on improved residual error network

Info

Publication number
CN112784776A
CN112784776A
Authority
CN
China
Prior art keywords
model
data
emotion
column
bpd
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110114564.8A
Other languages
Chinese (zh)
Other versions
CN112784776B (en)
Inventor
潘晓光
令狐彬
董虎弟
李娟
陈智娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanxi Sanyouhe Smart Information Technology Co Ltd
Original Assignee
Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanxi Sanyouhe Smart Information Technology Co Ltd filed Critical Shanxi Sanyouhe Smart Information Technology Co Ltd
Priority to CN202110114564.8A priority Critical patent/CN112784776B/en
Publication of CN112784776A publication Critical patent/CN112784776A/en
Application granted granted Critical
Publication of CN112784776B publication Critical patent/CN112784776B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of facial recognition, and particularly relates to a BPD facial emotion recognition method based on an improved residual error network, which comprises the following steps: collecting data and constructing a data set; reconstructing labels; segmenting the data set; constructing a model; training the model; verifying the model; and evaluating the model. The method improves ResNet101 with the Swish activation function and converts the network into a multi-task network; the internal connections among the different tasks improve training efficiency and help the model reach higher recognition accuracy. By recognizing the facial emotions of the BPD population from multiple angles, the method captures deeper features of BPD facial emotion data and performs more accurate emotion recognition. The method is used for facial emotion recognition.

Description

BPD facial emotion recognition method based on improved residual error network
Technical Field
The invention belongs to the technical field of facial recognition, and particularly relates to a BPD facial emotion recognition method based on an improved residual error network.
Background
Most existing emotion recognition technologies must combine scene features for recognition, and those features are mostly selected manually. The facial emotion of specific populations such as people with BPD (borderline personality disorder) differs from the emotion features of the general population, and existing emotion recognition technology cannot recognize the emotions of this population with high precision.
Problems or disadvantages of the prior art: existing facial emotion recognition technology requires manual feature selection, and its recognition accuracy and generalization capability are poor when recognizing specific populations.
Disclosure of Invention
Aiming at the technical problems of manual feature selection, poor recognition accuracy, and poor generalization capability in existing facial emotion recognition technology, the invention provides a BPD facial emotion recognition method based on an improved residual error network, which offers a high degree of automation, high recognition accuracy, and strong generalization capability.
In order to solve the technical problems, the invention adopts the technical scheme that:
a BPD facial emotion recognition method based on an improved residual error network comprises the following steps:
S1, collecting data and constructing a data set;
S2, reconstructing labels;
S3, segmenting the data set;
S4, constructing a model;
S5, training the model;
S6, verifying the model;
and S7, evaluating the model.
The data in the data set in S1 includes data labels; each data label comprises two parts: the first part is the coarse emotion class, divided into three classes, namely neutral emotion, positive emotion, and negative emotion; the second part is the fine-grained emotion, divided into seven classes, namely Happy, Sad, Angry, Surprised, Scared, Disgusted, and Contempt.
The method for reconstructing the labels in S2 is as follows: the data labels are reconstructed into One-Hot form so that the network can learn the multi-classification tasks, wherein the neutral, positive, and negative emotions correspond to columns 0, 1, and 2 respectively, and Happy, Sad, Angry, Surprised, Scared, Disgusted, and Contempt correspond to columns 0 through 6 respectively.
The method for segmenting the data set in S3 is as follows: the data set is randomly divided in a 7:1:2 ratio to construct a training set, a validation set, and a test set.
The method for constructing the model in S4 is as follows: the model is built on a ResNet101 network whose structure is adjusted; the activation function is replaced with the Swish function and the network is converted into a multi-task network, so that after the 50th residual block a fully connected layer outputs the coarse emotion class, and after the 101st residual block the fine-grained emotion is output.
The Swish function is f(x) = x · sigmoid(βx), where sigmoid(a) = 1/(1 + e^(-a)), x is the output of the corresponding convolutional layer, β is a hyperparameter, and a is the argument; sigmoid maps the argument a into the range 0-1.
The method for training the model in S5 is as follows: the model is iteratively trained with the training set data; when the model's loss value has not decreased for 20 consecutive epochs, training stops and the model is saved.
The method for verifying the model in S6 is as follows: a secondary training of 50 epochs is performed on the trained model using the validation set data; if the model loss does not decrease, the model is saved, and if the loss decreases, the learning rate is set to 0.5 times its original value and training continues with the training set data until the model loss is stable.
The method for evaluating the model in S7 is as follows: the trained model is tested on the test set data, and the recognition effect of the model is evaluated by comparing the model's recognition results with the data labels; the evaluation calculates the accuracy and recall rate for each category of data;
Accuracy: Acc = (TP + TN)/(TP + TN + FN + FP)
Recall: R = TP/(TP + FN)
where TP is the number of positive samples judged positive, FP is the number of negative samples judged positive, FN is the number of positive samples judged negative, and TN is the number of negative samples judged negative.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, the ResNet101 is improved by using the Swish activation function, the network is modified into a multi-task network, the model training efficiency is improved through the internal connection among different tasks, the model is helped to have higher recognition accuracy, the face emotion of the BPD crowd is recognized in multiple angles, more deep features of face emotion data of the BPD crowd can be captured, and more accurate emotion recognition is carried out.
Drawings
FIG. 1 is a block flow diagram of the present invention;
FIG. 2 is a schematic diagram of a model training process according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A BPD facial emotion recognition method based on an improved residual error network comprises the following steps:
S1, collecting data and constructing a data set;
S2, reconstructing labels;
S3, segmenting the data set;
S4, constructing a model;
S5, training the model;
S6, verifying the model;
and S7, evaluating the model.
Further, the data in the data set in S1 includes data labels; each data label comprises two parts: the first part is the coarse emotion class, divided into three classes, namely neutral emotion, positive emotion, and negative emotion; the second part is the fine-grained emotion, divided into seven classes, namely Happy, Sad, Angry, Surprised, Scared, Disgusted, and Contempt.
Further, the method for reconstructing the labels in S2 is as follows: the data labels are reconstructed into One-Hot form so that the network can learn the multi-classification tasks, wherein the neutral, positive, and negative emotions correspond to columns 0, 1, and 2 respectively, and Happy, Sad, Angry, Surprised, Scared, Disgusted, and Contempt correspond to columns 0 through 6 respectively.
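For illustration, the label reconstruction can be sketched in Python as follows. This is a minimal sketch under assumptions: the class-name strings, function name, and NumPy representation are illustrative, since the patent only fixes the column order of the One-Hot vectors.

```python
import numpy as np

# Column orders fixed by the patent; the Python names are illustrative.
COARSE_CLASSES = ["neutral", "positive", "negative"]       # columns 0-2
FINE_CLASSES = ["Happy", "Sad", "Angry", "Surprised",
                "Scared", "Disgusted", "Contempt"]         # columns 0-6

def to_one_hot(label, classes):
    """Return a One-Hot row vector with a 1 in the column of `label`."""
    vec = np.zeros(len(classes), dtype=np.float32)
    vec[classes.index(label)] = 1.0
    return vec

# Each sample carries one target per task.
coarse_target = to_one_hot("positive", COARSE_CLASSES)  # -> [0., 1., 0.]
fine_target = to_one_hot("Happy", FINE_CLASSES)         # -> [1., 0., 0., 0., 0., 0., 0.]
```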
Further, the method for segmenting the data set in S3 is as follows: the data set is randomly divided in a 7:1:2 ratio to construct a training set, a validation set, and a test set.
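A minimal sketch of such a random 7:1:2 split is given below; the function name and the fixed random seed are assumptions made for the example.

```python
import numpy as np

def split_dataset(samples, seed=0):
    """Randomly split `samples` 7:1:2 into training, validation, and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    n_train = int(0.7 * len(samples))
    n_val = int(0.1 * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```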
Further, the method for constructing the model in S4 is as follows: the model is built on a ResNet101 network whose structure is adjusted; the activation function is replaced with the Swish function and the network is converted into a multi-task network, so that after the 50th residual block a fully connected layer outputs the coarse emotion class, and after the 101st residual block the fine-grained emotion is output.
Further, the Swish function is f(x) = x · sigmoid(βx), where sigmoid(a) = 1/(1 + e^(-a)), x is the output of the corresponding convolutional layer, β is a hyperparameter, and a is the argument; sigmoid maps the argument a into the range 0-1.
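The sketch below illustrates the Swish activation and the two-head multi-task arrangement in PyTorch. It is deliberately simplified and is not ResNet101 itself: bottleneck blocks, batch normalization, down-sampling stages, and stage-wise channel widths are omitted, and the class names, channel count, and exact block grouping are assumptions made for the example.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """f(x) = x * sigmoid(beta * x), with beta a fixed hyperparameter."""
    def __init__(self, beta=1.0):
        super().__init__()
        self.beta = beta

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

class ResidualBlock(nn.Module):
    """Simplified residual block using Swish in place of ReLU."""
    def __init__(self, channels, beta=1.0):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = Swish(beta)

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return self.act(out + x)  # identity shortcut

class MultiTaskEmotionNet(nn.Module):
    """Shared backbone with a coarse head after block 50 and a fine head
    after block 101, mirroring the two outputs described above."""
    def __init__(self, channels=64):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 7, stride=2, padding=3)
        self.blocks_1_to_50 = nn.Sequential(
            *[ResidualBlock(channels) for _ in range(50)])
        self.blocks_51_to_101 = nn.Sequential(
            *[ResidualBlock(channels) for _ in range(51)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.coarse_head = nn.Linear(channels, 3)  # neutral/positive/negative
        self.fine_head = nn.Linear(channels, 7)    # seven fine-grained emotions

    def forward(self, x):
        mid = self.blocks_1_to_50(self.stem(x))
        coarse = self.coarse_head(self.pool(mid).flatten(1))
        fine = self.fine_head(self.pool(self.blocks_51_to_101(mid)).flatten(1))
        return coarse, fine
```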
Further, the method for training the model in S5 is as follows: the model is iteratively trained with the training set data; when the model's loss value has not decreased for 20 consecutive epochs, training stops and the model is saved.
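A sketch of this early-stopping loop is given below, assuming the two-head model above, a loss function that accepts the One-Hot targets, and a data loader yielding (image, coarse target, fine target) batches; the checkpoint path and the max_epochs cap are assumptions.

```python
import torch

def train_model(model, optimizer, loss_fn, train_loader,
                patience=20, max_epochs=1000):
    """Iterative training; stop once the loss has not decreased
    for `patience` consecutive epochs, saving the best model."""
    best_loss, stale_epochs = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        total = 0.0
        for images, coarse_t, fine_t in train_loader:
            optimizer.zero_grad()
            coarse, fine = model(images)
            loss = loss_fn(coarse, coarse_t) + loss_fn(fine, fine_t)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if total < best_loss:
            best_loss, stale_epochs = total, 0
            torch.save(model.state_dict(), "model.pt")
        else:
            stale_epochs += 1
            if stale_epochs >= patience:  # 20 epochs without improvement
                break
    return best_loss
```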
Further, the method for verifying the model in S6 is as follows: a secondary training of 50 epochs is performed on the trained model using the validation set data; if the model loss does not decrease, the model is saved, and if the loss decreases, the learning rate is set to 0.5 times its original value and training continues with the training set data until the model loss is stable.
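A sketch of this verification step under the same assumptions follows; the stability test (relative loss change below 1e-4) is an assumption, since the patent only states that training continues until the loss is stable.

```python
import torch

def run_epoch(model, loss_fn, loader, optimizer=None):
    """One pass over `loader`; updates weights only if an optimizer is given."""
    model.train(optimizer is not None)
    total = 0.0
    for images, coarse_t, fine_t in loader:
        coarse, fine = model(images)
        loss = loss_fn(coarse, coarse_t) + loss_fn(fine, fine_t)
        if optimizer is not None:
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        total += loss.item()
    return total

def verify_model(model, optimizer, loss_fn, train_loader, val_loader):
    before = run_epoch(model, loss_fn, val_loader)
    for _ in range(50):                              # 50 secondary-training epochs
        run_epoch(model, loss_fn, val_loader, optimizer)
    after = run_epoch(model, loss_fn, val_loader)
    if after >= before:                              # loss no longer decreasing
        torch.save(model.state_dict(), "model.pt")
        return
    for group in optimizer.param_groups:             # halve the learning rate
        group["lr"] *= 0.5
    prev = float("inf")
    while True:                                      # train until loss is stable
        cur = run_epoch(model, loss_fn, train_loader, optimizer)
        if abs(prev - cur) < 1e-4 * max(cur, 1.0):   # assumed stability criterion
            break
        prev = cur
    torch.save(model.state_dict(), "model.pt")
```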
Further, the method for evaluating the model in S7 is as follows: the trained model is tested on the test set data, and the recognition effect of the model is evaluated by comparing the model's recognition results with the data labels; the evaluation calculates the accuracy and recall rate for each category of data;
Accuracy: Acc = (TP + TN)/(TP + TN + FN + FP)
Recall: R = TP/(TP + FN)
where TP is the number of positive samples judged positive, FP is the number of negative samples judged positive, FN is the number of positive samples judged negative, and TN is the number of negative samples judged negative.
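A sketch of the per-class evaluation follows, treating each class one-vs-rest so that TP, FP, FN, and TN are defined per category; the function name and dictionary output format are assumptions.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """Accuracy and recall for every class, computed one-vs-rest."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    metrics = {}
    for c in range(n_classes):
        tp = int(np.sum((y_pred == c) & (y_true == c)))
        fp = int(np.sum((y_pred == c) & (y_true != c)))
        fn = int(np.sum((y_pred != c) & (y_true == c)))
        tn = int(np.sum((y_pred != c) & (y_true != c)))
        acc = (tp + tn) / (tp + tn + fn + fp)
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        metrics[c] = {"accuracy": acc, "recall": recall}
    return metrics

# Example: evaluate the fine-grained task with 7 classes.
# per_class_metrics(y_true=[0, 1, 2], y_pred=[0, 1, 1], n_classes=7)
```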
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.

Claims (9)

1. A BPD facial emotion recognition method based on an improved residual error network, characterized by comprising the following steps:
S1, collecting data and constructing a data set;
S2, reconstructing labels;
S3, segmenting the data set;
S4, constructing a model;
S5, training the model;
S6, verifying the model;
and S7, evaluating the model.
2. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the data in the data set in S1 includes data labels; each data label comprises two parts: the first part is the coarse emotion class, divided into three classes, namely neutral emotion, positive emotion, and negative emotion; the second part is the fine-grained emotion, divided into seven classes, namely Happy, Sad, Angry, Surprised, Scared, Disgusted, and Contempt.
3. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the method for reconstructing the labels in S2 is as follows: the data labels are reconstructed into One-Hot form so that the network can learn the multi-classification tasks, wherein the neutral, positive, and negative emotions correspond to columns 0, 1, and 2 respectively, and Happy, Sad, Angry, Surprised, Scared, Disgusted, and Contempt correspond to columns 0 through 6 respectively.
4. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the method for segmenting the data set in S3 is as follows: the data set is randomly divided in a 7:1:2 ratio to construct a training set, a validation set, and a test set.
5. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the method for constructing the model in S4 is as follows: the model is built on a ResNet101 network whose structure is adjusted; the activation function is replaced with the Swish function and the network is converted into a multi-task network, so that after the 50th residual block a fully connected layer outputs the coarse emotion class, and after the 101st residual block the fine-grained emotion is output.
6. The BPD facial emotion recognition method based on the improved residual error network of claim 5, wherein the Swish function is f(x) = x · sigmoid(βx), where sigmoid(a) = 1/(1 + e^(-a)), x is the output of the corresponding convolutional layer, β is a hyperparameter, and a is the argument; sigmoid maps the argument a into the range 0-1.
7. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the method for training the model in S5 is as follows: the model is iteratively trained with the training set data; when the model's loss value has not decreased for 20 consecutive epochs, training stops and the model is saved.
8. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the method for verifying the model in S6 is as follows: a secondary training of 50 epochs is performed on the trained model using the validation set data; if the model loss does not decrease, the model is saved, and if the loss decreases, the learning rate is set to 0.5 times its original value and training continues with the training set data until the model loss is stable.
9. The BPD facial emotion recognition method based on the improved residual error network as claimed in claim 1, wherein the method for evaluating the model in S7 is as follows: the trained model is tested on the test set data, and the recognition effect of the model is evaluated by comparing the model's recognition results with the data labels; the evaluation calculates the accuracy and recall rate for each category of data;
Accuracy: Acc = (TP + TN)/(TP + TN + FN + FP)
Recall: R = TP/(TP + FN)
where TP is the number of positive samples judged positive, FP is the number of negative samples judged positive, FN is the number of positive samples judged negative, and TN is the number of negative samples judged negative.
CN202110114564.8A 2021-01-26 2021-01-26 BPD facial emotion recognition method based on improved residual error network Active CN112784776B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110114564.8A CN112784776B (en) 2021-01-26 2021-01-26 BPD facial emotion recognition method based on improved residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110114564.8A CN112784776B (en) 2021-01-26 2021-01-26 BPD facial emotion recognition method based on improved residual error network

Publications (2)

Publication Number Publication Date
CN112784776A 2021-05-11
CN112784776B CN112784776B (en) 2022-07-08

Family

ID=75759208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110114564.8A Active CN112784776B (en) 2021-01-26 2021-01-26 BPD facial emotion recognition method based on improved residual error network

Country Status (1)

Country Link
CN (1) CN112784776B (en)

Patent Citations (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140298364A1 (en) * 2013-03-26 2014-10-02 Rawllin International Inc. Recommendations for media content based on emotion
CN106372622A (en) * 2016-09-30 2017-02-01 北京奇虎科技有限公司 Facial expression classification method and device
CN107358169A (en) * 2017-06-21 2017-11-17 厦门中控智慧信息技术有限公司 A kind of facial expression recognizing method and expression recognition device
WO2019228358A1 (en) * 2018-05-31 2019-12-05 华为技术有限公司 Deep neural network training method and apparatus
CN109508625A (en) * 2018-09-07 2019-03-22 咪咕文化科技有限公司 A kind of analysis method and device of affection data
CN109508640A (en) * 2018-10-12 2019-03-22 咪咕文化科技有限公司 A kind of crowd's sentiment analysis method, apparatus and storage medium
CN109522945A (en) * 2018-10-31 2019-03-26 中国科学院深圳先进技术研究院 One kind of groups emotion identification method, device, smart machine and storage medium
CN109583419A (en) * 2018-12-13 2019-04-05 深圳市淘米科技有限公司 A kind of emotional prediction system based on depth convolutional network
US20200202369A1 (en) * 2018-12-19 2020-06-25 Qualtrics, Llc Digital surveys based on digitally detected facial emotions
CN109711356A (en) * 2018-12-28 2019-05-03 广州海昇教育科技有限责任公司 A kind of expression recognition method and system
CN109829166A (en) * 2019-02-15 2019-05-31 重庆师范大学 People place customer input method for digging based on character level convolutional neural networks
CN109919047A (en) * 2019-02-18 2019-06-21 山东科技大学 A kind of mood detection method based on multitask, the residual error neural network of multi-tag
CN111723198A (en) * 2019-03-18 2020-09-29 北京京东尚科信息技术有限公司 Text emotion recognition method and device and storage medium
US20200349975A1 (en) * 2019-04-30 2020-11-05 Sony Interactive Entertainment Inc. Video tagging by correlating visual features to sound tags
WO2020228811A1 (en) * 2019-05-15 2020-11-19 Huawei Technologies Co., Ltd. Adaptive action recognizer for video
CN110309714A (en) * 2019-05-22 2019-10-08 深圳壹账通智能科技有限公司 Mental health evaluation method, apparatus and storage medium based on Expression Recognition
CN112016368A (en) * 2019-05-31 2020-12-01 沈阳新松机器人自动化股份有限公司 Facial expression coding system-based expression recognition method and system and electronic equipment
CN110418204A (en) * 2019-07-18 2019-11-05 平安科技(深圳)有限公司 Video recommendation method, device, equipment and storage medium based on micro- expression
CN110705490A (en) * 2019-10-09 2020-01-17 中国科学技术大学 Visual emotion recognition method
CN110929624A (en) * 2019-11-18 2020-03-27 西北工业大学 Construction method of multi-task classification network based on orthogonal loss function
CN111401193A (en) * 2020-03-10 2020-07-10 海尔优家智能科技(北京)有限公司 Method and device for obtaining expression recognition model and expression recognition method and device
CN111401294A (en) * 2020-03-27 2020-07-10 山东财经大学 Multitask face attribute classification method and system based on self-adaptive feature fusion
CN111626126A (en) * 2020-04-26 2020-09-04 腾讯科技(北京)有限公司 Face emotion recognition method, device, medium and electronic equipment

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
D DENG et al.: "Multitask Emotion Recognition with Incomplete Labels", 《ARXIV》 *
GERARD PONS et al.: "Multitask, Multilabel, and Multidomain Learning with Convolutional Networks for Emotion Recognition", 《IEEE TRANSACTIONS ON CYBERNETICS》 *
JIAJUN FAN et al.: "Multi-View Facial Expression Recognition based on Multitask Learning and Generative Adversarial Network", 《2020 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL INFORMATICS》 *
何俊 et al.: "Research on expression recognition based on an improved deep residual network" (基于改进的深度残差网络的表情识别研究), 《计算机应用研究》 *
卢官明 et al.: "Facial expression recognition based on deep residual networks" (基于深度残差网络的人脸表情识别), 《数据采集与处理》 *
吴宇豪 et al.: "Facial expression recognition *** based on improved ResNet" (基于改进的ResNet的人脸表情识别***), 《信息通信》 *
彭先霖 et al.: "Face/facial-paralysis expression recognition method based on multi-task deep convolutional neural networks" (基于多任务深度卷积神经网络的人脸/面瘫表情识别方法), 《西北大学学报(自然科学版)》 *
林子杰 et al.: "A multimodal emotion recognition method based on multi-task learning" (一种基于多任务学习的多模态情感识别方法), 《北京大学学报(自然科学版)》 *
王一婷 et al.: "Artificial intelligence recognition of presenter emotions" (人工智能识别主持人情感), 《中国广播电视学刊》 *

Also Published As

Publication number Publication date
CN112784776B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
CN110163187B (en) F-RCNN-based remote traffic sign detection and identification method
CN109934293A (en) Image-recognizing method, device, medium and obscure perception convolutional neural networks
CN108804453A (en) A kind of video and audio recognition methods and device
CN111091130A (en) Real-time image semantic segmentation method and system based on lightweight convolutional neural network
Gong et al. Object detection based on improved YOLOv3-tiny
CN105574550A (en) Vehicle identification method and device
CN110781829A (en) Light-weight deep learning intelligent business hall face recognition method
Li et al. Sign language recognition based on computer vision
CN110781897A (en) Semantic edge detection method based on deep learning
Shuai et al. Object detection system based on SSD algorithm
CN114662497A (en) False news detection method based on cooperative neural network
Xu et al. BANet: A balanced atrous net improved from SSD for autonomous driving in smart transportation
CN115223017B (en) Multi-scale feature fusion bridge detection method based on depth separable convolution
Zhang et al. Classroom behavior recognition based on improved yolov3
CN111126155A (en) Pedestrian re-identification method for generating confrontation network based on semantic constraint
CN111783688B (en) Remote sensing image scene classification method based on convolutional neural network
CN112784776B (en) BPD facial emotion recognition method based on improved residual error network
Xiao exYOLO: A small object detector based on YOLOv3 Object Detector
Ren et al. Student behavior detection based on YOLOv4-Bi
Zhang et al. SSRDet: Small Object Detection Based on Feature Pyramid Network
Zhang et al. An improved YOLOv5s algorithm for emotion detection
Zeng et al. Few-shot scale-insensitive object detection for edge computing platform
Ling et al. A facial expression recognition system for smart learning based on YOLO and vision transformer
CN112990336B (en) Deep three-dimensional point cloud classification network construction method based on competitive attention fusion
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant