CN107633203A - Facial emotion recognition method, device and storage medium - Google Patents


Info

Publication number: CN107633203A
Application number: CN201710707943.1A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 陈林 (Chen Lin), 张国辉 (Zhang Guohui)
Applicant and current assignee: Ping An Technology Shenzhen Co Ltd
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Priority: CN201710707943.1A; PCT/CN2017/108753 (WO2019033573A1)


Classifications

    • G — Physics
    • G06 — Computing; calculating or counting
    • G06F — Electric digital data processing
    • G06F18/00 — Pattern recognition


Abstract

The invention discloses a facial emotion recognition method, which includes: obtaining a real-time image captured by a camera device and extracting a real-time facial image from it using a face recognition algorithm; inputting the real-time facial image into a pre-trained first emotion classification model and a pre-trained second emotion classification model for emotion recognition, obtaining a first probability and a second probability for each emotion; and judging the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models. By combining the output results of two models to identify the emotion of the face in the real-time image, the invention improves the accuracy of facial emotion recognition. The invention also discloses an electronic device and a computer-readable storage medium.

Description

Facial emotion recognition method, device and storage medium
Technical field
The present invention relates to the field of computer vision processing, and more particularly to a facial emotion recognition method, an electronic device, and a computer-readable storage medium.
Background
In everyday human communication, only about 7% of the information is conveyed by language and about 38% by vocal tone, while as much as 55% is conveyed by facial expression. Facial expressions are thus an important carrier of human communication and an important form of nonverbal communication: they express a person's affective state, cognitive activity, and personality traits, and the behavioral information they carry is closely associated with other factors such as emotional state, state of mind, and health. Facial emotion recognition is an important component of human-computer interaction and affective computing research, touching on fields such as psychology, sociology, anthropology, life science, cognitive science, and computer science, and is of great significance for intelligent and natural human-computer interaction.
With the continuous development of artificial intelligence technology and people's rising expectations for interactive experience, intelligent interaction is gradually beginning to replace some traditional modes of human-computer interaction, and the requirements placed on facial emotion recognition keep rising accordingly.
Facial emotion recognition at this stage generally works by collecting a large number of emotion samples, sorting them into several classes, and training a single emotion recognition model for recognition. However, a single recognition method cannot accurately identify facial emotions: the data it acquires are limited and its judgment mechanism is singular, so it suffers from low accuracy, large error, and susceptibility to external factors.
Summary of the invention
The present invention provides a facial emotion recognition method, device, and computer-readable storage medium, whose main purpose is to judge the emotion in a real-time facial image by combining the outputs of two emotion classification models, thereby improving the accuracy of facial emotion recognition.
To achieve the above object, the present invention provides an electronic device comprising a memory, a processor, and a camera device. The memory includes a facial emotion recognition program which, when executed by the processor, implements the following steps:
Real-time facial image acquisition step: obtain a real-time image captured by the camera device, and extract a real-time facial image from it using a face recognition algorithm;
Emotion recognition step: input the real-time facial image into a pre-trained first emotion classification model and a pre-trained second emotion classification model for emotion recognition, obtaining a first probability and a second probability for each emotion; and
Emotion judgment step: judge the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models.
Optionally, the training of the first emotion classification model and the second emotion classification model includes:
Feature point extraction step: establish a face sample library, and mark t facial feature points in each face sample image;
Feature vector calculation step: divide the coordinates of each facial feature point by the width and the height of the normalized face region in the face sample image, obtaining the feature vector of the face sample image;
First model training step: perform learning training on a support vector machine classifier using the face sample images and their feature vectors, obtaining the first emotion classification model;
Emotion label assignment step: assign an emotion label to each face sample image, and classify the face sample images in the face sample library according to their emotion labels; and
Second model training step: perform learning training on a convolutional neural network using the classified face sample images, obtaining the second emotion classification model.
Optionally, the emotion judgment step includes:
judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same.
Optionally, the emotion judgment step further includes:
when the first and second emotion classification models identify the same one or more emotions, calculating the mean of the first probability and the second probability for each emotion, and taking the emotion with the larger mean of the first and second probabilities as the emotion recognized from the real-time image; or
when the first and second emotion classification models identify different one or more emotions, taking the emotion with the larger value among the first probability and the second probability as the emotion recognized from the real-time image.
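The averaging/maximum judgment rule above can be expressed as a small fusion function. This is an illustrative sketch only, not the patent's implementation; the dictionary-of-probabilities interface and the English emotion names are assumptions.

```python
def judge_emotion(p1, p2):
    """Fuse the per-emotion probabilities of two classifiers.

    p1, p2: dicts mapping emotion name -> probability, standing in for
    the outputs of the first and second emotion classification models.
    Returns the judged emotion. (Illustrative sketch; the interface is
    an assumption.)
    """
    if set(p1) == set(p2):
        # Same one or more emotions: average the first and second
        # probability per emotion and pick the emotion with the
        # largest mean.
        means = {e: (p1[e] + p2[e]) / 2 for e in p1}
        return max(means, key=means.get)
    # Different emotions: pick the emotion with the single largest
    # probability across both outputs.
    candidates = list(p1.items()) + list(p2.items())
    return max(candidates, key=lambda kv: kv[1])[0]
```

With the four example situations given later in the description, this sketch yields "happy", "sad", "happy", and "surprised" respectively.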
In addition, to achieve the above object, the present invention also provides a facial emotion recognition method, which includes:
Real-time facial image acquisition step: obtain a real-time image captured by the camera device, and extract a real-time facial image from it using a face recognition algorithm;
Emotion recognition step: input the real-time facial image into a pre-trained first emotion classification model and a pre-trained second emotion classification model for emotion recognition, obtaining a first probability and a second probability for each emotion; and
Emotion judgment step: judge the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models.
Optionally, the training of the first emotion classification model and the second emotion classification model includes:
Feature point extraction step: establish a face sample library, and mark t facial feature points in each face sample image;
Feature vector calculation step: divide the coordinates of each facial feature point by the width and the height of the normalized face region in the face sample image, obtaining the feature vector of the face sample image;
First model training step: perform learning training on a support vector machine classifier using the face sample images and their feature vectors, obtaining the first emotion classification model;
Emotion label assignment step: assign an emotion label to each face sample image, and classify the face sample images in the face sample library according to their emotion labels; and
Second model training step: perform learning training on a convolutional neural network using the classified face sample images, obtaining the second emotion classification model.
Optionally, the emotion judgment step includes:
judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same.
Optionally, the emotion judgment step further includes:
when the first and second emotion classification models identify the same one or more emotions, calculating the mean of the first probability and the second probability for each emotion, and taking the emotion with the larger mean of the first and second probabilities as the emotion recognized from the real-time image; or
when the first and second emotion classification models identify different one or more emotions, taking the emotion with the larger value among the first probability and the second probability as the emotion recognized from the real-time image.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium that includes a facial emotion recognition program which, when executed by a processor, implements any of the steps of the facial emotion recognition method described above.
The facial emotion recognition method, electronic device, and computer-readable storage medium proposed by the present invention input the real-time facial image into the first emotion classification model and the second emotion classification model, obtain the first probability and the second probability of each emotion, and judge the emotion in the current facial image by combining the results output by the two emotion classification models, improving the accuracy of facial emotion recognition.
Brief description of the drawings
Fig. 1 is a schematic diagram of a preferred embodiment of the electronic device of the present invention;
Fig. 2 is a functional block diagram of the facial emotion recognition program in Fig. 1;
Fig. 3 is a flow chart of the first embodiment of the facial emotion recognition method of the present invention;
Fig. 4 is a detailed flow chart of step S30 in the first embodiment of the facial emotion recognition method of the present invention;
Fig. 5 is a detailed flow chart of step S30 in the second embodiment of the facial emotion recognition method of the present invention.
The realization of the object, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides an electronic device 1. Referring to Fig. 1, a schematic diagram of a preferred embodiment of the electronic device 1 of the present invention.
In this embodiment, the electronic device 1 may be a terminal device with computing capability, such as a server, smartphone, tablet computer, portable computer, or desktop computer.
The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14, and a communication bus 15. The camera device 13 is installed in a particular place, such as an office or a monitored area; it captures real-time images of targets entering that place and transmits them over a network to the processor 12. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The communication bus 15 realizes connection and communication between these components.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, hard disk, multimedia card, or card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk. In other embodiments, it may be an external memory of the electronic device 1, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card, or flash card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the facial emotion recognition program 10 installed on the electronic device 1, the face image sample library, the pre-trained emotion classification models, and so on. The memory 11 may also be used to temporarily store data that has been output or is to be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), microprocessor, or other data processing chip that runs the program code or processes the data stored in the memory 11, for example executing the facial emotion recognition program 10.
Fig. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may also include a user interface, which may include an input unit such as a keyboard, a speech input device with speech recognition capability such as a microphone, and a speech output device such as a loudspeaker or headphones; optionally, the user interface may also include a standard wired interface and a wireless interface.
Optionally, the electronic device 1 may also include a display, which may also appropriately be called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an organic light-emitting diode (OLED) touch display, or the like. The display is used to show the information processed in the electronic device 1 and to present a visual user interface.
Optionally, the electronic device 1 also includes a touch sensor. The region it provides for the user's touch operations is called the touch area. The touch sensor described here may be a resistive touch sensor, a capacitive touch sensor, or the like, and may include not only a contact touch sensor but also a proximity touch sensor. It may be a single sensor or multiple sensors arranged, for example, in an array.
The area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, on which the device detects user-triggered touch operations.
Optionally, the electronic device 1 may also include radio frequency (RF) circuits, sensors, audio circuits, and so on, which are not detailed here.
In the device embodiment shown in Fig. 1, the memory 11, as a kind of computer storage medium, may include an operating system and the facial emotion recognition program 10; the processor 12 implements the following steps when executing the facial emotion recognition program 10 stored in the memory 11:
obtain the real-time image captured by the camera device 13, and extract a real-time facial image from it using a face recognition algorithm; the processor 12 calls the pre-trained first and second emotion classification models from the memory 11, inputs the real-time facial image into both models to obtain the first probability and the second probability of each emotion, and judges the emotion in the current facial image by combining the results output by the two models.
In other embodiments, the facial emotion recognition program 10 may also be divided into one or more modules stored in the memory 11 and executed by the processor 12 to carry out the present invention. A module in the present invention refers to a series of computer program instruction segments that complete a specific function.
Referring to Fig. 2, a functional block diagram of the facial emotion recognition program 10 in Fig. 1.
The facial emotion recognition program 10 may be divided into an acquisition module 110, a recognition module 120, a judgment module 130, and a prompt module 140.
The acquisition module 110 obtains the real-time image captured by the camera device 13 and extracts a real-time facial image from it using a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12. After the processor 12 receives it, the acquisition module 110 first obtains the size of the picture and creates a grayscale image of the same size; converts the acquired color image into the grayscale image while allocating a memory space; equalizes the histogram of the grayscale image to reduce the amount of grayscale information and speed up detection; then loads the Intel training library (presumably OpenCV's face detection data), detects the faces in the picture, returns an object containing the face information, obtains the data on where the faces are located, and records their number; and finally obtains and saves the face region. This completes the extraction of one real-time facial image.
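The histogram-equalization step in the extraction procedure above can be sketched in plain Python. This is a minimal illustration over a 2-D list of 8-bit gray values; a real implementation would use a library routine such as OpenCV's `equalizeHist`.

```python
def equalize_histogram(gray, levels=256):
    """Histogram-equalize a grayscale image given as a 2-D list of ints.

    Spreads the cumulative distribution of gray values over the full
    [0, levels-1] range, which flattens the histogram and improves
    contrast before face detection.
    """
    flat = [p for row in gray for p in row]
    n = len(flat)
    # Build the histogram of gray values.
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:
        # Uniform image: nothing to equalize.
        return [row[:] for row in gray]
    scale = (levels - 1) / (n - cdf_min)
    return [[round((cdf[p] - cdf_min) * scale) for p in row] for row in gray]
```

For example, a 2x2 image [[0, 128], [128, 255]] equalizes to [[0, 170], [170, 255]].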
Specifically, the face recognition algorithm used to extract the real-time facial image from the real-time image may also be a geometric-feature-based method, local feature analysis, the eigenface method, an elastic-model-based method, a neural network method, and so on.
The recognition module 120 inputs the real-time facial image into the pre-trained first and second emotion classification models for emotion recognition, obtaining the first probability and the second probability of each emotion. The first and second emotion classification models are constructed and trained in advance; their implementation is described in the facial emotion recognition method below.
Suppose the acquisition module 110 has extracted a real-time facial image A from the real-time image, and the recognition module 120 inputs image A into the first emotion model and the second emotion model respectively. Several situations may arise:
First situation: the results output by the two models contain only one emotion class, the class is the same, and the probabilities are not necessarily identical. For example, the first emotion classification model outputs that the facial emotion in real-time facial image A is "happy" with probability 0.62, and the second outputs "happy" with probability 0.68;
Second situation: the results output by the two models contain two or more emotion classes, the classes are the same, and the probabilities are not necessarily identical. For example, the first model outputs "angry" and "sad" with first probabilities 0.51 and 0.49 respectively, and the second outputs "angry" and "sad" with second probabilities 0.41 and 0.59;
Third situation: the emotion classes in the results output by the two models differ, and the probabilities are not necessarily identical. For example, the first model outputs "happy" with probability 0.65, and the second outputs "angry" with probability 0.61; and
Fourth situation: the results output by the two models contain two or more emotion classes, the classes differ, and the probabilities are not necessarily identical. For example, the first model outputs "angry" and "sad" with first probabilities 0.51 and 0.49, and the second outputs "happy" and "surprised" with second probabilities 0.45 and 0.55.
The judgment module 130 judges the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models.
Specifically, the judgment module 130 judges whether the one or more emotions identified by the first and second emotion classification models are the same.
The judgment module 130 is also used, when the first and second emotion classification models identify the same one or more emotions, to calculate the mean of the first and second probabilities for each emotion and take the emotion with the larger mean as the emotion recognized from the real-time image.
For the first and second situations above, where the results output by the two emotion classification models are the same one or more emotions, the first and second probabilities of each emotion are averaged:
First situation: for the emotion "happy", averaging the first probability 0.62 and the second probability 0.68 gives a mean probability of 0.65, and the judgment module 130 finally judges the facial emotion in the current real-time facial image A to be "happy".
Second situation: for the emotions "angry" and "sad", averaging the first probabilities 0.51 and 0.49 with the second probabilities 0.41 and 0.59 gives mean probabilities of 0.46 and 0.54, and the judgment module 130 finally judges the facial emotion in the current real-time facial image A to be "sad".
Specifically, the judgment module 130 is also used, when the first and second emotion classification models identify different one or more emotions, to take the emotion with the larger value among the first and second probabilities as the emotion recognized from the real-time image.
For the third and fourth situations above, where the results output by the two emotion classification models are different one or more emotions, the larger of the first and second probabilities is taken:
Third situation: the emotion "happy" has probability 0.65 and the emotion "angry" has probability 0.61, so the judgment module 130 finally judges the facial emotion in the current real-time facial image A to be "happy".
Fourth situation: the emotions "angry" and "sad" have first probabilities 0.51 and 0.49, and the emotions "happy" and "surprised" have second probabilities 0.45 and 0.55, so the judgment module 130 finally judges the facial emotion in the current real-time facial image A to be "surprised".
In other embodiments, the judgment module 130 is also used, when the first and second emotion classification models identify the same one or more emotions and the larger mean of the first and second probabilities exceeds a first preset threshold, to judge the emotion with the larger mean to be the emotion recognized from the real-time image; or
when the first and second emotion classification models identify different one or more emotions and the larger value among the first and second probabilities exceeds a second preset threshold, to take the emotion with the larger value as the emotion recognized from the real-time image.
Suppose the first preset threshold is 0.55 and the second preset threshold is 0.6. Then:
First situation: the mean probability of the emotion "happy" is 0.65, and 0.65 > 0.55, so "happy" is taken as the facial emotion in the current real-time facial image A;
Second situation: the mean probabilities of the emotions "angry" and "sad" are 0.46 and 0.54; the larger mean is 0.54, and 0.54 < 0.55, so recognizing the facial emotion from the current real-time facial image A is considered to have failed;
Third situation: the larger value among the first and second probabilities of the emotions "happy" and "angry" is 0.65, and 0.65 > 0.6, so "happy" is taken as the facial emotion in the current real-time facial image A; and
Fourth situation: the largest value among the first and second probabilities of the emotions "angry", "sad", "happy", and "surprised" is 0.55, and 0.55 < 0.6, so recognizing the facial emotion from the current real-time facial image A is considered to have failed.
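This threshold-gated judgment, as walked through in the four situations above, can be sketched as a variant that returns None on recognition failure. The thresholds mirror the example values (0.55 and 0.6); the dictionary interface and emotion names are assumptions, not the patent's implementation.

```python
def judge_with_thresholds(p1, p2, t_mean=0.55, t_max=0.6):
    """Judge the emotion, or return None when recognition fails.

    p1, p2: dicts of emotion -> probability from the two models.
    t_mean: first preset threshold, applied to the mean probability
            when both models identify the same emotions.
    t_max:  second preset threshold, applied to the largest single
            probability when they identify different emotions.
    """
    if set(p1) == set(p2):
        # Same emotions: average per emotion and gate on the mean.
        means = {e: (p1[e] + p2[e]) / 2 for e in p1}
        best = max(means, key=means.get)
        return best if means[best] > t_mean else None
    # Different emotions: gate the single largest probability.
    emotion, prob = max(list(p1.items()) + list(p2.items()),
                        key=lambda kv: kv[1])
    return emotion if prob > t_max else None
```

With the example values above, the first and third situations yield "happy" while the second and fourth yield None (recognition failure).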
The prompt module 140 prompts that facial emotion recognition has failed and returns to the real-time facial image acquisition step when the larger mean of the first and second probabilities is below the first preset threshold and the larger value among the first and second probabilities is below the second preset threshold. In the second situation above, the larger mean (0.54) of the first and second probabilities of "angry" and "sad" is below the first preset threshold (0.55); in the fourth situation above, the larger value (0.55) among the first and second probabilities of "angry", "sad", "happy", and "surprised" is below the second preset threshold (0.6). Both situations show that the facial emotion cannot be identified from the current real-time facial image A; a prompt box pops up on the display screen of the electronic device, the prompt module 140 prompts that the emotion type cannot be identified from real-time facial image A, and the flow returns to step S10 and proceeds with the subsequent steps.
The electronic device 1 proposed in this embodiment extracts a real-time facial image from the real-time image, inputs it into the first and second emotion classification models to obtain the first and second probability of each emotion, and judges the emotion in the current facial image by combining the results output by the two emotion classification models, improving the accuracy of facial emotion recognition.
In addition, the present invention also provides a facial emotion recognition method. Referring to Fig. 3, a flow chart of the first embodiment of the facial emotion recognition method of the present invention. The method may be performed by a device, and the device may be realized by software and/or hardware.
In this embodiment, the facial emotion recognition method includes steps S10-S30.
Step S10: obtain the real-time image captured by the camera device, and extract a real-time facial image from it using a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12. After the processor 12 receives it, the size of the picture is first obtained and a grayscale image of the same size is created; the acquired color image is converted into the grayscale image while a memory space is allocated; the histogram of the grayscale image is equalized to reduce the amount of grayscale information and speed up detection; then the Intel training library is loaded, the faces in the picture are detected, an object containing the face information is returned, the data on where the faces are located are obtained, and their number is recorded; finally the face region is obtained and saved. This completes the extraction of one real-time facial image.
Specifically, the face recognition algorithm used to extract the real-time facial image from the real-time image may also be a geometric-feature-based method, local feature analysis, the eigenface method, an elastic-model-based method, a neural network method, and so on.
Step S20: input the real-time facial image into a pre-trained first emotion classification model and second emotion classification model for emotion recognition, and obtain a first probability and a second probability for each emotion.
Specifically, the first emotion classification model and the second emotion classification model are obtained through the following steps:
establishing a face sample library, and marking t facial feature points in each face sample image;
dividing the coordinates of each facial feature point by the width and height of the normalized face region in the face sample image to obtain the feature vector of the face sample image;
training a support vector machine (SVM) classifier on the face sample images and their feature vectors to obtain the first emotion classification model;
assigning an emotion label to each face sample image, and classifying the face sample images in the sample library according to their emotion labels; and
training a convolutional neural network (CNN) on the classified face sample images to obtain the second emotion classification model.
In detail: collect n facial images, normalize the face region in each image to form the face sample library, and manually mark t facial feature points in each face sample image. The feature points include the positions of the eyes, eyebrows, nose, mouth, and facial contour. The normalized face region in each image is an a*b rectangle, with width a and height b; for each feature point with coordinates (x, y) in a face sample image, divide x by a and y by b, and the results form the feature vector of that image. Classify the facial emotion in each sample image in the library, for example as sad, angry, happy, or surprised, and assign each image the corresponding emotion label. Train a support vector machine (Support Vector Machine, SVM) classifier on the n face sample images and the n feature vectors to obtain the first emotion classification model. Train a convolutional neural network (Convolutional Neural Network, CNN) on the face sample images grouped by emotion type to obtain the second emotion classification model.
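The feature-vector construction above can be sketched as follows (toy data; the three landmarks and the 100x120 face size are illustrative assumptions, and the resulting vectors would then be used to train the SVM classifier, e.g. with scikit-learn's `sklearn.svm.SVC`):

```python
def landmark_feature_vector(landmarks, width, height):
    """Divide each landmark (x, y) by the normalized face width/height,
    flattening the results into a single scale-invariant feature vector."""
    vec = []
    for x, y in landmarks:
        vec.append(x / width)   # x divided by a (face width)
        vec.append(y / height)  # y divided by b (face height)
    return vec

# Toy example: a 100x120 normalized face region with three landmarks.
landmarks = [(50, 60), (25, 30), (75, 90)]
features = landmark_feature_vector(landmarks, width=100, height=120)
# features -> [0.5, 0.5, 0.25, 0.25, 0.75, 0.75]
```

Dividing by the region's width and height keeps the feature vector in [0, 1] regardless of the original face size, so faces photographed at different scales map to comparable vectors.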
Suppose a real-time facial image A has been extracted from the real-time image and input into the first emotion model and the second emotion model respectively. Several situations may arise:
Case 1: the outputs of the two models each contain exactly one emotion class, the classes agree, and the probabilities need not be identical. For example, the first emotion classification model outputs that the facial emotion in image A is "happy" with probability 0.62, while the second model outputs "happy" with probability 0.68.
Case 2: the outputs each contain two or more emotion classes, the classes agree, and the probabilities need not be identical. For example, the first model outputs "angry" and "sad" with first probabilities 0.51 and 0.49, while the second model outputs "angry" and "sad" with second probabilities 0.41 and 0.59.
Case 3: the emotion classes in the two outputs differ, and the probabilities need not be identical. For example, the first model outputs "happy" with probability 0.65, while the second model outputs "angry" with probability 0.61.
Case 4: the outputs each contain two or more emotion classes, the classes differ, and the probabilities need not be identical. For example, the first model outputs "angry" and "sad" with first probabilities 0.51 and 0.49, while the second model outputs "happy" and "surprised" with second probabilities 0.45 and 0.55.
Step S30: determine the emotion in the real-time facial image according to the emotions and probabilities identified by the first emotion classification model and the second emotion classification model.
Specifically, referring to FIG. 4, step S30 includes:
Step S31: judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same;
Step S32: when the two models identify the same one or more emotions, computing the mean of the first and second probabilities for each emotion, and taking the emotion with the largest mean as the emotion recognized from the real-time image; or
Step S33: when the two models identify different emotions, taking the emotion with the largest value among the first and second probabilities as the emotion recognized from the real-time image.
For Cases 1 and 2 above, the two emotion classification models output the same one or more emotions, so the first and second probabilities of each emotion are averaged:
Case 1: for "happy", the mean of the first probability 0.62 and the second probability 0.68 is 0.65, so "happy" is taken as the facial emotion in the current real-time facial image A.
Case 2: for "angry" and "sad", averaging the first probabilities 0.51, 0.49 with the second probabilities 0.41, 0.59 gives mean probabilities of 0.46 and 0.54, so "sad" is taken as the facial emotion in image A.
For Cases 3 and 4 above, the two models output different emotions, so the larger of the first and second probabilities is taken:
Case 3: "happy" has probability 0.65 and "angry" has probability 0.61, so "happy" is taken as the facial emotion in image A.
Case 4: "angry" and "sad" have first probabilities 0.51 and 0.49, while "happy" and "surprised" have second probabilities 0.45 and 0.55; the largest value is 0.55, so "surprised" is taken as the facial emotion in image A.
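The decision rule worked through above (average when the two models agree on the candidate emotions, otherwise take the single largest probability) can be sketched as follows; the function and label names are illustrative, and the dicts stand in for the two models' outputs:

```python
def fuse_predictions(first, second):
    """first, second: dicts mapping emotion label -> probability."""
    if set(first) == set(second):
        # Models agree on the candidate set: average, pick the largest mean.
        means = {label: (first[label] + second[label]) / 2 for label in first}
        return max(means, key=means.get)
    # Models disagree: pick the single largest probability across both outputs.
    merged = dict(first)
    for label, p in second.items():
        merged[label] = max(p, merged.get(label, 0.0))
    return max(merged, key=merged.get)

# Case 2 from the text: both models output "angry"/"sad" -> "sad" (mean 0.54).
case2 = fuse_predictions({"angry": 0.51, "sad": 0.49},
                         {"angry": 0.41, "sad": 0.59})
# Case 3: the models disagree ("happy" 0.65 vs "angry" 0.61) -> "happy".
case3 = fuse_predictions({"happy": 0.65}, {"angry": 0.61})
```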
The facial emotion recognition method proposed in this embodiment extracts a real-time facial image from a real-time image, inputs it into the first and second emotion classification models to obtain the first probability and second probability of each emotion, and combines the outputs of the two models to determine the emotion in the current facial image, improving the accuracy of facial emotion recognition.
A second embodiment of the facial emotion recognition method is proposed based on the first embodiment. In this embodiment, the method includes steps S10 to S30, where steps S10 and S20 are substantially the same as in the first embodiment and are not repeated here.
Step S30: determine the emotion in the real-time facial image according to the emotions and probabilities identified by the first emotion classification model and the second emotion classification model.
Specifically, referring to FIG. 5, step S30 includes:
Step S31: judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same;
Step S32: when the two models identify the same one or more emotions, averaging the first and second probabilities of each emotion and taking the largest mean;
Step S33: judging whether the largest mean of the first and second probabilities exceeds a first preset threshold;
Step S34: when the largest mean exceeds the first preset threshold, taking the emotion corresponding to that mean as the emotion recognized from the real-time image; or
Step S35: when the two models identify different emotions, taking the largest value among the first and second probabilities of each emotion;
Step S36: judging whether that largest value exceeds a second preset threshold;
Step S37: when the largest value among the first and second probabilities exceeds the second preset threshold, taking the corresponding emotion as the emotion recognized from the real-time image.
Suppose the first preset threshold is 0.55 and the second preset threshold is 0.6. Then:
Case 1: the mean probability of "happy" is 0.65; since 0.65 > 0.55, "happy" is taken as the facial emotion in the current real-time facial image A;
Case 2: the mean probabilities of "angry" and "sad" are 0.46 and 0.54; the larger mean is 0.54, and since 0.54 < 0.55, recognition of the facial emotion from image A is deemed to have failed;
Case 3: the largest value among the first and second probabilities of "happy" and "angry" is 0.65; since 0.65 > 0.6, "happy" is taken as the facial emotion in image A; and
Case 4: the largest value among the first and second probabilities of "angry", "sad", "happy", and "surprised" is 0.55; since 0.55 < 0.6, recognition of the facial emotion from image A is deemed to have failed.
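The thresholded decision rule of this embodiment can be sketched as follows (illustrative names; returning `None` represents the recognition-failure branch taken when no score clears its threshold):

```python
def fuse_with_thresholds(first, second, t1=0.55, t2=0.6):
    """Fuse two emotion->probability dicts; return the winning label,
    or None when the fused score does not clear the relevant threshold."""
    if set(first) == set(second):
        # Agreeing candidate sets: largest mean must exceed t1.
        means = {label: (first[label] + second[label]) / 2 for label in first}
        best = max(means, key=means.get)
        return best if means[best] > t1 else None
    # Disagreeing candidate sets: largest single probability must exceed t2.
    merged = dict(first)
    for label, p in second.items():
        merged[label] = max(p, merged.get(label, 0.0))
    best = max(merged, key=merged.get)
    return best if merged[best] > t2 else None

# Case 1: mean of 0.62 and 0.68 is 0.65 > 0.55 -> "happy".
case1 = fuse_with_thresholds({"happy": 0.62}, {"happy": 0.68})
# Case 2: best mean is 0.54 <= 0.55 -> recognition fails (None).
case2 = fuse_with_thresholds({"angry": 0.51, "sad": 0.49},
                             {"angry": 0.41, "sad": 0.59})
```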
Further, step S30 also includes step S38: when the largest mean of the first and second probabilities, or the largest value among the first and second probabilities, does not exceed the corresponding preset threshold, a facial emotion recognition failure is reported and the flow returns to the real-time facial image acquisition step. In Case 2 above, the largest mean (0.54) of the first and second probabilities of "angry" and "sad" is below the first preset threshold (0.55); in Case 4 above, the largest value (0.55) among the first and second probabilities of "angry", "sad", "happy", and "surprised" is below the second preset threshold (0.6). In these situations the facial emotion cannot be recognized from the current real-time facial image A, so a prompt box pops up on the display of the electronic device indicating that the emotion type cannot be identified from image A, and the flow returns to step S10 and the subsequent steps.
The facial emotion recognition method proposed in this embodiment extracts a real-time facial image from a real-time image, inputs it into the first and second emotion classification models, filters the outputs of the two models using the first and second preset thresholds, and then determines the facial emotion in the real-time facial image, improving the accuracy of facial emotion recognition.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium containing a facial emotion recognition program. When executed by a processor, the program performs the following operations:
Real-time facial image acquisition step: obtaining a real-time image captured by the camera, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
Emotion recognition step: inputting the real-time facial image into a pre-trained first emotion classification model and second emotion classification model for emotion recognition, and obtaining a first probability and a second probability for each emotion; and
Emotion determination step: determining the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models.
Optionally, the emotion determination step includes:
judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same.
Optionally, the emotion determination step also includes:
when the two models identify the same one or more emotions, computing the mean of the first and second probabilities for each emotion, and taking the emotion with the largest mean as the emotion recognized from the real-time image; or
when the two models identify different emotions, taking the emotion with the largest value among the first and second probabilities as the emotion recognized from the real-time image.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as those of the facial emotion recognition method described above and are not repeated here.
It should be noted that, herein, the terms "comprise", "include", and any variants thereof are intended to cover non-exclusive inclusion, so that a process, apparatus, article, or method that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, apparatus, article, or method. Unless otherwise limited, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, apparatus, article, or method that includes that element.
The above embodiment numbers of the present invention are for description only and do not indicate the merits of the embodiments. Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware alone, though the former is in many cases the better implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk, or optical disk), including several instructions that cause a terminal device (which may be a mobile phone, computer, server, network device, or the like) to perform the method described in each embodiment of the present invention.
The above are only preferred embodiments of the present invention and do not limit its scope. Any equivalent structural or flow transformation made using the contents of the specification and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of the present invention.

Claims (10)

1. An electronic device, characterized in that the device comprises a memory, a processor, and a camera, the memory containing a facial emotion recognition program which, when executed by the processor, implements the following steps:
Real-time facial image acquisition step: obtaining a real-time image captured by the camera, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
Emotion recognition step: inputting the real-time facial image into a pre-trained first emotion classification model and second emotion classification model for emotion recognition, and obtaining a first probability and a second probability for each emotion; and
Emotion determination step: determining the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models.
2. The electronic device according to claim 1, characterized in that the training of the first emotion classification model and the second emotion classification model comprises:
Feature point extraction step: establishing a face sample library, and marking t facial feature points in each face sample image;
Feature vector calculation step: dividing the coordinates of each facial feature point by the width and height of the normalized face region in the face sample image to obtain the feature vector of the face sample image;
First model training step: training a support vector machine classifier on the face sample images and their feature vectors to obtain the first emotion classification model;
Emotion label assignment step: assigning an emotion label to each face sample image, and classifying the face sample images in the sample library according to their emotion labels; and
Second model training step: training a convolutional neural network on the classified face sample images to obtain the second emotion classification model.
3. The electronic device according to claim 1, characterized in that the emotion determination step comprises:
judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same.
4. The electronic device according to claim 1 or 3, characterized in that the emotion determination step further comprises:
when the two models identify the same one or more emotions, computing the mean of the first and second probabilities for each emotion, and taking the emotion with the largest mean as the emotion recognized from the real-time image; or
when the two models identify different emotions, taking the emotion with the largest value among the first and second probabilities as the emotion recognized from the real-time image.
5. A facial emotion recognition method, characterized in that the method comprises:
Real-time facial image acquisition step: obtaining a real-time image captured by a camera, and extracting a real-time facial image from the real-time image using a face recognition algorithm;
Emotion recognition step: inputting the real-time facial image into a pre-trained first emotion classification model and second emotion classification model for emotion recognition, and obtaining a first probability and a second probability for each emotion; and
Emotion determination step: determining the emotion in the real-time facial image according to the emotions and probabilities identified by the first and second emotion classification models.
6. The facial emotion recognition method according to claim 5, characterized in that the training of the first emotion classification model and the second emotion classification model comprises:
Feature point extraction step: establishing a face sample library, and marking t facial feature points in each face sample image;
Feature vector calculation step: dividing the coordinates of each facial feature point by the width and height of the normalized face region in the face sample image to obtain the feature vector of the face sample image;
First model training step: training a support vector machine classifier on the face sample images and their feature vectors to obtain the first emotion classification model;
Emotion label assignment step: assigning an emotion label to each face sample image, and classifying the face sample images in the sample library according to their emotion labels; and
Second model training step: training a convolutional neural network on the classified face sample images to obtain the second emotion classification model.
7. The facial emotion recognition method according to claim 5, characterized in that the emotion determination step comprises:
judging whether the one or more emotions identified by the first emotion classification model and the second emotion classification model are the same.
8. The facial emotion recognition method according to claim 5 or 7, characterized in that the emotion determination step further comprises:
when the two models identify the same one or more emotions, computing the mean of the first and second probabilities for each emotion, and taking the emotion with the largest mean as the emotion recognized from the real-time image; or
when the two models identify different emotions, taking the emotion with the largest value among the first and second probabilities as the emotion recognized from the real-time image.
9. The facial emotion recognition method according to claim 5, characterized in that the face recognition algorithm may be: a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, or a neural network method.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium contains a facial emotion recognition program which, when executed by a processor, implements the steps of the facial emotion recognition method according to any one of claims 5 to 9.
CN201710707943.1A 2017-08-17 2017-08-17 Facial emotions recognition methods, device and storage medium Pending CN107633203A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710707943.1A CN107633203A (en) 2017-08-17 2017-08-17 Facial emotions recognition methods, device and storage medium
PCT/CN2017/108753 WO2019033573A1 (en) 2017-08-17 2017-10-31 Facial emotion identification method, apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710707943.1A CN107633203A (en) 2017-08-17 2017-08-17 Facial emotions recognition methods, device and storage medium

Publications (1)

Publication Number Publication Date
CN107633203A true CN107633203A (en) 2018-01-26

Family

ID=61099621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710707943.1A Pending CN107633203A (en) 2017-08-17 2017-08-17 Facial emotions recognition methods, device and storage medium

Country Status (2)

Country Link
CN (1) CN107633203A (en)
WO (1) WO2019033573A1 (en)

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108536803A (en) * 2018-03-30 2018-09-14 百度在线网络技术(北京)有限公司 Song recommendations method, apparatus, equipment and computer-readable medium
CN108710839A (en) * 2018-05-08 2018-10-26 中山大学 A kind of sentry's drowsiness intelligent monitor system based on deep learning computer vision
CN109077719A (en) * 2018-07-05 2018-12-25 广州视源电子科技股份有限公司 signal identification method, device, equipment and storage medium
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Face Emotion identification method, apparatus, computer equipment and storage medium
CN109271599A (en) * 2018-08-13 2019-01-25 百度在线网络技术(北京)有限公司 Data sharing method, equipment and storage medium
CN109376711A (en) * 2018-12-06 2019-02-22 深圳市淘米科技有限公司 A kind of face mood pre-judging method based on ILTP
CN109376633A (en) * 2018-10-15 2019-02-22 北京车联天下信息技术有限公司 A kind of children pacify method and device
CN109583325A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109583431A (en) * 2019-01-02 2019-04-05 上海极链网络科技有限公司 A kind of face Emotion identification model, method and its electronic device
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face samples pictures mask method, device, computer equipment and storage medium
CN109657582A (en) * 2018-12-10 2019-04-19 平安科技(深圳)有限公司 Recognition methods, device, computer equipment and the storage medium of face mood
CN109711300A (en) * 2018-12-18 2019-05-03 深圳壹账通智能科技有限公司 Blind person's augmentative communication method, apparatus, computer equipment and storage medium
CN109816518A (en) * 2019-01-04 2019-05-28 深圳壹账通智能科技有限公司 Face core result acquisition methods, device, computer equipment and readable storage medium storing program for executing
CN109830280A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Psychological aided analysis method, device, computer equipment and storage medium
CN110363187A (en) * 2019-08-29 2019-10-22 上海云从汇临人工智能科技有限公司 A kind of face identification method, device, machine readable media and equipment
CN110395260A (en) * 2018-04-20 2019-11-01 比亚迪股份有限公司 Vehicle, safe driving method and device
CN110399837A (en) * 2019-07-25 2019-11-01 深圳智慧林网络科技有限公司 User emotion recognition methods, device and computer readable storage medium
WO2019218140A1 (en) * 2018-05-15 2019-11-21 合刃科技(武汉)有限公司 Object identification method and computer readable storage medium
CN110598612A (en) * 2019-08-30 2019-12-20 深圳智慧林网络科技有限公司 Patient nursing method based on mobile terminal, mobile terminal and readable storage medium
CN110705419A (en) * 2019-09-24 2020-01-17 新华三大数据技术有限公司 Emotion recognition method, early warning method, model training method and related device
CN111354053A (en) * 2020-02-27 2020-06-30 北京华峰创业科技有限公司 Method and device for generating cartoon image icon and storage medium
WO2020182121A1 (en) * 2019-03-14 2020-09-17 腾讯科技(深圳)有限公司 Expression recognition method and related device
CN111723198A (en) * 2019-03-18 2020-09-29 北京京东尚科信息技术有限公司 Text emotion recognition method and device and storage medium
CN112395979A (en) * 2020-11-17 2021-02-23 平安科技(深圳)有限公司 Image-based health state identification method, device, equipment and storage medium
CN112541425A (en) * 2020-12-10 2021-03-23 深圳地平线机器人科技有限公司 Emotion detection method, device, medium and electronic equipment
CN112686195A (en) * 2021-01-07 2021-04-20 风变科技(深圳)有限公司 Emotion recognition method and device, computer equipment and storage medium
CN112699945A (en) * 2020-12-31 2021-04-23 青岛海尔科技有限公司 Data labeling method and device, storage medium and electronic device
WO2021127916A1 (en) * 2019-12-23 2021-07-01 深圳市优必选科技股份有限公司 Facial emotion recognition method, smart device and computer-readabel storage medium
CN113569924A (en) * 2021-07-09 2021-10-29 常州大学 Emotion identification classification method based on support vector machine multi-core cooperation
CN113920575A (en) * 2021-12-15 2022-01-11 深圳佑驾创新科技有限公司 Facial expression recognition method and device and storage medium
CN114681258A (en) * 2020-12-25 2022-07-01 深圳Tcl新技术有限公司 Method for adaptively adjusting massage mode and massage equipment
CN115409104A (en) * 2022-08-25 2022-11-29 贝壳找房(北京)科技有限公司 Method, apparatus, device, medium and program product for identifying object type
CN117370768A (en) * 2023-12-08 2024-01-09 北京回龙观医院(北京心理危机研究与干预中心) Mood fluctuation detection method and system for mental patients

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414323A (en) * 2019-06-14 2019-11-05 平安科技(深圳)有限公司 Mood detection method, device, electronic equipment and storage medium
CN112395922A (en) * 2019-08-16 2021-02-23 杭州海康威视数字技术股份有限公司 Face action detection method, device and system
CN110587621B (en) * 2019-08-30 2023-06-06 深圳智慧林网络科技有限公司 Robot, robot-based patient care method, and readable storage medium
CN110728182B (en) * 2019-09-06 2023-12-26 平安科技(深圳)有限公司 Interview method and device based on AI interview system and computer equipment
CN111768481B (en) * 2020-05-19 2024-06-21 北京奇艺世纪科技有限公司 Expression package generation method and device
TWI811605B (en) * 2020-12-31 2023-08-11 宏碁股份有限公司 Method and system for mental index prediction
CN113920568B (en) * 2021-11-02 2024-07-02 中电万维信息技术有限责任公司 Face and human body posture emotion recognition method based on video image
CN114565964A (en) * 2022-03-03 2022-05-31 网易(杭州)网络有限公司 Emotion recognition model generation method, recognition method, device, medium and equipment
CN116935480B (en) * 2023-09-18 2023-12-29 四川天地宏华导航设备有限公司 Emotion recognition method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616005A (en) * 2015-03-10 2015-05-13 南京宜开数据分析技术有限公司 Domain-self-adaptive facial expression analysis method
CN106228145A (en) * 2016-08-04 2016-12-14 网易有道信息技术(北京)有限公司 A kind of facial expression recognizing method and equipment
CN106529503A (en) * 2016-11-30 2017-03-22 华南理工大学 Method for recognizing face emotion by using integrated convolutional neural network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101178773B (en) * 2007-12-13 2010-08-11 北京中星微电子有限公司 Image recognition system and method based on characteristic extracting and categorizer
CN103488974A (en) * 2013-09-13 2014-01-01 南京华图信息技术有限公司 Facial expression recognition method and system based on simulated biological vision neural network
CN104036255B (en) * 2014-06-21 2017-07-07 电子科技大学 A kind of facial expression recognizing method
CN104123545B (en) * 2014-07-24 2017-06-16 江苏大学 A kind of real-time human facial feature extraction and expression recognition method
CN105512624B (en) * 2015-12-01 2019-06-21 天津中科智能识别产业技术研究院有限公司 A kind of smiling face's recognition methods of facial image and its device
CN106447625A (en) * 2016-09-05 2017-02-22 北京中科奥森数据科技有限公司 Facial image series-based attribute identification method and device

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108536803A (en) * 2018-03-30 2018-09-14 百度在线网络技术(北京)有限公司 Song recommendation method, apparatus, equipment and computer-readable medium
CN110395260A (en) * 2018-04-20 2019-11-01 比亚迪股份有限公司 Vehicle, safe driving method and device
CN108710839A (en) * 2018-05-08 2018-10-26 中山大学 Sentry drowsiness intelligent monitoring system based on deep learning computer vision
WO2019218140A1 (en) * 2018-05-15 2019-11-21 合刃科技(武汉)有限公司 Object identification method and computer readable storage medium
CN109077719A (en) * 2018-07-05 2018-12-25 广州视源电子科技股份有限公司 Signal identification method, device, equipment and storage medium
CN109190487A (en) * 2018-08-07 2019-01-11 平安科技(深圳)有限公司 Facial emotion recognition method and apparatus, computer equipment and storage medium
CN109271599A (en) * 2018-08-13 2019-01-25 百度在线网络技术(北京)有限公司 Data sharing method, equipment and storage medium
CN109376633A (en) * 2018-10-15 2019-02-22 北京车联天下信息技术有限公司 Child soothing method and device
CN109635838B (en) * 2018-11-12 2023-07-11 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN109635838A (en) * 2018-11-12 2019-04-16 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN109583325A (en) * 2018-11-12 2019-04-05 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
CN109583325B (en) * 2018-11-12 2023-06-27 平安科技(深圳)有限公司 Face sample picture labeling method and device, computer equipment and storage medium
WO2020098074A1 (en) * 2018-11-12 2020-05-22 平安科技(深圳)有限公司 Face sample picture marking method and apparatus, computer device, and storage medium
CN109376711A (en) * 2018-12-06 2019-02-22 深圳市淘米科技有限公司 Facial emotion prediction method based on ILTP
CN109657582A (en) * 2018-12-10 2019-04-19 平安科技(深圳)有限公司 Face emotion recognition method and device, computer equipment and storage medium
CN109657582B (en) * 2018-12-10 2023-10-31 平安科技(深圳)有限公司 Face emotion recognition method and device, computer equipment and storage medium
CN109711300A (en) * 2018-12-18 2019-05-03 深圳壹账通智能科技有限公司 Augmentative communication method and apparatus for the blind, computer equipment and storage medium
CN109830280A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Psychological auxiliary analysis method, device, computer equipment and storage medium
CN109583431A (en) * 2019-01-02 2019-04-05 上海极链网络科技有限公司 Facial emotion recognition model, method and electronic device
CN109816518A (en) * 2019-01-04 2019-05-28 深圳壹账通智能科技有限公司 Face verification result acquisition method and device, computer equipment and readable storage medium
WO2020182121A1 (en) * 2019-03-14 2020-09-17 腾讯科技(深圳)有限公司 Expression recognition method and related device
CN111723198A (en) * 2019-03-18 2020-09-29 北京京东尚科信息技术有限公司 Text emotion recognition method and device and storage medium
CN111723198B (en) * 2019-03-18 2023-09-01 北京汇钧科技有限公司 Text emotion recognition method, device and storage medium
CN110399837A (en) * 2019-07-25 2019-11-01 深圳智慧林网络科技有限公司 User emotion recognition method, device and computer-readable storage medium
CN110399837B (en) * 2019-07-25 2024-01-05 深圳智慧林网络科技有限公司 User emotion recognition method, device and computer readable storage medium
CN110363187A (en) * 2019-08-29 2019-10-22 上海云从汇临人工智能科技有限公司 Face recognition method and device, machine-readable medium and equipment
CN110598612A (en) * 2019-08-30 2019-12-20 深圳智慧林网络科技有限公司 Patient nursing method based on mobile terminal, mobile terminal and readable storage medium
CN110705419A (en) * 2019-09-24 2020-01-17 新华三大数据技术有限公司 Emotion recognition method, early warning method, model training method and related device
WO2021127916A1 (en) * 2019-12-23 2021-07-01 深圳市优必选科技股份有限公司 Facial emotion recognition method, smart device and computer-readable storage medium
CN111354053A (en) * 2020-02-27 2020-06-30 北京华峰创业科技有限公司 Method and device for generating cartoon image icon and storage medium
CN112395979A (en) * 2020-11-17 2021-02-23 平安科技(深圳)有限公司 Image-based health state identification method, device, equipment and storage medium
CN112395979B (en) * 2020-11-17 2024-05-10 平安科技(深圳)有限公司 Image-based health state identification method, device, equipment and storage medium
CN112541425A (en) * 2020-12-10 2021-03-23 深圳地平线机器人科技有限公司 Emotion detection method, device, medium and electronic equipment
CN114681258B (en) * 2020-12-25 2024-04-30 深圳Tcl新技术有限公司 Method for adaptively adjusting massage mode and massage equipment
CN114681258A (en) * 2020-12-25 2022-07-01 深圳Tcl新技术有限公司 Method for adaptively adjusting massage mode and massage equipment
CN112699945A (en) * 2020-12-31 2021-04-23 青岛海尔科技有限公司 Data labeling method and device, storage medium and electronic device
CN112699945B (en) * 2020-12-31 2023-10-27 青岛海尔科技有限公司 Data labeling method and device, storage medium and electronic device
CN112686195B (en) * 2021-01-07 2024-06-14 风变科技(深圳)有限公司 Emotion recognition method, emotion recognition device, computer equipment and storage medium
CN112686195A (en) * 2021-01-07 2021-04-20 风变科技(深圳)有限公司 Emotion recognition method and device, computer equipment and storage medium
CN113569924A (en) * 2021-07-09 2021-10-29 常州大学 Emotion recognition classification method based on support vector machine multi-kernel collaboration
CN113569924B (en) * 2021-07-09 2024-03-12 常州大学 Emotion recognition classification method based on support vector machine multi-kernel collaboration
CN113920575A (en) * 2021-12-15 2022-01-11 深圳佑驾创新科技有限公司 Facial expression recognition method and device and storage medium
CN115409104A (en) * 2022-08-25 2022-11-29 贝壳找房(北京)科技有限公司 Method, apparatus, device, medium and program product for identifying object type
CN117370768B (en) * 2023-12-08 2024-03-05 北京回龙观医院(北京心理危机研究与干预中心) Mood fluctuation detection method and system for psychiatric patients
CN117370768A (en) * 2023-12-08 2024-01-09 北京回龙观医院(北京心理危机研究与干预中心) Mood fluctuation detection method and system for psychiatric patients

Also Published As

Publication number Publication date
WO2019033573A1 (en) 2019-02-21

Similar Documents

Publication Publication Date Title
CN107633203A (en) Facial emotion recognition method, device and storage medium
CN107633207B (en) AU feature recognition method, device and storage medium
CN107862292B (en) Person emotion analysis method, device and storage medium
CN110321477B (en) Information recommendation method and device, terminal and storage medium
Hossain et al. An emotion recognition system for mobile applications
Meng et al. Time-delay neural network for continuous emotional dimension prediction from facial expression sequences
CN107894833B (en) Multi-modal interaction processing method and system based on virtual human
Pantic Machine analysis of facial behaviour: Naturalistic and dynamic behaviour
CN104573706B (en) Subject image recognition method and system
CN107633204A (en) Face occlusion detection method, apparatus and storage medium
CN107679447A (en) Facial feature point detection method, device and storage medium
CN110569795A (en) Image identification method and device and related equipment
CN109583431A (en) Facial emotion recognition model, method and electronic device
CN106528859A (en) Data pushing system and method
EP2630635B1 (en) Method and apparatus for recognizing an emotion of an individual based on facial action units
CN107679448A (en) Eyeball motion analysis method, device and storage medium
CN110298212B (en) Model training method, emotion recognition method, expression display method and related equipment
CN107633205A (en) Lip motion analysis method, device and storage medium
CN113435335B (en) Micro-expression recognition method and device, electronic equipment and storage medium
CN109241336A (en) Music recommendation method and device
CN107679449A (en) Lip motion capture method, device and storage medium
Brandizzi et al. Automatic rgb inference based on facial emotion recognition
Siddiquie et al. Affect analysis in natural human interaction using joint hidden conditional random fields
Samara et al. Sensing affective states using facial expression analysis
CN112149599B (en) Expression tracking method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1246920

Country of ref document: HK

RJ01 Rejection of invention patent application after publication

Application publication date: 20180126
