CN108960022B - Emotion recognition method and device - Google Patents

Emotion recognition method and device

Info

Publication number
CN108960022B
CN108960022B (application CN201710837855.3A)
Authority
CN
China
Prior art keywords
image
emotion
personal
parameters
library
Prior art date
Legal status
Active
Application number
CN201710837855.3A
Other languages
Chinese (zh)
Other versions
CN108960022A (en)
Inventor
潘景良
林健哲
陈灼
李腾
夏敏
陈嘉宏
Current Assignee
Juda Technology Co ltd
Original Assignee
Juda Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Juda Technology Co ltd
Priority to CN201710837855.3A
Publication of CN108960022A
Application granted
Publication of CN108960022B
Legal status: Active
Anticipated expiration

Classifications

    • G06 PHYSICS; COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
            • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
            • G06V40/161 Detection; Localisation; Normalisation
            • G06V40/168 Feature extraction; Face representation
        • G06F ELECTRIC DIGITAL DATA PROCESSING; G06F18/00 Pattern recognition
            • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
            • G06F18/24 Classification techniques
            • G06F18/25 Fusion techniques
            • G06F18/257 Belief theory, e.g. Dempster-Shafer

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an emotion recognition method and device. The method collects and recognizes facial images of a subject, processes the resulting data, stores it in a database, establishes an expression-discrimination feature model, and produces an emotion output. The recognition device consists of an acquisition unit, a data processing unit, and a database unit. The emotion recognition method has a high recognition rate, the expression-discrimination feature model is complete and reliable, and the emotion results obtained from the model are more accurate.

Description

Emotion recognition method and device
Technical Field
The invention relates to the field of image processing applications, and in particular to an emotion recognition method and device.
Background
With the rapid development of artificial intelligence, enabling computers to recognize human facial expressions and from them infer emotional states has attracted growing attention from disciplines such as computer science and psychology.
Many emotion models have appeared in the field of affective computing, but most of them rely only on fusion training over whole images; their results are not accurate enough and cannot support emotion-state judgment for subtle (micro-) expressions.
Chinese patent application No. 201611014343.9 discloses a deep-learning-based emotion recognition method and device. It collects a facial image of an employee at clock-in time, analyzes the employee's emotion with a deep-learning expression analysis algorithm, and compares the result with historical emotions; when the emotion is abnormal, the device sends an alarm to the relevant personnel. That method acquires a frontal face image of the employee and stores it in an image database under the employee's ID and the clock-in date, so its fixed-ID scheme is inconvenient to use. The face image is resized to a 227 x 227 RGB image and classified by a trained VGGNet; such coarse image processing cannot accurately distinguish micro-expressions, the emotion classification results are unsatisfactory, and improvement is urgently needed.
Disclosure of Invention
To solve these problems, the invention provides an emotion recognition method and device. The emotion recognition method has a high recognition rate, the expression-discrimination feature model is complete and reliable, and the emotion results obtained from the model are more accurate.
To achieve this technical purpose, the technical solution of the invention is as follows. An emotion recognition method comprises collecting and recognizing facial images of a subject, processing the resulting data, storing it in a database, establishing an expression-discrimination feature model, and obtaining an emotion output;
the data processing comprises partitioning the face into feature regions, extracting the change parameters of the feature parts in each region, establishing a personal ID, sending the parameters to the database, and then performing image preprocessing;
the database is built by checking the parameters and the preprocessed images against a historical database and then integrating a parameter library and an image library organized under personal ID directories;
the expression-discrimination feature model is obtained by training a convolutional neural network on this database.
Further, the data processing method comprises the following steps:
S1: collect and recognize a facial image of the subject; using face recognition, divide the image into at least five regions according to facial features, namely a forehead region, a nose-wing region, an eyebrow region, a perioral region, and a cheek region, and magnify each region separately for data analysis, where the data analysis covers parameters of forehead fine-line changes, nose-wing contraction, eyebrow spacing and trembling, perioral contour changes, and facial color changes;
S2: preprocess the collected image using image compression;
S3: establish a personal ID, send the parameters obtained from the data analysis in step S1 to the parameter library, and send the image preprocessed in step S2 to the image library.
Further, the data analysis in step S1 also includes analysis of blood pressure, pulse, and body temperature parameters.
Further, establishing the database comprises the following steps:
Step 1: using fuzzy logic and D-S (Dempster-Shafer) fusion, compare the parameters obtained from the data analysis and the preprocessed images against the parameter library and image library of the historical database to judge whether historical parameters or images already exist for the personal ID;
Step 2: if historical parameters or images for the personal ID exist in the historical database, update the personal ID to the historical personal ID and add the parameters and preprocessed images under the historical personal ID directory; if not, store the personal ID as a new personal ID directory and store the parameters and preprocessed images under it.
Further, the fuzzy logic performs the reference comparison based on weights assigned to the facial feature regions.
Further, the method for establishing the expression-discrimination feature model comprises the following steps:
First: train on the image library and parameter library under a personal ID directory to obtain a personal expression-discrimination feature model.
Second: train on the image library and parameter library as a whole to obtain a global expression-discrimination feature model.
Third: the personal expression-discrimination feature model combines the preprocessed images and parameters into an emotion feature index, fuses and compares it against the personal model to classify and grade the emotion, and outputs an emotion category and an emotion level.
Fourth: when the image library and parameter library under the personal ID directory contain insufficient empirical data, the global expression-discrimination feature model combines the preprocessed images and parameters into an emotion feature index, fuses and compares it against the global model to classify and grade the emotion, and outputs the emotion category and emotion level.
Furthermore, the expression-discrimination model is trained with the image library as the primary input and the parameter library as a reference.
Further, the personal and global expression-discrimination feature models continue deep learning as the image library and parameter library are updated.
An emotion recognition device comprises an acquisition unit, a data processing unit, and a database unit.
The acquisition unit captures images with a high-resolution camera.
The data processing unit comprises an image preprocessing module and a data analysis module, which respectively preprocess and analyze the facial images captured by the acquisition unit.
The database unit comprises a personal ID library, an image library, and a parameter library; the image library and parameter library are placed under the personal ID library directory, and the database unit is stored on a server.
The server stores the expression-discrimination feature model; parameters and images are fed to the model through an input module, and the emotion category and emotion level are output through an output module.
Further, the acquisition unit and the data processing unit exchange data through a short-range point-to-point communication module, and the output module sets the emotion category and emotion level presented by the interaction module.
The beneficial effects of the invention are as follows:
1) IDs are generated automatically for different users, so each user's images and parameters are stored under a personal ID by reference comparison with the historical database, which makes the device convenient for multiple users. When the same user uses the device again, no personal ID needs to be entered; the personal historical ID directory is matched automatically from the images and parameters. This greatly improves the user experience and ensures continuous accumulation of empirical data in the database.
2) The expression-discrimination feature model is trained per individual, and a global model is trained as well; when a specific user's empirical data are insufficient, the global model is used, which keeps emotion recognition stable.
3) The expression-discrimination model does not rely on image processing and fusion alone; it also incorporates parameters such as forehead fine-line changes, nose-wing contraction (reflecting respiratory rate), eyebrow spacing and trembling, perioral contour changes, facial color changes, blood pressure, pulse, and body temperature. These parameters are closely related to a person's emotional state, so they improve the completeness of the model. At the same time, partitioning the facial features allows the subtle parameters of micro-expressions to be captured accurately, which improves emotion recognition.
In conclusion, the emotion recognition method of the invention has a high recognition rate, the expression-discrimination feature model is complete and reliable, and the emotion results obtained from the model are more accurate.
Drawings
FIG. 1 is a schematic diagram of a database structure of the present invention;
FIG. 2 is a block diagram of the emotion recognition device of the present invention.
Detailed Description
The technical solution of the present invention will be clearly and completely described below.
An emotion recognition method comprises collecting and recognizing facial images of a subject, processing the resulting data, storing it in a database, establishing an expression-discrimination feature model, and obtaining an emotion output.
The data processing comprises partitioning the face into feature regions, extracting the change parameters of the feature parts in each region, establishing a personal ID, sending the parameters to the database, and then performing image preprocessing. Performing this data processing in the data processing unit greatly reduces the computational load on the server and increases its processing speed.
As shown in fig. 1, the database is built by checking the parameters and the preprocessed images against a historical database and then integrating a parameter library and an image library organized under personal ID directories. Organizing the database by personal ID facilitates data comparison and model building.
The expression-discrimination feature model is obtained by training a convolutional neural network on this database.
Further, the data processing method comprises the following steps (a code sketch follows step S3):
S1: Collect and recognize a facial image of the subject; using face recognition, divide the image into at least five regions according to facial features, namely a forehead region, a nose-wing region, an eyebrow region, a perioral region, and a cheek region, and magnify each region separately for data analysis. The data analysis covers parameters of forehead fine-line changes, nose-wing contraction, eyebrow spacing and trembling, perioral contour changes, and facial color changes. For example, a smooth forehead with few fine lines, mouth corners pulled back and up, raised cheeks, and similar movements of the larger facial muscles express 'happiness'; raised eyebrows, an open mouth, and relaxed cheeks (a dropped jaw) express 'surprise'; furrowed brows and flared nostrils with the mouth opened square or pressed shut express 'anger'. Processing the data by region greatly reduces the computational load on the server and increases its processing speed.
S2: Preprocess the collected image using image compression. The purpose of the preprocessing is to compress the image, save storage space in the database, and allow the image to be transmitted to the server quickly.
S3: Establish a personal ID, send the parameters obtained from the data analysis in step S1 to the parameter library, and send the image preprocessed in step S2 to the image library.
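The following is a minimal sketch of steps S1 and S2, assuming OpenCV for face detection and JPEG compression. The region boundaries, the magnification factor, and the parameters computed here (mean color as a face-color proxy, edge density as a fine-line/contour proxy) are illustrative assumptions, not the exact definitions used by the patent.

import cv2
import numpy as np

_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Rough region boxes as fractions (x0, y0, x1, y1) of the detected face box (assumed values).
REGIONS = {
    "forehead":  (0.15, 0.00, 0.85, 0.25),
    "eyebrow":   (0.10, 0.25, 0.90, 0.40),
    "nose_wing": (0.30, 0.40, 0.70, 0.65),
    "perioral":  (0.25, 0.65, 0.75, 0.90),
    "cheek":     (0.00, 0.40, 0.30, 0.80),
}

def analyze_face(frame_bgr):
    """S1: detect the face, crop and magnify each region, and compute
    simple change-related parameters (mean color, edge density)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face = frame_bgr[y:y + h, x:x + w]
    params = {}
    for name, (fx0, fy0, fx1, fy1) in REGIONS.items():
        roi = face[int(fy0 * h):int(fy1 * h), int(fx0 * w):int(fx1 * w)]
        roi = cv2.resize(roi, None, fx=4, fy=4)                    # magnify the region
        edges = cv2.Canny(cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY), 50, 150)
        params[name + "_color"] = roi.mean(axis=(0, 1)).tolist()   # face-color proxy
        params[name + "_texture"] = float(edges.mean())            # fine-line / contour proxy
    return face, params

def compress_image(face_bgr, quality=70):
    """S2: JPEG-compress the face image before sending it to the image library."""
    ok, buf = cv2.imencode(".jpg", face_bgr, [cv2.IMWRITE_JPEG_QUALITY, quality])
    return buf.tobytes() if ok else None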
Further, the data analysis in step S1 also includes analysis of blood pressure, pulse, and body temperature parameters. These parameters are closely related to a person's emotional state, and adding them to the expression-discrimination model greatly improves its accuracy.
Further, establishing the database comprises the following steps:
Step 1: using fuzzy logic and D-S (Dempster-Shafer) fusion, compare the parameters obtained from the data analysis and the preprocessed images against the parameter library and image library of the historical database to judge whether historical parameters or images already exist for the personal ID;
Step 2: if historical parameters or images for the personal ID exist in the historical database, update the personal ID to the historical personal ID and add the parameters and preprocessed images under the historical personal ID directory; if not, store the personal ID as a new personal ID directory and store the parameters and preprocessed images under it. This reference-comparison scheme automatically generates IDs for different users and stores each user's images and parameters under a personal ID, which makes the device convenient for multiple users; when the same user returns, no personal ID needs to be entered, because the personal historical ID directory is matched automatically from the images and parameters.
Further, the fuzzy logic performs the reference comparison based on weights assigned to the facial feature regions. The fuzzy-logic computation improves the efficiency of creating and recognizing IDs.
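A minimal sketch of how the weighted fuzzy comparison and D-S combination could decide whether incoming data match a stored personal ID. The region weights, the exponential membership function, the evidence discounting, and the decision threshold are illustrative assumptions, not values taken from the patent.

import numpy as np

REGION_WEIGHTS = {"forehead": 0.15, "eyebrow": 0.25, "nose_wing": 0.15,
                  "perioral": 0.25, "cheek": 0.20}   # assumed per-region weights

def fuzzy_similarity(new_params, stored_params):
    """Weighted fuzzy membership in [0, 1]; 1 means identical region parameters."""
    score = 0.0
    for region, weight in REGION_WEIGHTS.items():
        a = np.asarray(new_params[region + "_color"])
        b = np.asarray(stored_params[region + "_color"])
        score += weight * float(np.exp(-np.linalg.norm(a - b) / 50.0))
    return score

def dempster_combine(m1, m2):
    """Dempster's rule over {match, no_match} plus 'unknown' (the full frame)."""
    labels = ("match", "no_match", "unknown")
    combined = {label: 0.0 for label in labels}
    conflict = 0.0
    for a in labels:
        for b in labels:
            mass = m1[a] * m2[b]
            if a == b or "unknown" in (a, b):
                combined[b if a == "unknown" else a] += mass
            else:
                conflict += mass                      # contradictory singleton evidence
    return {label: value / (1.0 - conflict) for label, value in combined.items()}

def matches_personal_id(new_params, stored_params, image_similarity, threshold=0.6):
    """Fuse parameter-library and image-library evidence and decide the ID match."""
    p = fuzzy_similarity(new_params, stored_params)
    m_param = {"match": 0.9 * p, "no_match": 0.9 * (1.0 - p), "unknown": 0.1}
    m_image = {"match": 0.9 * image_similarity,
               "no_match": 0.9 * (1.0 - image_similarity), "unknown": 0.1}
    return dempster_combine(m_param, m_image)["match"] >= threshold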
Further, the method for establishing the expression-discrimination feature model comprises the following steps:
First: train on the image library and parameter library under a personal ID directory to obtain a personal expression-discrimination feature model. Building the model for the individual improves its reliability, and the emotion output becomes more accurate as the personal database accumulates more data.
Second: train on the image library and parameter library as a whole to obtain a global expression-discrimination feature model.
Third: the personal expression-discrimination feature model combines the preprocessed images and parameters into an emotion feature index, fuses and compares it against the personal model to classify and grade the emotion, and outputs an emotion category and an emotion level.
Fourth: when the image library and parameter library under the personal ID directory contain insufficient empirical data, the global expression-discrimination feature model combines the preprocessed images and parameters into an emotion feature index, fuses and compares it against the global model to classify and grade the emotion, and outputs the emotion category and emotion level. When a specific user's empirical data are insufficient, the global model keeps emotion recognition stable and accurate (a selection sketch follows).
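A minimal sketch of the personal-versus-global model selection in the third and fourth steps. The sample-count threshold, the db.count_samples() call, and the model interface (build_feature_index(), predict()) are illustrative assumptions about surrounding code, not APIs defined by the patent.

MIN_SAMPLES = 200  # assumed threshold for "sufficient" personal empirical data

def recognize_emotion(personal_id, image, params, db, personal_models, global_model):
    """Return (emotion_category, emotion_level) using the personal model when
    enough data exist under the personal ID directory, otherwise the global model."""
    n_personal = db.count_samples(personal_id)    # images + parameters stored under the ID
    if personal_id in personal_models and n_personal >= MIN_SAMPLES:
        model = personal_models[personal_id]      # personal expression-discrimination model
    else:
        model = global_model                      # global model as the fallback
    feature_index = model.build_feature_index(image, params)   # fuse image + parameters
    return model.predict(feature_index)                         # e.g. ("anger", 3)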
Furthermore, the expression-discrimination model is trained with the image library as the primary input and the parameter library as a reference. The model does not rely on image processing and fusion alone; it also incorporates parameters such as forehead fine-line changes, nose-wing contraction (reflecting respiratory rate), eyebrow spacing and trembling, perioral contour changes, facial color changes, blood pressure, pulse, and body temperature, all of which are closely related to a person's emotional state, so the completeness of the model is improved. Meanwhile, as an embodiment of the invention, the expression-discrimination model is further improved with a deep micro-expression technique: the facial image is magnified by a factor of at least 1000, and deep learning mines the relationship between factors such as local facial trembling and subtle facial color changes and the emotion categories. Mining these subtle emotion-induced facial changes greatly improves the accuracy of emotion recognition.
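A minimal PyTorch sketch of a model that takes the image as the primary input and the analyzed parameters as a reference input, with separate heads for the emotion category and the emotion level. The layer sizes, the numbers of parameters, categories, and levels, and the concatenation-based fusion are illustrative assumptions; the patent itself only specifies training a convolutional neural network on the image and parameter libraries.

import torch
import torch.nn as nn

class ExpressionDiscriminationModel(nn.Module):
    def __init__(self, num_params=16, num_categories=7, num_levels=5):
        super().__init__()
        self.image_branch = nn.Sequential(        # primary input: face image
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, 128), nn.ReLU())
        self.param_branch = nn.Sequential(        # reference input: region + physiological parameters
            nn.Linear(num_params, 32), nn.ReLU())
        self.category_head = nn.Linear(128 + 32, num_categories)  # emotion category
        self.level_head = nn.Linear(128 + 32, num_levels)         # emotion level

    def forward(self, image, params):
        fused = torch.cat([self.image_branch(image), self.param_branch(params)], dim=1)
        return self.category_head(fused), self.level_head(fused)

# Example: one 64x64 face crop and 16 parameters per sample.
model = ExpressionDiscriminationModel()
cat_logits, lvl_logits = model(torch.randn(1, 3, 64, 64), torch.randn(1, 16))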
Further, the personal and global expression-discrimination feature models continue deep learning as the image library and parameter library are updated. Retraining the models as the database is updated continuously improves their reliability.
As shown in fig. 2, an emotion recognition device comprises an acquisition unit, a data processing unit, and a database unit.
The acquisition unit captures images with a high-resolution camera.
The data processing unit comprises an image preprocessing module and a data analysis module, which respectively preprocess and analyze the facial images captured by the acquisition unit.
The database unit comprises a personal ID library, an image library, and a parameter library; the image library and parameter library are placed under the personal ID library directory, and the database unit is stored on a server.
The server stores the expression-discrimination feature model; parameters and images are fed to the model through an input module, and the emotion category and emotion level are output through an output module.
Further, the acquisition unit and the data processing unit exchange data through a short-range point-to-point communication module, and the output module sets the emotion category and emotion level presented by the interaction module.
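The following is a minimal sketch of the device data flow, wiring together the hypothetical helpers sketched above (analyze_face, compress_image, recognize_emotion). The db object and its find_or_create_id() and store() methods are illustrative assumptions, not interfaces defined by the patent.

def process_frame(frame, db, personal_models, global_model):
    """Acquisition unit -> data processing unit -> database unit -> model -> (category, level)."""
    result = analyze_face(frame)                        # acquisition + data analysis
    if result is None:
        return None                                     # no face detected in this frame
    face, params = result
    image_bytes = compress_image(face)                  # image preprocessing (compression)
    personal_id = db.find_or_create_id(params, image_bytes)   # fuzzy / D-S reference comparison
    db.store(personal_id, image_bytes, params)          # image library + parameter library
    return recognize_emotion(personal_id, face, params,
                             db, personal_models, global_model)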
As an embodiment of the invention, in an emotion system applied in particular to infant or patient care, the image acquisition unit may be installed around the face of the infant or patient, while the data processing unit and the interaction module are integrated into a wearable device such as a bracelet. Communication modules attached to the image acquisition unit and to the data processing unit on the bracelet form a short-range point-to-point link, and an internet module attached to the data processing unit communicates with the remote terminal server unit. Each module on the bracelet is powered by a battery, and an integrated rectifier-filter and voltage-regulation circuit charges the battery. A blood pressure sensor, a pulse sensor, and a body temperature sensor are integrated on the bracelet.
The infant or patient wears the bracelet. The image acquisition unit located around the face captures facial images of the subject with its camera and sends them over the point-to-point link to the data processing unit for image compression and image recognition; the recognized image is partitioned according to the facial features and its facial-feature change parameters are analyzed. The face is divided into at least five regions, namely forehead, nose-wing, eyebrow, perioral, and cheek regions, and each region is magnified by the deep micro-expression technique for data analysis; mining the emotion-induced local trembling, subtle facial color changes, and similar cues with this technique greatly improves the accuracy of emotion recognition.
The data analysis is an analysis of parameters of forehead fine-line changes, nose-wing contraction, eyebrow spacing and trembling, perioral contour changes, and facial color changes.
The bracelet also carries a blood pressure sensor, a pulse sensor, and a body temperature sensor, which are connected to the database unit and the expression-discrimination feature model. The model is trained with the image library as the primary input and the parameter library as a reference; it does not rely on image processing and fusion alone, but also incorporates forehead fine-line changes, nose-wing contraction (reflecting respiratory rate), eyebrow spacing and trembling, perioral contour changes, facial color changes, blood pressure, pulse, and body temperature, parameters closely related to the emotional state, which improves the completeness of the model.
Meanwhile, the bracelet judges whether the blood pressure, pulse, body temperature, and facial parameters exceed set thresholds; if so, it issues a voice prompt to the parents or medical staff and, if necessary, sends them a text message or phone call. The preprocessed image and the analyzed data are sent over the internet (for example WiFi or LTE) to the input module of the expression-discrimination feature model on the remote terminal server unit. The model combines the image preprocessed by the image preprocessing unit with the parameters analyzed by the data analysis unit into an emotion feature index, fuses and compares it against the model to classify and grade the emotion, outputs the emotion category and emotion level through the output module, and feeds them directly back to the interaction module. The working mode of the interaction module is determined by the category and level, for example representing the emotion category by the number of vibrations and the emotion level by the vibration strength, or giving a voice prompt directly. This portable emotion recognition and monitoring device helps parents understand an infant's health and emotions and helps medical staff understand a patient's, so it is practical in infant care, patient care, and similar fields. Moreover, in such applications the recognized subject is usually fixed, which makes it easy to build up image data and parameter data under the personal ID.
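A minimal sketch of the bracelet-side behavior described above: threshold alerts on the physiological readings and feedback of the (category, level) result through vibration. The threshold values, the category-to-vibration-count table, and the vibrate()/alert() callbacks are illustrative assumptions about the bracelet firmware interface.

THRESHOLDS = {"blood_pressure": 140, "pulse": 120, "body_temperature": 38.0}  # assumed limits
CATEGORY_INDEX = {"happiness": 1, "surprise": 2, "anger": 3}  # vibration count per category (assumed)

def check_vitals(readings, alert):
    """Prompt caregivers when any physiological reading exceeds its threshold."""
    for name, limit in THRESHOLDS.items():
        if readings.get(name, 0) > limit:
            alert(f"{name} above threshold: {readings[name]}")

def feedback_emotion(category, level, vibrate):
    """Represent the emotion category by vibration count and the level by strength."""
    count = CATEGORY_INDEX.get(category, 1)
    strength = min(1.0, level / 5.0)          # assumed 5 emotion levels
    for _ in range(count):
        vibrate(strength)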
The image library and the parameter library are an integration of the image data and parameter data organized by personal ID. The expression-discrimination feature model comprises a personal expression-discrimination feature model trained on the image and parameter data under a personal ID and a global expression-discrimination feature model trained on the whole image library and parameter library. The model is trained with the images preprocessed by the image preprocessing unit as the primary input and the parameters produced by the data analysis unit as a reference; both models continue deep learning as the images and parameters are updated, which ensures their reliability. The personal model relies on the empirical data accumulated for a specific user and therefore recognizes emotion more accurately; when a specific user's empirical data are insufficient, the global model keeps emotion recognition stable and accurate.
As another embodiment of the invention, in an emotion system applied in particular to security inspection or other posts with high mental stress, the acquisition unit may be installed around the worker, while the data processing unit and the interaction module are integrated on a bracelet. Communication modules attached to the acquisition unit and to the data processing unit on the bracelet form a short-range point-to-point link, and an internet module attached to the data processing unit communicates with the remote terminal server unit. Each module on the bracelet is powered by a battery, and a rectifier-filter and voltage-regulation circuit integrated on the bracelet charges the battery. A blood pressure sensor, a pulse sensor, and a body temperature sensor are integrated on the bracelet.
The worker wears the bracelet. The acquisition unit located around the face captures facial images of the subject with its camera and sends them over the point-to-point link to the data processing unit for image compression and image recognition; the recognized image is partitioned according to the facial features and its facial-feature change parameters are analyzed.
The bracelet also carries a blood pressure sensor, a pulse sensor, and a body temperature sensor, which are connected to the database unit and the expression-discrimination feature model. The model is trained with the image library as the primary input and the parameter library as a reference; it does not rely on image processing and fusion alone, but also incorporates parameters such as forehead fine-line changes, nose-wing contraction (reflecting respiratory rate), eyebrow spacing and trembling, perioral contour changes, facial color changes, blood pressure, pulse, and body temperature. For example, a smooth forehead with few fine lines, mouth corners pulled back and up, raised cheeks, and similar movements of the larger facial muscles express 'happiness'; raised eyebrows, an open mouth, and relaxed cheeks (a dropped jaw) express 'surprise'; furrowed brows and flared nostrils with the mouth opened square or pressed shut express 'anger'. These parameters are closely related to the person's emotional state and improve the completeness of the expression-discrimination model.
The bracelet judges whether the blood pressure, pulse, body temperature, and facial-color parameters exceed set thresholds; if so, it issues a voice prompt to the manager and, if necessary, sends the manager a text message or phone call and forcibly suspends the worker's duties. The compressed images and the analyzed data are sent over the internet (for example WiFi or LTE) to the input module of the remote terminal server unit. The expression-discrimination feature model combines the image preprocessed by the image preprocessing unit with the parameters analyzed by the data analysis unit into an emotion feature index, fuses and compares it against the model to classify and grade the emotion, outputs the emotion category and emotion level through the output module, and feeds them back to the bracelet over the internet. The working mode of the interaction module is determined by the category and level, for example representing the emotion category by the number of vibrations and the emotion level by the vibration strength, or giving a voice prompt directly; the manager can also obtain the interaction module's working records, which facilitates psychological intervention and work management for the workers. In such applications the recognized subject is usually fixed, which makes it easy to build up image data and parameter data under the personal ID.
The image library and the parameter library are an integration of the image data and parameter data organized by personal ID. The expression-discrimination feature model comprises a personal expression-discrimination feature model trained on the image and parameter data under a personal ID and a global expression-discrimination feature model trained on the whole image library and parameter library. The model is trained with the images preprocessed by the image preprocessing unit as the primary input and the parameters produced by the data analysis unit as a reference; both models continue deep learning as the images and parameters are updated, which ensures their reliability. The personal model relies on the empirical data accumulated for a specific user and therefore recognizes emotion more accurately; when a specific user's empirical data are insufficient, the global model keeps emotion recognition stable and accurate.
It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the inventive concept, and all such changes and modifications fall within the spirit and scope of the invention.

Claims (8)

1. An emotion recognition method, comprising the steps of:
collecting a facial image of a recognition subject;
processing the facial image,
the processing comprising extracting change parameters from the feature regions of the facial image and, after establishing a personal ID, sending the parameters to a database;
establishing the database; and
obtaining an emotion output by training an expression-discrimination feature model;
wherein the facial image processing step comprises the following steps:
S1: collecting and recognizing a facial image of the subject; using face recognition, dividing the image into at least five regions according to facial features, namely a forehead region, a nose-wing region, an eyebrow region, a perioral region, and a cheek region, and magnifying each region separately for data analysis, wherein the data analysis is an analysis of parameters of forehead fine-line changes, nose-wing contraction, eyebrow spacing and trembling, perioral contour changes, and facial color changes;
S2: preprocessing the collected image using image compression;
S3: establishing a personal ID, sending the parameters obtained from the data analysis in step S1 to a parameter library in the database, and sending the image preprocessed in step S2 to an image library in the database; and
wherein the step of obtaining the emotion output by training the expression-discrimination feature model comprises:
first: training on the image library and parameter library under a personal ID directory to obtain a personal expression-discrimination feature model;
second: training on the image library and parameter library as a whole to obtain a global expression-discrimination feature model;
third: combining, by the personal expression-discrimination feature model, the individual's preprocessed images and parameters into an emotion feature index, fusing and comparing it against the personal expression-discrimination feature model to classify and grade the emotion, and outputting an emotion category and an emotion level;
fourth: when the image library and parameter library under the personal ID directory contain insufficient empirical data, combining, by the global expression-discrimination feature model, the preprocessed images and parameters of all persons into an emotion feature index, fusing and comparing it against the global expression-discrimination feature model to classify and grade the emotion, and outputting the emotion category and emotion level.
2. The emotion recognition method of claim 1, wherein the data analysis in step S1 further includes analysis of blood pressure, pulse, and body temperature parameters.
3. The emotion recognition method of claim 1, wherein the database establishing step comprises:
step 1: using fuzzy logic and D-S fusion, comparing the parameters obtained from the data analysis and the preprocessed images against the parameter library and image library of a historical database to judge whether historical parameters or images already exist for the personal ID;
step 2: when historical parameters or images for the personal ID exist in the historical database, updating the personal ID to the historical personal ID and adding the parameters and preprocessed images under the historical personal ID directory; when no historical parameters or images for the personal ID exist in the historical database, storing the personal ID as a new personal ID directory and storing the parameters and preprocessed images under the new personal ID directory.
4. The emotion recognition method of claim 3, wherein the fuzzy logic performs the reference comparison based on weights assigned to the facial feature regions.
5. The emotion recognition method of claim 1, wherein the expression-discrimination feature model is trained with the image library as the primary input and the parameter library as a reference.
6. The emotion recognition method of claim 1, wherein the personal expression-discrimination feature model and the global expression-discrimination feature model continue deep learning as the image library and parameter library are updated.
7. An emotion recognition device, comprising:
an acquisition unit configured to collect a facial image of a recognition subject;
a data processing unit configured to perform image preprocessing and parameter analysis;
a database unit comprising:
a personal ID library for storing personal IDs,
an image library for storing the preprocessed images, and
a parameter library for storing parameters; and
a server comprising:
an input module that communicates with the data processing unit and feeds the processed image and the analyzed parameters to the expression-discrimination feature model as input,
the expression-discrimination feature model, obtained by training on the database unit and used to recognize emotion, and
an output module that outputs the result of the expression-discrimination feature model;
wherein the image preprocessing and parameter analysis of the data processing unit comprise the following steps:
S1: collecting and recognizing a facial image of the subject; using face recognition, dividing the image into at least five regions according to facial features, namely a forehead region, a nose-wing region, an eyebrow region, a perioral region, and a cheek region, and magnifying each region separately for data analysis, wherein the data analysis is an analysis of parameters of forehead fine-line changes, nose-wing contraction, eyebrow spacing and trembling, perioral contour changes, and facial color changes;
S2: preprocessing the collected image using image compression;
S3: establishing a personal ID, sending the parameters obtained from the data analysis in step S1 to the parameter library in the database, and sending the image preprocessed in step S2 to the image library in the database; and
wherein the step of obtaining the emotion output by the expression-discrimination feature model comprises:
first: training on the image library and parameter library under a personal ID directory to obtain a personal expression-discrimination feature model;
second: training on the image library and parameter library as a whole to obtain a global expression-discrimination feature model;
third: combining, by the personal expression-discrimination feature model, the individual's preprocessed images and parameters into an emotion feature index, fusing and comparing it against the personal expression-discrimination feature model to classify and grade the emotion, and outputting an emotion category and an emotion level;
fourth: when the image library and parameter library under the personal ID directory contain insufficient empirical data, combining, by the global expression-discrimination feature model, the preprocessed images and parameters of all persons into an emotion feature index, fusing and comparing it against the global expression-discrimination feature model to classify and grade the emotion, and outputting the emotion category and emotion level.
8. The emotion recognition device of claim 7, wherein the acquisition unit and the data processing unit exchange data through a short-range point-to-point communication module, and the output module sets the emotion category and emotion level presented by an interaction module.
Application CN201710837855.3A, priority and filing date 2017-09-19; Emotion recognition method and device; granted as CN108960022B (Active)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201710837855.3A (CN108960022B) | 2017-09-19 | 2017-09-19 | Emotion recognition method and device


Publications (2)

Publication Number | Publication Date
CN108960022A (en) | 2018-12-07
CN108960022B (en) | 2021-09-07

Family

ID=64494747

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201710837855.3A (Active; CN108960022B) | Emotion recognition method and device | 2017-09-19 | 2017-09-19

Country Status (1)

Country Link
CN (1) CN108960022B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109830280A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Psychological aided analysis method, device, computer equipment and storage medium
CN109815817A (en) * 2018-12-24 2019-05-28 北京新能源汽车股份有限公司 Driver emotion recognition method and music pushing method
CN111582896A (en) * 2019-02-15 2020-08-25 普罗文化股份有限公司 Data identification definition and superposition system
CN110222597B (en) * 2019-05-21 2023-09-22 平安科技(深圳)有限公司 Method and device for adjusting screen display based on micro-expressions
CN110288551B (en) * 2019-06-29 2021-11-09 北京字节跳动网络技术有限公司 Video beautifying method and device and electronic equipment
CN111717219A (en) * 2020-06-03 2020-09-29 智车优行科技(上海)有限公司 Method and system for converting skylight pattern and automobile
CN111956243A (en) * 2020-08-20 2020-11-20 大连理工大学 Stress assessment system for counter
CN113288062A (en) * 2021-05-28 2021-08-24 深圳中科健安科技有限公司 Multi-dimensional staff emotion analysis method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101609475A (en) * 2008-06-16 2009-12-23 佳能株式会社 Personal authentication apparatus and authenticating method
KR20100132592A (en) * 2009-06-10 2010-12-20 연세대학교 산학협력단 Individual optimization system of recognizing emotion apparatus, method thereof
CN103530912A (en) * 2013-09-27 2014-01-22 深圳市迈瑞思智能技术有限公司 Attendance recording system having emotion identification function, and method thereof
CN103871200A (en) * 2012-12-14 2014-06-18 深圳市赛格导航科技股份有限公司 Safety warning system and method used for automobile driving
CN106650621A (en) * 2016-11-18 2017-05-10 广东技术师范学院 Deep learning-based emotion recognition method and system


Also Published As

Publication number Publication date
CN108960022A (en) 2018-12-07

Similar Documents

Publication Publication Date Title
CN108960022B (en) Emotion recognition method and device
Ferrari et al. On the personalization of classification models for human activity recognition
CN109993093B (en) Road rage monitoring method, system, equipment and medium based on facial and respiratory characteristics
CN110458101B (en) Criminal personnel sign monitoring method and equipment based on combination of video and equipment
JP4401079B2 (en) Subject behavior analysis
Tong et al. Facial action unit recognition by exploiting their dynamic and semantic relationships
CN111563480B (en) Conflict behavior detection method, device, computer equipment and storage medium
KR101689021B1 (en) System for determining psychological state using sensing device and method thereof
CN111353366A (en) Emotion detection method and device and electronic equipment
Shu et al. Emotion sensing for mobile computing
Kumar et al. Neuro-phone: An assistive framework to operate Smartphone using EEG signals
CN110755091A (en) Personal mental health monitoring system and method
CN113080855A (en) Facial pain expression recognition method and system based on depth information
CN111667599A (en) Face recognition card punching system and method
CN108960023A (en) Portable emotion recognition device
KR102285482B1 (en) Method and apparatus for providing content based on machine learning analysis of biometric information
CN116570246A (en) Epileptic monitoring and remote alarm system
CN110148234A (en) Campus brush face picks exchange method, storage medium and system
KR101736403B1 (en) Recognition of basic emotion in facial expression using implicit synchronization of facial micro-movements
CN115148336A (en) AI-recognition-assisted system for evaluating the treatment effect of patients with psychological disorders
CN113921098A (en) Medical service evaluation method and system
CN114140849A (en) Electric power infrastructure field personnel state management method and system based on expression recognition
CN209734011U (en) Non-contact human body state monitoring system
CN115210754A (en) Accessibility determination device, accessibility determination method, and program
Xing et al. EVAL cane: Nonintrusive monitoring platform with a novel gait-based user-identification scheme

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant