CN110705656A - Facial action recognition method based on EEG sensor - Google Patents

Facial action recognition method based on EEG sensor

Info

Publication number
CN110705656A
CN110705656A (application CN201911095547.3A)
Authority
CN
China
Prior art keywords
support vector machine
EEG sensor
peak
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911095547.3A
Other languages
Chinese (zh)
Inventor
王众
单东升
梅剑峰
李明
李锦瑭
周俊宇
章学良
刘亚群
王艳萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 14 Research Institute
Original Assignee
CETC 14 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 14 Research Institute filed Critical CETC 14 Research Institute
Priority to CN201911095547.3A
Publication of CN110705656A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/389Electromyography [EMG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/398Electrooculography [EOG], e.g. detecting nystagmus; Electroretinography [ERG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Psychiatry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Physiology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Signal Processing (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Ophthalmology & Optometry (AREA)
  • Psychology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a facial action recognition method based on an EEG sensor, belonging to the technical field of facial action recognition. An EEG sensor worn on the head collects the EMG signals that are usually filtered out as artifact, and a Support Vector Machine (SVM) recognizes and classifies facial actions from them in real time. Owing to the high-precision signal acquisition capability of the EEG sensor, the invention achieves high-accuracy recognition and classification of facial actions even under small-sample conditions.

Description

Facial action recognition method based on EEG sensor
Technical Field
The invention belongs to the technical field of facial motion recognition, and particularly relates to a facial motion recognition method based on an EEG (electroencephalogram) sensor.
Background
In research on human emotional intelligence, the study of facial actions is extremely important for understanding the intent behind human emotional expression. Traditional facial action recognition is based on visible-light image recognition, which is widely applied and supported by mature software algorithms; however, because the human face has a uniquely flexible structure and muscle tissue, expressions are difficult to model accurately, and recognition based on a depth camera adds only the depth information of the image. Compared with these two schemes, recognition based on electromyography (EMG) sensors improves accuracy considerably, but the EMG sensors must be worn on the face, which gives the wearer a poor experience.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a facial action recognition method based on an electroencephalogram (EEG) sensor. An EEG sensor worn on the head collects the EMG signals that are usually filtered out as artifact, and a Support Vector Machine (SVM) performs real-time pattern recognition and classification of facial actions; owing to the high-precision signal acquisition capability of the EEG sensor, the method achieves high-accuracy recognition and classification of facial actions under small-sample conditions.
Specifically, the invention adopts the following technical scheme: an EEG sensor collects EMG signals, which are preprocessed to obtain sample data and data to be classified;
the sample data are used to train an electro-oculogram classification support vector machine and an electromyography support vector machine, realizing real-time pattern recognition and classification of facial actions;
and the trained electro-oculogram classification support vector machine and electromyography support vector machine perform facial action recognition on the data to be classified.
The above solution is further characterized in that the EEG sensor collects EMG signals using electrodes of the frontal areas F3 and F4 and electrodes of the central areas C3 and C4.
The above solution is further characterized in that the sample data comprise 25 trials in total, divided into 5 groups of 5 trial samples, corresponding to the five facial actions of blinking the left eye, blinking the right eye, biting the left gum, biting the right gum, and biting on both sides together.
The above solution is further characterized in that the electro-oculogram classification support vector machine and the electromyography support vector machine are Gaussian-kernel support vector machines that take the peak-to-peak values of the electro-oculogram signal and the masseter electromyography signal as the main features; the electro-oculogram signal is divided into two states by two features and the electromyography signal into three states by three features, together forming five facial action classes.
The above solution is further characterized in that the two features of the electro-oculogram signal are the peak-to-peak value of channel F3 and the peak-to-peak value of channel F4.
The above solution is further characterized in that the three features of the electromyography signal are the peak-to-peak value of channel C3, the peak-to-peak value of channel C4, and the correlation consistency of channels C3 and C4.
The technical scheme is further characterized in that the error penalty constant C of the Gaussian kernel support vector machine is 100, and the width sigma of the Gaussian kernel is 1.0.
The invention has the following beneficial effects: addressing the defects of traditional facial action recognition methods, the invention exploits the high-precision signal acquisition capability of the EEG sensor to improve recognition accuracy while avoiding the poor wearing experience of collecting EMG signals with a facial EMG sensor, and it efficiently realizes real-time facial action recognition and classification under small-sample conditions. Compared with other expression recognition schemes, it improves recognition accuracy and keeps the subject more comfortable during signal acquisition. The invention is therefore of value to research on human emotional intelligence.
Drawings
Fig. 1 is a schematic diagram of EEG signals under different facial movements.
FIG. 2 is a diagram illustrating the results of SVM-based facial motion recognition and classification.
Fig. 3 is a schematic diagram of the wearing position of the EEG sensor.
Detailed Description
The present invention will be described in further detail with reference to the following examples and the accompanying drawings.
Example 1:
In this embodiment, a Gaussian-kernel support vector machine classifies the features of facial action signals: the electro-oculogram and electromyography signals are divided among five facial actions to be recognized. Based on the high-precision signal acquisition capability of the EEG sensor, the features of the EMG signals generated by facial actions are extracted and classified, so that the actions can be recognized effectively. The wearer performs the corresponding facial actions according to on-screen prompts. The specific steps are as follows:
1) EEG signal acquisition
This embodiment collects, through an EEG sensor worn on the head, the EMG signals that would normally be filtered out as artifact. The EEG acquisition and analysis equipment comprises a recording-system amplifier, a 16-channel electrode cap with fixed electrode placement, and active electrodes. Active electrodes are a newer electrode type with higher signal-to-noise ratio, input impedance, and common-mode rejection ratio than traditional electrodes, and they effectively suppress motion noise. As shown in fig. 3, electrodes are placed at the frontal sites F (F3, F4) and the central sites C (C3, C4), with Cz as the reference electrode REF; the scalp impedance is kept below 200 kΩ and the sampling rate is set to 1 kHz.
2) Data acquisition and preprocessing
Data acquisition is performed in a quiet shielded room. The wearer sits in a comfortable chair about 70 cm from the screen and performs the 5 facial actions prompted on screen: blinking the left eye, blinking the right eye, biting the left gum, biting the right gum, and biting on both sides together. The EEG sensor collects the EMG signals generated by the wearer's facial actions and applies preprocessing such as notch filtering, smoothing, and normalization to remove interference such as power-line noise and baseline drift.
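The preprocessing chain described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the filter parameters, window lengths, and the `preprocess` helper name are our assumptions, since the text names only notch filtering, smoothing, and normalization as the steps.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

FS = 1000  # Hz, the sampling rate given in the acquisition setup above

def preprocess(x, fs=FS, mains_hz=50.0):
    """Hypothetical single-channel preprocessing: power-frequency notch,
    baseline-drift removal, smoothing, and normalization."""
    b, a = iirnotch(mains_hz, Q=30.0, fs=fs)           # notch out power-line interference
    x = filtfilt(b, a, x)
    win = fs // 2
    baseline = np.convolve(x, np.ones(win) / win, mode="same")
    x = x - baseline                                   # remove slow baseline drift
    x = np.convolve(x, np.ones(5) / 5.0, mode="same")  # light moving-average smoothing
    return (x - x.mean()) / (x.std() + 1e-12)          # z-score normalization
```

Any equivalent filter design would serve; only the three named operations are taken from the text.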
A Support Vector Machine (SVM) needs to obtain sample data for training first for feature vector learning. In this embodiment, the acquisition of the sample data includes the following 3 steps:
Step 1: at t = 0-1 s, a facial action prompt appears on the screen (the 5 action prompts appear in sequence).
Step 2: at t = 1-4 s, the computer emits a short beep to signal the start of the trial; the wearer performs the prompted facial action and then immediately keeps still.
Step 3: at t = 4-7 s, the screen prompts the wearer to rest for 3 s.
Each of the five facial actions is performed in 5 trials, for 25 trials in total, yielding 5 groups of 5 trial samples for subsequent processing.
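The trial timing above (1 s cue, 3 s action, 3 s rest, 25 trials) implies a simple epoching scheme. The sketch below assumes trials are recorded back-to-back and that the five action prompts cycle in a fixed order; neither detail is stated in the patent.

```python
import numpy as np

FS = 1000                    # Hz, per the acquisition settings
TRIAL_S = 7                  # 1 s cue + 3 s action + 3 s rest
ACTION = (1 * FS, 4 * FS)    # the t = 1-4 s action window within each trial
ACTIONS = ["blink_left", "blink_right", "bite_left", "bite_right", "bite_both"]

def epoch_trials(recording, n_trials=25):
    """Slice a continuous (n_channels, n_samples) recording into the
    3 s action windows of consecutive 7 s trials (assumed gap-free)."""
    epochs = []
    for i in range(n_trials):
        start = i * TRIAL_S * FS
        epochs.append(recording[:, start + ACTION[0]:start + ACTION[1]])
    return np.stack(epochs)  # shape (n_trials, n_channels, 3 * FS)

# one label per trial; the prompt order is assumed to cycle through the five actions
labels = np.array([ACTIONS[i % 5] for i in range(25)])
```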
3) Feature extraction and pattern classification
The five facial action classes are formed from the 4 signal channels collected by the EEG equipment: two channels carry electro-oculogram signals, and the other two carry the electromyography signals generated when the facial muscles clench. Analysis of these signals shows that different facial expressions produce distinct amplitude characteristics in the corresponding acquisition channels. As shown in fig. 1, where the abscissa represents time and the ordinate represents signal amplitude, blinking the left eye raises the amplitude peak of channel F3, blinking the right eye raises that of channel F4, biting the left gum raises the peak produced by the left facial muscles in channel C3, and biting the right gum raises that of channel C4.
Because amplitude change in the EMG signal is the main characteristic of a muscle in motion, this embodiment selects the peak-to-peak value of the EMG signal in each channel as a feature and performs feature classification of the facial action signals with a Gaussian-kernel support vector machine. Specifically, the peak-to-peak values of channels F3 and F4 serve as two feature attributes, and the peak-to-peak values of channels C3 and C4 together with the correlation consistency of channels C3 and C4 serve as three feature attributes; an electro-oculogram classification support vector machine and an electromyography support vector machine are created accordingly, the former dividing the electro-oculogram signal into two states and the latter dividing the electromyography signal into three states. Because the five facial actions are not linearly separable, a nonlinear support vector machine is used; in the Gaussian-kernel support vector machine of this embodiment, the error penalty constant C is set to 100 and the Gaussian kernel width σ to 1.0.
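The five features and the kernel choice above translate directly into code. The channel ordering and helper names below are assumptions, and "correlation consistency" is interpreted here as the Pearson correlation of the C3 and C4 channels, which the patent does not define precisely.

```python
import numpy as np

F3, F4, C3, C4 = 0, 1, 2, 3   # assumed channel order within an epoch

def eog_features(epoch):
    """Two EOG features: peak-to-peak amplitudes of channels F3 and F4."""
    return np.array([np.ptp(epoch[F3]), np.ptp(epoch[F4])])

def emg_features(epoch):
    """Three EMG features: peak-to-peak of C3, peak-to-peak of C4,
    and the C3/C4 correlation ('correlation consistency')."""
    r = np.corrcoef(epoch[C3], epoch[C4])[0, 1]
    return np.array([np.ptp(epoch[C3]), np.ptp(epoch[C4]), r])

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel with the width sigma = 1.0 used in this embodiment.
    In libraries that parameterize the kernel as exp(-gamma * ||x - y||^2),
    sigma = 1.0 corresponds to gamma = 1 / (2 * sigma**2) = 0.5."""
    x, y = np.asarray(x), np.asarray(y)
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
```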
The 25 sample data are input to the electro-oculogram classification support vector machine and the electromyography support vector machine for classification learning, training the two support vector machine learners that serve as the real-time facial action classifier. Fig. 2 shows the test results for the five facial actions and the silent state, where the abscissa values 1, 2, 3, 4 denote the four channels F3, F4, C3, C4 and the ordinate denotes signal amplitude. A small amount of training data thus yields a highly accurate classifier: after training, formal test data input to the support vector machines are classified with an accuracy of 95% in actual tests.
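Training and applying the two machines could look like the following sketch, here using scikit-learn's `SVC`. The routing rule that decides whether a trial goes to the blink or the bite classifier is our assumption; the patent does not spell out how the two machines' outputs are combined into one five-way decision.

```python
import numpy as np
from sklearn.svm import SVC

def train_classifiers(eog_X, eog_y, emg_X, emg_y):
    """Two Gaussian-kernel SVMs with C = 100 and sigma = 1.0 (gamma = 0.5):
    one over the 2-D EOG features, one over the 3-D EMG features."""
    eog_svm = SVC(kernel="rbf", C=100.0, gamma=0.5).fit(eog_X, eog_y)
    emg_svm = SVC(kernel="rbf", C=100.0, gamma=0.5).fit(emg_X, emg_y)
    return eog_svm, emg_svm

def classify(eog_svm, emg_svm, eog_feat, emg_feat, eog_thresh=1.0):
    """Hypothetical combination rule: if the EOG peak-to-peak features are
    large the trial is treated as a blink, otherwise as a bite."""
    if max(eog_feat) > eog_thresh:
        return eog_svm.predict([eog_feat])[0]
    return emg_svm.predict([emg_feat])[0]
```

The threshold `eog_thresh` would in practice be calibrated on the training data rather than fixed.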
In summary, EEG sensors are widely used to record electroencephalograms and analyze the control intention of the human brain, and the EMG signals generated by facial actions are usually filtered out as artifact. These signals, however, contain abundant information about facial actions, and extracting and classifying their features enables effective recognition. The scheme can further be fused with feature extraction and classification of the EEG signal itself, forming a multi-modal physiological signal recognition capability, while realizing high-accuracy recognition and classification of facial actions from small samples.
Although the present invention has been described in terms of a preferred embodiment, it is not limited to that embodiment. Any equivalent change or modification made without departing from the spirit and scope of the present invention also falls within its scope of protection, which should therefore be determined with reference to the appended claims.

Claims (7)

1. A facial action recognition method based on an EEG sensor, characterized in that:
an EEG sensor collects EMG signals, which are preprocessed to obtain sample data and data to be classified;
the sample data are used to train an electro-oculogram classification support vector machine and an electromyography support vector machine, realizing real-time pattern recognition and classification of facial actions;
and the trained electro-oculogram classification support vector machine and electromyography support vector machine perform facial action recognition on the data to be classified.
2. The EEG sensor-based facial action recognition method according to claim 1, wherein the EEG sensor acquires the EMG signals using electrodes at the frontal sites F3 and F4 and the central sites C3 and C4.
3. The EEG sensor-based facial action recognition method according to claim 2, wherein the sample data comprise 25 trials in total, divided into 5 groups of 5 trial samples, corresponding to the five facial actions of blinking the left eye, blinking the right eye, biting the left gum, biting the right gum, and biting on both sides together.
4. The EEG sensor-based facial action recognition method according to claim 3, wherein the electro-oculogram classification support vector machine and the electromyography support vector machine are Gaussian-kernel support vector machines that take the peak-to-peak values of the electro-oculogram signal and the masseter electromyography signal as the main features, the electro-oculogram signal being divided into two states by two features and the electromyography signal into three states by three features, together forming five facial action classes.
5. The EEG sensor-based facial action recognition method according to claim 4, wherein the two features of the electro-oculogram signal are the peak-to-peak value of channel F3 and the peak-to-peak value of channel F4.
6. The EEG sensor-based facial action recognition method according to claim 4, wherein the three features of the electromyography signal are the peak-to-peak value of channel C3, the peak-to-peak value of channel C4, and the correlation consistency of channels C3 and C4.
7. The EEG sensor-based facial action recognition method according to claim 4, wherein the error penalty constant C of the Gaussian-kernel support vector machine is 100 and the Gaussian kernel width σ is 1.0.
CN201911095547.3A 2019-11-11 2019-11-11 Facial action recognition method based on EEG sensor Pending CN110705656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911095547.3A CN110705656A (en) 2019-11-11 2019-11-11 Facial action recognition method based on EEG sensor


Publications (1)

Publication Number Publication Date
CN110705656A true CN110705656A (en) 2020-01-17

Family

ID=69205709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911095547.3A Pending CN110705656A (en) 2019-11-11 2019-11-11 Facial action recognition method based on EEG sensor

Country Status (1)

Country Link
CN (1) CN110705656A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104537382A (en) * 2015-01-12 2015-04-22 杭州电子科技大学 Electromyographic signal gait recognition method for optimizing support vector machine based on genetic algorithm
CN106383579A (en) * 2016-09-14 2017-02-08 西安电子科技大学 EMG and FSR-based refined gesture recognition system and method
CN110037693A (en) * 2019-04-24 2019-07-23 中央民族大学 A kind of mood classification method based on facial expression and EEG


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王众等: "基于脑电传感器的面部动作识别研究" (Wang Zhong et al., "Research on Facial Action Recognition Based on EEG Sensors"), 《计算机工程与应用》 (Computer Engineering and Applications) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112606022A (en) * 2020-12-28 2021-04-06 苏州源睿尼科技有限公司 Use method of facial expression acquisition mask
CN113855019A (en) * 2021-08-25 2021-12-31 杭州回车电子科技有限公司 Expression recognition method and device based on EOG, EMG and piezoelectric signals
CN113855019B (en) * 2021-08-25 2023-12-29 杭州回车电子科技有限公司 Expression recognition method and device based on EOG (Ethernet over coax), EMG (electro-magnetic resonance imaging) and piezoelectric signals


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200117