CN110353702A - A kind of emotion identification method and system based on shallow-layer convolutional neural networks - Google Patents
- Publication number: CN110353702A (application CN201910591898.7A)
- Authority: CN (China)
- Prior art keywords: layer, shallow, convolutional neural, eeg signals, neural networks
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B5/165 — Evaluating the state of mind, e.g. depression, anxiety
- A61B5/316 — Modalities, i.e. specific diagnostic methods
- A61B5/369 — Electroencephalography [EEG]
- A61B5/7203 — Signal processing for noise prevention, reduction or removal
- A61B5/7235 — Details of waveform analysis
- A61B5/725 — Waveform analysis using specific filters, e.g. Kalman or adaptive filters
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267 — Classification of physiological signals or data involving training the classification device
- G06F18/2134 — Feature extraction based on separation criteria, e.g. independent component analysis
- G06F18/2135 — Feature extraction based on approximation criteria, e.g. principal component analysis
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24 — Classification techniques
Abstract
The invention belongs to the technical field of emotion recognition and relates to an emotion recognition method and system based on a shallow convolutional neural network. EEG signals are preprocessed, a shallow convolutional neural network is designed according to the FBCSP feature extraction method, and the preprocessed EEG signals are classified by the trained shallow convolutional neural network model to obtain an emotion recognition result. The invention combines the FBCSP algorithm, which is currently effective for EEG signal classification, with a convolutional neural network and applies them to emotional EEG recognition, which can significantly improve the recognition accuracy for different emotions and generalizes better across different subjects. Classifying the preprocessed emotional EEG signals with a shallow convolutional network achieves better recognition performance than traditional feature extraction methods and has good application prospects in emotion recognition research.
Description
Technical field
The invention belongs to the technical field of emotion recognition and relates to an emotion recognition method and system based on a shallow convolutional neural network.
Background art
The human brain is the highest part of the nervous system: it governs human behavior, thought, speech and emotions, and is the most structurally complex part of the human body. Exploring and revealing the secrets of brain function has long been a dream and goal pursued by scientists, and over the past few decades brain science research has made continual breakthroughs. At the first World Brain-Computer Interface Conference in 1999, the concept of the brain-computer interface (BCI) was formally proposed for the first time: a communication channel is established between the brain of a human or animal and external equipment for information exchange, directly converting processed EEG signals into control instructions for a computer.
There are currently many research directions in the BCI field, and good research results have been achieved, for example P300 event-related potentials, motor imagery EEG and epileptic EEG recognition; emotional EEG recognition is also a popular BCI research direction. Because the boundaries between emotions are blurred, detecting and identifying emotions with conventional methods remains a huge challenge with many open problems. Moreover, most of the features used for emotion recognition, such as facial expression, behavior, speech and even heartbeat, can easily be faked, and such external features do not necessarily represent a person's true inner emotions. Compared with these external features, physiological signals such as EEG reveal a person's true state more faithfully; if a person's emotions can be identified from EEG signals, the result is more reliable than one obtained from external features.
Summary of the invention
In view of the deficiencies of the prior art, the present invention provides an emotion recognition method based on a shallow convolutional neural network: EEG signals are preprocessed, a shallow convolutional neural network is designed according to the FBCSP feature extraction method, and the preprocessed EEG signals are classified by the trained shallow convolutional neural network model to obtain an emotion recognition result.
The present invention also provides an emotion recognition system based on a shallow convolutional neural network.
The emotion recognition method of the invention adopts the following technical scheme:
An emotion recognition method based on a shallow convolutional neural network, comprising:
S1, acquiring EEG signals under different emotional states and labeling them by category;
S2, preprocessing all the EEG signals to obtain several high signal-to-noise-ratio emotional EEG segments of fixed time length;
S3, inputting the preprocessed emotional EEG signals into the shallow convolutional neural network constructed according to the FBCSP feature extraction method and training it to obtain the optimal network parameters, yielding the trained shallow convolutional neural network;
S4, classifying the emotional EEG signals with the trained shallow convolutional neural network model to obtain the emotion recognition result.
Preferably, the EEG preprocessing comprises:
(1) separating the EEG signal into samples, obtaining the separated EEG segments;
(2) removing bad samples and bad channels from the separated EEG segments;
(3) band-pass filtering and downsampling the EEG segments;
(4) independent component analysis and principal component analysis;
(5) using a machine learning method to judge whether each signal source isolated by the independent component and principal component analysis is a noise source, and removing the noise sources.
Preferably, whether a sample is bad is judged by the following formula:
z_{n,c} = |a_{n,c} − m_c| / σ_c
wherein a_{n,c} is the average amplitude of the c-th channel of the n-th sample, and m_c and σ_c are the mean and standard deviation over all epochs of channel c.
A threshold t = 3 is set: when z_{n,c} > 3, the amplitude of channel c of the sample differs greatly from the channel's other epochs and it can be judged to be of poor quality.
For a given sample, if more than 20% of its channels are judged to be of poor quality, the entire sample is removed and the sample count is decremented by 1; otherwise the sample is retained, but its bad channels are removed and repaired from neighboring channel data.
Preferably, bad channels are detected by computing the maximum correlation coefficient between each channel and the other channels; the correlation coefficient is computed as:
r(X, Y) = Cov(X, Y) / √(Var[X] · Var[Y])
where X and Y are two signals, r(X, Y) is their correlation coefficient, Cov(X, Y) is their covariance, and Var[X] and Var[Y] are the variances of X and Y respectively.
If, in more than 2% of a channel's epochs, the maximum correlation coefficient with the other channels is below 0.4, the channel is judged to be a bad channel.
Preferably, the band-pass filter passband is 0.1–49.5 Hz.
Preferably, the method further comprises: constructing the shallow convolutional neural network according to the FBCSP feature extraction method.
Further, the shallow convolutional neural network is constructed according to the FBCSP feature extraction method. The first two layers are convolutional layers that extract features with different convolution kernels. The first convolutional layer extracts temporal features of the EEG data; it is equivalent to temporal filtering and corresponds to the filter-bank step in the FBCSP algorithm, extracting features at different frequencies. The second convolutional layer performs spatial feature extraction, equivalent to the spatial filtering step in the common spatial patterns algorithm, projecting the data into another space. The two convolutional layers are followed by a square non-linear activation function and a pooling layer, whose activation function is a log activation function corresponding to the log-variance feature extraction step in FBCSP. The last layer is a fully connected layer with a softmax activation function, acting as the classifier.
Preferably, the shallow convolutional neural network comprises layers l0 to l4, in which:
l0: the input layer. Taking a 1-sec EEG segment as an example, the input X is a 62 × 100 matrix.
l1: the temporal convolution layer:
z(1)_{t,i,j} = Σ_k W(1)_{t,k} · X_{i,j+k−1} + b(1)_t
where z(1)_t, the output of the t-th convolution kernel of layer l1, is a 62 × 70 matrix; W(1)_t and b(1)_t are the weights and offset of the t-th kernel, and i and j are the row and column coordinates of the output matrix. Layer l1 is set to 10 convolution kernels, so its output a(1) has size 10 × 62 × 70. All convolutions are performed within a single channel, so only temporal features of the data are extracted.
l2: the spatial convolution layer:
z(2)_{s,j} = Σ_t Σ_i W(2)_{s,t,i} · a(1)_{t,i,j} + b(2)_s
a(2) = (BN(z(2)))²
where z(2)_s, the output of the s-th spatial filter of layer l2, is a 1 × 70 matrix, j denoting the j-th output neuron of the filter (1 ≤ j ≤ 70); W(2)_s and b(2)_s are the weights and offset of the s-th kernel, W(2)_{s,t,i} denoting one specific entry of the kernel's two-dimensional matrix. Layer l2 has 30 spatial filters in total, so the output z(2) has size 30 × 70. l2 convolves along the spatial dimension only, extracting spatial features. The convolution output then passes through a batch normalization (BN) layer and a square non-linear layer, giving the final output a(2).
l3: the mean pooling layer:
a(3) = g(m(3)), with g(x) = log max(x, 10⁻⁶)
where a(2)_s, the output of the s-th spatial filter of layer l2 after BN and the square non-linearity, is mean-pooled to m(3)_s, of size 1 × 5; m(3) then passes through the log activation function g to give the final output a(3). The output a(3) of layer l3 has size 30 × 5.
l4: the fully connected layer:
d(4) = dropout(a(3), p)
f(4) = flatten(d(4))
a(4) = softmax(W(4) · f(4) + b(4))
where d(4) is a(3) after dropout with probability p, and j indexes the j-th neuron of l4. The 30 × 5 matrix is flattened to f(4), of size 1 × 150. a(4)_c, the output of the output-layer neuron representing class c, is computed by the softmax layer, and the class with the largest output value is taken as the final classification.
The emotion recognition system of the invention adopts the following technical scheme:
An emotion recognition system based on a shallow convolutional neural network, comprising: an EEG signal acquisition device, an EEG signal preprocessing unit, a shallow convolutional neural network and an emotional EEG training database. The EEG signal acquisition device is connected to the EEG signal preprocessing unit, and the shallow convolutional neural network is connected to the EEG signal preprocessing unit and the emotional EEG training database, in which:
the EEG signal acquisition device acquires the EEG signals under different emotional states used as the test set, and usually includes an electrode cap, a signal amplifier, a PC and similar facilities;
the EEG signal preprocessing unit processes the raw EEG signals acquired by the acquisition device, removing unwanted noise signals and improving the signal-to-noise ratio to obtain the preprocessed emotional EEG signals;
the shallow convolutional neural network serves as the classifier, classifying the preprocessed emotional EEG signals to obtain the emotion recognition result;
the emotional EEG training database provides training samples for the shallow convolutional neural network; it contains a sufficient number of training samples that have already been preprocessed.
Further, the shallow convolutional neural network is constructed according to the FBCSP feature extraction method. The first two layers are convolutional layers that extract features with different convolution kernels. The first convolutional layer extracts temporal features of the EEG data; it is equivalent to temporal filtering and corresponds to the filter-bank step in the FBCSP algorithm, extracting features at different frequencies. The second convolutional layer performs spatial feature extraction, equivalent to the spatial filtering step in the common spatial patterns algorithm, projecting the data into another space. The two convolutional layers are followed by a square non-linear activation function and a pooling layer, whose activation function is a log activation function corresponding to the log-variance feature extraction step in FBCSP. The last layer is a fully connected layer with a softmax activation function, acting as the classifier.
Compared with the prior art, the invention has the following beneficial effects:
(1) The FBCSP algorithm, currently effective for EEG signal classification, is combined with a convolutional neural network and applied to emotional EEG recognition, which can significantly improve the recognition accuracy for different emotions and generalizes better across different subjects.
(2) The log and square non-linear functions are used as the activation functions of the convolutional network; unlike general network structures, this achieves a feature extraction effect similar to FBCSP.
(3) Classifying the preprocessed emotional EEG signals with a shallow convolutional network gives better recognition performance than traditional feature extraction methods and has good application prospects in emotion recognition research.
(4) In cross-subject experiments with huge training samples and large individual differences, the emotion recognition method of the invention can exploit the advantages of deep learning and effectively improves cross-subject emotion recognition accuracy, giving it good application prospects.
Detailed description of the invention
Fig. 1 is a flow chart of the emotion recognition method in one embodiment of the invention;
Fig. 2 is a flow chart of the EEG signal preprocessing in one embodiment of the invention;
Fig. 3 is a flow chart of the FBCSP feature extraction method in one embodiment of the invention;
Fig. 4 is a framework diagram of the shallow convolutional neural network improved from the FBCSP feature extraction method in one embodiment of the invention.
Specific embodiment
The present invention is described in further detail below through specific embodiments, but the embodiments of the present invention are not limited thereto.
An emotion recognition method based on a shallow convolutional neural network, as shown in Fig. 1, comprising:
S1, acquiring EEG signals under different emotional states and labeling them by category;
S2, preprocessing all the EEG signals to obtain several high signal-to-noise-ratio emotional EEG segments of fixed time length;
In the present embodiment, the EEG preprocessing is as shown in Fig. 2, comprising:
(1) separating the EEG signal into samples, obtaining the separated EEG segments;
The emotional EEG signals to be processed are read and separated into segments of fixed time length, generally 1-second or 4-second EEG segments; each segment is one sample.
(2) removing bad segments (also called "bad samples") and bad channels from the separated EEG segments;
Segments and acquisition channels of poor quality in the separated EEG segments are removed, and the bad channels are repaired by appropriate methods.
Each sample obtained by the segmentation in step (1) can be judged to be a bad sample or not by the following formula:
z_{n,c} = |a_{n,c} − m_c| / σ_c
wherein a_{n,c} is the average amplitude of the c-th channel of the n-th sample, and m_c and σ_c are the mean and standard deviation over all epochs of channel c. A threshold t = 3 is set: when z_{n,c} > 3, the amplitude of channel c of the sample differs greatly from the channel's other epochs, and it can be judged to be of poor quality. For a given sample, if more than 20% of its channels are judged to be of poor quality, the entire sample is removed and the sample count is decremented by 1; otherwise the sample is retained, but its bad channels are removed and repaired from neighboring channel data.
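The bad-sample rule above can be sketched in NumPy; the epoch array layout and the use of the per-epoch channel mean as a_{n,c} are our assumptions:

```python
import numpy as np

def flag_bad_samples(epochs, t=3.0, channel_frac=0.2):
    """Sketch of the patent's z-score rule on a hypothetical
    (n_samples, n_channels, n_times) epoch array.

    z[n, c] = |a[n, c] - m[c]| / sigma[c], where a[n, c] is the mean
    amplitude of channel c in epoch n, and m[c], sigma[c] are the mean
    and std of those per-epoch averages over all epochs of channel c.
    """
    a = epochs.mean(axis=2)                 # (n_samples, n_channels)
    m, sigma = a.mean(axis=0), a.std(axis=0)
    z = np.abs(a - m) / sigma               # z-score per sample and channel
    bad_channel_mask = z > t                # channels of poor quality
    frac_bad = bad_channel_mask.mean(axis=1)
    drop_sample = frac_bad > channel_frac   # >20% bad channels: drop epoch
    return drop_sample, bad_channel_mask

rng = np.random.default_rng(0)
x = rng.standard_normal((50, 62, 100))
x[3] += 100.0                               # corrupt one epoch on all channels
drop, mask = flag_bad_samples(x)            # only epoch 3 is flagged
```

Samples that are kept but have flagged channels would then be repaired by interpolation from neighboring channels, as the text describes.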
Bad channels are detected by computing the maximum correlation coefficient between each channel and the other channels; the correlation coefficient is computed as:
r(X, Y) = Cov(X, Y) / √(Var[X] · Var[Y])
where X and Y are two signals, r(X, Y) is their correlation coefficient, Cov(X, Y) is their covariance, and Var[X] and Var[Y] are the variances of X and Y respectively.
If, in more than 2% of a channel's epochs, the maximum correlation coefficient with the other channels is below 0.4, the channel is judged to be a bad channel. The signal data of the whole channel is then removed and refilled by repairing from the surrounding channels. After all bad channels have been repaired, an average re-reference is performed and bad channels are detected again; the above steps are repeated until no bad channel is detected.
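The per-epoch correlation check can be sketched as follows (a toy illustration; the epoch layout and the use of absolute correlations are our assumptions):

```python
import numpy as np

def find_bad_channels(epochs, r_min=0.4, epoch_frac=0.02):
    """For each epoch, compute each channel's maximum absolute correlation
    with every other channel; a channel whose best correlation stays below
    r_min in more than epoch_frac of epochs is marked bad.
    `epochs` is a hypothetical (n_epochs, n_channels, n_times) array."""
    n_epochs, n_ch, _ = epochs.shape
    low_corr = np.zeros((n_epochs, n_ch), dtype=bool)
    for e in range(n_epochs):
        r = np.corrcoef(epochs[e])          # (n_ch, n_ch) correlation matrix
        np.fill_diagonal(r, 0.0)            # ignore self-correlation
        max_r = np.abs(r).max(axis=1)       # best match among other channels
        low_corr[e] = max_r < r_min
    return low_corr.mean(axis=0) > epoch_frac

rng = np.random.default_rng(1)
common = rng.standard_normal((40, 1, 200))
eeg = common + 0.1 * rng.standard_normal((40, 8, 200))  # correlated channels
eeg[:, 5] = rng.standard_normal((40, 200))              # one disconnected channel
bad = find_bad_channels(eeg)                             # flags channel 5
```

In the full pipeline the flagged channel would be interpolated from its neighbors, an average re-reference applied, and the detection repeated, as described above.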
(3) band-pass filtering and downsampling the EEG segments;
Emotion recognition does not require very high-frequency signals, so the excess noise is filtered out with a band-pass filter, which screens the EEG segments by frequency band, removing high-frequency noise and low-frequency drift.
In the present embodiment, the band-pass filter passband is 0.1–49.5 Hz, which removes the 50 Hz mains interference and high-frequency noise while also suppressing low-frequency drift.
The signals are then downsampled, reducing the EEG sampling rate to 100 Hz, which lowers the computational load of the subsequent steps and improves the efficiency of the algorithm.
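A minimal SciPy sketch of this filtering and downsampling step; the original 1000 Hz sampling rate and the 4th-order zero-phase Butterworth filter are our assumptions (the patent only fixes the 0.1–49.5 Hz passband and the 100 Hz target rate):

```python
import numpy as np
from scipy import signal

def preprocess_band(x, fs=1000, band=(0.1, 49.5), fs_out=100):
    """Band-pass to 0.1-49.5 Hz, then downsample to 100 Hz.
    `x` has shape (n_channels, n_times) at the original rate fs."""
    sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = signal.sosfiltfilt(sos, x, axis=-1)      # zero-phase band-pass
    return signal.resample_poly(x, fs_out, fs, axis=-1)

fs = 1000
t = np.arange(fs * 2) / fs                        # 2 s of data
raw = np.sin(2 * np.pi * 10 * t)                  # a 10 Hz in-band component
clean = preprocess_band(raw[None, :], fs=fs)      # -> (1, 200) at 100 Hz
```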
(4) independent component analysis and principal component analysis;
Independent component analysis (ICA) removes the various noises that still remain, such as ECG interference, blink and eye-movement interference, and residual EMG. ICA separates the EEG into multiple signal sources, some of which are the sources needed for recognition while others are noise sources. In addition, principal component analysis is used to reduce the number of components and thus the computational cost.
(5) using a machine learning method to judge whether each signal source isolated by the independent component and principal component analysis is a noise source, and removing the noise sources.
Classifying the signal sources with a machine learning method effectively identifies the noise sources so that they can be removed.
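The ICA/PCA separation and the automatic rejection of noise sources can be illustrated with scikit-learn on a synthetic mixture; the kurtosis criterion below is only a stand-in for the patent's (unspecified) machine-learning classifier of signal sources:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(2)
n_times = 1000
t = np.linspace(0, 8, n_times)
brain = np.sin(2 * np.pi * 1.3 * t)                   # stand-in "neural" source
blink = (rng.random(n_times) < 0.01).astype(float)    # spiky artifact source
mixing = rng.standard_normal((8, 2))                  # 8 channels, 2 sources
eeg = np.c_[brain, blink] @ mixing.T + 0.01 * rng.standard_normal((n_times, 8))

# PCA first reduces the number of components (and the ICA cost) ...
reduced = PCA(n_components=4).fit_transform(eeg)
# ... then ICA separates the mixture into independent sources.
sources = FastICA(n_components=2, random_state=0).fit_transform(reduced)

# Toy rejection rule: artifact components tend to be spiky, i.e. high kurtosis.
kurt = ((sources - sources.mean(0)) ** 4).mean(0) / sources.var(0) ** 2
artifact_idx = int(np.argmax(kurt))                   # the blink-like component
```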
S3, inputting the preprocessed emotional EEG signals into the shallow convolutional neural network constructed according to the FBCSP feature extraction method and training it to obtain the optimal network parameters, yielding the trained shallow convolutional neural network;
The FBCSP (filter-bank common spatial patterns) feature extraction method, shown in Fig. 3, comprises the steps of:
A. separating the raw EEG signal into multiple different frequency bands using multiple band-pass filters;
B. computing CSP (common spatial patterns) on the signal of each band, transforming the EEG signals into the space where the between-class variance is largest, where they can be classified more easily;
C. extracting features from the spatially transformed signal according to the following formula:
f = log(1 + VAR(Z))
wherein f is the extracted feature, Z is the transformed signal, and VAR is the variance;
D. classifying the features extracted in step C, generally using an SVM or a neural network as the classifier.
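Steps A–C can be sketched as follows; the CSP implementation via a generalized eigendecomposition, the filter-bank bands, and the number of spatial filters per band are illustrative assumptions (and the toy data is random, so the learned filters are not meaningful):

```python
import numpy as np
from scipy import signal, linalg

def csp_filters(class_a, class_b, n_pairs=3):
    """Common spatial patterns via the generalized eigenproblem
    Ca w = lambda (Ca + Cb) w; returns the most discriminative filters."""
    cov = lambda trials: np.mean(
        [x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)
    Ca, Cb = cov(class_a), cov(class_b)
    vals, vecs = linalg.eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]   # both variance extremes
    return vecs[:, pick].T                            # (2*n_pairs, n_channels)

def fbcsp_features(trial, filters_per_band, bands, fs=100):
    """Steps A-C: band-pass into sub-bands, project with that band's CSP
    filters, then take f = log(1 + VAR(Z)) as in the formula above."""
    feats = []
    for (lo, hi), W in zip(bands, filters_per_band):
        sos = signal.butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        Z = W @ signal.sosfiltfilt(sos, trial, axis=-1)
        feats.append(np.log(1.0 + Z.var(axis=-1)))
    return np.concatenate(feats)

rng = np.random.default_rng(3)
a = [rng.standard_normal((62, 100)) for _ in range(20)]   # class A trials
b = [rng.standard_normal((62, 100)) * 1.5 for _ in range(20)]
bands = [(4, 8), (8, 13), (13, 30)]
filters = [csp_filters(a, b) for _ in bands]              # per-band CSP filters
f = fbcsp_features(a[0], filters, bands)                  # 3 bands x 6 filters
```

Step D would then feed such feature vectors to an SVM or neural-network classifier.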
In the present embodiment, the shallow convolutional neural network model is constructed according to the FBCSP feature extraction method, as shown in Fig. 4, and comprises layers l0 to l4, in which:
l0: the input layer. Taking a 1-sec EEG segment as an example, the input X is a 62 × 100 matrix.
l1: the temporal convolution layer:
z(1)_{t,i,j} = Σ_k W(1)_{t,k} · X_{i,j+k−1} + b(1)_t
where z(1)_t, the output of the t-th convolution kernel of layer l1, is a 62 × 70 matrix; W(1)_t and b(1)_t are the weights and offset of the t-th kernel, and i and j are the row and column coordinates of the output matrix. Layer l1 is set to 10 convolution kernels, so its output a(1) has size 10 × 62 × 70. All convolutions are performed within a single channel, so only temporal features of the data are extracted.
l2: the spatial convolution layer:
z(2)_{s,j} = Σ_t Σ_i W(2)_{s,t,i} · a(1)_{t,i,j} + b(2)_s
a(2) = (BN(z(2)))²
where z(2)_s, the output of the s-th spatial filter of layer l2, is a 1 × 70 matrix, j denoting the j-th output neuron of the filter (1 ≤ j ≤ 70); W(2)_s and b(2)_s are the weights and offset of the s-th kernel, W(2)_{s,t,i} denoting one specific entry of the kernel's two-dimensional matrix. Layer l2 has 30 spatial filters in total, so the output z(2) has size 30 × 70. l2 convolves along the spatial dimension only, extracting spatial features. The convolution output then passes through a batch normalization (BN) layer and a square non-linear layer, giving the final output a(2).
l3: the mean pooling layer:
a(3) = g(m(3)), with g(x) = log max(x, 10⁻⁶)
where a(2)_s, the output of the s-th spatial filter of layer l2 after BN and the square non-linearity, is mean-pooled to m(3)_s, of size 1 × 5; m(3) then passes through the log activation function g to give the final output a(3). Taking the maximum of x and 10⁻⁶ as the input of the log function prevents extremely small values from producing a very large negative output through the log. The output a(3) of layer l3 has size 30 × 5.
l4: the fully connected layer:
d(4) = dropout(a(3), p)
f(4) = flatten(d(4))
a(4) = softmax(W(4) · f(4) + b(4))
where d(4) is a(3) after dropout with probability p, and j indexes the j-th neuron of l4. The 30 × 5 matrix is flattened to f(4), of size 1 × 150. a(4)_c, the output of the output-layer neuron representing class c, is computed by the softmax layer, and finally the class with the largest output value is taken as the final classification.
The above is the forward propagation process of the shallow convolutional network (SCN); the network parameters are trained by back-propagation. For a multi-class network with softmax as the last layer, the negative log-likelihood is generally used as the loss function. For an input X whose label is y, the loss function is:
loss(X, y) = −Σ_{c=1}^{n} δ(y − l_c) · log a(4)_c
wherein n is the number of output-layer neurons, i.e. the number of classes, l_c is the label value corresponding to class c, a(4)_c is the output of the output-layer neuron representing class c, and δ(x) is the impulse function (δ(0) = 1, otherwise 0).
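Because the impulse function selects only the true class, the loss above reduces to the negative log of the predicted probability of the correct class; a minimal numeric illustration:

```python
import numpy as np

def nll_loss(softmax_out, y):
    """Negative log-likelihood for one sample, matching the formula above:
    the impulse term selects the output neuron of the true class y, so the
    sum reduces to -log of the predicted probability of class y."""
    return -np.log(softmax_out[y])

a4 = np.array([0.7, 0.2, 0.1])   # softmax output a(4) over n = 3 classes
loss = nll_loss(a4, 0)           # true label y = 0 -> -log(0.7) ~ 0.357
```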
The fixed-length samples obtained after preprocessing are generally 1-second or 4-second segments; for different time lengths the shallow convolutional neural network is adjusted accordingly.
During training the network uses mini-batch gradient descent (MBGD) with a batch size of 128.
The training of the shallow convolutional neural network specifically comprises:
S31, establishing a database of EEG signals under different emotions: the emotion-specific EEG signals of different subjects are collected, ensuring that every EEG segment has the same size, i.e. the same number of channels and the same time length;
S32, improving the signal-to-noise ratio of the EEG signals through the preprocessing of step S2, and dividing them into a training set, a test set and a validation set according to a certain proportion;
S33, building the shallow convolutional neural network model with the pytorch deep learning library, first initializing the model hyper-parameters, including the number of neurons in each layer, the number and size of the convolution kernels, the activation functions, the batch size, etc.;
S34, inputting the training-set data into the initialized shallow convolutional neural network for training, adjusting the network parameters by the back-propagation algorithm, stopping training when the validation accuracy has not improved for 30 epochs, and taking the network model with the highest accuracy so far as the best model. In addition, the network model adopts a series of measures to improve its generalization ability so that it can be applied to many subjects with large individual differences.
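The stopping rule of step S34 can be sketched framework-agnostically; `train_step` and `evaluate` are hypothetical callables standing in for one epoch of mini-batch training and the validation-accuracy computation:

```python
def train_with_early_stopping(train_step, evaluate, max_epochs=500, patience=30):
    """Keep the parameters with the best validation accuracy and stop when
    it has not improved for `patience` epochs, as in step S34."""
    best_acc, best_epoch, best_state = -1.0, 0, None
    for epoch in range(max_epochs):
        state = train_step(epoch)             # one epoch of MBGD (stand-in)
        acc = evaluate(state)                 # validation-set accuracy
        if acc > best_acc:
            best_acc, best_epoch, best_state = acc, epoch, state
        elif epoch - best_epoch >= patience:
            break                             # no improvement for 30 epochs
    return best_state, best_acc

# Toy run: accuracy improves until epoch 10, then plateaus at 0.8.
curve = [min(0.5 + 0.03 * e, 0.8) for e in range(500)]
state, acc = train_with_early_stopping(lambda e: e, lambda s: curve[s])
```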
The shallow convolutional neural network is a convolutional network improved from the filter-bank common spatial patterns (FBCSP) algorithm. The first two layers of the network are convolutional layers that extract features with different convolution kernels. The first convolutional layer extracts temporal features of the EEG data; it is equivalent to temporal filtering, corresponds to the filter-bank step in the FBCSP algorithm, and can extract features at different frequencies. The second convolutional layer performs spatial feature extraction, equivalent to the spatial filtering step in the common spatial patterns (CSP) algorithm, projecting the data into another space. The two convolutional layers are followed by a square non-linear activation function and a pooling layer, whose activation function is a log activation function corresponding to the log-variance feature extraction step in FBCSP. The last layer is a fully connected layer with a softmax activation function, acting as the classifier.
In this embodiment, the hyperparameters of the shallow convolutional neural network are as follows: the input feature is an EEG sample of size 62 × 100 (a 1-second sample; a 4-second sample is 62 × 400); the batch size of the network is 128; the first convolutional layer uses kernels of size 1 × 31 with stride 1, and the number of kernels is 10 (increased to 20 for 4-second samples); the second convolutional layer uses kernels of size 62 × 1 with stride 1, and the number of kernels increases to 30 (unchanged for 4-second samples); the pooling kernel of the pooling layer is of size 1 × 30 with stride 10, performing local averaging over the features extracted by the convolutional layers.
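The architecture and hyperparameters above can be sketched in PyTorch as follows. This is an illustrative sketch of the 1-second configuration only: the class count, dropout probability and layer names are assumptions not stated here, while the kernel counts, kernel sizes, pooling and activations follow the text.

```python
# Minimal PyTorch sketch of the shallow network for 1-second samples
# (62 channels x 100 time points). n_classes and dropout_p are illustrative
# assumptions; kernel counts/sizes, BN placement, squaring, mean pooling and
# the log activation follow the description in the text.
import torch
import torch.nn as nn

class ShallowConvNet(nn.Module):
    def __init__(self, n_classes=3, dropout_p=0.5):
        super().__init__()
        self.temporal = nn.Conv2d(1, 10, kernel_size=(1, 31))   # l1: 1x31, 10 kernels
        self.spatial = nn.Conv2d(10, 30, kernel_size=(62, 1))   # l2: 62x1, 30 kernels
        self.bn = nn.BatchNorm2d(30)      # BN between spatial conv and squaring
        self.pool = nn.AvgPool2d(kernel_size=(1, 30), stride=(1, 10))  # l3
        self.dropout = nn.Dropout(dropout_p)    # applied before the final FC layer
        self.fc = nn.Linear(30 * 5, n_classes)  # l4: 150 features -> classes

    def forward(self, x):                       # x: (batch, 1, 62, 100)
        x = self.temporal(x)                    # -> (batch, 10, 62, 70)
        x = self.spatial(x)                     # -> (batch, 30, 1, 70)
        x = torch.square(self.bn(x))            # squaring nonlinearity
        x = self.pool(x)                        # -> (batch, 30, 1, 5)
        x = torch.log(torch.clamp(x, min=1e-6)) # log activation g(x)=log(max(x,1e-6))
        x = self.dropout(x).flatten(start_dim=1)  # -> (batch, 150)
        return torch.softmax(self.fc(x), dim=1)   # classifier output
```

For 4-second samples the text doubles the temporal kernels to 20 and keeps 30 spatial filters; the same sketch applies with those counts and a 62 × 400 input.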
The measures taken to improve the generalization ability of the above shallow convolutional neural network are batch normalization (BN) and dropout layers. A BN layer normalizes the input data of a given layer, accelerating model convergence and improving the stability of the convolutional network; it is applied between the output of the spatial convolutional layer and the squaring nonlinearity. Dropout is one of the effective methods for suppressing overfitting in deep learning networks; it is applied to the final fully connected layer to reduce network complexity.
S4, based on the trained shallow convolutional neural network model, the emotional EEG signals are classified to obtain the emotion recognition result.
An emotion recognition system based on a shallow convolutional neural network, comprising: an EEG signal acquisition device, an EEG signal preprocessing unit, a shallow convolutional neural network and an emotional-EEG training database. The EEG signal acquisition device is connected to the EEG signal preprocessing unit, and the shallow convolutional neural network is connected to the EEG signal preprocessing unit and to the emotional-EEG training database, wherein:
the EEG signal acquisition device is used to acquire the EEG signals under different emotional states that serve as the test set, and usually comprises a series of facilities such as an electrode cap, a signal amplifier and a PC;
the EEG signal preprocessing unit is used to process the raw EEG signals acquired by the EEG signal acquisition device, removing unwanted noise signals to improve the EEG signal-to-noise ratio and obtain preprocessed emotional EEG signals;
the shallow convolutional neural network serves as the classifier, classifying the preprocessed emotional EEG signals to obtain the emotion recognition result;
the emotional-EEG training database is used to provide training samples for the shallow convolutional neural network, and contains a sufficient number of training samples that have already been preprocessed.
The shallow convolutional neural network in the emotion recognition system of this embodiment is designed according to the FBCSP feature-extraction method. Its first two layers are convolutional layers that perform feature extraction with different convolution kernels. The first convolutional layer extracts the temporal features of the EEG data; it is equivalent to temporal filtering, corresponds to the filter-bank step of the FBCSP algorithm, and can extract features at different frequencies. The second convolutional layer performs spatial feature extraction; it is equivalent to the spatial-filtering step of the common spatial pattern algorithm, projecting the data into another space. The two convolutional layers are followed by a squaring nonlinearity and a pooling layer whose activation function is a log function, corresponding to the log-variance feature-extraction step of FBCSP. The last layer is a fully connected layer with a softmax activation, which acts as the classifier.
For the emotion recognition problem, the present invention incorporates the methods of EEG processing and deep learning. A shallow convolutional neural network improved on the basis of the FBCSP feature-extraction method performs feature extraction and model learning on the emotional EEG signals, accomplishing the task of recognizing EEG signals under different emotional states and effectively improving the emotion-signal recognition rate.
The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited by it; any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.
Claims (10)
1. An emotion recognition method based on a shallow convolutional neural network, characterized by comprising:
S1, acquiring EEG signals under different emotional states and labeling them by category;
S2, preprocessing all EEG signals to obtain several high-signal-to-noise-ratio emotional EEG signals of fixed time length;
S3, feeding the preprocessed emotional EEG signals into a shallow convolutional neural network constructed according to the FBCSP feature-extraction method, carrying out a series of training runs to obtain the optimal parameters of the shallow convolutional neural network, and obtaining the trained shallow convolutional neural network;
S4, classifying the emotional EEG signals based on the trained shallow convolutional neural network model to obtain the emotion recognition result.
2. The emotion recognition method according to claim 1, characterized in that the EEG preprocessing comprises:
(1) separating the EEG signals into samples to obtain separated EEG segments;
(2) removing bad samples and bad channels from the separated EEG segments;
(3) band-pass filtering and down-sampling the EEG segments;
(4) independent component analysis and principal component analysis;
(5) using a machine learning method to judge whether the signal sources separated by independent component analysis and principal component analysis are noise sources, and removing the noise sources.
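The filtering and down-sampling of step (3) can be sketched numerically as follows. This is an illustrative sketch only: the raw sampling rate of 1000 Hz, the 100 Hz target rate and the 4th-order zero-phase Butterworth filter are assumptions, while the 0.1–49.5 Hz band follows claim 5.

```python
# Sketch of band-pass filtering (0.1-49.5 Hz, per claim 5) and down-sampling
# of an EEG segment. Assumed: 1000 Hz raw rate, 100 Hz target rate,
# 4th-order zero-phase Butterworth filter.
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess_segment(eeg, fs=1000, band=(0.1, 49.5), fs_target=100):
    """eeg: (n_channels, n_samples) array; returns a filtered, down-sampled copy."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg, axis=1)      # zero-phase band-pass filtering
    factor = fs // fs_target                    # 1000 Hz -> 100 Hz: factor 10
    return decimate(filtered, factor, axis=1, zero_phase=True)
```

Zero-phase filtering (`filtfilt`) avoids shifting the EEG waveform in time, which matters when fixed-length epochs are cut afterwards.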
3. The emotion recognition method according to claim 2, characterized in that whether a sample is bad is judged by the following formula:
z_{n,c} = (a_{n,c} - m_c) / σ_c
wherein a_{n,c} is the mean amplitude of channel c of the n-th sample, and m_c and σ_c are the mean and standard deviation, respectively, over all epoch segments of channel c;
a threshold t = 3 is set; when z_{n,c} > 3, the amplitude of channel c of that sample differs greatly from the other epoch segments of the same channel, and the channel can be judged to be of poor quality;
for the same sample, if more than 20% of the channels are judged to be of poor quality, the whole sample is removed directly and the sample count decreases by 1; if not more than 20%, the sample is retained, but the bad channels are removed and repaired from the data of the surrounding channels.
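The bad-sample rule of claim 3 can be sketched as follows. Using the mean absolute amplitude as the per-channel statistic a_{n,c} is an assumption for illustration; the z-score, the threshold of 3 and the 20%-of-channels rule follow the claim.

```python
# Sketch of claim 3's bad-sample test: z-score each sample's per-channel
# amplitude against that channel's statistics over all epochs; a channel with
# z > 3 is poor quality, and a sample with more than 20% poor-quality channels
# is removed. Mean absolute amplitude as the statistic is an assumption.
import numpy as np

def bad_sample_mask(epochs, z_thresh=3.0, bad_channel_ratio=0.2):
    """epochs: (n_samples, n_channels, n_times); True marks samples to remove."""
    amp = np.abs(epochs).mean(axis=2)     # a[n, c]: amplitude of channel c, sample n
    m = amp.mean(axis=0)                  # m_c: mean over all epoch segments
    s = amp.std(axis=0)                   # sigma_c: std over all epoch segments
    z = (amp - m) / s                     # z[n, c]
    poor_quality = z > z_thresh           # poor-quality channels per sample
    return poor_quality.mean(axis=1) > bad_channel_ratio
```

Samples retained despite a few poor-quality channels would, per the claim, have those channels repaired from surrounding channels (e.g. by interpolation), which is omitted here.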
4. The emotion recognition method according to claim 2, characterized in that bad channels are detected by computing, for each channel, the maximum correlation coefficient with the other channels, the correlation coefficient being computed as follows:
r(X, Y) = Cov(X, Y) / √(Var[X] · Var[Y])
wherein X and Y are two signals, r(X, Y) is the correlation coefficient of the two signals, Cov(X, Y) is their covariance, and Var[X] and Var[Y] are the variances of X and Y, respectively;
if, in more than 2% of its epochs, a channel's maximum correlation coefficient with the other channels is less than 0.4, that channel is judged to be a bad channel.
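The bad-channel test of claim 4 can be sketched as follows; the 0.4 correlation floor and the 2%-of-epochs rule follow the claim, and `numpy.corrcoef` computes exactly the Pearson coefficient Cov(X, Y)/√(Var[X]·Var[Y]) defined above.

```python
# Sketch of claim 4's bad-channel test: within each epoch, find each channel's
# maximum correlation with the other channels; a channel whose maximum falls
# below 0.4 in more than 2% of epochs is judged bad.
import numpy as np

def bad_channel_mask(epochs, r_thresh=0.4, epoch_ratio=0.02):
    """epochs: (n_epochs, n_channels, n_times); True marks bad channels."""
    n_epochs, n_channels, _ = epochs.shape
    low_corr = np.zeros((n_epochs, n_channels), dtype=bool)
    for e in range(n_epochs):
        r = np.corrcoef(epochs[e])        # pairwise Pearson coefficients
        np.fill_diagonal(r, 0.0)          # ignore each channel's self-correlation
        low_corr[e] = np.abs(r).max(axis=1) < r_thresh
    return low_corr.mean(axis=0) > epoch_ratio
```

The intuition is that neighbouring EEG electrodes record strongly correlated activity, so a channel that correlates with nothing is likely a loose or broken electrode.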
5. The emotion recognition method according to any one of claims 2-4, characterized in that the band-pass filter band is 0.1-49.5 Hz.
6. The emotion recognition method according to claim 1, characterized by further comprising the step of: constructing the shallow convolutional neural network according to the FBCSP feature-extraction method.
7. The emotion recognition method according to claim 6, characterized in that the shallow convolutional neural network is constructed according to the FBCSP feature-extraction method; its first two layers are convolutional layers, which perform feature extraction with different convolution kernels; the first convolutional layer extracts the temporal features of the EEG data, is equivalent to temporal filtering, corresponds to the filter-bank step of the FBCSP algorithm, and extracts features at different frequencies; the second convolutional layer performs spatial feature extraction, is equivalent to the spatial-filtering step of the common spatial pattern algorithm, and projects the data into another space; the two convolutional layers are followed by a squaring nonlinearity and a pooling layer whose activation function is a log function, corresponding to the log-variance feature-extraction step of FBCSP; the last layer is a fully connected layer with a softmax activation, which acts as the classifier.
8. The emotion recognition method according to claim 7, characterized in that the shallow convolutional neural network comprises layers l0 to l4, wherein:
l0:
this layer is the input layer; taking a 1-second EEG segment as an example, the input X is a 62 × 100 matrix;
l1:
z(1)_{t,i,j} = Σ_k W(1)_{t,k} · X_{i,j+k-1} + b(1)_t
z(1)_{t,i,j}, the output of the t-th convolution kernel of layer l1, is a 62 × 70 matrix; W(1)_t and b(1)_t are the weight and bias of the t-th convolution kernel, and i, j are the row and column coordinates of the output matrix. Since l1 is set to 10 convolution kernels, the output a(1) has size 10 × 62 × 70; all convolutions are carried out within a single channel, so only temporal features of the data are extracted;
l2:
z(2)_{s,j} = Σ_t Σ_i W(2)_{s,t,i} · a(1)_{t,i,j} + b(2)_s
a(2) = (BN(z(2)))²
z(2)_{s,j} is the output of the s-th spatial filter of layer l2, where j indexes the j-th neuron of the filter output; the matrix size is 1 × 70, so 1 ≤ j ≤ 70. W(2)_s and b(2)_s are the weight and bias of the s-th convolution kernel, and W(2)_{s,t,i} denotes one specific value of the two-dimensional kernel matrix. Layer l2 has 30 spatial filters in total, so the output z(2) has size 30 × 70; l2 convolves along the spatial dimension and extracts only spatial features. The convolutional output then passes through the BN layer and the squaring nonlinearity to give the final output a(2);
l3:
a(3) = g(m(3))
l3 is a mean-pooling layer, where a(2)_s is the output of the s-th spatial filter of l2 after BN and squaring, and m(3)_s is the output of a(2)_s after mean pooling, of size 1 × 5. m(3) then passes through the log activation function g(x) = log(max(x, 10⁻⁶)) to give the final output a(3); the output a(3) of layer l3 has size 30 × 5;
l4:
d(4) = dropout(a(3), p)
f(4) = flatten(d(4))
l4 is the fully connected layer; d(4) is the output of a(3) after dropout with probability p, and j indexes the j-th neuron of l4. The 30 × 5 matrix is then flattened into f(4), whose size becomes 1 × 150. The output of the output-layer neuron representing class c is computed by the softmax layer, and the class with the largest output-layer value is taken as the final classification.
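The layer sizes stated in claim 8 can be verified with the standard output-length formula for a valid convolution or pooling window, (L − k) ⁄ s + 1; the sketch below is a plain arithmetic check, not part of the patented method.

```python
# Arithmetic check of the l0-l4 sizes in claim 8, using the valid
# convolution/pooling output-length formula (L - k) // s + 1.
def out_len(length, kernel, stride=1):
    return (length - kernel) // stride + 1

t1 = out_len(100, 31)            # l1: 1x31 temporal conv over 100 time points
assert t1 == 70                  # each of the 10 kernels outputs 62 x 70
s2 = out_len(62, 62)             # l2: 62x1 spatial conv across the 62 channels
assert s2 == 1                   # each of the 30 spatial filters outputs 1 x 70
t3 = out_len(70, 30, stride=10)  # l3: mean pooling, kernel 1x30, stride 10
assert t3 == 5                   # pooled output is 30 x 5
features = 30 * t3               # l4: flattening before the fully connected layer
assert features == 150           # f(4) has size 1 x 150
```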
9. An emotion recognition system based on a shallow convolutional neural network, characterized by comprising: an EEG signal acquisition device, an EEG signal preprocessing unit, a shallow convolutional neural network and an emotional-EEG training database; the EEG signal acquisition device is connected to the EEG signal preprocessing unit, and the shallow convolutional neural network is connected to the EEG signal preprocessing unit and to the emotional-EEG training database, wherein:
the EEG signal acquisition device is used to acquire the EEG signals under different emotional states that serve as the test set, and generally comprises a series of facilities such as an electrode cap, a signal amplifier and a PC;
the EEG signal preprocessing unit is used to process the raw EEG signals acquired by the EEG signal acquisition device, removing unwanted noise signals to improve the EEG signal-to-noise ratio and obtain preprocessed emotional EEG signals;
the shallow convolutional neural network serves as the classifier, classifying the preprocessed emotional EEG signals to obtain the emotion recognition result;
the emotional-EEG training database is used to provide training samples for the shallow convolutional neural network, and contains a sufficient number of training samples that have already been preprocessed.
10. The emotion recognition system according to claim 9, characterized in that the shallow convolutional neural network is constructed according to the FBCSP feature-extraction method; its first two layers are convolutional layers, which perform feature extraction with different convolution kernels; the first convolutional layer extracts the temporal features of the EEG data, is equivalent to temporal filtering, corresponds to the filter-bank step of the FBCSP algorithm, and extracts features at different frequencies; the second convolutional layer performs spatial feature extraction, is equivalent to the spatial-filtering step of the common spatial pattern algorithm, and projects the data into another space; the two convolutional layers are followed by a squaring nonlinearity and a pooling layer whose activation function is a log function, corresponding to the log-variance feature-extraction step of FBCSP; the last layer is a fully connected layer with a softmax activation, which acts as the classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910591898.7A CN110353702A (en) | 2019-07-02 | 2019-07-02 | A kind of emotion identification method and system based on shallow-layer convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110353702A true CN110353702A (en) | 2019-10-22 |
Family
ID=68217768
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910591898.7A Pending CN110353702A (en) | 2019-07-02 | 2019-07-02 | A kind of emotion identification method and system based on shallow-layer convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110353702A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461204A (en) * | 2020-03-30 | 2020-07-28 | 华南理工大学 | Emotion identification method based on electroencephalogram signals and used for game evaluation |
CN111671445A (en) * | 2020-04-20 | 2020-09-18 | 广东食品药品职业学院 | Consciousness disturbance degree analysis method |
CN111709267A (en) * | 2020-03-27 | 2020-09-25 | 吉林大学 | Electroencephalogram signal emotion recognition method of deep convolutional neural network |
CN111783942A (en) * | 2020-06-08 | 2020-10-16 | 北京航天自动控制研究所 | Brain cognition process simulation method based on convolution cyclic neural network |
CN111860463A (en) * | 2020-08-07 | 2020-10-30 | 北京师范大学 | Emotion identification method based on joint norm |
CN111882036A (en) * | 2020-07-22 | 2020-11-03 | 广州大学 | Convolutional neural network training method, electroencephalogram signal identification method, device and medium |
CN112036229A (en) * | 2020-06-24 | 2020-12-04 | 宿州小马电子商务有限公司 | Intelligent bassinet electroencephalogram signal channel configuration method with demand sensing function |
CN112084935A (en) * | 2020-09-08 | 2020-12-15 | 南京邮电大学 | Emotion recognition method based on expansion of high-quality electroencephalogram sample |
CN112101152A (en) * | 2020-09-01 | 2020-12-18 | 西安电子科技大学 | Electroencephalogram emotion recognition method and system, computer equipment and wearable equipment |
CN112381008A (en) * | 2020-11-17 | 2021-02-19 | 天津大学 | Electroencephalogram emotion recognition method based on parallel sequence channel mapping network |
CN112465069A (en) * | 2020-12-15 | 2021-03-09 | 杭州电子科技大学 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
CN112488002A (en) * | 2020-12-03 | 2021-03-12 | 重庆邮电大学 | Emotion recognition method and system based on N170 |
CN112869711A (en) * | 2021-01-19 | 2021-06-01 | 华南理工大学 | Automatic sleep staging and migration method based on deep neural network |
CN113052113A (en) * | 2021-04-02 | 2021-06-29 | 中山大学 | Depression identification method and system based on compact convolutional neural network |
CN113069117A (en) * | 2021-04-02 | 2021-07-06 | 中山大学 | Electroencephalogram emotion recognition method and system based on time convolution neural network |
CN113128353A (en) * | 2021-03-26 | 2021-07-16 | 安徽大学 | Emotion sensing method and system for natural human-computer interaction |
CN113180659A (en) * | 2021-01-11 | 2021-07-30 | 华东理工大学 | Electroencephalogram emotion recognition system based on three-dimensional features and cavity full convolution network |
CN113191225A (en) * | 2021-04-19 | 2021-07-30 | 华南师范大学 | Emotional electroencephalogram recognition method and system based on graph attention network |
CN113576493A (en) * | 2021-08-23 | 2021-11-02 | 安徽七度生命科学集团有限公司 | User state identification method for health physiotherapy cabin |
CN113627518A (en) * | 2021-08-07 | 2021-11-09 | 福州大学 | Method for realizing multichannel convolution-recurrent neural network electroencephalogram emotion recognition model by utilizing transfer learning |
CN113642528A (en) * | 2021-09-14 | 2021-11-12 | 西安交通大学 | Hand movement intention classification method based on convolutional neural network |
CN113723557A (en) * | 2021-09-08 | 2021-11-30 | 山东大学 | Depression electroencephalogram classification system based on multiband time-space convolution network |
CN113791691A (en) * | 2021-09-18 | 2021-12-14 | 中国科学院自动化研究所 | Electroencephalogram signal band positioning method and device |
CN114578967A (en) * | 2022-03-08 | 2022-06-03 | 天津理工大学 | Emotion recognition method and system based on electroencephalogram signals |
CN116671881A (en) * | 2023-08-03 | 2023-09-01 | 北京九叁有方物联网科技有限公司 | Head-wearing brain body operation capability assessment device and method based on graph neural network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5601090A (en) * | 1994-07-12 | 1997-02-11 | Brain Functions Laboratory, Inc. | Method and apparatus for automatically determining somatic state |
CN107714057A (en) * | 2017-10-01 | 2018-02-23 | 南京邮电大学盐城大数据研究院有限公司 | A kind of three classification Emotion identification model methods based on convolutional neural networks |
CN109508651A (en) * | 2018-10-24 | 2019-03-22 | 辽宁师范大学 | Brain electricity sensibility classification method based on convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
ROBIN TIBOR SCHIRRMEISTER et al.: "Deep Learning With Convolutional Neural Networks for EEG Decoding and Visualization", Human Brain Mapping * |
ZHANG BENYU: "Research on EEG Emotion Classification Based on Convolutional Neural Networks", China Masters' Theses Full-text Database, Information Science and Technology Series * |
QIU JIAYU: "Research on Abnormal Brain Activity in Stroke Patients Based on Resting-State EEG", China Masters' Theses Full-text Database, Medicine and Health Sciences Series * |
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111709267A (en) * | 2020-03-27 | 2020-09-25 | 吉林大学 | Electroencephalogram signal emotion recognition method of deep convolutional neural network |
CN111709267B (en) * | 2020-03-27 | 2022-03-29 | 吉林大学 | Electroencephalogram signal emotion recognition method of deep convolutional neural network |
CN111461204A (en) * | 2020-03-30 | 2020-07-28 | 华南理工大学 | Emotion identification method based on electroencephalogram signals and used for game evaluation |
CN111461204B (en) * | 2020-03-30 | 2023-05-26 | 华南理工大学 | Emotion recognition method based on electroencephalogram signals for game evaluation |
CN111671445A (en) * | 2020-04-20 | 2020-09-18 | 广东食品药品职业学院 | Consciousness disturbance degree analysis method |
CN111783942A (en) * | 2020-06-08 | 2020-10-16 | 北京航天自动控制研究所 | Brain cognition process simulation method based on convolution cyclic neural network |
CN111783942B (en) * | 2020-06-08 | 2023-08-01 | 北京航天自动控制研究所 | Brain cognitive process simulation method based on convolutional recurrent neural network |
CN112036229A (en) * | 2020-06-24 | 2020-12-04 | 宿州小马电子商务有限公司 | Intelligent bassinet electroencephalogram signal channel configuration method with demand sensing function |
CN112036229B (en) * | 2020-06-24 | 2024-04-19 | 宿州小马电子商务有限公司 | Intelligent bassinet electroencephalogram signal channel configuration method with demand sensing function |
CN111882036A (en) * | 2020-07-22 | 2020-11-03 | 广州大学 | Convolutional neural network training method, electroencephalogram signal identification method, device and medium |
CN111882036B (en) * | 2020-07-22 | 2023-10-31 | 广州大学 | Convolutional neural network training method, electroencephalogram signal identification method, device and medium |
CN111860463B (en) * | 2020-08-07 | 2024-02-02 | 北京师范大学 | Emotion recognition method based on joint norm |
CN111860463A (en) * | 2020-08-07 | 2020-10-30 | 北京师范大学 | Emotion identification method based on joint norm |
CN112101152A (en) * | 2020-09-01 | 2020-12-18 | 西安电子科技大学 | Electroencephalogram emotion recognition method and system, computer equipment and wearable equipment |
CN112101152B (en) * | 2020-09-01 | 2024-02-02 | 西安电子科技大学 | Electroencephalogram emotion recognition method, electroencephalogram emotion recognition system, computer equipment and wearable equipment |
CN112084935A (en) * | 2020-09-08 | 2020-12-15 | 南京邮电大学 | Emotion recognition method based on expansion of high-quality electroencephalogram sample |
CN112084935B (en) * | 2020-09-08 | 2022-07-26 | 南京邮电大学 | Emotion recognition method based on expansion of high-quality electroencephalogram sample |
CN112381008A (en) * | 2020-11-17 | 2021-02-19 | 天津大学 | Electroencephalogram emotion recognition method based on parallel sequence channel mapping network |
CN112381008B (en) * | 2020-11-17 | 2022-04-29 | 天津大学 | Electroencephalogram emotion recognition method based on parallel sequence channel mapping network |
CN112488002A (en) * | 2020-12-03 | 2021-03-12 | 重庆邮电大学 | Emotion recognition method and system based on N170 |
CN112465069A (en) * | 2020-12-15 | 2021-03-09 | 杭州电子科技大学 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
CN112465069B (en) * | 2020-12-15 | 2024-02-06 | 杭州电子科技大学 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
CN113180659B (en) * | 2021-01-11 | 2024-03-08 | 华东理工大学 | Electroencephalogram emotion recognition method based on three-dimensional feature and cavity full convolution network |
CN113180659A (en) * | 2021-01-11 | 2021-07-30 | 华东理工大学 | Electroencephalogram emotion recognition system based on three-dimensional features and cavity full convolution network |
CN112869711A (en) * | 2021-01-19 | 2021-06-01 | 华南理工大学 | Automatic sleep staging and migration method based on deep neural network |
CN113128353B (en) * | 2021-03-26 | 2023-10-24 | 安徽大学 | Emotion perception method and system oriented to natural man-machine interaction |
CN113128353A (en) * | 2021-03-26 | 2021-07-16 | 安徽大学 | Emotion sensing method and system for natural human-computer interaction |
CN113052113A (en) * | 2021-04-02 | 2021-06-29 | 中山大学 | Depression identification method and system based on compact convolutional neural network |
CN113069117A (en) * | 2021-04-02 | 2021-07-06 | 中山大学 | Electroencephalogram emotion recognition method and system based on time convolution neural network |
CN113191225A (en) * | 2021-04-19 | 2021-07-30 | 华南师范大学 | Emotional electroencephalogram recognition method and system based on graph attention network |
CN113191225B (en) * | 2021-04-19 | 2023-07-04 | 华南师范大学 | Emotion electroencephalogram recognition method and system based on graph attention network |
CN113627518B (en) * | 2021-08-07 | 2023-08-08 | 福州大学 | Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning |
CN113627518A (en) * | 2021-08-07 | 2021-11-09 | 福州大学 | Method for realizing multichannel convolution-recurrent neural network electroencephalogram emotion recognition model by utilizing transfer learning |
CN113576493A (en) * | 2021-08-23 | 2021-11-02 | 安徽七度生命科学集团有限公司 | User state identification method for health physiotherapy cabin |
CN113723557B (en) * | 2021-09-08 | 2023-08-08 | 山东大学 | Depression brain electricity classifying system based on multiband space-time convolution network |
CN113723557A (en) * | 2021-09-08 | 2021-11-30 | 山东大学 | Depression electroencephalogram classification system based on multiband time-space convolution network |
CN113642528A (en) * | 2021-09-14 | 2021-11-12 | 西安交通大学 | Hand movement intention classification method based on convolutional neural network |
CN113791691B (en) * | 2021-09-18 | 2022-05-20 | 中国科学院自动化研究所 | Electroencephalogram signal band positioning method and device |
CN113791691A (en) * | 2021-09-18 | 2021-12-14 | 中国科学院自动化研究所 | Electroencephalogram signal band positioning method and device |
CN114578967A (en) * | 2022-03-08 | 2022-06-03 | 天津理工大学 | Emotion recognition method and system based on electroencephalogram signals |
CN116671881A (en) * | 2023-08-03 | 2023-09-01 | 北京九叁有方物联网科技有限公司 | Head-wearing brain body operation capability assessment device and method based on graph neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110353702A (en) | A kind of emotion identification method and system based on shallow-layer convolutional neural networks | |
Kachenoura et al. | ICA: a potential tool for BCI systems | |
CN105956624B (en) | Mental imagery brain electricity classification method based on empty time-frequency optimization feature rarefaction representation | |
CN110353673B (en) | Electroencephalogram channel selection method based on standard mutual information | |
Bashar et al. | Human identification from brain EEG signals using advanced machine learning method EEG-based biometrics | |
CN112656427A (en) | Electroencephalogram emotion recognition method based on dimension model | |
Pan et al. | Emotion recognition based on EEG using generative adversarial nets and convolutional neural network | |
CN112488002B (en) | Emotion recognition method and system based on N170 | |
CN114533086B (en) | Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation | |
CN111091074A (en) | Motor imagery electroencephalogram signal classification method based on optimal region common space mode | |
CN106725452A (en) | Based on the EEG signal identification method that emotion induces | |
CN108256579A (en) | A kind of multi-modal sense of national identity quantization measuring method based on priori | |
CN113180659B (en) | Electroencephalogram emotion recognition method based on three-dimensional feature and cavity full convolution network | |
Cheng et al. | Emotion recognition algorithm based on convolution neural network | |
Lu et al. | Combined CNN and LSTM for motor imagery classification | |
Wang et al. | Classification of EEG signal using convolutional neural networks | |
CN111000556A (en) | Emotion recognition method based on deep fuzzy forest | |
AU2013100576A4 (en) | Human Identification with Electroencephalogram (EEG) for the Future Network Security | |
CN113576498B (en) | Visual and auditory aesthetic evaluation method and system based on electroencephalogram signals | |
CN113128353B (en) | Emotion perception method and system oriented to natural man-machine interaction | |
CN114081505A (en) | Electroencephalogram signal identification method based on Pearson correlation coefficient and convolutional neural network | |
CN110192864A (en) | A kind of cross-domain electrocardiogram biometric identity recognition methods | |
CN117883082A (en) | Abnormal emotion recognition method, system, equipment and medium | |
Rammy et al. | Sequence-to-sequence deep neural network with spatio-spectro and temporal features for motor imagery classification | |
CN116421200A (en) | Brain electricity emotion analysis method of multi-task mixed model based on parallel training |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191022 |