CN114795246A - Brain region local-global attention-based electroencephalogram emotion classification method and system - Google Patents

Brain region local-global attention-based electroencephalogram emotion classification method and system


Publication number
CN114795246A
Authority
CN
China
Prior art keywords: electroencephalogram, attention, vector, brain, local
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN202210271282.3A
Other languages
Chinese (zh)
Other versions
CN114795246B (en)
Inventor
徐向民
许见微
张鑫
Current Assignee (the listed assignees may be inaccurate)
South China University of Technology SCUT
Zhongshan Institute of Modern Industrial Technology of South China University of Technology
Original Assignee
South China University of Technology SCUT
Zhongshan Institute of Modern Industrial Technology of South China University of Technology
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by South China University of Technology SCUT, Zhongshan Institute of Modern Industrial Technology of South China University of Technology filed Critical South China University of Technology SCUT
Priority to CN202210271282.3A
Publication of CN114795246A
Application granted
Publication of CN114795246B
Legal status: Active
Anticipated expiration

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316: Modalities, i.e. specific diagnostic methods
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/372: Analysis of electroencephalograms

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Psychology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an electroencephalogram (EEG) emotion classification method and system based on brain-region local-global attention. The method comprises the following steps: playing video material that induces emotion; collecting EEG signals; extracting local EEG features from the signals; simulating the topological structure of brain regions through EEG feature slicing and position encoding; and classifying and identifying the emotion labels of the EEG using both its local and global features. By applying a data-processing method that simulates the local-global topology of brain regions to EEG emotion recognition, the invention improves recognition accuracy and stability, can reflect the topological connections of brain functional areas under different emotional states, and has broad application prospects.

Description

Brain region local-global attention-based electroencephalogram emotion classification method and system
Technical Field
The invention relates to the technical fields of machine learning, deep learning and affective computing, and in particular to an electroencephalogram emotion classification method and system based on brain-region local-global attention.
Background
In recent years, research in cognitive neuroscience has shown that the occurrence of emotion and the corresponding activities are closely related to activity in the cerebral cortex. Neuronal potentials, as the basis of the physiological activity of the central nervous system, can further reflect functional and physiological changes. Electroencephalography (EEG) has millisecond-level temporal resolution and is therefore suited to scenarios with demanding response-time requirements, such as real-time detection of, and feedback on, emotional changes. Moreover, EEG acquisition equipment is portable and easy to use, so EEG can play a growing role in education, medicine and entertainment.
Because of its close relation to the central nervous system and its advantages in acquisition cost and ease of use, EEG has long received attention in emotion classification research. In the field of affective computing, EEG signals are used as a physiological index to study changes in emotional state. Researchers present visual and auditory stimuli to subjects through pictures, audio and video, record the subjects' EEG during the experiment, and then study the relationship between the EEG and the induced emotion using stimulus-material labels that were assessed in advance.
At present, many researchers apply machine learning and deep learning methods to EEG emotion classification. Algorithms such as k-nearest neighbours (KNN), support vector machines (SVM), fully connected neural networks (FC), convolutional neural networks (CNN) and recurrent neural networks (RNN) have all been applied to this problem. In addition, findings on brain functional areas obtained by EEG source analysis agree with existing conclusions in brain science, so EEG has great potential for the further study of brain function. However, current EEG emotion classification research neglects the role of the global topological features of the EEG; a stable and effective method for simulating brain-region topology is still lacking, and existing models are poor in accuracy and generalization. For example, the "end-to-end electroencephalogram emotion recognition method based on an attention mechanism" disclosed in Chinese patent publication CN113297981A merely adds an attention mechanism to a CNN and an LSTM, without combining the local and global spatial feature information of EEG signals on the physiological structure of the brain, so its prediction accuracy and interpretability are limited.
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides the electroencephalogram emotion classification method and system based on local-global attention of the brain area.
In order to achieve the purpose of the invention, the electroencephalogram emotion classification method based on local-global attention of the brain area comprises the following steps:
s1, playing a video material for inducing emotion;
s2, acquiring electroencephalogram signals, converting the acquired electroencephalogram signals into digital signals from analog signals and storing the digital signals;
S3, extracting local EEG features L_i from the EEG signals, where i = 1, 2, 3, …, N and N is the number of input data samples;
S4, slicing the EEG signal, simulating the brain-region topological structure in combination with position encoding, extracting global EEG features G_i from the brain-region topological structure, where i = 1, 2, 3, …, N and N is the number of input data samples, and, based on the local EEG features L_i and the global EEG features G_i, classifying and identifying the emotion label y of the EEG through a multi-layer self-attention mechanism to obtain the EEG emotion classification result.
Further, the step S1 is as follows:
S101, the stimulus material presentation module connects a 1280×1024-pixel, 60 Hz liquid crystal display, a loudspeaker and a desktop computer host, so that the host can play video;
S102, through an application interface programmed in Qt, emotion-inducing video material that has undergone subjective emotion assessment can be played in random order;
S103, through the Qt application interface, after watching an emotion-inducing video, the subject can score the video on arousal and valence on a scale of 1–9; the score is stored as the video's emotion label.
Further, the procedure of step S2 is as follows:
S201, the electrode cap and electrodes used by the EEG acquisition module are arranged according to the international 10-20 standard leads; the electrodes are made of silver chloride and collect EEG signals from the cerebral cortex, and the collected signals flow into an analog-to-digital converter;
S202, the EEG signals are converted from analog to digital through cascaded analog-to-digital converters, where each channel has a 24-bit conversion resolution and the selectable sampling rate of each electrode channel is 2, 4, 6 or 8 kHz;
S203, the analog-to-digital converter transmits the EEG signals to the application host through optical fibre; the EEG is then down-sampled to 128 Hz, electromyographic and eye-movement artifacts are removed, and the EEG is filtered with a band-pass filter;
S204, the EEG signals are rearranged into a matrix representation according to the planar two-dimensional distribution of the international 10-20 standard 32-lead placement.
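The rearrangement in S204 can be sketched as follows. This is a minimal NumPy illustration in which the grid size and the electrode-to-cell mapping are hypothetical stand-ins, since the patent specifies only that the 32 leads are rearranged into a planar two-dimensional distribution:

```python
import numpy as np

def to_topographic_grid(signals, layout, shape=(9, 9)):
    """Place per-channel EEG samples onto a 2-D scalp grid so that matrix
    neighbours are also scalp neighbours; unused grid cells stay zero."""
    n_samples = len(next(iter(signals.values())))
    grid = np.zeros(shape + (n_samples,))
    for ch, samples in signals.items():
        r, c = layout[ch]
        grid[r, c, :] = samples
    return grid

# Illustrative (hypothetical) grid positions for a few 10-20 electrodes;
# a real system would map all 32 leads.
layout = {"Fp1": (0, 3), "Fp2": (0, 5), "Fz": (1, 4), "Cz": (4, 4)}
signals = {ch: np.random.randn(128) for ch in layout}  # 1 s at 128 Hz
grid = to_topographic_grid(signals, layout)
print(grid.shape)  # (9, 9, 128)
```

The resulting H × W × T array is the planar representation on which the channel-attention and convolution steps below operate.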
Further, step S3 includes the following sub-steps:
channel weighting is carried out on the electroencephalogram characteristics through a channel attention weighting algorithm;
extracting electroencephalogram local features of the electroencephalogram features through convolution operation;
unfolding local features of the brain electricity into one-dimensional vectors;
the channel weighting of the electroencephalogram features through the channel attention weighting algorithm specifically comprises the following steps:
the input data is:

X = [x_1, x_2, x_3, …, x_C], X ∈ R^(H×W×C)

where X is the input three-dimensional tensor, H is the length of the input, W is its width, C is its number of channels, x_c is the component of the input X on the c-th channel, and R^(H×W×C) denotes the three-dimensional real space whose length, width and height are H, W and C respectively;
the input is average-pooled, and the channel value-distribution information is computed as:

v_c = F_avgpool(x_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j)

where v is the channel value-distribution information, a 1 × 1 × C vector, F_avgpool is the average-pooling function, i denotes the i-th position along the length H, and j denotes the j-th position along the width W;
the sigmoid function is:

σ(x) = 1 / (1 + e^(−x))
the ReLU activation function is:

g(x) = max(0, x)
the channel weights are given by:

u = σ(W_2 g(W_1 v))

where x is the function argument, u is the vector of channel weights, of size 1 × 1 × C, W_1 is a (C/r) × C matrix, r is a scaling parameter, and W_2 is a C × (C/r) matrix;
multiplying the channel weights by the input gives:

x̃_c = u_c · x_c

where x̃_c is an H × W matrix and u_c is the component of the channel weight vector u on the channel with index c;
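The channel-attention weighting above follows the familiar squeeze-and-excitation pattern; a minimal NumPy sketch, with random matrices standing in for the trained W_1 and W_2:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(X, W1, W2):
    """SE-style channel re-weighting matching the formulas above.
    X: (H, W, C) features; W1: (C//r, C); W2: (C, C//r)."""
    v = X.mean(axis=(0, 1))                    # average pooling -> (C,)
    u = sigmoid(W2 @ np.maximum(0.0, W1 @ v))  # u = sigma(W2 ReLU(W1 v))
    return X * u                               # x~_c = u_c * x_c (broadcast)

rng = np.random.default_rng(0)
H, W, C, r = 8, 8, 16, 4
X = rng.standard_normal((H, W, C))
W1 = rng.standard_normal((C // r, C))  # random stand-in for trained weights
W2 = rng.standard_normal((C, C // r))
Xw = channel_attention(X, W1, W2)
print(Xw.shape)  # (8, 8, 16)
```

Each channel of the output is the input channel scaled by its learned weight u_c ∈ (0, 1).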
the local EEG features are extracted from the weighted features by the convolution operation:

O(i, j) = Σ_{m=1}^{k_1} Σ_{n=1}^{k_2} x̃(i + m, j + n) K(m, n)

where the output O of the convolutional neural network is an H × W matrix with subscripts i and j, K is a convolution kernel of size k_1 × k_2, k_1 and k_2 are the length and width of K, m and n are its subscripts, and K(m, n) is the value of the kernel K at position (m, n);
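A minimal NumPy sketch of the convolution formula above, assuming a single channel and zero "same" padding (the kernel here is only illustrative):

```python
import numpy as np

def conv2d_same(x, K):
    """Zero-padded 'same' 2-D convolution in the cross-correlation form
    used above: O(i, j) = sum_m sum_n x(i+m, j+n) K(m, n)."""
    k1, k2 = K.shape
    xp = np.pad(x, ((k1 // 2, k1 // 2), (k2 // 2, k2 // 2)))
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + k1, j:j + k2] * K)
    return out

x = np.arange(16.0).reshape(4, 4)
K = np.ones((3, 3))            # a simple summing kernel for illustration
O = conv2d_same(x, K)
print(O.shape)  # (4, 4)
```

A framework convolution layer would be used in practice; the loop form only makes the index correspondence with the formula explicit.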
unfolding the local EEG features into a one-dimensional vector: the H × W convolution output O is flattened into a 1 × n_1 vector O_flat, where n_1 = H × W, which is fed into a linear network layer, finally giving the local EEG features:

L = W_3 O_flat + b_1

where L is the local EEG feature, an n_2-dimensional vector, W_3 is an n_2 × n_1 matrix, and b_1 is an n_2-dimensional column vector.
Further, the step S4 specifically includes:
the input data is:

X = [x_1, x_2, x_3, …, x_C], X ∈ R^(H×W×C)

where X is the input three-dimensional tensor, H is the length of the input, W is its width, and C is its number of channels;
slicing the input features into EEG feature slices gives:

x_p ∈ R^(N × (P²·C))

where x_p is the EEG feature vector obtained by slicing, N = HW / P² is the number of slices, and P is the side length of a slice;
the channel positions contained in the EEG feature slices correspond respectively to the frontal, temporal, parietal and occipital lobes on the physiological structure of the cerebral cortex, and the remaining parts are defined as edge electrodes;
the EEG feature slices are position-encoded; the mathematical expression of the position encoding is as follows:
the EEG feature vector x_p is expanded from N × P²·C into an N × dim vector, where dim is the chosen feature-vector length; a classification feature vector x′_p of size 1 × dim is randomly initialized and concatenated with the EEG feature vector x_p to obtain

x_e = [x′_p; x_p]

where x_e is a vector of size (N + 1) × dim and is the input to the position encoding; a vector v_p of size (N + 1) × dim is randomly initialized, giving the position-encoding formula:

z = x_e + v_p

where z, the output of the position encoding, is a vector of size (N + 1) × dim;
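The slicing and position-encoding steps above follow the ViT-style patch-embedding pattern; a minimal NumPy sketch under that assumption, with random stand-ins for all trained parameters:

```python
import numpy as np

def slice_and_encode(X, P, dim, rng):
    """Split the (H, W, C) feature map into P x P slices, linearly project
    each to `dim`, prepend a classification token x'_p and add a position
    vector v_p. All initialisations are stand-ins for trained weights."""
    H, W, C = X.shape
    nh, nw = H // P, W // P
    N = nh * nw
    patches = (X[:nh * P, :nw * P]
               .reshape(nh, P, nw, P, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(N, P * P * C))          # x_p: N x (P^2 C)
    W_e = rng.standard_normal((P * P * C, dim))
    x_p = patches @ W_e                         # N x dim
    cls = rng.standard_normal((1, dim))         # classification token x'_p
    x_e = np.concatenate([cls, x_p], axis=0)    # (N + 1) x dim
    v_p = rng.standard_normal((N + 1, dim))     # position embedding
    return x_e + v_p                            # z: (N + 1) x dim

rng = np.random.default_rng(0)
Z = slice_and_encode(rng.standard_normal((8, 8, 4)), P=4, dim=32, rng=rng)
print(Z.shape)  # (5, 32)
```

The position embedding lets the attention layers distinguish which brain region each slice came from.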
the global topological features of the EEG are extracted with a self-attention mechanism:

Attention(Q, K, V) = softmax(Q K^T / √d_k) V

where Q, K and V are all of size (N + 1) × (dim / h), h is the selectable number of attention heads, T denotes the transpose, and d_k = dim / h;
the multi-head attention formula is:

X_m = Concat(head_1, …, head_h) W^O
head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)

where X_m, the attention weight, is a vector of size (N + 1) × dim; Concat(head_1, …, head_h) denotes the vector concatenation of the h attention matrices; W^O is a model-trainable parameter matrix; head_i denotes the i-th attention matrix; and W_i^Q, W_i^K and W_i^V are the trainable parameter matrices corresponding to the vectors Q, K and V respectively; substituting Q W_i^Q, K W_i^K and V W_i^V into the formula Attention(Q, K, V) yields head_i as output;
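The scaled dot-product and multi-head formulas above can be sketched in NumPy as follows, with random projection matrices standing in for the trained W_i^Q, W_i^K, W_i^V and W^O:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

def multi_head(X, h, rng):
    """Multi-head self-attention over the (N+1) x dim encoded slices."""
    dim = X.shape[1]
    d = dim // h
    heads = []
    for _ in range(h):
        Wq = rng.standard_normal((dim, d))  # stand-ins for trained W_i^Q
        Wk = rng.standard_normal((dim, d))  # ... W_i^K
        Wv = rng.standard_normal((dim, d))  # ... W_i^V
        heads.append(attention(X @ Wq, X @ Wk, X @ Wv))
    Wo = rng.standard_normal((h * d, dim))  # output projection W^O
    return np.concatenate(heads, axis=-1) @ Wo  # X_m: (N+1) x dim

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 32))
Xm = multi_head(X, h=4, rng=rng)
print(Xm.shape)  # (5, 32)
```

Each head attends over all slices at once, which is how the layer captures long-range topological relations between brain regions.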
the attention weight and the output z of the position encoding are added to obtain the attention feature X_a:

X_a = X_m + z
The fully connected feed-forward network of the attention mechanism is:

X_f = max(0, X_a W_f1 + b_f1) W_f2 + b_f2

where X_f, the output of the fully connected feed-forward network, is a vector of size (N + 1) × dim, W_f1 is a dim × d_h matrix, d_h is the chosen hidden feature dimension, b_f1 is a d_h-dimensional bias vector, W_f2 is a d_h × dim matrix, and b_f2 is a dim-dimensional bias vector;
the output of the fully connected feed-forward network is added to the attention feature X_a to obtain:

G = X_a + X_f

where G, the global topological attention feature of the EEG, is a vector of size (N + 1) × dim;
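The feed-forward and residual step can be sketched as follows; the shapes and the random parameters here are illustrative stand-ins for the trained W_f1, b_f1, W_f2 and b_f2:

```python
import numpy as np

# Minimal sketch of the position-wise feed-forward network and residual sum.
rng = np.random.default_rng(0)
N1, dim, d_h = 5, 32, 64
X_a = rng.standard_normal((N1, dim))           # attention feature
W_f1 = rng.standard_normal((dim, d_h))
b_f1 = rng.standard_normal(d_h)
W_f2 = rng.standard_normal((d_h, dim))
b_f2 = rng.standard_normal(dim)
X_f = np.maximum(0.0, X_a @ W_f1 + b_f1) @ W_f2 + b_f2  # ReLU FFN
G = X_a + X_f                                  # global topological feature
print(G.shape)  # (5, 32)
```

The residual sum keeps the attention feature flowing past the FFN unchanged, which stabilises training of deep attention stacks.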
classifying and identifying the emotion label y of the EEG through the multi-layer self-attention mechanism comprises the following steps:
the input is:

X_input = G + L

where X_input, the input, is a vector of size (N + 1) × dim, G is the global topological attention feature of the EEG, and L is the local EEG feature;
X_input is fed into the multi-layer self-attention neural network to obtain the output X_y, a vector of size (N + 1) × dim carrying the simulated brain-region local-global topological attention features;
the predicted label output by the network is:

y = softmax(X_y)

where y is the emotion classification prediction label of the EEG.
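A hypothetical sketch of this final classification step. It assumes the label is read from the classification-token row and that there are four output classes (e.g. the valence/arousal quadrants); neither detail is specified in the text, and all parameters are random stand-ins:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
N1, dim, n_classes = 5, 32, 4        # 4 classes is an illustrative choice
G = rng.standard_normal((N1, dim))   # global topological features
L = rng.standard_normal((N1, dim))   # local features (assumed broadcastable)
X_input = G + L
W_cls = rng.standard_normal((dim, n_classes))  # stand-in classifier head
y = softmax(X_input[0] @ W_cls)      # class token -> label distribution
print(y.shape)  # (4,)
```

The softmax output is a probability distribution over emotion labels; the arg-max gives the predicted class.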
Further, a step S5 is also included; the process of step S5 is as follows:
S501, the EEG emotion classification prediction labels obtained by the model are stored on a server;
S502, an ordinary user's mobile terminal accesses the server through a wireless network, sends a request to the server, and reads the classification label.
The invention discloses a brain region local-global attention-based electroencephalogram emotion classification system, which is used for realizing the method and comprises the following modules:
the stimulation material presentation module is used for playing video materials for inducing emotion;
the electroencephalogram acquisition module is used for acquiring electroencephalogram signals and converting the acquired electroencephalogram signals into digital signals from analog signals for storage;
an electroencephalogram local feature extraction module, for extracting local EEG features L_i from the EEG signals, where i = 1, 2, 3, …, N and N is the number of input data samples;
a brain-region global topology attention neural network simulation module, for slicing the EEG signals, simulating the brain-region topological structure in combination with position encoding, extracting global EEG features G_i from the brain-region topological structure, where i = 1, 2, 3, …, N and N is the number of input data samples, and, based on the local EEG features L_i and the global EEG features G_i, classifying and identifying the emotion label y of the EEG through a multi-layer self-attention mechanism to obtain the EEG emotion classification result;
and the mobile terminal module is used for reading and displaying the electroencephalogram emotion classification result.
Further, the stimulation material presentation module comprises a liquid crystal display, a loudspeaker and a desktop computer host which are sequentially connected, wherein the system installed on the desktop computer host is Windows 10.
Furthermore, the electroencephalogram acquisition module comprises an electrode cap, electrodes and an analog-to-digital converter connected in sequence; the electrode cap and electrodes obtain the EEG signal by measuring the potential of the cerebral cortex, and the analog-to-digital converter converts the received analog EEG signal into a digital signal.
Further, the electroencephalogram local feature extraction module comprises a channel attention neural network, a convolution neural network and a linear neural network which are sequentially connected, wherein the channel attention neural network carries out channel weighting on the electroencephalogram features through a channel attention weighting algorithm, the convolution neural network carries out local feature extraction on the electroencephalogram features through convolution operation, and the linear neural network expands the electroencephalogram local features into one-dimensional vectors.
Further, the brain area global topology attention neural network simulation module comprises an electroencephalogram feature slicing module, a position coding module, a self-attention mechanism neural network layer, a full-connection feedforward network and a multilayer self-attention mechanism neural network layer which are sequentially connected, wherein the electroencephalogram feature slicing module slices input electroencephalogram features, the position coding module further simulates a brain area topology structure, the self-attention mechanism neural network layer and the full-connection feedforward network extract global features from electroencephalogram signals, and the multilayer self-attention mechanism neural network layer classifies local-global topology attention features of the simulated brain areas of the electroencephalogram.
Further, the electrode cap and electrodes are made of woven fabric and silver chloride and are distributed according to the international 10-20 standard leads.
Compared with the prior art, the invention has at least the following beneficial effects:
1) By extracting local and global EEG features in a way that simulates the local-global topological attention of brain regions, the invention makes effective use of the topological connection characteristics of brain regions and improves the stability and the noise and interference resistance of the system.
2) By adopting a neural-network attention-mechanism module, the invention addresses the problems that traditional neural networks cannot compute in parallel and cannot extract long-distance dependencies between sample data, and can achieve very high prediction accuracy.
3) By using a multi-layer self-attention network instead of the traditional LSTM structure, the model can refer to the output of every encoding time step during decoding and weight the information output at previous steps before decoding the next step, solving the problems that an LSTM applied to an EEG sequence signal cannot compute in parallel and cannot extract long-distance dependencies.
Drawings
FIG. 1 is a general structure diagram of an electroencephalogram emotion classification system based on local-global attention of brain areas in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a neural network connection of a local electroencephalogram feature extraction module in the embodiment of the present invention;
FIG. 3 is a schematic diagram of a method for simulating global topology of brain regions according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an attention neural network simulating a global topology of a brain region according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 4, the electroencephalogram emotion classification method based on local-global attention of brain areas provided by the present invention includes the following steps:
s1, playing a video material for inducing emotion;
s2, acquiring electroencephalogram signals, converting the acquired electroencephalogram signals into digital signals from analog signals and storing the digital signals;
S3, extracting local EEG features L_i from the EEG signals, where i = 1, 2, 3, …, N and N is the number of input data samples;
S4, slicing the EEG signal, simulating the brain-region topological structure in combination with position encoding, extracting global EEG features G_i from the brain-region topological structure, where i = 1, 2, 3, …, N and N is the number of input data samples, and, based on the local EEG features L_i and the global EEG features G_i, classifying and identifying the emotion label y of the EEG through a multi-layer self-attention mechanism to obtain the EEG emotion classification result.
In some embodiments of the present invention, step S1 includes the following sub-steps:
S101, through an application interface programmed in Qt, emotion-inducing video material that has undergone subjective emotion assessment can be played in random order;
S102, through the Qt application interface, after watching an emotion-inducing video, the subject can score the video on arousal and valence on a scale of 1–9; the score is stored as the video's emotion label.
In some embodiments of the present invention, step S2 includes the following sub-steps:
s201, collecting electroencephalogram signals of cerebral cortex;
S202, converting the EEG signals from analog to digital, where each channel has a 24-bit conversion resolution and the selectable sampling rate of each electrode channel is 2, 4, 6 or 8 kHz;
S203, down-sampling the EEG to 128 Hz, removing electromyographic and eye-movement artifacts, and filtering the EEG with a band-pass filter;
S204, rearranging the EEG signals into a matrix representation according to the planar two-dimensional distribution of the international 10-20 standard 32-lead placement, where x_z represents the z-th EEG channel, z ∈ [1, 32].
In some embodiments of the present invention, the method for extracting electroencephalogram local features in step S3 includes the following sub-steps:
step S31: channel weighting is carried out on the electroencephalogram characteristics through a channel attention weighting algorithm:
the input data is:

X = [x_1, x_2, x_3, …, x_C], X ∈ R^(H×W×C)

where X is the input three-dimensional tensor, H is the length of the input, W is its width, C is its number of channels, x_c is the component of the input X on the c-th channel, and R^(H×W×C) denotes the three-dimensional real space whose length, width and height are H, W and C respectively;
the input is average-pooled, and the channel value-distribution information is computed as:

v_c = F_avgpool(x_c) = (1 / (H × W)) Σ_{i=1}^{H} Σ_{j=1}^{W} x_c(i, j)

where v is the channel value-distribution information, a 1 × 1 × C vector, F_avgpool is the average-pooling function, i denotes the i-th position along the length H, and j denotes the j-th position along the width W;
the sigmoid function is:

σ(x) = 1 / (1 + e^(−x))
the ReLU activation function is:

g(x) = max(0, x)
the channel weights are given by:

u = σ(W_2 g(W_1 v))

where x is the function argument, u is the vector of channel weights, of size 1 × 1 × C, W_1 is a (C/r) × C matrix, r is a scaling parameter, and W_2 is a C × (C/r) matrix;
multiplying the channel weights by the input gives:

x̃_c = u_c · x_c

where x̃_c is an H × W matrix and u_c is the component of the channel weight vector u on the channel with index c;
step S32: the local EEG features are extracted from the weighted features by the convolution operation:

O(i, j) = Σ_{m=1}^{k_1} Σ_{n=1}^{k_2} x̃(i + m, j + n) K(m, n)

where the output O of the convolutional neural network is an H × W matrix with subscripts i and j, K is a convolution kernel of size k_1 × k_2, k_1 and k_2 are the length and width of K, m and n are its subscripts, and K(m, n) is the value of the kernel K at position (m, n);
step S33: the local EEG features are unfolded into a one-dimensional vector: the H × W convolution output O is flattened into a 1 × n_1 vector O_flat, where n_1 = H × W; this is fed into a linear network layer, finally giving the local EEG features:

L = W_3 O_flat + b_1

where L is the local EEG feature, an n_2-dimensional vector, W_3 is an n_2 × n_1 matrix, and b_1 is an n_2-dimensional column vector.
In some embodiments of the present invention, referring to fig. 4, step S4 is a method for simulating a brain region global topology attention neural network, and the mathematical expression is as follows:
the input data is:

X = [x_1, x_2, x_3, …, x_C], X ∈ R^(H×W×C)

where X is the input three-dimensional tensor, H is the length of the input, W is its width, and C is its number of channels;
slicing the input features into EEG feature slices gives:

x_p ∈ R^(N × (P²·C))

where x_p is the EEG feature vector obtained by slicing, N = HW / P² is the number of slices, and P is the side length of a slice;
the channel positions contained in the EEG feature slices correspond respectively to the frontal, temporal, parietal and occipital lobes on the physiological structure of the cerebral cortex, and the remaining parts are defined as edge electrodes, realizing the function of simulating brain regions.
Position coding is carried out on the electroencephalogram feature slices, with the following mathematical expression:

the electroencephalogram feature vector x_p is expanded from N×(P²·C) into an N×dim vector, dim being the set feature-vector length; a classification feature vector x'_p of size 1×dim is randomly initialized and concatenated with the electroencephalogram feature vector x_p to obtain

z = [x'_p; x_p]

a vector of size (N+1)×dim, which serves as the input of the position-encoding module;

a vector v_p of size (N+1)×dim is randomly initialized, giving the position-encoding formula:

z_0 = z + v_p

wherein z_0, the output of the position-encoding step, is a vector of size (N+1)×dim;
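The embedding, class-token concatenation, and position-encoding steps above can be sketched in numpy. The linear embedding matrix `W_e` and all random initializations are illustrative stand-ins for trained parameters:

```python
import numpy as np

N, P2C, dim = 4, 80, 32                      # slices, slice length, feature length
rng = np.random.default_rng(1)

x_p = rng.standard_normal((N, P2C))          # sliced EEG features
W_e = rng.standard_normal((P2C, dim))        # linear embedding to length dim
x_emb = x_p @ W_e                            # N x dim

x_cls = rng.standard_normal((1, dim))        # random 1 x dim classification token
z = np.concatenate([x_cls, x_emb], axis=0)   # (N+1) x dim position-coding input

v_p = rng.standard_normal((N + 1, dim))      # random position-encoding vector
z0 = z + v_p                                 # output of the position-encoding step
print(z0.shape)  # (5, 32)
```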
The formula by which the self-attention mechanism neural network extracts the electroencephalogram global topological features is:

Attention(Q, K, V) = softmax(Q·K^T / sqrt(dim/H)) · V

wherein Q, K and V are each of size (N+1)×(dim/H), H is the selected number of attention matrices (heads), T denotes the transpose, and dim/H is the feature length of each head;
The multi-head attention mechanism formula is:

X_m = Concat(head_1, …, head_h) · W^O

head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V)

wherein X_m, the attention weight, is a vector of size (N+1)×dim; Concat(head_1, …, head_h) denotes vector concatenation of the h attention matrices; W^O is a trainable parameter matrix of the simulated brain-region global topological attention neural network module; head_i denotes the i-th attention matrix; W_i^Q, W_i^K and W_i^V are the trainable parameter matrices corresponding to the vectors Q, K and V respectively; head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V) means that Q·W_i^Q, K·W_i^K and V·W_i^V are substituted into the formula Attention(Q, K, V), the resulting output being head_i.
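Putting the scaled-dot-product and multi-head formulas together, a self-contained numpy sketch. The head count h, width dim, and the per-head matrices W_i^Q, W_i^K, W_i^V and output matrix W^O mirror the notation above; all weights are random stand-ins:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]                      # per-head length, dim / h
    return softmax(Q @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(2)
N1, dim, h = 5, 32, 4                    # (N+1) tokens, model width, head count
d_h = dim // h

z0 = rng.standard_normal((N1, dim))      # output of position encoding
Wq = rng.standard_normal((h, dim, d_h))  # per-head W_i^Q
Wk = rng.standard_normal((h, dim, d_h))  # per-head W_i^K
Wv = rng.standard_normal((h, dim, d_h))  # per-head W_i^V
Wo = rng.standard_normal((dim, dim))     # output matrix W^O

heads = [attention(z0 @ Wq[i], z0 @ Wk[i], z0 @ Wv[i]) for i in range(h)]
X_m = np.concatenate(heads, axis=-1) @ Wo   # (N+1) x dim attention output
print(X_m.shape)  # (5, 32)
```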
The attention weight is added to the output of the position encoding to obtain the attention feature X_a:

X_a = X_m + z_0

wherein z_0 denotes the output of the position-encoding step.
The fully connected feed-forward network formula of the attention mechanism is:

X_f = max(0, X_a·W_f1 + b_f1)·W_f2 + b_f2

wherein X_f, the output of the fully connected feed-forward network, is a vector of size (N+1)×dim; W_f1 is a matrix of size dim×d_h, d_h being the selected hidden feature dimension; b_f1 is an (N+1)-dimensional column vector; W_f2 is a matrix of size d_h×dim; b_f2 is an (N+1)-dimensional column vector.

The output of the fully connected feed-forward network is added to the attention feature X_a to obtain:

G = X_a + X_f

wherein G, the electroencephalogram global topological attention feature, is a vector of size (N+1)×dim.
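The two residual steps (attention plus position-encoding output, then feed-forward plus attention feature) can be sketched as follows. Note one assumption: the text gives b_f1 and b_f2 as (N+1)-dimensional column vectors, while this sketch uses the conventional per-feature biases (d_h- and dim-dimensional); sizes and random weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N1, dim, d_h = 5, 32, 64                 # (N+1), model width, hidden size d_h

z0  = rng.standard_normal((N1, dim))     # position-encoding output
X_m = rng.standard_normal((N1, dim))     # multi-head attention output

X_a = X_m + z0                           # first residual: attention feature X_a

W_f1 = rng.standard_normal((dim, d_h)); b_f1 = rng.standard_normal(d_h)
W_f2 = rng.standard_normal((d_h, dim)); b_f2 = rng.standard_normal(dim)

X_f = np.maximum(0, X_a @ W_f1 + b_f1) @ W_f2 + b_f2   # ReLU feed-forward
G = X_a + X_f                            # second residual: global topological feature
print(G.shape)  # (5, 32)
```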
The multilayer self-attention mechanism neural network that extracts the simulated brain-region local-global topological attention features is expressed as follows:

the input of the network layer is:

X_input = G + L

wherein X_input, the input, is a vector of size (N+1)×dim; G is the electroencephalogram global topological attention feature; L is the electroencephalogram local feature;

X_input is fed into the multilayer self-attention mechanism neural network to obtain the output X_y, the simulated brain-region local-global topological attention feature vector, a vector of size (N+1)×dim.

The predicted label output by the network is:

y = softmax(X_y)

wherein y is the electroencephalogram emotion-classification prediction label; depending on the classification task, the labels may be high and low arousal, or high and low pleasure (valence).
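A sketch of the fusion and classification head. Two assumptions are made: the multilayer self-attention network is replaced by an identity pass, and a hypothetical readout matrix `W_cls` projects the class token to the two labels before the softmax (the text applies softmax to X_y directly):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

rng = np.random.default_rng(4)
N1, dim = 5, 32

G = rng.standard_normal((N1, dim))       # global topological attention feature
L = rng.standard_normal((N1, dim))       # local EEG feature, matched in size

X_input = G + L                          # fused input to the stacked layers

# stand-in for the multilayer self-attention network: identity pass
X_y = X_input

# hypothetical head: read the class token, project to 2 labels (e.g. high/low arousal)
W_cls = rng.standard_normal((dim, 2))
y = softmax(X_y[0] @ W_cls)
print(y.sum())  # probabilities sum to 1
```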
In the invention, the method further comprises step S5: reading and displaying the recognized emotion label through a network connection.

The process of step S5 is as follows:

S501, the electroencephalogram emotion-classification prediction labels obtained by the model are stored on a server;

S502, the mobile terminal of an ordinary user accesses the server through a wireless network, sends a request to the server and reads the classification label.
The invention also discloses an electroencephalogram emotion classification system based on brain-region local-global attention for implementing the method provided by the above embodiments. The electroencephalogram emotion classification system comprises a stimulation material presentation module, an electroencephalogram acquisition module, an electroencephalogram local feature extraction module, a simulated brain-region global topological attention neural network module and a mobile terminal module,
the stimulation material presenting module is used for playing video materials for inducing emotion;
the electroencephalogram acquisition module is used for acquiring electroencephalogram signals and converting the acquired electroencephalogram signals into digital signals from analog signals for storage;
the electroencephalogram local feature extraction module is used for extracting the electroencephalogram local features L_i from the electroencephalogram signals, where i = 1, 2, 3, …, N and N is the number of input data samples;

the simulated brain-region global topological attention neural network module is used for slicing the electroencephalogram signals, simulating the brain-region topological structure in combination with position coding, and extracting the electroencephalogram global features G_i from it, where i = 1, 2, 3, …, N and N is the number of input data samples; according to the electroencephalogram local features L_i and the electroencephalogram global features G_i, the emotion label y of the electroencephalogram is classified and identified through a multilayer self-attention mechanism to obtain the electroencephalogram emotion classification result;
the mobile terminal module is used for reading and displaying the electroencephalogram emotion classification result.
In some embodiments of the present invention, the stimulation material presentation module includes a liquid crystal display and a speaker connected in sequence, together with a desktop computer host equipped for video playback. The system may use a 1280×1024-pixel, 60 Hz liquid crystal display, and the operating system installed on the desktop host may be Windows 10.
In some embodiments of the present invention, the electroencephalogram acquisition module includes an electrode cap, electrodes and an analog-to-digital converter connected in sequence. The electrode cap and electrodes obtain the electroencephalogram signal by measuring the potential of the cerebral cortex; the cascaded analog-to-digital converter converts the received analog electroencephalogram signal into a digital electrical signal and transmits it through an optical fiber to the application host for processing.
The electrode cap and electrodes are arranged according to the international 10-20 standard lead placement, and the electrode material is silver chloride.
In some embodiments of the present invention, the electroencephalogram local feature extraction module includes a channel attention neural network, a convolutional neural network and a linear neural network connected in sequence. The channel attention neural network weights the channels of the electroencephalogram features through a channel attention weighting algorithm, obtaining the weights of the different channels so as to boost the useful ones; the convolutional neural network extracts local features from the electroencephalogram features through convolution operations; and the linear neural network expands the electroencephalogram local features into a selected size, such as a one-dimensional vector.
In some embodiments of the present invention, the simulated brain-region global topological attention neural network module includes an electroencephalogram feature slicing module, a position coding module, a self-attention mechanism neural network layer, a fully connected feed-forward network and a multilayer self-attention mechanism neural network layer connected in sequence. The electroencephalogram feature slicing module slices the input electroencephalogram features; the position coding module further simulates the brain-region topological structure; the self-attention mechanism neural network layer and the fully connected feed-forward network extract global features from the electroencephalogram signal; and the multilayer self-attention mechanism neural network layer performs classification in the prediction stage after its parameters are adjusted in the training stage, where the weights and biases applied to the electroencephalogram features under test are tuned to continuously improve classification accuracy.
In some embodiments of the present invention, the mobile terminal module accesses the server through the wireless network and sends a request to the server to read the category label.
Through the method and system provided by the embodiments of the invention, a data-processing method that simulates the local-global topology of brain regions is applied to electroencephalogram emotion recognition; the accuracy of electroencephalogram emotion recognition is improved, the stability is better, the topological connections of brain functional regions under different emotional states can be reflected, and the application prospect is broad.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (10)

1. A brain region local-global attention-based electroencephalogram emotion classification method is characterized by comprising the following steps:
s1, playing a video material for inducing emotion;
s2, acquiring electroencephalogram signals, converting the acquired electroencephalogram signals into digital signals from analog signals and storing the digital signals;
s3, extracting the electroencephalogram local features L_i from the electroencephalogram signals, where i = 1, 2, 3, …, N and N is the number of input data samples;

s4, slicing the electroencephalogram signal, simulating the brain-region topological structure in combination with position coding, and extracting the electroencephalogram global features G_i from it, where i = 1, 2, 3, …, N and N is the number of input data samples; according to the electroencephalogram local features L_i and the electroencephalogram global features G_i, classifying and identifying the emotion label y of the electroencephalogram through a multilayer self-attention mechanism to obtain the electroencephalogram emotion classification result.
2. The brain region local-global attention-based electroencephalogram emotion classification method according to claim 1, wherein the step S1 includes:
s101, through an application interface programmed with Qt, playing the emotion-inducing video materials, which have undergone subjective emotion assessment, in random order;

s102, through the application interface programmed with Qt, after the subject watches an emotion-inducing video, having the subject score the video according to its arousal and pleasure, the scores being stored as the video emotion label.
3. The electroencephalogram emotion classification method based on local-global attention of brain areas as claimed in claim 1, wherein the step S2 includes the following sub-steps:
collecting the electroencephalogram signals of the cerebral cortex;

converting the electroencephalogram signal from an analog signal into a digital signal;

preprocessing the converted electroencephalogram signals;

rearranging the electroencephalogram signals into a matrix representation.
4. The electroencephalogram emotion classification method based on local-global attention of brain areas as claimed in claim 1, wherein the step S3 includes the following sub-steps:
channel weighting is carried out on the electroencephalogram characteristics through a channel attention weighting algorithm;
extracting electroencephalogram local features of the electroencephalogram features through convolution operation;
unfolding local features of the brain electricity into one-dimensional vectors;
the channel weighting of the electroencephalogram features through the channel attention weighting algorithm specifically comprises the following steps:
the input data is:
X=[x 1 ,x 2 ,x 3 ,…,x c ],X∈R H×W×C
wherein X is the input three-dimensional vector, H is the length of the input vector, W is the width of the input vector, C is the number of channels of the input vector, x_c is the component of the input vector X on the c-th channel, and R^(H×W×C) denotes the three-dimensional real space of length H, width W and height C;
average-pooling the input vector, the formula for computing the channel numerical-distribution information is:

v_c = F_avgpool(x_c) = (1/(H·W)) Σ_{i=1..H} Σ_{j=1..W} x_c(i, j)

wherein v is the channel numerical-distribution information, a 1×1×C vector; F_avgpool is the average-pooling function; i denotes the i-th position along the length H and j the j-th position along the width W;
the sigmoid function is:

σ(x) = 1 / (1 + e^(−x))
the activation function RELU expression is:
g(x)=max(0,x)
then the weight formula of the channels is:

u = σ(W_2 · g(W_1 · v))

wherein x is the variable, u, the channel weight, is a 1×1×C vector; W_1 is a (C/r)×C matrix, r being a scaling parameter; W_2 is a C×(C/r) matrix;
multiplying the channel weights by the input gives:

x̃_c = u_c · x_c

wherein x̃_c is an H×W vector, and u_c is the component of the channel-weight vector u on the channel with index c;
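The channel-weighting steps above follow a squeeze-and-excitation pattern: average-pool each channel, pass through a ReLU bottleneck and a sigmoid, then re-weight the channels. A numpy sketch with an illustrative scaling ratio r and random weights:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(5)
H, W, C, r = 8, 8, 16, 4                 # input size and scaling parameter r

X = rng.standard_normal((H, W, C))

v = X.mean(axis=(0, 1))                  # average pooling: 1 x 1 x C channel info
W1 = rng.standard_normal((C // r, C))    # (C/r) x C bottleneck matrix
W2 = rng.standard_normal((C, C // r))    # C x (C/r) expansion matrix

u = sigmoid(W2 @ np.maximum(0, W1 @ v))  # channel weights, sigma(W2 ReLU(W1 v))
X_tilde = X * u                          # re-weight each channel by u_c
print(X_tilde.shape)  # (8, 8, 16)
```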
the extraction of the electroencephalogram local features from the electroencephalogram features through convolution operation comprises:

X_conv(i, j) = Σ_{m=1..k_1} Σ_{n=1..k_2} x̃(i+m−1, j+n−1) · K(m, n)

wherein X_conv, the output of the convolutional neural network, is an H×W vector with subscripts i, j; K is a convolution kernel of size k_1×k_2, k_1 and k_2 respectively denoting the length and width of the convolution kernel K and m, n its subscripts; K(m, n) denotes the value of the convolution kernel K at position (m, n);
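The convolution step can be sketched directly from the summation formula. Zero padding so the output keeps the H×W size is an assumption (the text does not state the padding scheme):

```python
import numpy as np

def conv2d_same(x, K):
    """Direct 2-D convolution with zero padding so the output stays H x W."""
    H, W = x.shape
    k1, k2 = K.shape
    xp = np.pad(x, ((k1 // 2,) * 2, (k2 // 2,) * 2))  # zero-pad the borders
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # sum over the k1 x k2 window weighted by the kernel K
            out[i, j] = np.sum(xp[i:i + k1, j:j + k2] * K)
    return out

x = np.ones((5, 5))
K = np.ones((3, 3)) / 9.0                # 3 x 3 averaging kernel
y = conv2d_same(x, K)
print(y[2, 2])  # 1.0 — interior average of an all-ones input
```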
the expanding of the electroencephalogram local features into a one-dimensional vector comprises the following steps:

the convolution output X_conv is expanded into a 1×n_1 vector x_flat, where n_1 = H×W, and input into a linear network layer, finally obtaining the electroencephalogram local feature:

L = W_3 · x_flat + b_1

wherein L, the electroencephalogram local feature, is an n_2-dimensional vector, W_3 is an n_2×n_1 matrix, and b_1 is an n_2-dimensional column vector.
5. The electroencephalogram emotion classification method based on local-global attention of brain areas as claimed in any one of claims 1 to 4, wherein said step S4 specifically comprises:
the input data is:
X=[x 1 ,x 2 ,x 3 ,…,x c ],X∈R H×W×C
wherein X is an input three-dimensional vector, H is the length of the input vector, W is the width of the input vector, and C is the channel number of the input vector;
performing electroencephalogram feature slicing on the input features to obtain:

x_p ∈ R^(N×(P²·C))

wherein x_p is the electroencephalogram feature vector obtained by slicing, N = HW/P² is the number of slices, and P is the side length of each slice;

the channel positions contained in the obtained electroencephalogram feature slices correspond respectively to the frontal lobe, temporal lobe, parietal lobe and occipital lobe on the physiological structure of the cerebral cortex, and the remaining parts are defined as edge electrodes;
carrying out position coding on the electroencephalogram feature slices, with the following mathematical expression:

the electroencephalogram feature vector x_p is expanded from N×(P²·C) into an N×dim vector, dim being the set feature-vector length;

a classification feature vector x'_p of size 1×dim is randomly initialized and concatenated with the electroencephalogram feature vector x_p to obtain

z = [x'_p; x_p]

a vector of size (N+1)×dim, serving as the input for position coding;

a vector v_p of size (N+1)×dim is randomly initialized, giving the position-coding formula:

z_0 = z + v_p

wherein z_0, the output of the position coding, is a vector of size (N+1)×dim;
extracting the electroencephalogram global topological features by means of a self-attention mechanism:

Attention(Q, K, V) = softmax(Q·K^T / sqrt(dim/H)) · V

wherein Q, K and V are each of size (N+1)×(dim/H), H is the selected number of attention matrices (heads), T denotes the transpose, and dim/H is the feature length of each head;
the formula of the multi-head attention mechanism is:

X_m = Concat(head_1, …, head_h) · W^O

head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V)

wherein X_m, the attention weight, is a vector of size (N+1)×dim; Concat(head_1, …, head_h) denotes vector concatenation of the h attention matrices; W^O is a model-trainable parameter matrix; head_i denotes the i-th attention matrix; W_i^Q, W_i^K and W_i^V are the trainable parameter matrices corresponding to the vectors Q, K and V respectively; head_i = Attention(Q·W_i^Q, K·W_i^K, V·W_i^V) means that Q·W_i^Q, K·W_i^K and V·W_i^V are substituted into the formula Attention(Q, K, V), the resulting output being head_i;
adding the attention weight to the output of the position coding to obtain the attention feature X_a:

X_a = X_m + z_0

wherein z_0 denotes the output of the position coding;
The fully connected feedforward network formula of the attention mechanism is as follows:
X f =max(0,x a W f1 +b f1 )W f2 +b f2
wherein, X f The output of the fully-connected feedforward network is a vector of size (N +1) x dim, x a Is X a Component of (A), W f1 Is a size of dim x d h Matrix of d h For a selected hidden feature dimension, b f1 Is a (N +1) -dimensional column vector, W f2 Is a size d h Matrix of xdim, b f2 Is a (N +1) -dimensional column vector;
output and attention feature X of fully connected feedforward network a Adding to obtain:
G=X m +X f
wherein G is the global topological attention feature of the electroencephalogram and is a vector of (N +1) x dim;
the classification and identification of the emotion label y of the electroencephalogram through the multilayer self-attention mechanism comprises the following steps:
the input is as follows:
X input =G+L
wherein, X input For input, the method is a vector with the size of (N +1) multiplied by dim, G is the global topological attention feature of the electroencephalogram, and L is the local electroencephalogram feature;
X_input is input into the multilayer self-attention mechanism neural network to obtain the output X_y, the simulated brain-region local-global topological attention feature vector, a vector of size (N+1)×dim;
predicted label of network output:
y=softmax(X y )
and y is an emotion classification prediction label of the brain electricity.
6. An electroencephalogram emotion classification system based on brain-region local-global attention, for implementing the method of any one of claims 1-5, the system comprising the following modules:
the stimulation material presentation module is used for playing video materials for inducing emotion;
the electroencephalogram acquisition module is used for acquiring electroencephalogram signals and converting the acquired electroencephalogram signals into digital signals from analog signals for storage;
an electroencephalogram local feature extraction module for extracting the electroencephalogram local features L_i from the electroencephalogram signals, where i = 1, 2, 3, …, N and N is the number of input data samples;

a simulated brain-region global topological attention neural network module for slicing the electroencephalogram signals, simulating the brain-region topological structure in combination with position coding, and extracting the electroencephalogram global features G_i from it, where i = 1, 2, 3, …, N and N is the number of input data samples; according to the electroencephalogram local features L_i and the electroencephalogram global features G_i, classifying and identifying the emotion label y of the electroencephalogram through a multilayer self-attention mechanism to obtain the electroencephalogram emotion classification result;
and the mobile terminal module is used for reading and displaying the electroencephalogram emotion classification result.
7. The brain region local-global attention-based electroencephalogram emotion classification system as claimed in claim 6, wherein the electroencephalogram acquisition module comprises an electrode cap, an electrode and an analog-to-digital converter which are connected in sequence, wherein the electrode cap and the electrode obtain an electroencephalogram signal by measuring cerebral cortex potential, and the analog-to-digital converter converts the received analog electroencephalogram signal into a digital electrical signal.
8. The electroencephalogram emotion classification system based on brain-region local-global attention according to claim 6, wherein the electroencephalogram emotion-classification prediction labels obtained by the model are stored on a server, and the mobile terminal module accesses the server through a wireless network, sends a request to the server and reads the classification label.
9. The electroencephalogram emotion classification system based on brain-region local-global attention according to claim 6, wherein the electroencephalogram local feature extraction module comprises a channel attention neural network, a convolutional neural network and a linear neural network connected in sequence, wherein the channel attention neural network weights the channels of the electroencephalogram features through a channel attention weighting algorithm to obtain the weights of the different channels, the convolutional neural network extracts local features from the electroencephalogram features through convolution operations, and the linear neural network expands the electroencephalogram local features into a selected size.
10. The electroencephalogram emotion classification system based on local-global attention of a brain area according to any one of claims 6-9, characterized in that the simulated brain area global topology attention neural network module comprises an electroencephalogram feature slicing module, a position coding module, a self-attention mechanism neural network layer, a fully-connected feedforward network and a multilayer self-attention mechanism neural network layer which are connected in sequence, wherein the electroencephalogram feature slicing module is used for slicing input electroencephalogram features, the position coding module is used for further simulating a brain area topology structure, the self-attention mechanism neural network layer and the fully-connected feedforward network are used for extracting global features from electroencephalogram signals, and the multilayer self-attention mechanism neural network layer is used for classifying the electroencephalogram signals in a prediction stage after parameters are adjusted in a training stage.
CN202210271282.3A 2022-03-18 2022-03-18 Electroencephalogram emotion classification method and system based on brain region local-global attention Active CN114795246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210271282.3A CN114795246B (en) 2022-03-18 2022-03-18 Electroencephalogram emotion classification method and system based on brain region local-global attention

Publications (2)

Publication Number Publication Date
CN114795246A true CN114795246A (en) 2022-07-29
CN114795246B CN114795246B (en) 2024-07-09

Family

ID=82531355


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115607169A (en) * 2022-10-18 2023-01-17 南京科技职业学院 Electroencephalogram signal identification method based on self-adaptive multi-view deep learning framework
CN115758267A (en) * 2022-11-09 2023-03-07 南通大学 Electroencephalogram signal emotion recognition method based on SRU and double attention

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2012157624A (en) * 2011-02-02 2012-08-23 Topcon Corp Measuring device, measuring method, and program
CN113627518A (en) * 2021-08-07 2021-11-09 福州大学 Method for realizing multichannel convolution-recurrent neural network electroencephalogram emotion recognition model by utilizing transfer learning
US11194972B1 (en) * 2021-02-19 2021-12-07 Institute Of Automation, Chinese Academy Of Sciences Semantic sentiment analysis method fusing in-depth features and time sequence models
CN114052735A (en) * 2021-11-26 2022-02-18 山东大学 Electroencephalogram emotion recognition method and system based on depth field self-adaption





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant