CN113052099A - SSVEP classification method based on convolutional neural network - Google Patents

SSVEP classification method based on convolutional neural network

Info

Publication number
CN113052099A
Authority
CN
China
Prior art keywords
ssvep
data
neural network
convolutional neural
dimensional
Prior art date
Legal status
Granted
Application number
CN202110349963.2A
Other languages
Chinese (zh)
Other versions
CN113052099B (en)
Inventor
姜小明
赵德春
王添
田媛媛
向富贵
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN202110349963.2A
Publication of CN113052099A
Application granted
Publication of CN113052099B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS > G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING > G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02: Preprocessing
    • G06F 2218/08: Feature extraction
    • G06F 2218/12: Classification; Matching
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS > G06N 3/00: Computing arrangements based on biological models > G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology > G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention belongs to the field of data processing and specifically relates to an SSVEP classification method based on a convolutional neural network, which comprises the following steps: dividing multi-channel SSVEP electroencephalogram (EEG) data through a filter bank into several frequency bands that respectively cover the SSVEP stimulation-frequency fundamental and its harmonic components; applying a fast Fourier transform to the divided data to obtain the corresponding spectrum data; and performing feature extraction, learning and classification on the EEG spectrum data of each frequency band with a multi-channel convolutional neural network. The invention exploits the prior knowledge that the EEG potential evoked by a stimulation target in the SSVEP signal contains a fundamental wave whose harmonic components are mutually correlated: time-domain filtering and the fast Fourier transform preprocess the EEG signal and extract each harmonic component of the SSVEP signal, and a convolutional neural network then performs feature extraction and classification, yielding higher classification accuracy.

Description

SSVEP classification method based on convolutional neural network
Technical Field
The invention belongs to the field of data processing, and particularly relates to an SSVEP classification method based on a convolutional neural network.
Background
With the development of brain-computer interface (BCI) technology, an interaction mode has emerged in which external devices can be controlled directly by brain intentions. BCI technology based on electroencephalogram (EEG) signals is a non-invasive approach: the brain signals are read, identified by an algorithm and finally converted into usable commands. Among the various EEG-based BCI technologies, the steady-state visual evoked potential (SSVEP) has received extensive attention and research because of its advantages such as a high information transfer rate and little required user training. The steady-state visual evoked potential arises when a user gazes at stimuli flickering at different frequencies: the visual cortex produces corresponding oscillations, so the amplitudes at the corresponding frequency and its harmonics become stronger in the EEG signal.
Current SSVEP classification methods can be broadly divided into conventional methods and deep-learning-based methods. Most conventional classification methods, such as power spectral density analysis (PSDA), canonical correlation analysis (CCA) and filter bank canonical correlation analysis (FBCCA), are built on a fixed mathematical model and obtain the classification result by matching or correlating the EEG signal with each stimulation target in turn, as sketched below. Deep-learning-based SSVEP classification methods take time-domain or frequency-domain EEG signals as input and use a convolutional neural network to automatically extract and learn features before classifying. Because a convolutional neural network adjusts its weight parameters according to the characteristics contained in the training data, it fits the data better than conventional algorithms based on a fixed mathematical model.
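As an illustration of the conventional correlation-based approach mentioned above, the following is a minimal sketch of CCA-based SSVEP classification, assuming one multi-channel EEG trial and a known list of candidate stimulation frequencies; the function name, sampling rate handling and number of harmonics are illustrative assumptions, not part of the patent.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_classify(eeg_trial, stim_freqs, fs, n_harmonics=2):
    """Pick the stimulation frequency whose sinusoidal reference set is most
    canonically correlated with the EEG trial (shape: channels x samples)."""
    t = np.arange(eeg_trial.shape[1]) / fs
    scores = []
    for f in stim_freqs:
        # Reference signals: sine/cosine at the fundamental and its harmonics
        ref = np.vstack([np.vstack((np.sin(2 * np.pi * k * f * t),
                                    np.cos(2 * np.pi * k * f * t)))
                         for k in range(1, n_harmonics + 1)])
        u, v = CCA(n_components=1).fit_transform(eeg_trial.T, ref.T)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))   # index of the best-matching stimulation target
```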
However, current deep-learning-based SSVEP methods ignore some important prior knowledge, such as the correlation between SSVEP harmonic components. As a result, the input features presented to the network are poorly discriminative, some important features cannot be fully extracted and learned, and it is difficult to further improve classification accuracy.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an SSVEP classification method based on a convolutional neural network, which comprises the following steps: acquiring SSVEP data, and preprocessing the acquired data; inputting the preprocessed data into a trained multi-channel convolution neural network model to obtain an SSVEP classification result;
the process of training the multi-channel convolutional neural network model comprises the following steps:
s1: acquiring an original SSVEP electroencephalogram data set, and preprocessing the original SSVEP electroencephalogram data to obtain a training data set;
s2: respectively inputting the preprocessed training set data into each channel of a multi-channel convolution neural network model to obtain each one-dimensional feature vector;
s3: performing weighted fusion on the one-dimensional feature vectors obtained by each channel, and performing full-connection convolution on the fused one-dimensional vector features through respective full-connection layers to generate one-dimensional full-connection neurons;
s4: after all the one-dimensional fully-connected neurons are cascaded, carrying out convolution through a fully-connected mode to output the probability corresponding to each SSVEP stimulation target, and classifying data according to the probability of the SSVEP stimulation target to obtain a classification result;
s5: and calculating a loss function of the model, continuously adjusting parameters of the model, and finishing the training of the model when the loss function is minimum.
Preferably, the process of preprocessing the data in the training set includes: marking the acquired SSVEP electroencephalogram signal data; dividing the marked SSVEP electroencephalogram signal data into a plurality of different independent frequency bands through a filter bank; and processing the time-domain electroencephalogram data of the different frequency bands by fast Fourier transform to obtain the respective frequency spectrum data matrixes.
Furthermore, in the process of segmenting the marked SSVEP electroencephalogram signal data, the frequency band data obtained by segmentation cover the SSVEP stimulation frequency fundamental wave and harmonic wave.
Further, the generated spectrum data matrix is:

$$I=\begin{bmatrix} X(O_1)\\ X(O_2)\\ \vdots\\ X(O_n)\\ X(O_1)\\ X(O_2) \end{bmatrix}$$

where I represents the spectrum data matrix, O_n represents an electroencephalogram channel, and X(O_n) represents the spectral data of that channel (the data of channels O_1 and O_2 are repeated as the last two rows).
preferably, the channel of the multi-channel convolutional neural network comprises an input layer, three convolutional layers, a feature enhancement layer and a dimension reduction layer; the process of processing the input data by each channel comprises the following steps:
Step 1: inputting the frequency spectrum data matrix into the input layer for two-dimensional vector scale transformation to obtain a two-dimensional feature matrix;
Step 2: convolving the two-dimensional feature matrix with the first convolutional layer C1 to obtain a first feature matrix of scale 2M × M × (N−2), where M represents the number of acquisition channels of the electroencephalogram data;
Step 3: convolving the first feature matrix with the second convolutional layer C2 to obtain a second feature matrix of scale 2M × (N−2), where N represents the length of the spectral data;
Step 4: convolving the second feature matrix with the third convolutional layer C3 to obtain a third feature matrix of scale 2M × (N−2−5/FFT resolution+1), where 5/FFT resolution corresponds to a frequency span of 5 Hz;
Step 5: processing the third feature matrix with the attention-based feature enhancement layer to obtain an important-feature matrix;
Step 6: performing dimensionality reduction on the important-feature matrix to obtain one-dimensional vector features.
Further, the process of obtaining the first feature matrix, the second feature matrix and the third feature matrix includes: convolving the input spectrum data matrix in the first convolutional layer with 2 × M convolution kernels of size 3 × 3 to obtain the first feature matrix; convolving the first feature matrix in the second convolutional layer with 2 × M convolution kernels of size M × 1 to obtain the second feature matrix; and convolving the second feature matrix in the third convolutional layer with 2 × M convolution kernels of size 1 × (5/FFT resolution) to obtain the third feature matrix, where FFT resolution represents the frequency resolution of the fast Fourier transform result.
Further, processing the third feature matrix with the attention-based feature enhancement layer includes: compressing the spatial dimensions of the input features by a global average pooling operation to obtain a global descriptor of the C channels, with output scale [1 × C]; compressing and then expanding the C channels with two fully connected layers, where the output scale of the first fully connected layer is [1 × C/r], that of the second fully connected layer is [1 × C], and r represents the compression rate; and multiplying the input features by the output features of the second fully connected layer to obtain the important-feature matrix.
Preferably, the process of performing weighted fusion on the one-dimensional feature vectors obtained by each channel includes: adding the one-dimensional feature vectors output by the convolutional neural network channels pairwise to obtain a plurality of weighted one-dimensional feature vectors, the purpose being to fuse the features corresponding to different harmonic components in pairs and thereby strengthen their expressive power. One-dimensional fully connected neurons of scale [1 × 6K] are then obtained through the respective corresponding fully connected layers, where K is the number of SSVEP stimulation targets, the aim being to compress the number of features and reduce the amount of network computation.
Preferably, the process of classifying the one-dimensional fully connected neurons comprises: cascading all the one-dimensional fully connected neurons and inputting the cascaded neurons into a softmax layer to obtain the probability corresponding to each SSVEP stimulation target; and selecting the position index of the maximum probability value as the classification result of the current input data.
Preferably, the loss function of the model is expressed as:

$$\mathrm{Loss}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{m}\hat{y}_{im}\log\left(y_{im}\right)$$

where n is the number of samples, the inner sum runs over the m classification targets, and y_im and ŷ_im denote the predicted probability and the actual probability, respectively.
according to the invention, the priori knowledge that fundamental waves exist in the electroencephalogram potential induced by the stimulation target in the SSVEP electroencephalogram signal and cross correlation exists in each harmonic component is utilized, time-domain filtering and fast Fourier transform are used for preprocessing the electroencephalogram signal to extract each harmonic component of the SSVEP signal, and feature extraction and classification are carried out through a convolutional neural network, so that higher classification accuracy is obtained.
Drawings
FIG. 1 is a flow chart of a convolutional neural network-based SSVEP classification method of the present invention;
FIG. 2 is a SSVEP electroencephalogram data segmentation processing diagram of the present invention;
FIG. 3 is a diagram of a model architecture for a multi-channel convolutional neural network of the present invention;
FIG. 4 is a channel structure diagram of the multi-channel convolutional neural network model of the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings, and the described embodiments are only a part of the embodiments of the present invention, but not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
A convolutional neural network-based SSVEP classification method comprises the following steps: acquiring SSVEP data, and preprocessing the acquired data; inputting the preprocessed data into a trained multi-channel convolution neural network model to obtain an SSVEP classification result; where SSVEP represents the steady state visual evoked potential.
One embodiment of a convolutional neural network-based SSVEP classification method, as shown in fig. 1, includes: acquiring multi-channel SSVEP training data; preprocessing the acquired data; inputting the preprocessed data into a convolutional neural network for training; and acquiring preprocessed SSVEP data to be classified, and inputting the SSVEP data to be classified into a trained convolutional neural network for classification.
The process of training the multi-channel convolutional neural network model comprises the following steps:
s1: acquiring an original SSVEP electroencephalogram data set, and dividing the original SSVEP electroencephalogram data to obtain a training data set and a test data set;
s2: preprocessing the data in the training set;
s3: respectively inputting the preprocessed training set data into each channel of a multi-channel convolution neural network model to obtain each one-dimensional feature vector;
s4: performing weighted fusion on the one-dimensional vector characteristics obtained by each channel, and performing full-connection convolution on the fused one-dimensional vector characteristics through respective full-connection layers to generate one-dimensional full-connection neurons;
s5: all the one-dimensional fully-connected neurons are classified through convolution in a fully-connected mode to obtain a classification result;
s6: calculating a loss function of the model;
s7: and inputting the data in the test set into the model for testing, continuously adjusting the parameters of the model, and finishing the training of the model when the loss function is minimum.
The process of obtaining the original SSVEP electroencephalogram data set comprises: acquiring multichannel SSVEP electroencephalogram data from a plurality of subjects, and labeling the acquired electroencephalogram data according to the current SSVEP stimulation frequency to form a training data set. The label content is the frequency of the SSVEP electroencephalogram data, such as 8 Hz, 8.25 Hz or 8.5 Hz.
Preferably, the acquired raw SSVEP electroencephalogram data set is the public SSVEP data set of the University of California computational neuroscience center. The data set contains 12 SSVEP stimulation targets with a sampling frequency of 2048 Hz. The data are divided into a training set and a test set using 10-fold cross-validation.
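A minimal sketch of the 10-fold split described above, assuming the recordings have already been segmented into one trial per row with an integer label per trial; the array shapes and variable names are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical placeholders: per-trial EEG segments and their stimulation-frequency labels
trials = np.random.randn(180, 8, 1024)            # (n_trials, n_channels, n_samples)
labels = np.random.randint(0, 12, size=180)       # 12 SSVEP stimulation targets

kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(trials)):
    x_train, y_train = trials[train_idx], labels[train_idx]
    x_test, y_test = trials[test_idx], labels[test_idx]
    # ...preprocess each subset, train on the training folds, evaluate on the held-out fold
```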
As shown in fig. 2, the process of preprocessing the data in the training set includes: dividing the marked SSVEP electroencephalogram signal data into a plurality of different independent frequency bands through a filter bank; and processing the time domain electroencephalogram data of different frequency bands by adopting fast Fourier transform to obtain respective frequency spectrum data matrixes.
Specifically, the original multichannel time-domain electroencephalogram data are divided by a filter bank or wavelet decomposition into a plurality of different independent frequency bands that respectively cover the SSVEP stimulation-frequency fundamentals and harmonic components. For example, with SSVEP stimulation frequencies of 8 Hz, 8.25 Hz, 8.5 Hz and so on, the first frequency band mainly covers 8–8.5 Hz, the second frequency band mainly covers 16–17 Hz, and in general the nth frequency band mainly covers the nth harmonic frequency range; a filter-bank sketch follows below. Considering both computational efficiency and classification accuracy, the time-domain electroencephalogram signal can be segmented into 2–5 frequency bands; experimental evaluation shows that 3 frequency bands give the best balance between computational efficiency and classification accuracy.
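A minimal filter-bank sketch for this band segmentation, assuming zero-phase Butterworth band-pass filters; the filter order and the exact band edges are illustrative assumptions (the patent only requires that each band cover one harmonic range).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, bands):
    """Split multi-channel time-domain EEG (n_channels x n_samples) into sub-bands.

    bands: list of (low, high) edges in Hz, e.g. [(7.5, 9), (15.5, 17.5), (23.5, 26)],
    so that band n covers the n-th harmonic range of the stimulation frequencies.
    """
    sub_bands = []
    for low, high in bands:
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)   # 4th-order band-pass
        sub_bands.append(filtfilt(b, a, eeg, axis=-1))           # zero-phase filtering
    return sub_bands                                              # one array per band
```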
The fast Fourier transform is applied separately to the time-domain electroencephalogram data of each frequency band to obtain the corresponding frequency spectrum data matrix, which serves as the input data of the convolutional neural network and takes the form shown below. Here O_1, O_2, ..., O_n denote the different electroencephalogram channels. Because the convolutional layers of the convolutional neural network use no padding, the data of the two channels O_1 and O_2 are repeated as the last two rows of the matrix so that the matrix matches the convolution kernel size of the first convolutional layer. The generated spectrum data matrix is:

$$I=\begin{bmatrix} X(O_1)\\ X(O_2)\\ \vdots\\ X(O_n)\\ X(O_1)\\ X(O_2) \end{bmatrix}$$

where I represents the spectrum data matrix, O_n represents an electroencephalogram channel, and X(O_n) represents the spectral data of that channel.
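A minimal sketch of building this (M+2) × N input matrix for one band, assuming the amplitude spectrum is used and is cropped to the band of interest; the cropping step and the frequency limits are illustrative assumptions.

```python
import numpy as np

def spectrum_matrix(band_eeg, fs, f_lo, f_hi):
    """Build the (M+2) x N spectral input matrix from band-passed EEG (M x n_samples)."""
    spectra = np.abs(np.fft.rfft(band_eeg, axis=-1))            # amplitude spectra X(O_i)
    freqs = np.fft.rfftfreq(band_eeg.shape[-1], d=1.0 / fs)     # FFT resolution = fs / n_samples
    keep = (freqs >= f_lo) & (freqs <= f_hi)                    # restrict to this band's range
    spectra = spectra[:, keep]                                  # shape (M, N)
    return np.vstack([spectra, spectra[:2]])                    # repeat O_1, O_2 -> (M+2, N)
```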
As shown in fig. 3, the preprocessed electroencephalogram spectrum data matrices are input to the convolutional neural network for training, with the corresponding SSVEP stimulation frequency used as the label. Each neural network channel corresponds to one electroencephalogram frequency band and, as shown in fig. 4, consists of an input layer, three convolutional layers, a feature enhancement layer and a dimensionality reduction layer; the convolutional layers use no padding.
Input layer: the input layer takes a two-dimensional matrix of scale (M+2) × N formed from the electroencephalogram spectrum data, where M represents the number of acquisition channels of the electroencephalogram data and N represents the length of the spectral data.
Convolutional layer C1: C1 convolves the input matrix from the input layer with 2 × M convolution kernels of size 3 × 3, producing a feature matrix of scale 2M × M × (N−2).
Convolutional layer C2: C2 convolves the output of C1 with 2 × M convolution kernels of size M × 1, producing a feature matrix of scale 2M × (N−2).
Convolutional layer C3: C3 convolves the output of C2 with 2 × M convolution kernels of size 1 × (5/FFT resolution), where FFT resolution represents the frequency resolution of the FFT result and 5/FFT resolution corresponds to a frequency span of 5 Hz; the resulting feature matrix has scale 2M × (N−2−5/FFT resolution+1).
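The following PyTorch sketch checks the feature-map scales of C1–C3 above; the channel count M, spectrum length N and the 5 Hz kernel width in bins are illustrative assumptions (a 1 Hz FFT resolution is assumed, so the C3 kernel is 1 × 5).

```python
import torch
import torch.nn as nn

M, N = 8, 128           # assumed: 8 EEG channels, 128 spectral bins per band
k3 = 5                  # assumed: 5 Hz / FFT resolution with 1 Hz bins

c1 = nn.Conv2d(1, 2 * M, kernel_size=(3, 3))       # C1: 2M kernels of size 3x3, no padding
c2 = nn.Conv2d(2 * M, 2 * M, kernel_size=(M, 1))   # C2: 2M kernels of size Mx1
c3 = nn.Conv2d(2 * M, 2 * M, kernel_size=(1, k3))  # C3: 2M kernels of size 1x(5Hz/resolution)

x = torch.randn(1, 1, M + 2, N)                    # one (M+2) x N spectrum matrix
f1 = c1(x)    # -> (1, 2M, M, N-2)
f2 = c2(f1)   # -> (1, 2M, 1, N-2)
f3 = c3(f2)   # -> (1, 2M, 1, N-2-k3+1)
print(f1.shape, f2.shape, f3.shape)
```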
Feature enhancement layer: introducing an attention mechanism as a feature enhancement layer effectively improves the classification accuracy of the convolutional-neural-network-based SSVEP classification method. The feature enhancement layer weights the input features so as to focus on the important ones, using global average pooling, fully connected layers and matrix multiplication. Specifically, the spatial dimensions of the input features are compressed by a global average pooling operation to obtain a global descriptor of the C channels, with output scale [1 × C]. Two fully connected layers then compress and re-expand the C channels, which reduces the amount of network computation and increases the nonlinear capacity of the network; the output scale of the first fully connected layer is [1 × C/r], where r represents the compression rate, and that of the second fully connected layer is [1 × C]. Finally, the input features are multiplied by the output features of the second fully connected layer: once the importance of each channel has been obtained, this multiplicative weighting promotes important features and suppresses unimportant ones.
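A minimal squeeze-and-excitation style sketch of such a feature enhancement layer; the ReLU/sigmoid activations and the compression rate r follow the common recipe for this kind of block and are assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Re-weight the C feature maps by a learned per-channel importance score."""
    def __init__(self, channels, r=4):                 # r: compression rate (assumed value)
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)  # compress: [1 x C] -> [1 x C/r]
        self.fc2 = nn.Linear(channels // r, channels)  # expand back: [1 x C/r] -> [1 x C]

    def forward(self, x):                              # x: (batch, C, H, W)
        s = x.mean(dim=(2, 3))                         # global average pooling -> (batch, C)
        s = torch.relu(self.fc1(s))
        s = torch.sigmoid(self.fc2(s))                 # per-channel importance in (0, 1)
        return x * s.view(x.size(0), -1, 1, 1)         # multiply weights into the input features

# Example: enhance the 2M = 16 feature maps produced by C3
fe = FeatureEnhancement(channels=16)
enhanced = fe(torch.randn(4, 16, 1, 122))
```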
Dimensionality reduction layer: the dimensionality reduction layer flattens the feature matrix output by the feature enhancement layer into a sequentially arranged one-dimensional feature vector.
As shown in fig. 3, in order to weight each harmonic component, the one-dimensional feature vectors output by all convolutional neural network channels are added pairwise and passed through their respective fully connected layers, completing the convolution in a fully connected manner; the number of fully connected neurons is 6 × K, where K represents the total number of SSVEP stimulation targets. Finally, all fully connected neurons are cascaded into a new group of one-dimensional fully connected neurons, input to an output fully connected layer and classified by convolution in a fully connected manner, and the K neurons corresponding to the classification result are output.
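A minimal sketch of this fusion and classification head for three band channels; the feature-vector length, the softmax on the output neurons and the class count K = 12 (taken from the public data set mentioned above) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from itertools import combinations

class FusionHead(nn.Module):
    """Pairwise fusion of per-band feature vectors followed by the output layer."""
    def __init__(self, n_bands=3, feat_dim=1952, k=12):
        super().__init__()
        self.pairs = list(combinations(range(n_bands), 2))          # e.g. (0,1), (0,2), (1,2)
        # One fully connected layer of 6*K neurons per fused pair of harmonic components
        self.pair_fc = nn.ModuleList(nn.Linear(feat_dim, 6 * k) for _ in self.pairs)
        self.out = nn.Linear(len(self.pairs) * 6 * k, k)             # output layer: K neurons

    def forward(self, band_feats):                                   # list of (batch, feat_dim)
        fused = [fc(band_feats[i] + band_feats[j])                   # pairwise weighted fusion
                 for fc, (i, j) in zip(self.pair_fc, self.pairs)]
        x = torch.cat(fused, dim=1)                                  # cascade the 1-D neurons
        return F.softmax(self.out(x), dim=1)                         # probability per target

head = FusionHead()
probs = head([torch.randn(4, 1952) for _ in range(3)])               # -> (4, 12) probabilities
```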
The invention uses the categorical cross-entropy loss function to compute the loss of the model, expressed as:

$$\mathrm{Loss}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{m}\hat{y}_{im}\log\left(y_{im}\right)$$

where n is the number of samples, the inner sum runs over the m classification targets, and y_im and ŷ_im represent the predicted probability and the actual probability, respectively.
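A minimal numeric sketch of this categorical cross-entropy, assuming one-hot actual labels and softmax outputs from the network; in practice the equivalent built-in loss of the chosen framework would normally be used.

```python
import numpy as np

def categorical_cross_entropy(y_pred, y_true):
    """Mean categorical cross-entropy over n samples and m targets.

    y_pred, y_true: arrays of shape (n, m); y_true holds one-hot actual labels."""
    eps = 1e-12                                              # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))
```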
With the SSVEP classification method of the invention, the SSVEP classification accuracy reaches 93.19%, which is considerably higher than that of common SSVEP classification methods.
The above embodiments further illustrate the objects, technical solutions and advantages of the present invention. It should be understood that they are only preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions or improvements made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. A convolutional neural network-based SSVEP classification method is characterized by comprising the following steps: acquiring SSVEP data, and preprocessing the acquired data; inputting the preprocessed data into a trained multi-channel convolution neural network model to obtain an SSVEP classification result;
the process of training the multi-channel convolutional neural network model comprises the following steps:
s1: acquiring an original SSVEP electroencephalogram data set, and preprocessing the original SSVEP electroencephalogram data to obtain a training data set;
s2: respectively inputting the preprocessed training set data into each channel of a multi-channel convolution neural network model to obtain each one-dimensional feature vector;
s3: performing weighted fusion on the one-dimensional feature vectors obtained by each channel, and performing full-connection convolution on the fused one-dimensional vector features through respective full-connection layers to generate one-dimensional full-connection neurons;
s4: after all the one-dimensional fully-connected neurons are cascaded, carrying out convolution through a fully-connected mode to output the probability corresponding to each SSVEP stimulation target, and classifying data according to the probability of the SSVEP stimulation target to obtain a classification result;
s5: and calculating a loss function of the model, continuously adjusting parameters of the model, and finishing the training of the model when the loss function is minimum.
2. The convolutional neural network-based SSVEP classification method according to claim 1, wherein the process of preprocessing the data in the training set comprises: marking the acquired SSVEP electroencephalogram signal data; dividing the marked SSVEP electroencephalogram signal data into a plurality of different independent frequency bands through a filter bank; and processing the time domain electroencephalogram data of different frequency bands by adopting fast Fourier transform to obtain respective frequency spectrum data matrixes.
3. The SSVEP classification method based on the convolutional neural network as claimed in claim 2, wherein in the process of segmenting the labeled SSVEP electroencephalogram signal data, the frequency band data obtained by segmentation covers the SSVEP stimulation frequency fundamental wave and harmonic wave.
4. The SSVEP classification method based on the convolutional neural network as claimed in claim 2, wherein the generated spectrum data matrix is:
$$I=\begin{bmatrix} X(O_1)\\ X(O_2)\\ \vdots\\ X(O_n)\\ X(O_1)\\ X(O_2) \end{bmatrix}$$

wherein I represents the spectrum data matrix, O_n represents an electroencephalogram channel, and X(O_n) represents the spectral data of that channel.
5. The SSVEP classification method based on the convolutional neural network as claimed in claim 1, wherein the channels of the multi-channel convolutional neural network comprise an input layer, three convolutional layers, a feature enhancement layer and a dimensionality reduction layer; the process of processing the input data by each channel comprises the following steps:
Step 1: inputting the frequency spectrum data matrix into the input layer for two-dimensional vector scale transformation to obtain a two-dimensional feature matrix;
Step 2: convolving the two-dimensional feature matrix with the first convolutional layer C1 to obtain a first feature matrix of scale 2M × M × (N−2), wherein M represents the number of acquisition channels of the electroencephalogram data;
Step 3: convolving the first feature matrix with the second convolutional layer C2 to obtain a second feature matrix of scale 2M × (N−2), wherein N represents the length of the spectral data;
Step 4: convolving the second feature matrix with the third convolutional layer C3 to obtain a third feature matrix of scale 2M × (N−2−5/FFT resolution+1), wherein 5/FFT resolution corresponds to a frequency span of 5 Hz;
Step 5: processing the third feature matrix with the attention-based feature enhancement layer to obtain an important-feature matrix;
Step 6: performing dimensionality reduction on the important-feature matrix to obtain one-dimensional vector features.
6. The SSVEP classification method based on the convolutional neural network as claimed in claim 5, wherein the process of obtaining the first feature matrix, the second feature matrix and the third feature matrix comprises: convolving the input spectrum data matrix in the first convolutional layer with 2 × M convolution kernels of size 3 × 3 to obtain the first feature matrix; convolving the first feature matrix in the second convolutional layer with 2 × M convolution kernels of size M × 1 to obtain the second feature matrix; and convolving the second feature matrix in the third convolutional layer with 2 × M convolution kernels of size 1 × (5/FFT resolution) to obtain the third feature matrix; wherein FFT resolution represents the frequency resolution of the fast Fourier transform result.
7. The SSVEP classification method based on the convolutional neural network as claimed in claim 5, wherein the process of processing the third feature matrix with the attention-based feature enhancement layer comprises: compressing the spatial dimensions of the input features by a global average pooling operation to obtain a global descriptor of the C channels, with output scale [1 × C]; compressing and then expanding the C channels with two fully connected layers, wherein the output scale of the first fully connected layer is [1 × C/r], the output scale of the second fully connected layer is [1 × C], and r represents the compression rate; and multiplying the input features by the output features of the second fully connected layer to obtain the important-feature matrix.
8. The SSVEP classification method based on the convolutional neural network as claimed in claim 1, wherein the process of performing weighted fusion on the one-dimensional feature vectors obtained by each channel comprises: adding the one-dimensional feature vectors output by each convolutional neural network channel pairwise to obtain a plurality of weighted one-dimensional feature vectors; and inputting the weighted one-dimensional feature vectors into the corresponding fully connected layers to obtain one-dimensional fully connected neurons of scale [1 × 6K], wherein K represents the number of SSVEP stimulation targets.
9. The SSVEP classification method based on the convolutional neural network as claimed in claim 1, wherein the process of classifying the one-dimensional fully-connected neurons comprises: cascading all the one-dimensional fully-connected neurons, and inputting each one-dimensional fully-connected neuron after cascading into a softmax layer to obtain the probability corresponding to each SSVEP stimulation target; and selecting the position index corresponding to the maximum probability value as a classification result of the current input data according to the size of each probability value.
10. The convolutional neural network-based SSVEP classification method as claimed in claim 1, wherein the loss function expression of the model is:
$$\mathrm{Loss}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{m}\hat{y}_{im}\log\left(y_{im}\right)$$

wherein n denotes the number of samples, m denotes the number of classification targets, y_im denotes the predicted probability, and ŷ_im denotes the actual probability.
CN202110349963.2A 2021-03-31 2021-03-31 SSVEP classification method based on convolutional neural network Active CN113052099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110349963.2A CN113052099B (en) 2021-03-31 2021-03-31 SSVEP classification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110349963.2A CN113052099B (en) 2021-03-31 2021-03-31 SSVEP classification method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN113052099A true CN113052099A (en) 2021-06-29
CN113052099B CN113052099B (en) 2022-05-03

Family

ID=76516706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110349963.2A Active CN113052099B (en) 2021-03-31 2021-03-31 SSVEP classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN113052099B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114010208A (en) * 2021-11-08 2022-02-08 成都信息工程大学 Zero-padding frequency domain convolution neural network method suitable for SSVEP classification
CN114129163A (en) * 2021-10-22 2022-03-04 中央财经大学 Electroencephalogram signal-based emotion analysis method and system for multi-view deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8515201B1 (en) * 2008-09-18 2013-08-20 Stc.Unm System and methods of amplitude-modulation frequency-modulation (AM-FM) demodulation for image and video processing
CN105302309A (en) * 2015-11-05 2016-02-03 重庆邮电大学 SSVEP brain-computer interface based brain wave instruction identification method
WO2016049757A1 (en) * 2014-10-01 2016-04-07 Nuralogix Corporation System and method for detecting invisible human emotion
CN110222643A (en) * 2019-06-06 2019-09-10 西安交通大学 A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks
CN110263606A (en) * 2018-08-30 2019-09-20 周军 Scalp brain electrical feature based on end-to-end convolutional neural networks extracts classification method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8515201B1 (en) * 2008-09-18 2013-08-20 Stc.Unm System and methods of amplitude-modulation frequency-modulation (AM-FM) demodulation for image and video processing
WO2016049757A1 (en) * 2014-10-01 2016-04-07 Nuralogix Corporation System and method for detecting invisible human emotion
CN105302309A (en) * 2015-11-05 2016-02-03 重庆邮电大学 SSVEP brain-computer interface based brain wave instruction identification method
CN110263606A (en) * 2018-08-30 2019-09-20 周军 Scalp brain electrical feature based on end-to-end convolutional neural networks extracts classification method
CN110222643A (en) * 2019-06-06 2019-09-10 西安交通大学 A kind of Steady State Visual Evoked Potential Modulation recognition method based on convolutional neural networks

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YAO LI et al.: "Convolutional Correlation Analysis for Enhancing the Performance of SSVEP-Based Brain-Computer Interface", IEEE Transactions on Neural Systems and Rehabilitation Engineering *
刘家乐 et al.: "Design and research of a brain-computer interface *** based on steady-state visual evoked potentials", Industrial Control Computer (工业控制计算机) *
张黎明 et al.: "A synchrosqueezing short-time Fourier transform method for extracting the characteristic frequencies of steady-state visual evoked potentials", Journal of Xi'an Jiaotong University (西安交通大学学报) *
王辅国: "A travel assistance *** for the disabled based on an SSVEP brain-computer interface", Science and Technology Communication (科技传播) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114129163A (en) * 2021-10-22 2022-03-04 中央财经大学 Electroencephalogram signal-based emotion analysis method and system for multi-view deep learning
CN114129163B (en) * 2021-10-22 2023-08-29 中央财经大学 Emotion analysis method and system for multi-view deep learning based on electroencephalogram signals
CN114010208A (en) * 2021-11-08 2022-02-08 成都信息工程大学 Zero-padding frequency domain convolution neural network method suitable for SSVEP classification
CN114010208B (en) * 2021-11-08 2023-09-08 成都信息工程大学 Zero-filling frequency domain convolutional neural network method suitable for SSVEP classification

Also Published As

Publication number Publication date
CN113052099B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN111012336B (en) Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
CN107844755B (en) Electroencephalogram characteristic extraction and classification method combining DAE and CNN
Erişti et al. Wavelet-based feature extraction and selection for classification of power system disturbances using support vector machines
CN106909784A (en) Epileptic electroencephalogram (eeg) recognition methods based on two-dimentional time-frequency image depth convolutional neural networks
CN110236533A (en) Epileptic seizure prediction method based on the study of more deep neural network migration features
CN113052099B (en) SSVEP classification method based on convolutional neural network
CN110515456A (en) EEG signals emotion method of discrimination and device based on attention mechanism
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN113011239B (en) Motor imagery classification method based on optimal narrow-band feature fusion
Behura et al. WiST ID—Deep learning-based large scale wireless standard technology identification
Jinliang et al. EEG emotion recognition based on granger causality and capsnet neural network
CN115486857A (en) Motor imagery electroencephalogram decoding method based on Transformer space-time feature learning
CN114239657A (en) Time sequence signal identification method based on complex value interference neural network
CN112800928A (en) Epileptic seizure prediction method of global self-attention residual error network with channel and spectrum features fused
CN116471154A (en) Modulation signal identification method based on multi-domain mixed attention
CN111025100A (en) Transformer ultrahigh frequency partial discharge signal mode identification method and device
CN116482618B (en) Radar active interference identification method based on multi-loss characteristic self-calibration network
CN116919422A (en) Multi-feature emotion electroencephalogram recognition model establishment method and device based on graph convolution
CN111310680B (en) Radiation source individual identification method based on deep learning
Wang et al. A personalized feature extraction and classification method for motor imagery recognition
CN115631371A (en) Extraction method of electroencephalogram signal core network
CN112561055B (en) Electromagnetic disturbance identification method based on bilinear time-frequency analysis and convolutional neural network
CN114818823A (en) Electroencephalogram channel selection method based on squeezing and activation graph convolution neural network
CN113780134A (en) Motor imagery electroencephalogram decoding method based on ShuffleNet V2 network
CN114021424A (en) PCA-CNN-LVQ-based voltage sag source identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant