CN114781441B - EEG motor imagery classification method and multi-space convolutional neural network model


Info

Publication number
CN114781441B
Authority
CN
China
Prior art keywords
features
eeg signal
convolution
space
spatial
Prior art date
Legal status
Active
Application number
CN202210353223.0A
Other languages
Chinese (zh)
Other versions
CN114781441A (en)
Inventor
Zhao Wei (赵威)
Liu Tiejun (刘铁军)
Gao Dongrui (郜东瑞)
Li Xin (李鑫)
Xie Jiaxin (谢佳欣)
Qin Yun (秦云)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202210353223.0A
Publication of CN114781441A
Application granted
Publication of CN114781441B
Legal status: Active

Classifications

    • G06F 2218/12: Physics › Computing; calculating or counting › Electric digital data processing › Aspects of pattern recognition specially adapted for signal processing › Classification; matching
    • G06N 3/045: Physics › Computing › Computing arrangements based on specific computational models › Computing arrangements based on biological models › Neural networks › Architecture, e.g. interconnection topology › Combinations of networks
    • G06N 3/047: same branch as above › Probabilistic or stochastic networks
    • G06N 3/08: Computing arrangements based on biological models › Neural networks › Learning methods


Abstract

The invention discloses an EEG motor imagery classification method and a multi-space convolutional neural network model. The method extracts the temporal features of EEG signals through temporal convolution while preserving their spatial structure, and then extracts the spatial features through spatial convolution; the temporal and spatial features are then mapped into a classifier to complete classification. The model comprises at least a spatial convolution and a temporal convolution, so that feature representations of the EEG signal in different spaces are extracted simultaneously. Experimental results show that the method achieves higher classification accuracy than existing methods on several datasets, demonstrating its superiority. The invention helps advance the field of EEG motor imagery.

Description

EEG motor imagery classification method and multi-space convolutional neural network model
Technical Field
The invention relates to the technical field of electroencephalogram (EEG) signal analysis, and in particular to an EEG motor imagery classification method and a multi-space convolutional neural network model.
Background
The analysis of EEG motor imagery data has been driven by the rapid development of brain-computer interface technology. In motor imagery tasks, EEG data are recorded while subjects perform imagined movements. Analyzing such EEG signals helps in studying the brain behavior of the subject. Furthermore, decoding the EEG signals generated by motor imagery can help disabled patients control external mechanical devices, such as the movement direction of a wheelchair or the motion of a robotic arm. The analysis of motor imagery EEG signals is therefore of great importance for the independent activity of, for example, stroke patients.
In traditional EEG signal analysis, the feature extraction task is mainly performed with classical machine learning algorithms. For example, the common spatial pattern (CSP) is one of the most popular and powerful feature extraction methods, together with a series of methods derived from it, such as the filter bank common spatial pattern (FBCSP). The extracted EEG features are then fed into a classifier, such as linear discriminant analysis (LDA) or a support vector machine (SVM), to obtain the classification result. However, traditional machine learning feature extraction algorithms depend on a large amount of prior knowledge of the data, and acquiring such prior knowledge takes considerable time. More importantly, the generalization ability of traditional classification models has long been a challenge.
With the development of deep learning, more and more neural networks have been applied to EEG feature extraction and classification, such as EEGNet and residual networks. The strong learning capability of deep models makes the EEG feature extraction process independent of prior knowledge of the data, and deep learning models typically generalize better. Unfortunately, most current deep learning models focus only on the representation of the EEG signal in a single space, ignoring useful information the signal carries in other spaces.
Disclosure of Invention
To address the defects of the prior art, the invention provides an EEG motor imagery classification method and a multi-space convolutional neural network model. The invention first uses a temporal convolution to extract temporal feature information along the time dimension of the EEG while preserving the spatial structure of the signal; the feature information of the EEG signal in the spatial dimension is then extracted by spatial convolution. Finally, the extracted temporal and spatial features are mapped into the class space by a fully connected network to complete the classification task.
The specific technical solution of the invention is as follows:
according to a first aspect of the present invention, there is provided an EEG motor imagery classification method based on a multi-space convolutional neural network model, the method comprising: extracting the temporal features of the EEG signal by temporal convolution while preserving the spatial features of the EEG signal, and then extracting the spatial features of the EEG signal by spatial convolution; and mapping the temporal features and the spatial features into a classifier to complete classification.
Further, extracting the feature information of the EEG signal in the time dimension by temporal convolution while retaining its spatial features, and then extracting the feature information in the spatial dimension by spatial convolution, specifically includes: up-scaling the number of channels of the EEG signal with a first convolution kernel of size 1×1; enriching the spatial features of the EEG signal with a second convolution kernel of size 1×1; convolving the EEG signal over time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal; performing weighted spatial filtering of the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal; and compressing the temporal and spatial features with a first pooling layer to remove redundant information and reduce the number of parameters.
Further, mapping the temporal features and the spatial features into a classifier to complete classification specifically includes: filtering and pooling the temporal features and/or the spatial features, classifying them, and computing the class with the highest probability by the following formula (1):

$$\mathrm{softmax}(x_i)=\frac{\exp(x_i)}{\sum_{j}\exp(x_j)} \qquad (1)$$

where $x_i$ denotes the $i$-th neuron input, $x_j$ the $j$-th neuron input, and $\sum_{j}\exp(x_j)$ the sum over all neuron inputs.
Further, the activation function of the classifier is given by the following formula (2):

$$f(x)=\begin{cases}x, & x>0\\ a\,(e^{x}-1), & x\le 0\end{cases} \qquad (2)$$

where $x$ is the output of the convolution computation and $a$ is a constant.
Further, after mapping the temporal features and the spatial features into a classifier to complete classification, the method further comprises: minimizing the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (3)$$

where $N$ is the number of samples, $M$ the number of classes, $y_{ic}$ equals 1 if the true label of sample $i$ is class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to class $c$.
Further, the method further comprises: optimizing the loss function with an optimizer whose learning rate is updated by the following formula (4):

$$\mathrm{new\_lr}=\mathrm{initial\_lr}\times r^{\lfloor \mathrm{epoch}/\mathrm{step\_size}\rfloor} \qquad (4)$$

where new_lr is the new learning rate, initial_lr the initial learning rate, $r$ the decay rate, epoch the number of iterations so far, and step_size the step size.
According to a second aspect of the present invention, there is provided a multi-space convolutional neural network model for EEG motor imagery classification, comprising a feature extractor and a classifier; the feature extractor is configured to extract the temporal features of the EEG signal by temporal convolution while preserving its spatial features, and then extract the spatial features of the EEG signal by spatial convolution; the classifier is configured to map the temporal features and the spatial features to the classes to complete classification.
Further, the feature extractor is further configured to: up-scale the number of channels of the EEG signal with a first convolution kernel of size 1×1; enrich the spatial features of the EEG signal with a second convolution kernel of size 1×1; convolve the EEG signal over time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal; perform weighted spatial filtering of the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal; and compress the temporal and spatial features with a first pooling layer to remove redundant information and reduce the number of parameters.
Further, the classifier is further configured to: filter and pool the temporal features and/or the spatial features, classify them, and compute the class with the highest probability by the following formula (1):

$$\mathrm{softmax}(x_i)=\frac{\exp(x_i)}{\sum_{j}\exp(x_j)} \qquad (1)$$

where $x_i$ denotes the $i$-th neuron input, $x_j$ the $j$-th neuron input, and $\sum_{j}\exp(x_j)$ the sum over all neuron inputs.
Further, the activation function of the classifier is given by the following formula (2):

$$f(x)=\begin{cases}x, & x>0\\ a\,(e^{x}-1), & x\le 0\end{cases} \qquad (2)$$

where $x$ is the output of the convolution computation and $a$ is a constant.
Further, the model also includes a machine learning module configured to: minimize the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (3)$$

where $N$ is the number of samples, $M$ the number of classes, $y_{ic}$ equals 1 if the true label of sample $i$ is class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to class $c$.
Further, the machine learning module further includes an optimizer with which the loss function is optimized; the learning rate of the optimizer is updated by the following formula (4):

$$\mathrm{new\_lr}=\mathrm{initial\_lr}\times r^{\lfloor \mathrm{epoch}/\mathrm{step\_size}\rfloor} \qquad (4)$$

where new_lr is the new learning rate, initial_lr the initial learning rate, $r$ the decay rate, epoch the number of iterations so far, and step_size the step size.
Beneficial effects:
1) The limitations of traditional machine learning algorithms in feature extraction are overcome, and a higher model generalization capability is obtained.
2) The proposed multi-space convolution improves the feature extraction performance of the model on EEG signals to a certain extent.
3) The proposed method achieves very high classification accuracy on multiple datasets and is significantly superior to existing methods.
Drawings
Fig. 1 is a flowchart of the EEG motor imagery classification method based on a multi-space convolutional neural network model according to an embodiment of the invention.
Fig. 2 is a plot of the activation function of the classifier according to an embodiment of the invention.
Fig. 3 is the confusion matrix for the IIa dataset according to an embodiment of the invention.
Fig. 4 is the confusion matrix for the IIb dataset according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely. Obviously, the described embodiments are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Furthermore, descriptions such as "first" and "second" are for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of the indicated technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically defined otherwise.
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The invention will now be further described with reference to the accompanying drawings.
The embodiment of the invention provides an EEG motor imagery classification method based on a multi-space convolutional neural network model. As shown in Fig. 1, the method starts with step S100: extracting the temporal features of the EEG signal by temporal convolution while preserving its spatial features, and then extracting the spatial features of the EEG signal by spatial convolution.
In some embodiments, each input signal is defined as $X_i \in \mathbb{R}^{C\times T}$, where $C$ is the number of EEG channels, $T$ the data length, and $i$ the input index. Step S100 is implemented by the feature extractor of the multi-space convolutional neural network model. In the feature extractor, a first 1×1 convolution kernel first up-scales the number of channels of the EEG signal, expanding the original C×T to 60×T so that the useful information of each channel can be extracted. In the Shape Transformation layer, a second 1×1 convolution kernel enriches the spatial features of the EEG signal. In the Temporal Conv layer, a third convolution kernel of size 1×11 convolves the EEG signal over time to obtain the temporal features. In the Spatial Conv layer, a 60×1 convolution kernel performs weighted spatial filtering of the EEG signal to obtain the spatial features. The last layer is a first pooling layer, which compresses the temporal and spatial features, removes redundant information, and reduces the number of parameters.
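By way of illustration, the feature extractor described above can be sketched in PyTorch as follows. Only the kernel shapes (1×1, 1×1, 1×11, 60×1) and the C to 60 channel up-scaling come from the text; the number of temporal filters, the padding, and the pooling size are assumptions made for the sketch, not the patented configuration.

```python
import torch
import torch.nn as nn

class MultiSpaceFeatureExtractor(nn.Module):
    """Sketch of the feature extractor; input is (batch, C, T) raw EEG."""

    def __init__(self, in_channels: int = 22, n_filters: int = 16):
        super().__init__()
        # First 1x1 convolution: up-scale the electrode dimension C -> 60.
        self.channel_up = nn.Conv1d(in_channels, 60, kernel_size=1)
        # Shape Transformation: a second 1x1 convolution over the 60 x T map.
        self.shape_transform = nn.Conv2d(1, 1, kernel_size=1)
        # Temporal Conv: 1 x 11 kernel, convolves each row over time.
        self.temporal = nn.Conv2d(1, n_filters, kernel_size=(1, 11), padding=(0, 5))
        # Spatial Conv: 60 x 1 kernel, weighted spatial filtering across rows.
        self.spatial = nn.Conv2d(n_filters, n_filters, kernel_size=(60, 1))
        # First pooling layer: compress features and cut the parameter count.
        self.pool = nn.AvgPool2d(kernel_size=(1, 4))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.channel_up(x)       # (B, 60, T)
        x = x.unsqueeze(1)           # (B, 1, 60, T)
        x = self.shape_transform(x)  # (B, 1, 60, T)
        x = self.temporal(x)         # (B, F, 60, T)  temporal features
        x = self.spatial(x)          # (B, F, 1, T)   spatial features
        return self.pool(x)          # (B, F, 1, T // 4)

feats = MultiSpaceFeatureExtractor()(torch.randn(8, 22, 1000))  # -> (8, 16, 1, 250)
```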
Finally, in step S200, the temporal features and the spatial features are mapped into a classifier to complete classification.
In the classifier, a convolution layer further filters the input data, followed by a second pooling layer; finally, the FC layer classifies the data and the class with the highest probability is computed with the following formula (1):

$$\mathrm{softmax}(x_i)=\frac{\exp(x_i)}{\sum_{j}\exp(x_j)} \qquad (1)$$

where $x_i$ denotes the $i$-th neuron input, $x_j$ the $j$-th neuron input, and $\sum_{j}\exp(x_j)$ the sum over all neuron inputs.
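A minimal sketch of this classifier head, under the same assumptions as the extractor sketch above (16 input feature maps; the extra convolution kernel, pooling size, and class count of 4 are illustrative):

```python
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    """Convolution -> second pooling layer -> FC layer -> softmax of formula (1)."""

    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(16, 16, kernel_size=(1, 11), padding=(0, 5))
        self.pool = nn.AvgPool2d(kernel_size=(1, 4))
        self.fc = nn.LazyLinear(n_classes)  # infers the flattened input size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.conv(x))            # further filter, then pool
        logits = self.fc(torch.flatten(x, 1))  # FC classification layer
        return torch.softmax(logits, dim=1)    # formula (1)

probs = ClassifierHead()(torch.randn(8, 16, 1, 250))  # -> (8, 4), rows sum to 1
```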
In some embodiments, the activation function of the classifier is given by the following formula (2):

$$f(x)=\begin{cases}x, & x>0\\ a\,(e^{x}-1), & x\le 0\end{cases} \qquad (2)$$

where $x$ is the output of the convolution computation and $a$ is a constant. As shown in Fig. 2, the activation function designed in the embodiment of the invention combines the sigmoid and the ReLU: it saturates softly on the left and does not saturate on the right. The soft saturation on the left makes the ELU more robust to input changes and noise, while the linear part on the right alleviates the vanishing-gradient problem; and since the mean ELU output is close to 0, convergence is faster.
In some embodiments, a cross-entropy loss is used to minimize the difference between the model predictions and the corresponding true labels; the loss function is given by the following formula (3):

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (3)$$

where $N$ is the number of samples, $M$ the number of classes, $y_{ic}$ equals 1 if the true label of sample $i$ is class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to class $c$.
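A direct reading of formula (3), assuming the model already outputs probabilities (e.g. via the softmax of formula (1)); note that PyTorch's own nn.CrossEntropyLoss instead takes raw logits and applies log-softmax internally:

```python
import torch

def cross_entropy_loss(p: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Formula (3): p is (N, M) predicted class probabilities, y is (N,)
    integer labels. Indexing p at the true class realizes the y_ic selector."""
    n = p.shape[0]
    return -torch.log(p[torch.arange(n), y]).mean()

loss = cross_entropy_loss(torch.softmax(torch.randn(8, 4), dim=1),
                          torch.randint(0, 4, (8,)))
```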
In some embodiments, the loss function is optimized with an optimizer whose learning rate is updated by the following formula (4):

$$\mathrm{new\_lr}=\mathrm{initial\_lr}\times r^{\lfloor \mathrm{epoch}/\mathrm{step\_size}\rfloor} \qquad (4)$$

where new_lr is the new learning rate, initial_lr the initial learning rate, $r$ the decay rate, epoch the number of iterations so far, and step_size the step size.

By way of example only, the initial learning rate of the optimizer is set to 0.02 and multiplied by 0.5 every 50 epochs. The optimizer may be the Adam optimizer, which generally performs well, is computationally efficient, and has low memory requirements; it combines the advantages of the AdaGrad and RMSProp optimization algorithms, using both the mean and the variance of the gradients to compute the new step size.
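Under these settings, formula (4) coincides with PyTorch's step-decay scheduler; a minimal sketch (the stand-in model and the total epoch count are assumptions):

```python
import torch

net = torch.nn.Linear(10, 4)  # stand-in for the multi-space network
optimizer = torch.optim.Adam(net.parameters(), lr=0.02)  # initial_lr = 0.02
# StepLR implements formula (4): lr = initial_lr * r ** (epoch // step_size)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(200):
    # ... forward pass, formula (3) loss, loss.backward(), optimizer.step() ...
    scheduler.step()  # halves the learning rate every 50 epochs
```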
The embodiment of the invention also provides a multi-space convolutional neural network model for EEG motor imagery classification, comprising a feature extractor and a classifier. The feature extractor is configured to extract the temporal features of the EEG signal by temporal convolution while preserving its spatial features, and then extract the spatial features of the EEG signal by spatial convolution; the classifier is configured to map the temporal features and the spatial features to the classes to complete classification.
Further, the feature extractor is further configured to: up-scale the number of channels of the EEG signal with a first convolution kernel of size 1×1; enrich the spatial features of the EEG signal with a second convolution kernel of size 1×1; convolve the EEG signal over time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal; perform weighted spatial filtering of the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal; and compress the temporal and spatial features with a first pooling layer to remove redundant information and reduce the number of parameters.
Further, the classifier is further configured to: filter and pool the temporal features and/or the spatial features, classify them, and compute the class with the highest probability by the following formula (1):

$$\mathrm{softmax}(x_i)=\frac{\exp(x_i)}{\sum_{j}\exp(x_j)} \qquad (1)$$

where $x_i$ denotes the $i$-th neuron input, $x_j$ the $j$-th neuron input, and $\sum_{j}\exp(x_j)$ the sum over all neuron inputs.
Further, the activation function of the classifier is given by the following formula (2):

$$f(x)=\begin{cases}x, & x>0\\ a\,(e^{x}-1), & x\le 0\end{cases} \qquad (2)$$

where $x$ is the output of the convolution computation and $a$ is a constant.
Further, the model also includes a machine learning module configured to: minimize the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (3)$$

where $N$ is the number of samples, $M$ the number of classes, $y_{ic}$ equals 1 if the true label of sample $i$ is class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to class $c$.
Further, the machine learning module further includes an optimizer with which the loss function is optimized; the learning rate of the optimizer is updated by the following formula (4):

$$\mathrm{new\_lr}=\mathrm{initial\_lr}\times r^{\lfloor \mathrm{epoch}/\mathrm{step\_size}\rfloor} \qquad (4)$$

where new_lr is the new learning rate, initial_lr the initial learning rate, $r$ the decay rate, epoch the number of iterations so far, and step_size the step size.
The multi-space convolutional neural network model for EEG motor imagery classification provided by the embodiment of the invention achieves the same technical effects as the method provided by the embodiment of the invention, which are not repeated here.
Experiments based on the method and model provided by the embodiments of the present invention are presented below to further illustrate the feasibility and advancement of the invention. Extensive experiments were conducted on datasets IIa and IIb of BCI Competition IV.
IIa of BCI Competition IV: this dataset contains 22-electrode EEG signals recorded from 9 healthy subjects in two sessions. Each subject performed four motor imagery tasks: left hand, right hand, feet, and tongue. Each session consisted of 6 runs with short breaks in between, each run containing 48 trials (12 for each of the four tasks), giving 288 trials per session. The experiments here use the data in the [2, 6] second window of each trial. All trial data were pooled and segmented with a sliding window of size 500 and a step of 20; 5000 of the resulting segments were used as the test set and the rest as the training set.
IIb of BCI Competition IV: the public dataset Data 2b, with the same experimental paradigm as IIa, is an EEG dataset of visually cued left/right hand motor imagery. It contains the EEG signals of 9 right-handed subjects with normal or corrected-to-normal vision [15]. All trial data were pooled and segmented with a sliding window of size 500 and a step of 20; 5000 of the resulting segments were used as the test set and the rest as the training set.
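The segmentation described for both datasets can be sketched as follows; only the window size 500 and the step 20 come from the text, while the trial shape in the usage line is illustrative:

```python
import numpy as np

def sliding_windows(trial: np.ndarray, win: int = 500, step: int = 20) -> np.ndarray:
    """Cut one (C, T) trial into overlapping (C, win) segments."""
    starts = range(0, trial.shape[1] - win + 1, step)
    return np.stack([trial[:, s:s + win] for s in starts])

# e.g. a 22-channel trial of 1000 samples yields 26 segments of length 500
segments = sliding_windows(np.random.randn(22, 1000))
```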
The method provided by the embodiment of the invention was compared with traditional machine learning classification methods, including SVM, KNN, and LDA. The experimental results show that the classification performance improves markedly over the traditional machine learning methods, indicating that the multi-space convolutional neural network of the embodiment can effectively extract and classify EEG features. The confusion matrices for the IIa and IIb datasets are shown in Figs. 3 and 4, and the comparison with the machine learning methods in Tables 1 and 2.
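For context, such baselines are typically run along the following lines; this sketch uses synthetic stand-in features, whereas the actual comparison would use features extracted from the EEG segments described above:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X = np.random.randn(200, 24)      # stand-in feature matrix
y = np.random.randint(0, 4, 200)  # stand-in labels for 4 motor imagery classes

for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier()),
                  ("LDA", LinearDiscriminantAnalysis())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())  # 5-fold accuracy
```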
TABLE 1 Classification comparison of the machine learning methods and the proposed method on the IIa dataset
TABLE 2 Classification comparison of the machine learning methods and the proposed method on the IIb dataset
As can be seen from Tables 1 and 2, the method proposed by the embodiment of the present invention significantly outperforms the common machine learning methods.
The above embodiments are only intended to illustrate, not to limit, the technical solution of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention and are intended to fall within the scope of the appended claims and the description.

Claims (2)

1. An EEG motor imagery classification method based on a multi-space convolutional neural network model, characterized in that the method comprises:
extracting the temporal features of the EEG signal by temporal convolution while preserving the spatial features of the EEG signal, and then extracting the spatial features of the EEG signal by spatial convolution;
mapping the temporal features and the spatial features into a classifier to complete classification;
wherein extracting the feature information of the EEG signal in the time dimension by temporal convolution while retaining the spatial features of the EEG signal, and then extracting the feature information of the EEG signal in the spatial dimension by spatial convolution, specifically comprises:
up-scaling the number of channels of the EEG signal with a first convolution kernel of size 1×1;
enriching the spatial features of the EEG signal with a second convolution kernel of size 1×1;
convolving the EEG signal over time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal;
performing weighted spatial filtering of the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal;
compressing the temporal and spatial features with a first pooling layer, removing redundant information, and reducing the number of parameters;
wherein mapping the temporal features and the spatial features into the classifier to complete classification specifically comprises:
filtering and pooling the temporal features and/or the spatial features, classifying them, and computing the class with the highest probability by the following formula (1):

$$\mathrm{softmax}(x_i)=\frac{\exp(x_i)}{\sum_{j}\exp(x_j)} \qquad (1)$$

where $x_i$ denotes the $i$-th neuron input, $x_j$ the $j$-th neuron input, and $\sum_{j}\exp(x_j)$ the sum over all neuron inputs;
the activation function of the classifier is given by the following formula (2):

$$f(x)=\begin{cases}x, & x>0\\ a\,(e^{x}-1), & x\le 0\end{cases} \qquad (2)$$

where $x$ is the output of the convolution computation and $a$ is a constant;
after mapping the temporal features and the spatial features into a classifier to complete classification, the method further comprises:
minimizing the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (3)$$

where $N$ is the number of samples, $M$ the number of classes, $y_{ic}$ equals 1 if the true label of sample $i$ is class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to class $c$;
the method further comprises the steps of:
optimizing the loss function with an optimizer, the learning rate of the optimizer being updated by the following formula (4):

$$\mathrm{new\_lr}=\mathrm{initial\_lr}\times r^{\lfloor \mathrm{epoch}/\mathrm{step\_size}\rfloor} \qquad (4)$$

where new_lr is the new learning rate, initial_lr the initial learning rate, $r$ the decay rate, epoch the number of iterations so far, and step_size the step size.
2. A multi-space convolutional neural network model for EEG motor imagery classification, comprising a feature extractor and a classifier;
the feature extractor is configured to extract the temporal features of the EEG signal by temporal convolution while preserving the spatial features of the EEG signal, and then extract the spatial features of the EEG signal by spatial convolution;
the classifier is configured to map the temporal features and the spatial features to the classes to complete classification;
the feature extractor is further configured to:
up-scale the number of channels of the EEG signal with a first convolution kernel of size 1×1;
enrich the spatial features of the EEG signal with a second convolution kernel of size 1×1;
convolve the EEG signal over time with a third convolution kernel of size 1×11 to obtain the temporal features of the EEG signal;
perform weighted spatial filtering of the EEG signal with a fourth convolution kernel of size 60×1 to obtain the spatial features of the EEG signal;
compress the temporal and spatial features with a first pooling layer, removing redundant information and reducing the number of parameters;
the classifier is further configured to:
filter and pool the temporal features and/or the spatial features, classify them, and compute the class with the highest probability by the following formula (1):

$$\mathrm{softmax}(x_i)=\frac{\exp(x_i)}{\sum_{j}\exp(x_j)} \qquad (1)$$

where $x_i$ denotes the $i$-th neuron input, $x_j$ the $j$-th neuron input, and $\sum_{j}\exp(x_j)$ the sum over all neuron inputs;
the activation function of the classifier is given by the following formula (2):

$$f(x)=\begin{cases}x, & x>0\\ a\,(e^{x}-1), & x\le 0\end{cases} \qquad (2)$$

where $x$ is the output of the convolution computation and $a$ is a constant;
the model also includes a machine learning module configured to:
minimizing the difference between the classification result and the corresponding true label according to the loss function shown in the following formula (3):

$$L=-\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log(p_{ic}) \qquad (3)$$

where $N$ is the number of samples, $M$ the number of classes, $y_{ic}$ equals 1 if the true label of sample $i$ is class $c$ and 0 otherwise, and $p_{ic}$ is the predicted probability that sample $i$ belongs to class $c$;
the machine learning module further includes an optimizer, with which the loss function is optimized, the learning rate of which is updated by the following equation (4):
where new_lr represents the new learning rate, initial_lr represents the initial learning rate, r represents the learning decay rate, epoch represents the number of iterations up to now, step_size represents the step size.
Priority Application (1)

CN202210353223.0A, filed 2022-04-06: EEG motor imagery classification method and multi-space convolutional neural network model (Active)

Publications (2)

CN114781441A (en), published 2022-07-22
CN114781441B (en), granted 2024-01-26

Family

ID=82427519

Country Status (1)

CN: CN114781441B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115337026B (en) * 2022-10-19 2023-03-10 之江实验室 Convolutional neural network-based EEG signal feature retrieval method and device
CN117434452B (en) * 2023-12-08 2024-03-05 珠海市嘉德电能科技有限公司 Lithium battery charge and discharge detection method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110531861B (en) * 2019-09-06 2021-11-19 腾讯科技(深圳)有限公司 Method and device for processing motor imagery electroencephalogram signal and storage medium
KR102443961B1 (en) * 2020-07-10 2022-09-16 고려대학교 산학협력단 Apparatus and method for motor imagery classification using eeg

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069958A (en) * 2018-01-22 2019-07-30 北京航空航天大学 A kind of EEG signals method for quickly identifying of dense depth convolutional neural networks
CN109993103A (en) * 2019-03-29 2019-07-09 华南理工大学 A kind of Human bodys' response method based on point cloud data
CN110213788A (en) * 2019-06-15 2019-09-06 福州大学 WSN abnormality detection and kind identification method based on data flow space-time characteristic
CN110309797A (en) * 2019-07-05 2019-10-08 齐鲁工业大学 Merge the Mental imagery recognition methods and system of CNN-BiLSTM model and probability cooperation
CN110765920A (en) * 2019-10-18 2020-02-07 西安电子科技大学 Motor imagery classification method based on convolutional neural network
CN113011239A (en) * 2020-12-02 2021-06-22 杭州电子科技大学 Optimal narrow-band feature fusion-based motor imagery classification method
CN113642400A (en) * 2021-07-12 2021-11-12 东北大学 Graph convolution action recognition method, device and equipment based on 2S-AGCN
CN114062511A (en) * 2021-10-24 2022-02-18 北京化工大学 Single-sensor-based intelligent acoustic emission identification method for early damage of aircraft engine

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zhang D et al. EEG-based intention recognition from spatio-temporal representations via cascade and parallel convolutional recurrent neural networks. arXiv, 2017, 1-8. *
Yang Jun et al. Multi-channel motor imagery EEG decoding method based on deep spatio-temporal feature fusion. Journal of Electronics & Information Technology, 2021, vol. 43, no. 1, 196-203. *
Wu Jia et al. Application of convolutional neural networks considering regional information in image semantic segmentation. Science Technology and Engineering, no. 21, 281-286. *

Also Published As

Publication number Publication date
CN114781441A (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN109472194B (en) Motor imagery electroencephalogram signal feature identification method based on CBLSTM algorithm model
CN111012336B (en) Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
Li et al. Accurate retinal vessel segmentation in color fundus images via fully attention-based networks
CN114781441B (en) EEG motor imagery classification method and multi-space convolution neural network model
JP2020502683A (en) System and method for iterative classification using neurophysiological signals
CN111950455B (en) Motion imagery electroencephalogram characteristic identification method based on LFFCNN-GRU algorithm model
Aghamaleki et al. Multi-stream CNN for facial expression recognition in limited training data
CN111797674B (en) MI electroencephalogram signal identification method based on feature fusion and particle swarm optimization algorithm
Alghamdi et al. A comparative study of deep learning models for diagnosing glaucoma from fundus images
CN113693613A (en) Electroencephalogram signal classification method and device, computer equipment and storage medium
Baysal et al. Multi-objective symbiotic organism search algorithm for optimal feature selection in brain computer interfaces
Yang et al. A robust iris segmentation using fully convolutional network with dilated convolutions
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN115238796A (en) Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
CN110613445B (en) DWNN framework-based electrocardiosignal identification method
Venu et al. Optimized Deep Learning Model Using Modified Whale’s Optimization Algorithm for EEG Signal Classification
CN112926502B (en) Micro expression identification method and system based on coring double-group sparse learning
CN112465054B (en) FCN-based multivariate time series data classification method
CN114190884B (en) Longitudinal analysis method, system and device for brain disease data
Kalimuthu et al. Multi-class facial emotion recognition using hybrid dense squeeze network
CN114209342A (en) Electroencephalogram signal motor imagery classification method based on space-time characteristics
Tang et al. A channel selection method for event related potential detection based on random forest and genetic algorithm
Hatode et al. Evolution and Testimony of Deep Learning Algorithm for Diabetic Retinopathy Detection
CN114063787B (en) Deep learning processing analysis method based on EMG and IMU data
CN117158912B (en) Sleep stage detection system based on graph attention mechanism and space-time graph convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant