CN112633365B - Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm - Google Patents

Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm

Info

Publication number
CN112633365B
CN112633365B (application number CN202011519735.7A)
Authority
CN
China
Prior art keywords
model
motor imagery
source
mirror
electroencephalogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011519735.7A
Other languages
Chinese (zh)
Other versions
CN112633365A (en
Inventor
罗靖
刘光明
王耀杰
弓一婧
高帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202011519735.7A priority Critical patent/CN112633365B/en
Publication of CN112633365A publication Critical patent/CN112633365A/en
Application granted granted Critical
Publication of CN112633365B publication Critical patent/CN112633365B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroids
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

The invention discloses a mirror convolutional neural network model composed of a source model and a mirror model. A mirror electroencephalogram signal is constructed by interchanging the electroencephalogram channels of the left and right brain hemispheres of a source electroencephalogram signal, and both signals are used to train a motor imagery recognition CNN model; the trained motor imagery classification CNN model is called the source model. The mirror model is a mirror motor imagery classification model formed by copying the trained source model. The invention also discloses a motor imagery electroencephalogram recognition algorithm based on the mirror convolutional neural network model. The algorithm solves the problem of low classification performance caused by limited training data and pronounced inter-subject differences in the prior art.

Description

Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm
Technical Field
The invention belongs to the technical field of brain-computer interfaces, and particularly relates to a mirror convolutional neural network model and a motor imagery electroencephalogram recognition algorithm based on that model.
Background
Electroencephalography (EEG) is a simple, flexible and noninvasive brain monitoring method, and motor imagery (Motor Imagery) recognition based on EEG is a key technology determining the performance of a motor imagery brain-computer interface (Brain Computer Interface, BCI) system. A motor imagery brain-computer interface collects the electroencephalogram signals of a subject performing a specific motor imagery task, recognizes the motor imagery content from those signals, and converts the recognition result into commands that control peripheral devices. Because the electroencephalogram signal has a low signal-to-noise ratio and low spatial resolution, extracting effective discriminative features from it is the key to a successful motor imagery recognition system. Based on the features applied in motor imagery recognition, existing methods fall mainly into three categories: methods based on spectral features, methods based on the common spatial pattern (Common Spatial Pattern, CSP), and methods based on neural networks.
With the rapid development of deep learning, many motor imagery recognition methods based on convolutional neural networks (Convolutional Neural Network, CNN) have emerged and show good experimental results. Schirrmeister et al. studied CNN models (EEGDecoding) with different structures and design choices for electroencephalogram decoding of motor imagery or motor execution. Their experimental results show that the proposed Shallow ConvNet and Deep ConvNet network structures outperform the other structures, and that recent deep learning techniques, including batch normalization, dropout and the ELU activation function, can greatly improve model performance. EEGNet is a compact convolutional network exhibiting excellent performance over all four BCI paradigms, including the P300 visual evoked potential, error-related negativity, movement-related cortical potentials, and sensorimotor rhythms.
Although recent CNN-based motor imagery recognition models have achieved a certain breakthrough compared with traditional methods, motor imagery electroencephalogram signals are difficult to acquire and differ markedly between subjects, so the training data available for a specific subject are limited, which greatly affects the classification performance of the models.
Disclosure of Invention
The invention aims to provide a motor imagery electroencephalogram recognition algorithm based on a mirror convolutional neural network model, which solves the problem of low classification performance caused by limited training data and pronounced inter-subject differences in the prior art.
It is a second object of the present invention to provide a mirror convolutional neural network model.
The technical scheme adopted by the invention is a mirror convolutional neural network model composed of a source model and a mirror model;
A mirror electroencephalogram signal is constructed from the source electroencephalogram signal by interchanging the electroencephalogram channels of the left and right brain hemispheres; both signals are used to train a motor imagery recognition CNN model, and the trained motor imagery classification CNN model is called the source model. The mirror model is a mirror motor imagery classification model formed by copying the trained source model.
The present invention is also characterized in that,
the source brain electrical signals are: and acquiring an electroencephalogram signal section of 4.5 seconds from 0.5 seconds before occurrence of a motor imagery prompt to 4 seconds after occurrence of the motor imagery prompt, filtering the electroencephalogram signal by using two band-pass filters of 4-38Hz or 0-38Hz respectively, and removing signal noise by using a moving index averaging method.
The mirror image brain electric signal is obtained by exchanging brain electric channels of left and right brain hemispheres of the source brain electric signal.
The second technical scheme adopted by the invention is a motor imagery electroencephalogram recognition algorithm based on the mirror convolutional neural network model, which specifically comprises the following steps:
step 1, training stage
A mirror electroencephalogram signal is constructed from the source electroencephalogram signal by interchanging the electroencephalogram channels of the left and right brain hemispheres; both signals are used to train a motor imagery recognition CNN model, and the trained motor imagery classification CNN model is called the source model;
step 2, test stage
The trained source model is copied to form a mirror motor imagery classification model, i.e. the mirror model; the mirror model and the source model together form the mirror convolutional neural network model. The source electroencephalogram signal is input into the source model and the mirror electroencephalogram signal into the mirror model; the final class probability output is formed by combining the class probabilities output by the two models, and the class with the largest integrated probability becomes the final prediction label.
The present invention is also characterized in that,
In step 1, the source electroencephalogram signal is obtained as follows: a 4.5-second electroencephalogram segment is acquired, from 0.5 seconds before the motor imagery cue appears to 4 seconds after it; the signal is filtered with one of two band-pass filters (4-38 Hz or 0-38 Hz), and signal noise is removed with an exponential moving average method.
In step 1, the mirror electroencephalogram signal is obtained by interchanging the electroencephalogram channels of the left and right brain hemispheres of the source electroencephalogram signal.
The beneficial effects of the invention are as follows: without increasing the number of model parameters, the algorithm uses the ideas of ensemble learning and data augmentation to overcome the restriction that small sample counts place on the model, thereby improving motor imagery classification performance.
Drawings
FIG. 1 is a schematic diagram of the mirror electroencephalogram signal constructed by interchanging the left- and right-hemisphere electroencephalogram channels in the algorithm of the invention;
FIG. 2 is a schematic diagram of the mirror convolutional neural network model in the algorithm of the invention.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The invention provides a mirror convolutional neural network model composed of a source model and a mirror model;
A mirror electroencephalogram signal is constructed from the source electroencephalogram signal by interchanging the electroencephalogram channels of the left and right brain hemispheres; both signals are used to train a motor imagery recognition CNN model, and the trained motor imagery classification CNN model is called the source model. The mirror model is a mirror motor imagery classification model formed by copying the trained source model.
The source electroencephalogram signal is obtained as follows: a 4.5-second electroencephalogram segment is acquired, from 0.5 seconds before the motor imagery cue appears to 4 seconds after it; the signal is filtered with one of two band-pass filters (4-38 Hz or 0-38 Hz), and signal noise is removed with an exponential moving average method.
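The cropping and filtering described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: a 250 Hz sampling rate, an ideal FFT-based band-pass filter (the patent does not specify the filter design), and an exponential moving standardization with an assumed smoothing factor standing in for the "exponential moving average" noise removal.

```python
import numpy as np

def crop_trial(eeg, cue_sample, fs=250):
    """Crop the 4.5 s segment from 0.5 s before the cue to 4 s after it.
    eeg: array of shape (channels, samples); fs=250 Hz is an assumption."""
    start = cue_sample - int(0.5 * fs)
    stop = cue_sample + int(4.0 * fs)
    return eeg[:, start:stop]

def bandpass(eeg, fs=250, low=4.0, high=38.0):
    """Ideal FFT band-pass: zero all frequency bins outside [low, high].
    A stand-in for the patent's unspecified 4-38 Hz / 0-38 Hz filter."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    spec = np.fft.rfft(eeg, axis=-1)
    spec[..., (freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spec, n=eeg.shape[-1], axis=-1)

def exp_moving_standardize(eeg, factor=0.001, eps=1e-4):
    """Exponential moving standardization per channel; `factor` and `eps`
    are assumed defaults, not values given in the patent."""
    out = np.empty_like(eeg, dtype=float)
    mean = np.zeros(eeg.shape[0])
    var = np.ones(eeg.shape[0])
    for t in range(eeg.shape[1]):
        mean = (1.0 - factor) * mean + factor * eeg[:, t]
        var = (1.0 - factor) * var + factor * (eeg[:, t] - mean) ** 2
        out[:, t] = (eeg[:, t] - mean) / np.maximum(np.sqrt(var), eps)
    return out
```

A 0-38 Hz configuration is obtained simply by calling `bandpass(eeg, low=0.0)`.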
The mirror electroencephalogram signal is obtained by interchanging the electroencephalogram channels of the left and right brain hemispheres of the source electroencephalogram signal.
The invention also provides a motor imagery electroencephalogram recognition algorithm based on the mirror convolutional neural network model, which comprises the following steps:
step 1, training stage
A mirror electroencephalogram signal is constructed from the source electroencephalogram signal by interchanging the electroencephalogram channels of the left and right brain hemispheres; both signals are used to train a motor imagery recognition CNN model, and the trained motor imagery classification CNN model is called the source model;
In step 1, the source electroencephalogram signal is obtained as follows: a 4.5-second electroencephalogram segment is acquired, from 0.5 seconds before the motor imagery cue appears to 4 seconds after it; the signal is filtered with one of two band-pass filters (4-38 Hz or 0-38 Hz), and signal noise is removed with an exponential moving average method.
In step 1, the mirror electroencephalogram signal is obtained by interchanging the electroencephalogram channels of the left and right brain hemispheres of the source electroencephalogram signal.
As shown in FIG. 1, for example, the C3 and C4 channels are the 8th and 12th channels of the source electroencephalogram signal, respectively, and become the 12th and 8th channels in the mirror signal. In left-right hand motor imagery electroencephalograms, the event-related desynchronization and event-related synchronization phenomena therefore occur on the opposite side in the mirror signal compared with the source signal, so the motor imagery label of the mirror signal is set opposite to that of the source signal: if the source label is right-hand motor imagery, the corresponding mirror label is set to left-hand motor imagery, and vice versa. For other motor imagery electroencephalogram signals (such as tongue and feet motor imagery), the left and right channels are interchanged without changing the motor imagery label. The training set comprising the source and mirror electroencephalogram samples is used to train a motor imagery classification CNN model, and the trained model is called the source model. This can also be regarded as a data augmentation method in the model training stage.
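The channel interchange and label flipping of FIG. 1 can be sketched as below. The permutation shown is for a hypothetical 5-channel toy montage used only for illustration (in the patent's 22-channel montage, C3 and C4 are the 8th and 12th channels, i.e. indices 7 and 11); the label strings are likewise assumed names.

```python
import numpy as np

# Hypothetical toy montage [C3, Cz, C4, FC3, FC4]: each channel maps to its
# contralateral counterpart; midline channels (Cz) map to themselves.
MIRROR_INDEX = np.array([2, 1, 0, 4, 3])

def make_mirror_sample(eeg, label, mirror_index=MIRROR_INDEX):
    """Build a mirror trial: interchange left/right hemisphere channels and,
    for left/right hand imagery only, flip the class label."""
    mirrored = eeg[mirror_index, :]
    flip = {"left_hand": "right_hand", "right_hand": "left_hand"}
    # feet/tongue imagery keep their original label
    return mirrored, flip.get(label, label)

def augment_training_set(X, y):
    """Data augmentation: the training set holds both source and mirror samples."""
    pairs = [make_mirror_sample(x, lab) for x, lab in zip(X, y)]
    X_aug = np.concatenate([X, np.stack([p[0] for p in pairs])])
    y_aug = list(y) + [p[1] for p in pairs]
    return X_aug, y_aug
```

With a real montage, `MIRROR_INDEX` would be derived from the electrode names so that every left-hemisphere index points at its right-hemisphere partner and vice versa.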
Step 2, test stage
The trained source model is copied to form a mirror motor imagery classification model, i.e. the mirror model; the mirror model and the source model together form the mirror convolutional neural network model (the source model and the mirror model are identical in structure and parameters). A mirror electroencephalogram signal is constructed to assist recognition of the source signal: the source electroencephalogram signals in the test set are input into the source model and the corresponding mirror signals into the mirror model; the final class probability output is formed by combining the class probabilities output by the two models, and the class with the largest integrated probability becomes the final prediction label.
As shown in FIG. 2, in left-right hand motor imagery electroencephalograms, the event-related desynchronization and event-related synchronization phenomena of the mirror signal occur in the contralateral hemisphere compared with the source signal, so the predicted probability of left-hand motor imagery for the mirror electroencephalogram is equivalent to the predicted probability of right-hand motor imagery for the source electroencephalogram, and vice versa. Therefore, the left- and right-hand output probabilities of the source model and the mirror model are exchanged before averaging, while the other motor imagery output probabilities are averaged directly to form the final prediction probabilities. In the four-class motor imagery problem of left hand, right hand, feet and tongue, the final prediction probabilities are calculated as:

P_l = (P_l^o + P_r^m) / 2, P_r = (P_r^o + P_l^m) / 2, P_f = (P_f^o + P_f^m) / 2, P_t = (P_t^o + P_t^m) / 2 (1)

where P_l, P_r, P_f, P_t are the final prediction probabilities of left-hand, right-hand, feet and tongue motor imagery, P_l^o, P_r^o, P_f^o, P_t^o are the corresponding source-model output probabilities, and P_l^m, P_r^m, P_f^m, P_t^m are the corresponding mirror-model output probabilities. Finally, the prediction label of the source electroencephalogram signal is the class with the largest final prediction probability:

I = arg max P_i, i ∈ {l, r, f, t} (2)

where I is the prediction label, the subscript i is the class label, and arg max returns the class label with the maximum probability.
Since the CNN model outputs the probability that a sample belongs to each class, this operation is a probability integration process, and the class with the larger integrated probability becomes the final prediction label. Through the mirror convolutional network structure, this ensemble learning method improves model performance without increasing the number of model parameters.
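Equations (1) and (2) amount to the following integration step; a minimal sketch, assuming the class probabilities are ordered [left, right, feet, tongue].

```python
import numpy as np

CLASSES = ("l", "r", "f", "t")   # left hand, right hand, feet, tongue

def integrate_probabilities(p_source, p_mirror):
    """Equation (1): swap the mirror model's left/right-hand probabilities,
    then average element-wise with the source model's probabilities."""
    p_source = np.asarray(p_source, dtype=float)
    p_mirror = np.asarray(p_mirror, dtype=float)
    swapped = p_mirror[[1, 0, 2, 3]]   # exchange P_l^m and P_r^m
    return (p_source + swapped) / 2.0

def predict(p_source, p_mirror):
    """Equation (2): the class with the largest integrated probability."""
    p = integrate_probabilities(p_source, p_mirror)
    return CLASSES[int(np.argmax(p))]
```

For a left-hand source trial, a well-trained mirror model should assign high probability to "right hand", so the swap aligns the two models' votes before averaging.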
The mirror convolutional neural network model provided by the algorithm can be applied to any motor imagery recognition CNN model. To verify its validity and versatility, three popular motor imagery recognition CNN models were used in the experiments: EEGNet [2], and the Shallow ConvNet and Deep ConvNet of EEGDecoding [1]. Based on the sensorimotor-rhythm dataset results reported in the EEGNet paper, the model with convolution kernel sizes (2, 32) and (8, 4) was chosen. EEGNet has 3 convolutional layers and 1 fully connected layer; batch normalization, dropout, max pooling, and the ELU activation function are applied in EEGNet. Four motor imagery CNN structures are tested in the EEGDecoding paper, and only the Shallow ConvNet and Deep ConvNet, which classify better, were selected. The Shallow ConvNet, inspired by the FBCSP algorithm flow, has 2 convolutional layers and 1 fully connected layer, and uses a square function as the activation function. Deep ConvNet is a deeper CNN with 5 convolutional layers and 1 fully connected layer, and uses the ELU activation function. In the EEGDecoding models, max pooling, dropout and batch normalization are applied.
Reference is made to:
[1] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, "Deep learning with convolutional neural networks for EEG decoding and visualization," Human Brain Mapping, vol. 38, no. 11, pp. 5391-5420, 2017.
[2] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces," Journal of Neural Engineering, vol. 15, no. 5, p. 056013, 2018.
and (3) experimental verification:
in order to verify the performance of the proposed mirror convolutional neural network model on the four-class motor imagery task, experimental verification was performed on the fourth international brain-computer interface large race 2a dataset. The performance of the mirror convolutional network was compared to the original convolutional network and the results are given in table 1. Since EEGNet and Braindecoding are the latest research results in the field, the comparison of the results is also the comparison with the latest research results. Since we find that training generally converges to within 500 iterations, the maximum number of iterations of training is set to 700. The maximum test accuracy within 700 iterations, the average test accuracy between 501 and 600 iterations (100 iterations average accuracy) and the average test accuracy between 501 and 700 iterations (200 iterations average accuracy) are used in the experimental results to measure model performance to reduce the impact of randomness and batch number settings. The improvement of the mirror convolution network over the original network is measured by the accuracy improvement. The first column in the table represents the original model name and bandpass filter, in the format "model-filter". For example, "EEGNet-0Hz" means that the mirror convolution model is constructed based on the EEGNet model and that the electroencephalogram signal is preprocessed using a 0-38Hz bandpass filter. The experimental result is the average value of 5 experiments, and the maximum improvement of the optimal classification accuracy and the accuracy is marked in bold.
Table 1. Four-class motor imagery classification accuracy of the mirror convolutional neural network model (MCNN) compared with the original models.
The comparison in Table 1 shows that: (1) the mirror convolutional neural network model performs well on the four-class task of dataset 2a; compared with the original models, the maximum accuracy, 100-iteration average accuracy and 200-iteration average accuracy are improved by 4.55%, 4.83% and 4.79% on average, respectively; (2) the mirror convolutional neural network improves the classification performance of the EEGNet model the most; (3) the mirror convolutional neural network is general: the improvement in motor imagery recognition performance is independent of both the original CNN model and the preprocessing band-pass filter.
In addition, the performance of the original CNNs and the mirror convolutional neural networks is compared on the two-class motor imagery task, with the results given in Table 2. For the four-class dataset 2a, only the left- and right-hand motor imagery samples were used in the two-class experiment. The two-class motor imagery dataset 2b is also used in the comparison. As before, the maximum test accuracy, 100-iteration average accuracy and 200-iteration average accuracy are used to evaluate classification. The first column gives the original model name, dataset and band-pass filter in the format "model-dataset-filter"; for example, "EEGNet-2a-0Hz" means the mirror convolution model is built on the EEGNet model, evaluated on dataset 2a, and the electroencephalogram signal is preprocessed with a 0-38 Hz band-pass filter. Each result is the average of 5 experiments, and the best classification accuracy and the largest accuracy gain are marked in bold.
Table 2. Two-class motor imagery classification accuracy of the mirror convolutional neural network model (MCNN) compared with the original models.

Claims (1)

1. The motor imagery electroencephalogram recognition algorithm based on the mirror convolution neural network model is characterized by comprising the following steps of:
step 1, training stage
A mirror electroencephalogram signal is constructed from the source electroencephalogram signal by interchanging the electroencephalogram channels of the left and right brain hemispheres; both signals are used to train a motor imagery recognition CNN model, and the trained motor imagery classification CNN model is called the source model;
In step 1, the source electroencephalogram signal is obtained as follows: a 4.5-second electroencephalogram segment is acquired, from 0.5 seconds before the motor imagery cue appears to 4 seconds after it; the signal is filtered with one of two band-pass filters (4-38 Hz or 0-38 Hz), and signal noise is removed with an exponential moving average method;
In step 1, the mirror electroencephalogram signal is obtained by interchanging the electroencephalogram channels of the left and right brain hemispheres of the source electroencephalogram signal;
step 2, test stage
The trained source model is copied to form a mirror motor imagery classification model, i.e. the mirror model; the mirror model and the source model together form the mirror convolutional neural network model; the source electroencephalogram signal is input into the source model and the mirror electroencephalogram signal into the mirror model; the final class probability output is formed by combining the class probabilities output by the two models, and the class with the largest integrated probability becomes the final prediction label;
In left-right hand motor imagery electroencephalograms, the event-related desynchronization and event-related synchronization phenomena of the mirror signal occur in the contralateral hemisphere compared with the source signal, so the predicted probability of left-hand motor imagery for the mirror electroencephalogram is equivalent to the predicted probability of right-hand motor imagery for the source electroencephalogram, and vice versa; therefore, the left- and right-hand output probabilities of the source model and the mirror model are exchanged before averaging, while the other motor imagery output probabilities are averaged directly to form the final prediction probabilities; in the four-class motor imagery problem of left hand, right hand, feet and tongue, the final prediction probabilities are calculated as:

P_l = (P_l^o + P_r^m) / 2, P_r = (P_r^o + P_l^m) / 2, P_f = (P_f^o + P_f^m) / 2, P_t = (P_t^o + P_t^m) / 2 (1)

where P_l, P_r, P_f, P_t are the final prediction probabilities of left-hand, right-hand, feet and tongue motor imagery, P_l^o, P_r^o, P_f^o, P_t^o are the corresponding source-model output probabilities, and P_l^m, P_r^m, P_f^m, P_t^m are the corresponding mirror-model output probabilities; finally, the prediction label of the source electroencephalogram signal is the class with the largest final prediction probability:

I = arg max P_i, i ∈ {l, r, f, t} (2)

where I is the prediction label, the subscript i is the class label, and arg max returns the class label with the maximum probability.
CN202011519735.7A 2020-12-21 2020-12-21 Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm Active CN112633365B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011519735.7A CN112633365B (en) 2020-12-21 2020-12-21 Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm


Publications (2)

Publication Number Publication Date
CN112633365A CN112633365A (en) 2021-04-09
CN112633365B true CN112633365B (en) 2024-03-19

Family

ID=75320509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011519735.7A Active CN112633365B (en) 2020-12-21 2020-12-21 Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm

Country Status (1)

Country Link
CN (1) CN112633365B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017084416A1 (en) * 2015-11-17 2017-05-26 天津大学 Feedback system based on motor imagery brain-computer interface
CN109598222A (en) * 2018-11-26 2019-04-09 南开大学 Wavelet neural network Mental imagery brain electricity classification method based on the enhancing of EEMD data
CN109784211A (en) * 2018-12-26 2019-05-21 西安交通大学 A kind of Mental imagery Method of EEG signals classification based on deep learning
CN110069958A (en) * 2018-01-22 2019-07-30 北京航空航天大学 A kind of EEG signals method for quickly identifying of dense depth convolutional neural networks
CN110765920A (en) * 2019-10-18 2020-02-07 西安电子科技大学 Motor imagery classification method based on convolutional neural network
CN110916652A (en) * 2019-10-21 2020-03-27 昆明理工大学 Data acquisition device and method for controlling robot movement based on motor imagery through electroencephalogram and application of data acquisition device and method
KR102096565B1 (en) * 2018-11-08 2020-04-02 광운대학교 산학협력단 Analysis method of convolutional neural network based on Wavelet transform for identifying motor imagery brain waves
CN111062250A (en) * 2019-11-12 2020-04-24 西安理工大学 Multi-subject motor imagery electroencephalogram signal identification method based on depth feature learning
CN111265212A (en) * 2019-12-23 2020-06-12 北京无线电测量研究所 Motor imagery electroencephalogram signal classification method and closed-loop training test interaction system
CN111832416A (en) * 2020-06-16 2020-10-27 杭州电子科技大学 Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network
CN111832431A (en) * 2020-06-23 2020-10-27 杭州电子科技大学 Emotional electroencephalogram classification method based on CNN

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Multi-class motor imagery EEG signal classification based on CSP and convolutional neural network algorithms; Zeng Qingshan; Fan Mingli; Song Qingxiang; Science Technology and Engineering; 2017-09-28 (No. 27); full text *
Adaptive sample-weighted brain-computer interface modeling based on convolutional neural networks; Zou Yijun; Zhao Xingang; Xu Weiliang; Han Jianda; Information and Control; 2019-12-15 (No. 06); full text *
Motor imagery EEG recognition method based on neural network ensemble technology; Li Ming'ai; Wang Rui; Hao Dongmei; Journal of Beijing University of Technology (No. 03); full text *
EEG emotion recognition based on ensemble convolutional neural networks; Wei Chen et al.; Journal of East China University of Science and Technology (Natural Science Edition); full text *

Also Published As

Publication number Publication date
CN112633365A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
WO2021237918A1 (en) Gpdc graph convolutional neural network-based fatigue detection method, apparatus, and storage medium
Huang et al. S-EEGNet: Electroencephalogram signal classification based on a separable convolution neural network with bilinear interpolation
CN113128552B (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
CN114266276B (en) Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
WO2022052328A1 (en) Natural action electroencephalographic recognition method based on riemannian geometry
WO2020042511A1 (en) Motion potential brain-machine interface encoding and decoding method based on spatial filtering and template matching
CN113158793A (en) Multi-class motor imagery electroencephalogram signal identification method based on multi-feature fusion
CN111273767A (en) Hearing-aid brain computer interface system based on deep migration learning
CN114947883A (en) Time-frequency domain information fusion deep learning electroencephalogram noise reduction method
CN108470182B (en) Brain-computer interface method for enhancing and identifying asymmetric electroencephalogram characteristics
Vallabhaneni et al. Deep learning algorithms in eeg signal decoding application: a review
Sartipi et al. EEG emotion recognition via graph-based spatio-temporal attention neural networks
CN112633365B (en) Mirror convolution neural network model and motor imagery electroencephalogram recognition algorithm
Nagarajan et al. Investigation on robustness of EEG-based brain-computer interfaces
CN117407748A (en) Electroencephalogram emotion recognition method based on graph convolution and attention fusion
CN115414050A (en) EEG brain network maximum clique detection method and system for realizing emotion recognition
Mishra et al. A novel classification for EEG based four class motor imagery using kullback-leibler regularized Riemannian manifold
Wang et al. A personalized feature extraction and classification method for motor imagery recognition
CN113768474B (en) Anesthesia depth monitoring method and system based on graph convolution neural network
Sadatnejad et al. Riemannian channel selection for BCI with between-session non-stationarity reduction capabilities
Ren et al. Extracting and supplementing method for EEG signal in manufacturing workshop based on deep learning of time–frequency correlation
Aristimunha et al. Evaluating the structure of cognitive tasks with transfer learning
Zhu et al. Decoding Multi-Brain Motor Imagery From EEG Using Coupling Feature Extraction and Few-Shot Learning
Zhao et al. GTSception: a deep learning eeg emotion recognition model based on fusion of global, time domain and frequency domain feature extraction
CN117909868B (en) Electroencephalogram cognitive load analysis method and system based on neuroimaging priori dynamic graph convolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant