CN114533083B - Motor imagery state identification method based on multi-fusion convolutional neural network - Google Patents


Info

Publication number
CN114533083B
CN114533083B (application number CN202210079960.6A)
Authority
CN
China
Prior art keywords
electroencephalogram
matrix
data
time
frequency band
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210079960.6A
Other languages
Chinese (zh)
Other versions
CN114533083A (en
Inventor
李勇强
牛钦
朱威灵
傅向向
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Meilian Medical Technology Co ltd
Jiangsu Province Hospital First Affiliated Hospital With Nanjing Medical University
Original Assignee
Zhejiang Meilian Medical Technology Co ltd
Jiangsu Province Hospital First Affiliated Hospital With Nanjing Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Meilian Medical Technology Co ltd and Jiangsu Province Hospital First Affiliated Hospital With Nanjing Medical University
Priority to CN202210079960.6A
Publication of CN114533083A
Application granted
Publication of CN114533083B
Legal status: Active


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/377 Electroencephalography [EEG] using evoked responses
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing for noise prevention, reduction or removal
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems


Abstract

The invention relates to a motor imagery state identification method based on a multi-fusion convolutional neural network. The method comprises: acquiring a user's electroencephalogram (EEG) data and preprocessing it; extracting an EEG time-frequency matrix and a band-specific EEG time-domain matrix from the preprocessed data, and deriving from the latter an EEG time-domain energy matrix for the specific frequency bands; and inputting the EEG time-frequency matrix and the band-specific EEG time-domain energy matrix into a trained convolutional-neural-network-based motor imagery state identification model to identify the motor imagery state. The input of the model's fully connected layer consists of time-frequency-domain EEG features and band-specific time-domain EEG energy features: the former are extracted from the EEG time-frequency matrix by the convolution and pooling layers, and the latter are extracted from the band-specific EEG time-domain energy matrix by the common spatial pattern (CSP) method.

Description

Motor imagery state identification method based on multi-fusion convolutional neural network
Technical Field
The invention relates to a motor imagery state identification method based on a multi-fusion convolutional neural network, applicable to the field of brain-computer interaction.
Background
Motor imagery brain-computer interaction is a technique for helping patients with limb movement disorders perform rehabilitation training. Its main rehabilitation principle is to use brain-computer interface equipment to capture the electroencephalogram (EEG) features produced by motor imagery, namely changes in the Mu rhythm (8-13 Hz) and beta rhythm (18-24 Hz) of the motor cortex. For example, when a person imagines moving the left or right limb, these two rhythms increase on the ipsilateral side and decrease on the contralateral side, whereas without imagery neither side changes appreciably. By capturing the EEG rhythm features present during motor imagery, feedback on motor perception can be formed and damaged motor neurons can be stimulated to build new neural circuits, improving the efficiency of motor-function rehabilitation training.
To improve the accuracy of motor state identification, researchers have proposed various schemes, including power spectrum analysis, the common spatial pattern (CSP) method, and the sample entropy method, but each has shortcomings.
Power spectrum analysis determines whether motor imagery has occurred by computing the power spectra of the Mu and beta rhythms over the sensorimotor cortex and comparing them with a threshold. The method is easy to implement, but setting a reliable threshold requires a large amount of prior data; it is inflexible and handles ambiguous data poorly. The CSP method is widely used for single-trial two-class motor imagery: by diagonalizing covariance matrices, it finds a set of optimal spatial filters whose projections maximize the variance difference between the two signal classes, yielding highly discriminative feature vectors. It is demanding on leads, however — accuracy improves with the number of leads — and it is easily disturbed by noise. The sample entropy method is numerically stable and computationally light, but is better suited to small-sample data.
Disclosure of Invention
The technical problem the invention aims to solve is: in view of the above problems, to provide a motor imagery state identification method based on a multi-fusion convolutional neural network.
The technical scheme adopted by the invention is as follows: a motor imagery state recognition method based on a multi-fusion convolutional neural network is characterized by comprising the following steps of:
acquiring brain electrical data of a user, and preprocessing the data;
extracting an electroencephalogram time-frequency matrix and a specific frequency band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix under a specific frequency band based on the specific frequency band electroencephalogram time-domain matrix;
inputting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain energy matrix under a specific frequency band into a trained motor imagery state identification model based on a convolutional neural network, and identifying a motor imagery state;
the input of the full-connection layer in the motor imagery state recognition model consists of time-frequency domain electroencephalogram characteristics and electroencephalogram time domain energy characteristics under a specific frequency band, wherein the time-frequency domain electroencephalogram characteristics are extracted by the electroencephalogram time-frequency matrix through a convolution layer and a pooling layer, and the electroencephalogram time domain energy characteristics under the specific frequency band are extracted by the electroencephalogram time domain energy matrix under the specific frequency band through a co-space mode.
The data preprocessing comprises the following steps:
and (3) performing low-pass filtering of 0-30Hz on the electroencephalogram data, removing interference, and then performing time domain data enhancement.
The method for extracting the specific frequency band electroencephalogram time domain matrix from the data preprocessed electroencephalogram data, and determining the electroencephalogram time domain energy matrix under the specific frequency band based on the specific frequency band electroencephalogram time domain matrix comprises the following steps:
obtaining filtering data of the electroencephalogram data subjected to data preprocessing under Mu [8Hz-13Hz ] and beta [18Hz-24Hz ] through a band-pass filter, and obtaining 2 electroencephalogram time domain matrixes;
obtaining the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrixes to obtain 2 electroencephalogram time-domain energy matrixes, and then concatenating the 2 energy matrixes row-wise to obtain one electroencephalogram time-domain energy matrix;
the row indices in the electroencephalogram time-domain energy matrix represent channels at different frequencies and the column indices represent time points, so each value in the matrix represents the instantaneous energy of a single channel's data in a certain frequency band at a certain time.
The electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrixes is calculated with the following formula:
where X_ij, the value in row i and column j of the electroencephalogram time-domain matrix, represents the electroencephalogram data of a single channel at a certain time in a certain frequency band.
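The formula itself is not reproduced in this text. A reconstruction consistent with the surrounding description — assuming the standard definition of instantaneous time-domain energy as the squared signal amplitude — would read:

```latex
E_{ij} = X_{ij}^{2} \qquad (1)
```

where $E_{ij}$ is the entry of the electroencephalogram time-domain energy matrix and $X_{ij}$ is the corresponding entry of the electroencephalogram time-domain matrix.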
The extraction of the electroencephalogram time-frequency matrix from the electroencephalogram data subjected to data preprocessing comprises the following steps:
and (3) obtaining a time-frequency matrix of each channel, wherein a row number in each matrix represents a frequency point, a column number represents a time point, the time-frequency matrix represents an energy value of each channel at each frequency and each time under a single channel, and the time-frequency matrix of each channel is formed into an n-dimensional matrix.
A motor imagery state recognition device based on a multi-fusion convolutional neural network is characterized in that:
the data acquisition and preprocessing module is used for acquiring the brain electrical data of the user and preprocessing the data;
the parameter extraction module is used for extracting an electroencephalogram time-frequency matrix and a specific frequency band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix under a specific frequency band based on the specific frequency band electroencephalogram time-domain matrix;
the model identification module is used for inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under a specific frequency band into a trained motor imagery state identification model based on a convolutional neural network, and identifying a motor imagery state;
the input of the full-connection layer in the motor imagery state recognition model consists of time-frequency domain electroencephalogram characteristics and electroencephalogram time domain energy characteristics under a specific frequency band, wherein the time-frequency domain electroencephalogram characteristics are extracted by the electroencephalogram time-frequency matrix through a convolution layer and a pooling layer, and the electroencephalogram time domain energy characteristics under the specific frequency band are extracted by the electroencephalogram time domain energy matrix under the specific frequency band through a co-space mode.
A storage medium having stored thereon a computer program executable by a processor, characterized in that: when executed, the computer program implements the steps of the above motor imagery state identification method based on the multi-fusion convolutional neural network.
A motor imagery state recognition apparatus, characterized by comprising:
the electroencephalogram acquisition device is used for acquiring electroencephalogram data of a user;
the data processing device has a memory and a processor, the memory stores a computer program executable by the processor, and when executed the computer program implements the steps of the above motor imagery state identification method based on the multi-fusion convolutional neural network.
The beneficial effects of the invention are as follows: the invention performs motor imagery state recognition with a recognition model whose fully connected layer takes as input both the time-frequency-domain EEG features extracted by the convolution layers and the band-specific time-domain EEG energy features output by the common spatial pattern (CSP) method. The time-frequency features describe how the energy of all frequency bands varies over time during motor imagery, while the CSP features emphasize the recognized discriminative frequency bands. When these two sources of information are combined to train the classification layer, the frequency-domain information referenced is more comprehensive, the features that need attention are highlighted, and the classification result is less affected by noise from other frequency bands, thereby improving classification accuracy.
Drawings
Fig. 1 is a schematic diagram of a motor imagery state recognition model in an embodiment.
Fig. 2 is a schematic diagram of model training data acquisition in an embodiment.
FIG. 3 is a schematic training flow diagram of an embodiment.
Detailed Description
The embodiment is a motor imagery state identification method based on a multi-fusion convolutional neural network, which specifically comprises the following steps:
s1, acquiring brain electrical data of a user, and preprocessing the data.
Acquire electroencephalogram data of n leads and m1 points; the EEG matrix obtained after acquisition has shape n*m1, and the sampling rate is 250 Hz.
Perform one round of 0 Hz-30 Hz low-pass filtering on the raw EEG data to remove interference such as power-line noise and high-frequency EMG; then perform time-domain data enhancement: slide a time window of length m2 across the data at a fixed step, select suitable windows for superposition averaging, and store the result as filtered data of shape n*m2.
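The sliding-window enhancement can be sketched as follows. The grouping-and-averaging scheme is an assumption about the "superposition averaging" step, since the patent fixes neither the step size nor the number of windows averaged; overlapping windows multiply the number of segments, and averaging aligned windows suppresses uncorrelated noise:

```python
import numpy as np

def sliding_windows(eeg, m2, step):
    """Cut an (n, m1) recording into overlapping (n, m2) segments."""
    n, m1 = eeg.shape
    return np.stack([eeg[:, s:s + m2] for s in range(0, m1 - m2 + 1, step)])

def superposition_average(windows, k):
    """Average each group of k consecutive windows (illustrative grouping)."""
    usable = (len(windows) // k) * k
    return windows[:usable].reshape(-1, k, *windows.shape[1:]).mean(axis=1)
```

With m2 = 4 and step = 2, a 10-point recording yields four overlapping windows, and pairwise averaging returns two denoised n*m2 segments.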
S2, extracting an electroencephalogram time-frequency matrix and a specific frequency band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix under the specific frequency band based on the specific frequency band electroencephalogram time-domain matrix.
1. Electroencephalogram time domain energy matrix under specific frequency band
a. Design two rounds of band-pass filtering to obtain the filtered data in the specific bands Mu [8 Hz-13 Hz] and beta [18 Hz-24 Hz], finally yielding 2 EEG time-domain matrices whose data format is still n*m2.
b. Calculate the time-domain EEG energy from the EEG time-domain matrices with the following formula, obtaining 2 EEG time-domain energy matrices of shape n*m2;
then concatenate the 2 matrices row-wise to obtain a 2n*m2 EEG time-domain energy matrix, in which row indices represent channels at different frequencies and column indices represent time points, so each value in the matrix represents the instantaneous energy of a single channel at a single time in one of the frequency bands.
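Steps a and b can be sketched together. The FFT-mask band-pass stands in for the patent's unspecified filter design, and the squared-amplitude energy is an assumption consistent with formula (1):

```python
import numpy as np

def bandpass_fft(eeg, fs, lo, hi):
    """Keep only [lo, hi] Hz via an FFT mask (illustrative, not the patent's filter)."""
    spec = np.fft.rfft(eeg, axis=-1)
    f = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    spec[:, (f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(spec, n=eeg.shape[-1], axis=-1)

def band_energy_matrix(eeg, fs=250):
    """2n x m2 energy matrix: Mu (8-13 Hz) rows stacked on beta (18-24 Hz) rows.

    Squared amplitude is used as instantaneous energy (assumption per formula (1)).
    """
    mu = bandpass_fft(eeg, fs, 8, 13) ** 2
    beta = bandpass_fft(eeg, fs, 18, 24) ** 2
    return np.vstack([mu, beta])
```

A pure 10 Hz test signal should place essentially all of its energy in the Mu rows and none in the beta rows.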
2. Electroencephalogram time-frequency matrix
To observe the energy distribution of the EEG channels at each frequency more closely, this embodiment computes the (0-30 Hz) time-frequency matrix of each channel, n matrices in total (one per EEG channel), each with the following format:
In this example the frequency interval is 2 Hz, i.e. the rows are (0-2 Hz, 2-4 Hz, ..., 28-30 Hz), 15 rows in total.
In the matrix above, the row index represents a frequency point and the column index represents a time point, so the matrix gives the energy value at each frequency and each time for a single channel; the time-frequency matrices of all channels are combined into an n-dimensional matrix.
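The per-channel time-frequency computation can be sketched with a windowed FFT. The window length, step, and spectral estimator are assumptions; the patent fixes only the 2 Hz bins and the 0-30 Hz range:

```python
import numpy as np

def time_frequency_matrix(eeg, fs=250, win=250, step=125, fmax=30, band_width=2):
    """Per-channel time-frequency energy, stacked into an (n, 15, T) array.

    eeg: (n_channels, n_samples). Each column of a channel's matrix is one
    window position; each row is a 2 Hz bin (0-2, 2-4, ..., 28-30 Hz),
    15 rows as in the embodiment.
    """
    n_ch, n_s = eeg.shape
    starts = range(0, n_s - win + 1, step)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    n_bands = fmax // band_width              # 15 bands for 0-30 Hz
    out = np.zeros((n_ch, n_bands, len(starts)))
    for ti, s in enumerate(starts):
        spec = np.abs(np.fft.rfft(eeg[:, s:s + win], axis=-1)) ** 2
        for b in range(n_bands):
            lo, hi = b * band_width, (b + 1) * band_width
            mask = (freqs >= lo) & (freqs < hi)
            out[:, b, ti] = spec[:, mask].sum(axis=-1)
    return out
```

For a 10 Hz test signal, the energy should concentrate in band index 5 (the 10-12 Hz row).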
S3, inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under the specific frequency band into a trained motor imagery state identification model based on the convolutional neural network, and identifying the motor imagery state.
As shown in fig. 1, the motor imagery state recognition model in this embodiment consists of a convolutional neural network, a fully connected layer and a Softmax classifier. The input to the convolutional neural network is the EEG time-frequency matrix (i.e. the time-frequency matrices of all channels combined into an n-dimensional matrix), from which it outputs time-frequency-domain EEG features to the fully connected layer. The band-specific time-domain EEG energy features are extracted from the band-specific EEG time-domain energy matrix by the common spatial pattern (CSP) method and are also input to the fully connected layer. The fully connected layer combines the two feature sets and feeds its output to the Softmax classifier, which performs the classification.
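A minimal numpy sketch of the fusion stage described above (layer sizes are illustrative; the patent does not specify them): the CNN feature vector and the CSP feature vector are concatenated, passed through one fully connected layer, and normalized with Softmax over the three classes used in training (left hand, right hand, rest).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fused_forward(cnn_feat, csp_feat, W, b):
    """Fully connected layer over the concatenated CNN + CSP features.

    W has shape (3, len(cnn_feat) + len(csp_feat)); returns class probabilities.
    """
    x = np.concatenate([cnn_feat, csp_feat])
    return softmax(W @ x + b)
```

With zero weights and biases the output is the uniform distribution over the three classes, which makes the normalization easy to check.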
The convolutional neural network in this embodiment is constructed as follows. Its purpose is to compress, layer by layer, a large input matrix into one small matrix that accurately describes it, which is then used to train the fully connected network for classification:
a. building convolution layers
If the input layer data format is n*n2*m2 and the single-layer convolution kernel format is n3*m3, let the kernel weights be w1...wn, the bias be b, and the kernel (activation) function be f; the output is computed according to formula (3):
output1 = f(w1·x1 + w2·x2 + ... + wn·xn + b) (3)
The convolution kernel slides with step length s1; each slide produces 1 result via formula (3), and the results form a matrix of format n4*m4 that is the output of the convolution layer. The relation between the convolution output format and the input data format, kernel format, step length and padding parameter p1 is given by formula (4):
n4*m4 = (((n2 + 2p1 - n3)/s1) + 1) * (((m2 + 2p1 - m3)/s1) + 1) (4)
b. Build pooling layer
The pooling layer downsamples the result output by the convolution layer according to formula (5), obtaining a sample of smaller dimension; if the pooling window format is n5*m5, the pooling output is
output2 = max(output_1, output_2, ..., output_(n5*m5)) (5)
Like the convolution layer, the pooling layer is computed by sliding, with step length s2 and padding parameter p2; the output dimension n6*m6 of the pooling layer is computed by formula (6):
n6*m6 = (((n4 + 2p2 - n5)/s2) + 1) * (((m4 + 2p2 - m5)/s2) + 1) (6)
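The output-size relations in formulas (4) and (6) can be checked with a small helper. This sketch uses the conventional sign in which padding p enlarges the effective input, i.e. (n + 2p - kernel)/stride + 1:

```python
def conv_out(n_in, kernel, stride, pad):
    """Output length along one axis for a sliding convolution or pooling window.

    Standard convention: ((n_in + 2*pad - kernel) // stride) + 1.
    """
    return (n_in + 2 * pad - kernel) // stride + 1
```

For example, a 28-wide input with a 5-wide kernel, stride 1 and no padding yields 24 outputs, while "same"-style padding of 1 with a 3-wide kernel and stride 1 preserves the input width.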
The feature vector output by the convolution and pooling layers is combined with the feature vector output by the common spatial pattern (CSP) method to train the fully connected layer; the model is then saved and used for classification.
The training of the motor imagery state recognition model in this embodiment includes the following steps:
A. Wear an 8-lead EEG cap and connect the acquisition box to collect data, using the earlobe electrodes A1 and A2 as references; select the three leads C3, C4 and Cz from the acquired data and write them into the data set.
B. At the start of training the screen is black for 10 s, during which the subject must close the eyes and stay calm. Left- and right-hand imagery then proceeds according to the cue sequence of fig. 2: a dot first appears on the screen for 1 s, signalling the subject to get ready; then a left arrow, a right arrow or a black screen appears for 6 s. On a left arrow the subject concentrates on imagining left-hand movement, on a right arrow on imagining right-hand movement, and on a black screen the subject does not imagine any movement.
C. A total of 2000 trials were collected (possibly over several days of experiments) and preprocessed, including outlier removal, 0.5-30 Hz filtering, data normalization, and data segmentation and labeling (left-hand imagery labeled 0, right-hand imagery labeled 1, no imagery labeled 2); 80% of the segmented trials were used as the training set and 20% as the test set.
D. Two kinds of features are extracted from each single-trial data: (1) the time-frequency matrix computed via FFT; (2) the time-domain energy matrix computed with formula (1) after filtering the trial into the Mu rhythm (8-13 Hz) and beta rhythm (18-24 Hz).
E. Feed the time-frequency matrix computed in step D into the convolution layers of the model, and extract a feature vector from the computed time-domain energy matrix with the common spatial pattern (CSP) algorithm.
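The CSP ("co-space mode") feature extraction in step E can be sketched as a textbook whitening-plus-eigendecomposition implementation, not necessarily the patent's exact variant; the shapes and the log-variance feature are illustrative:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial pattern filters from two classes of trials.

    trials_x: list of (channels, samples) arrays. Returns a
    (2*n_pairs, channels) projection matrix whose filters maximize the
    variance of one class while minimizing the other's.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))     # trace-normalized covariance
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whiten the composite covariance, then diagonalize class A in that space.
    evals, evecs = np.linalg.eigh(ca + cb)
    whiten = np.diag(evals ** -0.5) @ evecs.T
    d, u = np.linalg.eigh(whiten @ ca @ whiten.T)
    order = np.argsort(d)                     # ascending eigenvalues
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return u[:, picks].T @ whiten

def csp_features(trial, w):
    """Log-variance feature vector of a projected trial."""
    z = w @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

On synthetic two-class data where each class has elevated variance on a different channel, the resulting log-variance features separate the classes clearly.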
F. Feed the two feature sets computed in step E into the fully connected layer for training; adjust the neuron weights of the fully connected and convolution layers with the back-propagation algorithm, compute the output error, and save the model when the error falls below a threshold. During training the learning rate can be adjusted in steps of 0.0001, and the model performing best on the test set is selected. The training flow is shown in figure 3.
G. After the model is saved, each trial's data can be collected online, preprocessed as above, its features extracted, and the model invoked for feature recognition.
The embodiment also provides a motor imagery state recognition device based on the multi-fusion convolutional neural network, which comprises a data acquisition and preprocessing module, a parameter extraction module and a model recognition module.
The data acquisition and preprocessing module in this example is used for acquiring the brain electrical data of the user and preprocessing the data; the parameter extraction module is used for extracting an electroencephalogram time-frequency matrix and a specific frequency band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix under a specific frequency band based on the specific frequency band electroencephalogram time-domain matrix; the model identification module is used for inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix under the specific frequency band into a trained motor imagery state identification model based on the convolutional neural network, and identifying the motor imagery state.
In this embodiment, the input of the fully connected layer in the motor imagery state recognition model consists of time-frequency-domain electroencephalogram features and band-specific time-domain electroencephalogram energy features; the former are extracted from the electroencephalogram time-frequency matrix by the convolution and pooling layers, and the latter are extracted from the band-specific electroencephalogram time-domain energy matrix by the common spatial pattern (CSP) method.
The present embodiment also provides a storage medium having stored thereon a computer program executable by a processor, which when executed, implements the steps of a motor imagery state recognition method based on a multi-fusion convolutional neural network.
The embodiment also provides a motor imagery state identification device comprising an electroencephalogram acquisition apparatus and a data processing apparatus. The electroencephalogram acquisition apparatus collects the user's EEG data; the data processing apparatus has a memory and a processor, the memory stores a computer program executable by the processor, and when executed the computer program implements the steps of the motor imagery state recognition method based on the multi-fusion convolutional neural network.

Claims (6)

1. A motor imagery state recognition method based on a multi-fusion convolutional neural network is characterized by comprising the following steps of:
acquiring brain electrical data of a user, and preprocessing the data;
extracting an electroencephalogram time-frequency matrix and a specific frequency band electroencephalogram time-domain matrix from the electroencephalogram data subjected to data preprocessing, and determining an electroencephalogram time-domain energy matrix under a specific frequency band based on the specific frequency band electroencephalogram time-domain matrix;
inputting an electroencephalogram time-frequency matrix and an electroencephalogram time-domain energy matrix under a specific frequency band into a trained motor imagery state identification model based on a convolutional neural network, and identifying a motor imagery state;
the input of the full connection layer in the motor imagery state recognition model consists of time-frequency domain electroencephalogram characteristics and electroencephalogram time domain energy characteristics under a specific frequency band, wherein the time-frequency domain electroencephalogram characteristics are extracted by the electroencephalogram time-frequency matrix through a convolution layer and a pooling layer, and the electroencephalogram time domain energy characteristics under the specific frequency band are extracted by the electroencephalogram time domain energy matrix under the specific frequency band through a co-space mode;
the method for extracting the specific frequency band electroencephalogram time domain matrix from the data preprocessed electroencephalogram data, and determining the electroencephalogram time domain energy matrix under the specific frequency band based on the specific frequency band electroencephalogram time domain matrix comprises the following steps:
obtaining filtering data of the electroencephalogram data subjected to data preprocessing under Mu [8Hz-13Hz ] and beta [18Hz-24Hz ] through a band-pass filter, and obtaining 2 electroencephalogram time domain matrixes;
obtaining the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrixes to obtain 2 electroencephalogram time-domain energy matrixes, and then concatenating the 2 energy matrixes row-wise to obtain one electroencephalogram time-domain energy matrix;
the row indices in the electroencephalogram time-domain energy matrix represent channels at different frequencies and the column indices represent time points, so each value in the matrix represents the instantaneous energy of a single channel's data in a certain frequency band at a certain time;
the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrixes is calculated by the following formula:
E_ij = (X_ij)^2
wherein X_ij, the value in the i-th row and j-th column of the electroencephalogram time-domain matrix, is the electroencephalogram sample of single-channel data at a given time in a given frequency band.
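The band-pass filtering and energy computation in the steps above can be sketched as follows; the mu and beta bands come from the claim, while the sampling rate, Butterworth filter type, and filter order are assumptions not specified in the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_energy_matrix(eeg, fs=250.0):
    """Build the band-specific time-domain energy matrix described in the claim.

    eeg: (n_channels, n_samples) preprocessed EEG.
    Returns a (2 * n_channels, n_samples) matrix: mu-band rows stacked on top
    of beta-band rows; each entry is the instantaneous energy (squared
    amplitude) of one channel at one time point.
    """
    bands = [(8.0, 13.0), (18.0, 24.0)]  # mu and beta bands from the claim
    energy_blocks = []
    for lo, hi in bands:
        # 4th-order Butterworth band-pass (order is an assumption)
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)   # band-limited time-domain matrix
        energy_blocks.append(filtered ** 2)      # instantaneous energy E_ij = X_ij^2
    return np.vstack(energy_blocks)              # row-wise merge of the two matrices

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))             # 8 channels, 4 s at 250 Hz
E = band_energy_matrix(eeg)
print(E.shape)  # (16, 1000)
```

Row-wise stacking gives one matrix whose row index jointly encodes channel and frequency band, matching the claim's description.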
2. The motor imagery state recognition method based on a multi-fusion convolutional neural network of claim 1, wherein the data preprocessing comprises:
performing 0-30 Hz low-pass filtering on the electroencephalogram data to remove interference, and then performing time-domain data augmentation.
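A minimal sketch of this preprocessing step: the 0-30 Hz low-pass is from the claim, but the Butterworth filter order, the sampling rate, and the use of sliding-window cropping as the time-domain augmentation are assumptions (the claim does not specify the augmentation method):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250.0, cutoff=30.0):
    """0-30 Hz low-pass filtering to suppress high-frequency interference."""
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, eeg, axis=1)

def crop_augment(eeg, win=500, step=125):
    """Sliding-window cropping, one common form of time-domain augmentation.
    Window and step sizes are illustrative assumptions."""
    n = eeg.shape[1]
    return [eeg[:, s:s + win] for s in range(0, n - win + 1, step)]

rng = np.random.default_rng(1)
raw = rng.standard_normal((8, 1000))   # 8 channels, 4 s at 250 Hz
crops = crop_augment(preprocess(raw))
print(len(crops), crops[0].shape)  # 5 (8, 500)
```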
3. The motor imagery state recognition method based on a multi-fusion convolutional neural network according to claim 1, wherein the extracting an electroencephalogram time-frequency matrix from the data-preprocessed electroencephalogram data comprises:
obtaining a time-frequency matrix for each channel, wherein the row numbers in each matrix represent frequency points and the column numbers represent time points, so that each time-frequency matrix gives the energy value of a single channel at each frequency and each time; the per-channel time-frequency matrixes are then stacked to form an n-dimensional matrix.
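The per-channel time-frequency extraction can be sketched with a short-time Fourier transform, one common way to obtain such matrices; the patent does not name the transform, and the window length and sampling rate here are assumptions:

```python
import numpy as np
from scipy.signal import stft

def time_frequency_tensor(eeg, fs=250.0, nperseg=128):
    """Per-channel time-frequency matrices stacked into one tensor.

    eeg: (n_channels, n_samples).  Returns (n_channels, n_freqs, n_frames),
    where each channel's slice has frequency bins as rows and time frames as
    columns, holding the magnitude-squared STFT as the energy value.
    """
    f, t, Z = stft(eeg, fs=fs, nperseg=nperseg, axis=1)
    # Z has shape (n_channels, n_freqs, n_frames); take energy per bin.
    return np.abs(Z) ** 2

rng = np.random.default_rng(2)
eeg = rng.standard_normal((8, 1000))
tf = time_frequency_tensor(eeg)
print(tf.shape)
```

Stacking the channel matrices along the first axis yields the multi-channel tensor that the convolutional layers consume.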
4. A motor imagery state recognition device based on a multi-fusion convolutional neural network, characterized by comprising:
the data acquisition and preprocessing module is used for acquiring electroencephalogram data of a user and preprocessing the data;
the parameter extraction module is used for extracting an electroencephalogram time-frequency matrix and band-specific electroencephalogram time-domain matrixes from the preprocessed electroencephalogram data, and for determining the electroencephalogram time-domain energy matrix for the specific frequency bands based on those time-domain matrixes;
the model identification module is used for inputting the electroencephalogram time-frequency matrix and the electroencephalogram time-domain energy matrix for the specific frequency bands into a trained motor imagery state recognition model based on a convolutional neural network, and identifying the motor imagery state;
the input of the fully connected layer in the motor imagery state recognition model consists of time-frequency-domain electroencephalogram features and band-specific electroencephalogram time-domain energy features, wherein the time-frequency-domain electroencephalogram features are extracted from the electroencephalogram time-frequency matrix by convolution and pooling layers, and the band-specific electroencephalogram time-domain energy features are extracted from the electroencephalogram time-domain energy matrix by the common spatial pattern (CSP) algorithm;
extracting the band-specific electroencephalogram time-domain matrixes from the preprocessed electroencephalogram data and determining the electroencephalogram time-domain energy matrix for the specific frequency bands comprises the following steps:
passing the preprocessed electroencephalogram data through band-pass filters for the mu (8-13 Hz) and beta (18-24 Hz) bands to obtain 2 electroencephalogram time-domain matrixes;
computing the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrixes to obtain 2 electroencephalogram time-domain energy matrixes, and then concatenating the 2 electroencephalogram time-domain energy matrixes row-wise to obtain a single electroencephalogram time-domain energy matrix;
the row numbers in the electroencephalogram time-domain energy matrix represent channels in the different frequency bands, the column numbers represent time points, and each value in the matrix represents the instantaneous energy of single-channel data in a given frequency band at a given time;
the electroencephalogram time-domain energy corresponding to the 2 electroencephalogram time-domain matrixes is calculated by the following formula:
E_ij = (X_ij)^2
wherein X_ij, the value in the i-th row and j-th column of the electroencephalogram time-domain matrix, is the electroencephalogram sample of single-channel data at a given time in a given frequency band.
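The common spatial pattern step referenced by the model identification module can be illustrated with a textbook CSP computation; this is a generic sketch of the technique, not the patent's exact procedure, and the trial shapes and number of filter pairs are assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial pattern (CSP) filters via a generalized eigenproblem.

    trials_a, trials_b: lists of (n_channels, n_samples) matrices, one list
    per motor imagery class.  Returns (2 * n_pairs, n_channels) filters.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; eigenvalues returned in ascending order.
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    # Keep filters from both ends of the spectrum (most discriminative).
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

def csp_features(trial, W):
    """Log-variance features of the spatially filtered trial."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

rng = np.random.default_rng(3)
trials_a = [rng.standard_normal((16, 500)) for _ in range(20)]
trials_b = [1.5 * rng.standard_normal((16, 500)) for _ in range(20)]
W = csp_filters(trials_a, trials_b)
feat = csp_features(trials_a[0], W)
print(W.shape, feat.shape)  # (4, 16) (4,)
```

The resulting log-variance features would then be concatenated with the convolutional branch's output at the fully connected layer, as the claims describe.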
5. A storage medium having stored thereon a computer program executable by a processor, characterized in that the computer program, when executed, implements the steps of the motor imagery state recognition method based on a multi-fusion convolutional neural network according to any one of claims 1 to 3.
6. A motor imagery state recognition apparatus, characterized by comprising:
the electroencephalogram acquisition device is used for acquiring electroencephalogram data of a user;
a data processing device having a memory and a processor, the memory storing a computer program executable by the processor, the computer program when executed implementing the steps of the motor imagery state recognition method based on a multi-fusion convolutional neural network of any one of claims 1 to 3.
CN202210079960.6A 2022-01-24 2022-01-24 Motor imagery state identification method based on multi-fusion convolutional neural network Active CN114533083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210079960.6A CN114533083B (en) 2022-01-24 2022-01-24 Motor imagery state identification method based on multi-fusion convolutional neural network

Publications (2)

Publication Number Publication Date
CN114533083A CN114533083A (en) 2022-05-27
CN114533083B true CN114533083B (en) 2023-12-01

Family

ID=81672572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210079960.6A Active CN114533083B (en) 2022-01-24 2022-01-24 Motor imagery state identification method based on multi-fusion convolutional neural network

Country Status (1)

Country Link
CN (1) CN114533083B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9116835B1 (en) * 2014-09-29 2015-08-25 The United States Of America As Represented By The Secretary Of The Army Method and apparatus for estimating cerebral cortical source activations from electroencephalograms
CN105809124A (en) * 2016-03-06 2016-07-27 北京工业大学 DWT- and Parametric t-SNE-based characteristic extracting method of motor imagery EEG(Electroencephalogram) signals
CN106502410A (en) * 2016-10-27 2017-03-15 天津大学 Improve the transcranial electrical stimulation device of Mental imagery ability and method in brain-computer interface
CN110163180A (en) * 2019-05-29 2019-08-23 长春思帕德科技有限公司 Mental imagery eeg data classification method and system
CN110765920A (en) * 2019-10-18 2020-02-07 西安电子科技大学 Motor imagery classification method based on convolutional neural network
CN111110230A (en) * 2020-01-09 2020-05-08 燕山大学 Motor imagery electroencephalogram feature enhancement method and system
CN111950455A (en) * 2020-08-12 2020-11-17 重庆邮电大学 Motion imagery electroencephalogram characteristic identification method based on LFFCNN-GRU algorithm model
CN112528834A (en) * 2020-12-08 2021-03-19 杭州电子科技大学 Sub-band target alignment common space mode electroencephalogram signal cross-subject classification method
CN112741637A (en) * 2020-12-23 2021-05-04 杭州国辰迈联机器人科技有限公司 P300 electroencephalogram signal extraction method, cognitive rehabilitation training method and system
CN113011239A (en) * 2020-12-02 2021-06-22 杭州电子科技大学 Optimal narrow-band feature fusion-based motor imagery classification method
CN113558644A (en) * 2021-07-20 2021-10-29 陕西科技大学 Emotion classification method, medium and equipment for 3D matrix and multidimensional convolution network
CN113576495A (en) * 2021-07-19 2021-11-02 浙江迈联医疗科技有限公司 Motor imagery evaluation method combined with EEG data quality
CN113780134A (en) * 2021-08-31 2021-12-10 昆明理工大学 Motor imagery electroencephalogram decoding method based on ShuffleNet V2 network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL239191A0 (en) * 2015-06-03 2015-11-30 Amir B Geva Image classification system
CN110531861B (en) * 2019-09-06 2021-11-19 腾讯科技(深圳)有限公司 Method and device for processing motor imagery electroencephalogram signal and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Spatial-frequency feature learning and classification of motor imagery EEG based on deep convolution neural network; Miao M, et al.; Computational and Mathematical Methods in Medicine; full text *
Research on motor imagery EEG signal classification based on multi-feature fusion; Lu Zhenyu, et al.; Modern Computer; full text *

Also Published As

Publication number Publication date
CN114533083A (en) 2022-05-27

Similar Documents

Publication Publication Date Title
Lee et al. A convolution neural networks scheme for classification of motor imagery EEG based on wavelet time-frequency image
CN111012336B (en) Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
CN112120694B (en) Motor imagery electroencephalogram signal classification method based on neural network
CN109784242A (en) EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks
Patil et al. Feature extraction of EEG for emotion recognition using Hjorth features and higher order crossings
Bentlemsan et al. Random forest and filter bank common spatial patterns for EEG-based motor imagery classification
CN114266276B (en) Motor imagery electroencephalogram signal classification method based on channel attention and multi-scale time domain convolution
CN107260166A (en) A kind of electric artefact elimination method of practical online brain
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
CN113191225B (en) Emotion electroencephalogram recognition method and system based on graph attention network
CN110353672A (en) Eye artefact removal system and minimizing technology in a kind of EEG signals
CN112515685A (en) Multi-channel electroencephalogram signal channel selection method based on time-frequency co-fusion
CN115795346A (en) Classification and identification method of human electroencephalogram signals
CN115238796A (en) Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
CN113128384B (en) Brain-computer interface software key technical method of cerebral apoplexy rehabilitation system based on deep learning
Ghonchi et al. Spatio-temporal deep learning for EEG-fNIRS brain computer interface
CN113128353A (en) Emotion sensing method and system for natural human-computer interaction
CN114533083B (en) Motor imagery state identification method based on multi-fusion convolutional neural network
Abdulwahab et al. EEG motor-imagery BCI system based on maximum overlap discrete wavelet transform (MODWT) and cubic SVM
CN116522106A (en) Motor imagery electroencephalogram signal classification method based on transfer learning parallel multi-scale filter bank time domain convolution
CN115299960A (en) Electric signal decomposition method and electroencephalogram signal decomposition device based on short-time varying separate modal decomposition
CN115154828A (en) Brain function remodeling method, system and equipment based on brain-computer interface technology
CN111990992A (en) Electroencephalogram-based autonomous movement intention identification method and system
Sokhal et al. Classification of EEG signals using empirical mode decomposition and lifting wavelet transforms
CN114082169B (en) Disabled hand soft body rehabilitation robot motor imagery identification method based on electroencephalogram signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant