CN117574989A - Deep learning method of adaptive filtering in motor imagery classification


Info

Publication number
CN117574989A
CN117574989A
Authority
CN
China
Prior art keywords
convolution
module
adaptive filtering
features
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311493387.4A
Other languages
Chinese (zh)
Inventor
张浩宇
王成
王涵宇
黄宣竣
娄红羽
史晟乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Rhb Medical Tech Co ltd
Original Assignee
Shenzhen Rhb Medical Tech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Rhb Medical Tech Co ltd filed Critical Shenzhen Rhb Medical Tech Co ltd
Priority to CN202311493387.4A priority Critical patent/CN117574989A/en
Publication of CN117574989A publication Critical patent/CN117574989A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an adaptive filtering method for motor imagery classification, which comprises the following steps: (1) extracting time-series features and channel features through three layers of deep convolution and expanding the high-dimensional features; (2) intercepting different components of the raw electroencephalogram data with a sliding window; (3) automatically learning the filtering parameters through a frequency-adaptive module; (4) extracting time-domain features with a temporal convolution module and enlarging the receptive field by downsampling the features layer by layer.

Description

Deep learning method of adaptive filtering in motor imagery classification
Technical Field
The invention relates to the technical field of brain-computer interfaces, and in particular to a deep learning method, based on a high receptive field and frequency-domain analysis, for identifying left- and right-hand motor imagery intention from electroencephalogram signals.
Background
A brain-computer interface (BCI) is a technology that interprets brain activity and converts it into commands that can control external devices, with the potential to change the world and improve quality of life. Motor imagery (MI) based on electroencephalography (EEG) can assist stroke patients in rehabilitation training, provide guidance for diagnosis or treatment planning, and be used for wheelchair and prosthesis control; it has therefore been widely adopted in many BCI applications. However, current methods for EEG decoding have limitations that restrict the broader development of the BCI industry. In this context, efficient methods of interpreting EEG remain an open problem.
In recent years, deep learning (DL) methods have been widely used in the BCI field. A DL method can autonomously learn the latent high-dimensional features of EEG, so that no filtering or hand-crafted feature extraction of the EEG signal is needed, greatly simplifying the EEG processing pipeline. MI decoding methods based on convolutional neural networks (CNNs) effectively capture local temporal features within different EEG channels, and the introduction of the self-attention mechanism enlarges the receptive field of the network's feature extraction and captures more hidden long-term dependencies. However, to our knowledge, most of the above models for MI classification perform feature extraction in the time domain, ignoring the critical role of EEG frequency-domain features in MI classification.
The frequency domain has long been a focus of attention for time signals. On the one hand, frequency-domain analysis provides filtering and denoising in signal processing: EEG is affected by noise during acquisition, so traditional EEG processing methods usually filter out the roughly 50 Hz mains interference and low-frequency noise. However, a hand-designed filter can discard useful information along with the noise and redundancy, preventing a DL method from extracting high-dimensional features that are difficult for the human eye to identify. On the other hand, in MI experiments, as the subject begins to imagine a movement, the EEG exhibits a step-like change: the μ rhythm (8-12 Hz) is suppressed, i.e. the power spectral density in that frequency range decreases, whereas resting-state EEG tends to a steady state with a relatively stable power spectral density, so identifying MI in the frequency domain has a unique advantage. Based on these two points, applying adaptive filter design to the DL method remedies the DL method's failure to filter out noise and redundancy, enhances its ability to capture EEG changes, and allows MI and the resting state to be better distinguished.
Disclosure of Invention
In view of this, the present invention proposes an adaptively filtered MI classification algorithm framework that simulates traditional frequency-domain filtering: signal embedding is performed through a multi-dimensional convolution module to fully decode the EEG signal; frequency-domain feature screening is performed through the adaptive filtering module; high-receptive-field temporal features are obtained through a temporal convolution module; and the classification task is completed through a feature-fusion classifier that distinguishes MI from the resting state. The deep learning method is realized by the following technical scheme:
1) A multi-dimensional convolution embedding module uses three different convolution layers to extract the input time-series features, integrate the features across different channels, and expand the high-dimensional features;
2) Removing excessive redundant features by using an average pooling layer;
3) Different parts of the signal are intercepted through a sliding window; within the sliding windows, data expansion is performed and the windowed electroencephalogram data are decoded separately to obtain features, enhancing the generality of decoding;
4) Screening the effective frequency through an adaptive filtering module;
5) Decoding the time series from the high receptive field using a time convolution module;
6) Integrating the characteristics under different sliding windows to jointly complete the prediction of the classification result;
7) A loss function is calculated from the prediction result and the classification label, back-propagation through the network is completed based on gradient descent, and the parameters in each module are updated; the updated network parameters are then tested on the validation set, and if the test accuracy is better than that of the previous iteration, the model parameters are saved;
8) Repeating the steps 1) to 7), and after 50 times of iteration, the loss function tends to converge and the optimal model parameters for MI identification are obtained.
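Steps 1) to 8) together form a standard gradient-descent training loop. As a toy illustration of how filtering parameters can be learned by the back-propagation of step 7), the following NumPy sketch fits a frequency-domain filter H by gradient descent on a synthetic signal; the signal, the hidden target filter, the learning rate, and the batch of 100 windows are all stand-ins for illustration, not values prescribed by the invention:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 32
x = rng.standard_normal((N, T))                      # toy stand-in for EEG windows
h_true = np.exp(-np.arange(T) / 4.0)                 # hidden target impulse response
H_true = np.fft.rfft(h_true)
y = np.fft.irfft(H_true * np.fft.rfft(x, axis=1), n=T, axis=1)  # target outputs

H = np.zeros(T // 2 + 1, dtype=complex)  # learnable frequency-domain filter
lr = 0.02
losses = []
for _ in range(50):                      # 50 iterations, as in step 8)
    X = np.fft.rfft(x, axis=1)
    pred = np.fft.irfft(H * X, n=T, axis=1)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # MSE gradient w.r.t. H, derived by hand for this toy problem; the
    # invention instead relies on back-propagation through the whole network
    grad = np.mean(np.conj(X) * np.fft.rfft(err, axis=1), axis=0)
    H -= lr * grad
assert losses[-1] < losses[0]            # training drives the loss down
```

Because each frequency bin decouples in this toy setting, the loss decays geometrically and has essentially converged well within the 50 iterations.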
Drawings
FIG. 1 is a block diagram of a method of deep learning of adaptive filtering in MI classification;
Detailed Description
The present invention will be further described with reference to specific examples, but the present invention is not limited to these examples.
The invention is intended to cover any alternatives, modifications, equivalents, and variations that fall within its spirit and scope. In the following description of preferred embodiments, specific details are set forth to provide a thorough understanding of the invention, but the invention can be fully understood by those skilled in the art even without some of these details.
As shown in fig. 1, the implementation of the adaptive filtering deep learning method in MI classification according to the present invention includes the following steps:
1) The MI automatic identification system is composed of a multi-dimensional convolution module, a sliding window, an adaptive filtering module, and a temporal convolution module. The EEG signal x_in input to the network has dimension (1, C_in, T_in), where C_in is the number of EEG channels and T_in is the number of time sampling points.
2) In the multi-dimensional convolution module, a two-dimensional convolution with F_1 kernels of size (1, T_extract) extracts time-domain high-receptive-field features spanning T_extract time sampling points; a two-dimensional depthwise convolution with kernel (C_in, 1) merges all the EEG channels into one, fusing the multi-channel features; a two-dimensional convolution with F_2 kernels of size (1, 1) expands the number of high-dimensional features to F_2; two average-pooling layers with kernels (1, T_fusion1) and (1, T_fusion2) reduce redundancy, and between the two pooling layers a two-dimensional convolution with F_2 kernels of size (1, 16) performs feature extraction. The output x_conv of the module has dimension (F_2, T_in/(T_fusion1 · T_fusion2)).
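The dimension bookkeeping of this module can be traced with a small helper. 'Same' padding on the time axis is assumed, so only the two pooling layers shrink the time dimension; the example values (22 channels, 1000 samples, F_2 = 16, pooling factors 5 and 5) are illustrative, not prescribed by the invention:

```python
def embed_output_shape(C_in, T_in, F2, T_fusion1, T_fusion2):
    """Output dimensions of the multi-dimensional convolution embedding
    module, assuming 'same' padding on the time axis so that only the two
    average-pooling layers shrink the time dimension."""
    # the depthwise (C_in, 1) convolution merges all C_in EEG channels into
    # one, and the (1, 1) pointwise convolution expands the features to F2;
    # each pooling layer then divides the time dimension by its kernel length
    T_out = T_in // (T_fusion1 * T_fusion2)
    return (F2, T_out)

# e.g. a 22-channel EEG trial of 1000 samples, F2 = 16, pooling 5 then 5
assert embed_output_shape(22, 1000, 16, 5, 5) == (16, 40)
```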
3) The sliding window is shifted n times along the time-sampling dimension to obtain n features [x_0, x_1, ..., x_{n-1}], each of dimension (F_2, T_in/(T_fusion1 · T_fusion2) − n). The subsequent decoding process is divided into n branches that perform feature extraction on the n windows separately. This module increases the amount of data to promote the robustness of network identification.
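A minimal sketch of this sliding-window split, assuming a stride of one time sample (the stride is not fixed by the invention, and the function name is illustrative):

```python
import numpy as np

def sliding_windows(x_conv, n):
    """Split features of shape (F2, T) into n windows of length T - n,
    each shifted by one time sample."""
    F2, T = x_conv.shape
    win_len = T - n
    return [x_conv[:, i:i + win_len] for i in range(n)]

x_conv = np.arange(2 * 10).reshape(2, 10).astype(float)  # (F2=2, T=10)
windows = sliding_windows(x_conv, n=3)
assert len(windows) == 3 and all(w.shape == (2, 7) for w in windows)
```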
4) The adaptive filtering module fits the filter parameters using the learning capability of the network; with a residual structure, the output y obtained from a given input x can be expressed as:
y=x+MLP(F(x)) (1)
where MLP(·) is a multi-layer perceptron and the frequency-domain filtering function F(·) has the structure of equation (2):
F(x) = ℱ⁻¹(H ⊙ ℱ(x))   (2)
where ℱ(·) is the Fourier transform, ℱ⁻¹(·) is the inverse Fourier transform, the filter H is a learnable matrix, and ⊙ is the Hadamard product. Multiplication in the frequency domain is equivalent to convolution in the time domain, so F(·) is equivalent to the time-domain convolution
F(x) = h * x   (3)
where * denotes convolution and h = ℱ⁻¹(H).
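The stated equivalence, multiplication by H in the frequency domain versus circular convolution with h = ℱ⁻¹(H) in the time domain, can be verified numerically; the filter below is a random stand-in for the learned H:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 64                          # number of time samples in one window
x = rng.standard_normal(T)      # stand-in for one EEG feature channel
H = rng.standard_normal(T) + 1j * rng.standard_normal(T)  # stand-in filter

# filtering in the frequency domain: F(x) = ifft(H ⊙ fft(x))
freq_filtered = np.fft.ifft(H * np.fft.fft(x))

# the same operation as circular convolution with h = ifft(H)
h = np.fft.ifft(H)
time_filtered = np.array(
    [sum(h[k] * x[(n - k) % T] for k in range(T)) for n in range(T)]
)

assert np.allclose(freq_filtered, time_filtered)
```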
5) The receptive field is enlarged using two temporal convolution modules; each module consists of two layers of dilated (hole) convolutions, and a residual structure is used to prevent vanishing gradients. At this stage, the receptive field of the dilated convolutions grows as the network deepens, so features of different temporal lengths are extracted and then integrated together.
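The growth of the receptive field with stacked dilated convolutions follows a simple formula: starting from a single sample, each layer adds (k − 1) · d further time steps for kernel size k and dilation d. A sketch, where kernel size 3 and doubling dilations are assumed for illustration only (the invention fixes no particular values):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked 1-D dilated convolutions: each layer
    adds (k - 1) * d time steps on top of the single starting sample."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# two temporal convolution modules of two dilated layers each, kernel 3,
# dilation doubling per layer: 1 + 2 + 4 + 8 + 16 = 31 time samples
assert receptive_field([3, 3, 3, 3], [1, 2, 4, 8]) == 31
```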
6) Averaging the features across all sliding windows yields all the features used for MI classification.
7) Finally, a fully connected classifier makes predictions from the extracted features and determines whether the subject is performing MI.
The foregoing is illustrative of the preferred embodiments of the present invention, and is not to be construed as limiting the claims. The present invention is not limited to the above embodiments, and the specific structure thereof is allowed to vary. In general, all changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims (10)

1. A deep learning method of adaptive filtering in motor imagery classification, characterized in that the method is realized through the following steps:
1) Using a multi-dimensional convolution embedding module, extracting the input time-series features with a two-dimensional time-domain convolution, integrating the features across different channels with a two-dimensional channel convolution, and finally expanding the high-dimensional features with a two-dimensional convolution whose kernel size is 1 × 1; removing excessive redundant features with an average pooling layer;
2) Intercepting different time periods from the electroencephalogram signals through a sliding window, thereby realizing data expansion; decoding the windowed electroencephalogram data separately to obtain features, enhancing the generality of decoding;
3) Screening the effective frequencies through the adaptive filtering module, in which a residual structure fits the filter parameters: the electroencephalogram of each window passes through a filtering function and then a multi-layer perceptron to obtain a residual part, which is superimposed on the original electroencephalogram to give the final output, and the parameters in the filtering function are adjusted automatically through continual iterative training of the network;
4) Decoding the time series from the high receptive field using a time convolution module; integrating the characteristics under different sliding windows to jointly complete the prediction of the classification result;
5) Calculating a loss function from the prediction result and the classification label, completing back-propagation through the network based on gradient descent, and updating the parameters in each module; testing the updated network parameters on the validation set and, if the test accuracy is better than that of the previous iteration, storing the model parameters; after 50 iterations the loss function converges and the optimal model parameters are obtained.
2. The method for deep learning of adaptive filtering in motor imagery classification according to claim 1, wherein the input of the multidimensional convolution embedding module is an electroencephalogram signal with fixed sampling points and fixed channel numbers.
3. The method for deep learning of adaptive filtering in motor imagery classification according to claim 1, wherein the input electroencephalogram signals need to be normalized and do not need to be filtered and denoised.
4. The sliding window operation of claim 1, wherein the sliding window cannot be longer than the time-sample length of the feature being processed.
5. The adaptive filtering module of claim 1, wherein the filter is sized to match the size of the currently processed electroencephalogram signal feature, and wherein the long-term dependence of the global field of view is obtained for global filtering.
6. The adaptive filtering module of claim 1, wherein the frequency domain conversion is implemented by a fast fourier transform.
7. The method according to claim 1, wherein the method is implemented by a PyTorch environment.
9. The time convolution according to claim 1 or 7, wherein said convolution is a dilated (hole) convolution, whereby said high receptive field is obtained without increasing the computational complexity.
9. The time convolution according to claim 1 or 7, wherein said convolution is a hole convolution, and said high receptive field is obtained without increasing the computational complexity.
10. The number of iterations of claim 1, wherein training is ended early when the loss function remains unchanged for 10 consecutive iterations.
CN202311493387.4A 2023-11-10 2023-11-10 Deep learning method of adaptive filtering in motor imagery classification Pending CN117574989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311493387.4A CN117574989A (en) 2023-11-10 2023-11-10 Deep learning method of adaptive filtering in motor imagery classification


Publications (1)

Publication Number Publication Date
CN117574989A true CN117574989A (en) 2024-02-20

Family

ID=89861643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311493387.4A Pending CN117574989A (en) 2023-11-10 2023-11-10 Deep learning method of adaptive filtering in motor imagery classification

Country Status (1)

Country Link
CN (1) CN117574989A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117972395A * 2024-03-22 2024-05-03 Tsinghua University Multi-channel data processing method and device, electronic equipment and storage medium
CN117972395B * 2024-03-22 2024-07-09 Tsinghua University Multi-channel data processing method and device, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination