CN116720106A - Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field - Google Patents

Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field Download PDF

Info

Publication number
CN116720106A
CN116720106A CN202310677981.2A
Authority
CN
China
Prior art keywords
motor imagery
feature generator
self
training
electroencephalogram signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310677981.2A
Other languages
Chinese (zh)
Inventor
张泽金
孙曜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202310677981.2A priority Critical patent/CN116720106A/en
Publication of CN116720106A publication Critical patent/CN116720106A/en
Pending legal-status Critical Current

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015 Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/096 Transfer learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Medical Informatics (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Evolutionary Biology (AREA)
  • Psychology (AREA)
  • Physiology (AREA)
  • Fuzzy Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Human Computer Interaction (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a domain-adaptive motor imagery electroencephalogram (EEG) signal classification method based on transfer learning. First, distribution matching of the multiple substructures within each category is completed by a data distribution matching module, and classification features are extracted by a feature generator. Sample classification is then accomplished by two independent task-specific classifiers. Finally, the domain-adaptive motor imagery EEG classification model is constructed and trained with public motor imagery data sets. The method remedies the insufficient adaptation to specific data of domain-level and class-level distribution matching, prevents the feature generator from extracting non-discriminative features far from the support of the source-domain data, and improves the classification accuracy, speed, and generalization capability of the classifier.

Description

Domain-adaptive motor imagery electroencephalogram signal classification method based on transfer learning
Technical Field
The invention relates to the field of electroencephalogram (EEG) signal processing, and in particular to a motor imagery EEG signal classification method based on domain adaptation in transfer learning.
Background Art
In a motor imagery brain-computer interface, the user imagines moving a body part; imagining different body parts modulates different areas of the cerebral cortex and produces different EEG signals, which are collected by an EEG acquisition device. The recorded EEG signals are then decoded with a classification algorithm, and the classification results can be used to control external devices such as rehabilitation robots and hand orthoses. However, because EEG signals show large inter-subject differences, motor imagery brain-computer interfaces typically require a long calibration session for each new subject, and this lengthy calibration greatly reduces the practicality of the brain-computer interface system. Among the many signal processing and machine learning methods for reducing or eliminating calibration, domain adaptation from transfer learning is the most suitable: it can train a classification algorithm with high recognition accuracy, high recognition speed, and strong generalization capability, thereby enabling fast and accurate control of external devices.
Domain adaptation in transfer learning uses existing data, models, and structures to help achieve a learning target on the target data. It differs from traditional machine learning in that the source-domain training data and the target-domain data follow different distributions, their feature spaces are not necessarily identical, and the labels of the target-domain data are inaccessible during training. The core aim is to reduce the distribution gap between the two domains so that the target-domain data can be fully labeled.
Work on motor imagery EEG classification based on domain adaptation in transfer learning is still immature. Because of the large inter-subject differences in EEG signals, it is difficult to improve the accuracy, speed, and generalization capability of EEG classification algorithms, which hinders the fast and accurate control of external devices such as rehabilitation robots.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides a domain-adaptive motor imagery EEG signal classification method based on transfer learning, which mainly uses substructure distribution matching and the maximum classifier discrepancy technique to achieve EEG classification with high accuracy, high speed, and strong generalization capability. The method revolves around three main steps: matching the data distributions of the source and target domains, feature extraction, and signal classification. Distribution matching makes the source-domain and target-domain data distributions more consistent, which greatly helps the subsequent feature extraction and signal classification and is the precondition for transfer. In feature extraction, a feature generator reduces the dimensionality of the input data or recombines the original features for subsequent use. Signal classification applies a classifier to the extracted features, achieving the final goal of classifying the data samples.
First, the substructure distribution matching technique achieves better matching of the source-domain and target-domain data distributions: it focuses on the fine-grained latent distribution of the data, which overcomes the insufficient adaptation to specific data of domain-level and class-level distribution matching while avoiding the over-adaptation of sample-level distribution matching. Second, the maximum classifier discrepancy technique extracts data features that are more favorable for classification and trains classifiers with higher classification accuracy.
The domain-adaptive motor imagery EEG signal classification method based on transfer learning mainly comprises the following steps:
Step (1): acquire motor imagery EEG signals, normalize them, and label them to form the test data set; an acquired public motor imagery data set constitutes the training data set.
Step (2): complete the distribution matching of the multiple substructures within each category of the data set through the data distribution matching module. The specific contents are as follows:
First, the motor imagery EEG signals are clustered with a Gaussian mixture model to obtain the substructures of the source and target domains. The number of source-domain clusters is determined by the Bayesian information criterion; after clustering, each cluster can be regarded as one substructure. Once the substructures of both domains are obtained, the target-domain substructures are given equal weights, while the source-domain substructures are adaptively weighted by solving the transport plan corresponding to the optimal transmission cost, so that substructures close to the target domain receive larger weights. Each substructure can then be regarded as one sample, and the optimal transport method maps each source-domain substructure to its corresponding target-domain substructure, after which feature extraction and classification can proceed.
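As an illustrative sketch of this clustering step (not part of the patent; it assumes scikit-learn is available, and all function and variable names are ours), the number of substructures per class can be chosen by minimizing the Bayesian information criterion over candidate Gaussian mixture models:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_substructures(features, max_components=8, seed=0):
    """Cluster one class's EEG features into substructures.

    The number of Gaussian components (substructures) is chosen by
    minimizing the Bayesian information criterion, as in step (2).
    """
    best_gmm, best_bic = None, np.inf
    for k in range(1, max_components + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="full",
                              random_state=seed).fit(features)
        bic = gmm.bic(features)
        if bic < best_bic:
            best_gmm, best_bic = gmm, bic
    # Each fitted component is one substructure: a (mean, covariance) pair.
    return best_gmm

# Two well-separated synthetic clusters stand in for one class's trials.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-5, 1, (100, 4)), rng.normal(5, 1, (100, 4))])
gmm = fit_substructures(X)
```

Each fitted component's mean and covariance then serve as one substructure's expectation and distribution representations.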
Step (3): extract classification features through the feature generator. The specific contents are as follows:
The result of the substructure-to-substructure migration mapping of the previous step is input to the feature generator, which extracts features and outputs them to two independent task-specific classifiers. Because the feature generator takes the outputs of the two classifiers on the target samples into account, its optimization training generates target features near points supported by the source-domain data.
Step (4): complete sample classification through the two independent task-specific classifiers.
Step (5): the data distribution matching module, the feature generator, and the two independent classifiers form the motor imagery EEG classification model, which is trained with the training data set. The test data set is then input to the trained model, which outputs the motor imagery EEG classification results.
In the first stage of training, the classifiers and the feature generator are trained to correctly classify the source-domain samples. In the second stage, the feature generator is fixed and the two classifiers are trained to maximize their classification discrepancy. The third stage is the opposite of the second: the two classifiers are fixed, and the feature generator is optimized so that the newly extracted features give outputs as similar as possible on the two classifiers. The feature generator and the classifiers are thus optimized in adversarial alternation; the second and third stages are repeated until the loss of the motor imagery EEG classification model stabilizes, completing training and yielding a classification model with high classification accuracy, high speed, and strong generalization capability.
In the third stage, the confidence of each sample is expressed by the size of the difference between the outputs of the two independent classifiers. The confidences are sorted in descending order, and the samples corresponding to the lower half are selected to optimize the feature generator, so that target features are generated near points supported by the source-domain data and used to retrain the classifiers.
The advantages and beneficial effects of the invention are as follows:
the method provided by the above realizes better source domain and target domain data distribution matching, and the substructure distribution matching focuses on the fine-grained potential distribution of the data, so that the problem of insufficient self-adaption to specific type data in domain level and class level distribution matching can be solved, and the problem of excessive adaption to specific type data in sample level distribution matching can be solved. The feature generator and the classifier are optimized alternately in antagonism through the maximum classifier difference, so that the problem that the feature generator extracts the features which are far away from the source domain data support and are not discriminative is solved, and meanwhile, the classification accuracy, speed and generalization capability of the classifier are optimized.
Drawings
Fig. 1 is a flowchart of motor imagery electroencephalogram signal processing;
FIG. 2 is a diagram of a sub-structure based data distribution matching framework;
FIG. 3 is a diagram of the training process of the two classifiers;
FIG. 4 is a schematic diagram of the second stage of adversarial training;
FIG. 5 is a schematic diagram of the third stage of adversarial training.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The domain-adaptive motor imagery EEG signal classification method based on transfer learning comprises the following specific steps:
step (1), acquiring motor imagery electroencephalogram signals, normalizing, and labeling to form a test data set; the acquired motor imagery public data set constitutes a training data set.
As shown in Fig. 1, the motor imagery EEG processing pipeline comprises EEG signal acquisition, signal filtering to obtain the source-domain and target-domain data, data distribution matching, feature extraction, and classification. The model consists of a data distribution matching module, a feature generator, and classifiers, organized around the three processing steps of distribution matching, feature extraction, and classification. First, the data distribution matching module matches the distributions of the substructures within each category; then the feature generator extracts features; finally, two independent task-specific classifiers are introduced, the confidence of each sample is represented by the discrepancy between their outputs, and the maximum classifier discrepancy technique completes the alternating optimization of the feature generator and the classifiers.
Step (2): complete the distribution matching of the multiple substructures within each category of the data set through the data distribution matching module. The specific contents are as follows:
as shown in fig. 2, the substructures of the source domain and the target domain are obtained firstly based on a gaussian mixture model, wherein the number of clusters of the source domain is determined according to a bayesian information criterion, after clustering, each cluster can be regarded as a substructure, each substructure has two representation modes of expected representation and distributed representation, and the transmission cost of the substructures corresponding to the two representation modes is obtained by the following formula:
where c (·, ·) represents the required substructure transmission cost,representing the i, j sub-structure centers of the source domain and the target domain, respectively, < >>Representation l 2 Square of norm>Representing a gaussian distribution->Covariance of i, j sub-structures of source domain and target domain, respectively, +.>Representing the square of the wasperstein distance, I.I 2 Representation l 1 Square of norm, B (·, ·) 2 Representing the square of the Bures matrix, tr (. Cndot.) represents the trace of the matrix,. Cndot., 1/2 the square root is represented.
After the substructures of the source and target domains are obtained, the source-domain substructures must be weighted, substructures close to the target domain receiving larger weights, while the target-domain substructures are given equal weights. The transport plan corresponding to the optimal transmission cost is solved by the following formula:
π_0 = argmin_π ⟨π, C⟩_F + λ_1 H(π)
s.t. π^T 1_k = W_t, π_ij ≥ 0,
where π_0 denotes the transport plan corresponding to the optimal transmission cost; C the cost matrix composed of the substructure transmission costs; π a coupling matrix of the probability density functions of the source and target domains; ⟨π, C⟩_F the Frobenius dot product of π and C, which is minimized; λ_1 a hyper-parameter balancing computation speed and accuracy; (·)^T the matrix transpose; 1_k a unit (all-ones) vector with dimension equal to the number k of source-domain categories; W_t the target-domain substructure weight vector; H(π) the entropy term; and π_ij the coupling of the probability density functions of the i-th source-domain and j-th target-domain substructures.
The weights of the corresponding source-domain substructures can then be computed by the following formula, realizing the adaptive weighting of the source-domain substructures:
W_s = π_0 1_{k'},
where W_s denotes the source-domain substructure weight vector; π_0 the transport plan corresponding to the optimal transmission cost; and 1_{k'} a unit (all-ones) vector with dimension equal to the number of target-domain categories.
Each substructure can now be regarded as one sample, and the optimal transport method performs the migration mapping between the source-domain substructures and the corresponding target-domain substructures, i.e., solves:
π* = argmin_π ⟨π, C⟩_F + λ_1 H(π) + η Ω(π)
s.t. π 1 = W_s, π_ij ≥ 0,
Ω(π) = Σ_j Σ_cl ||π(I_cl, j)||_2,
where π* denotes the migration mapping between the source-domain and target-domain substructures; C the cost matrix composed of the transmission costs; π a coupling matrix of the probability density functions of the source and target domains; ⟨π, C⟩_F the Frobenius dot product of π and C, which is minimized; λ_1 a hyper-parameter balancing computation speed and accuracy; H(π) the entropy term; η a hyper-parameter; Ω(π) the group-sparse regularizer; 1 a unit (all-ones) vector with dimension equal to the number k of source-domain categories; W_s the source-domain substructure weight vector; ||·||_2 the ℓ2 norm; I_cl the row indices in π belonging to class cl; and π(I_cl, j) the vector containing the j-th-column coefficients of π associated with class cl.
Step (3): extract classification features through the feature generator. The specific contents are as follows:
The feature generator extracts features from the input data and outputs them to two independent task-specific classifiers; taking the two classifiers' outputs on the target samples into account, the feature generator is optimally trained to generate target features near points supported by the source-domain data.
Step (4): complete sample classification through the two independent task-specific classifiers. The specific contents are as follows:
as shown in fig. 3, the target domain samples are first classified by two separate task-specific classifiers, with the shaded portion being the divergence of the two classifiers. The two classifiers are then trained to maximize the shadow area, with the goal of optimizing the feature generator using the maximum classifier difference to allow it to re-train the two classifiers based on the better features extracted by the different classifiers, ultimately minimizing the divergence of the two classifiers.
Step (5): the data distribution matching module, the feature generator, and the two independent classifiers form the motor imagery EEG classification model, which is trained with the public motor imagery data sets BCI Competition IV Dataset 2a and BCI Competition IV Dataset 2b.
The test data set is then input to the trained motor imagery EEG classification model, which outputs the motor imagery EEG classification results.
In the first stage of training, the classifiers and the feature generator are first trained to correctly classify the source-domain samples. This step is crucial for the classifiers and the generator to obtain task-specific discriminative features. The network is trained to minimize the softmax cross-entropy:
min_{G, F_1, F_2} L(X_s, Y_s),
L(X_s, Y_s) = −E_{(x_s, y_s)~(X_s, Y_s)} Σ_k 1_{[k = y_s]} log p(y = k | x_s),
where L(·,·) denotes the softmax cross-entropy loss function; x_s, y_s a source-domain sample and its corresponding label; X_s, Y_s the sets of source-domain samples and corresponding labels; E_{(x_s, y_s)~(X_s, Y_s)} the mathematical expectation over the source-domain samples; 1_{[k = y_s]} the one-hot indicator vector of the source-domain label; and p(y | x_s) the probability output of the classifier for the input source-domain sample.
In the second stage, shown in Fig. 4, the feature generator G is fixed and the classifiers F_1 and F_2 are trained to maximize their discrepancy, which is measured by an L1 loss. The training objective is:
min_{F_1, F_2} L(X_s, Y_s) − L_adv(X_t),
L_adv(X_t) = E_{x_t~X_t}[d(p_1(y | x_t), p_2(y | x_t))],
where F_1, F_2 denote the classifiers minimizing the objective; x_t a target-domain sample; X_t the set of target-domain samples; L(·,·) the softmax cross-entropy loss function; L_adv(X_t) the discrepancy of the two classifiers; E_{x_t~X_t} the mathematical expectation over the target-domain samples; d(·,·) the discrepancy loss, i.e., the absolute value of the difference between the probability outputs of the two classifiers; and p_1(y | x_t), p_2(y | x_t) the probability outputs of the two classifiers for the input target-domain sample.
In the third stage, shown in Fig. 5, the opposite of the second stage is performed: the two classifiers F_1 and F_2 are fixed, and the feature generator G is optimized so that the outputs of the two classifiers become as consistent as possible:
min_G L_adv(X_t),
where G denotes the feature generator minimizing the objective and L_adv(X_t) the classification discrepancy of the two classifiers on the target domain. The confidence of each sample is expressed by the size of the difference between the outputs of the two independent classifiers; the confidences are sorted in descending order, and the samples corresponding to the lower half are selected to optimize the feature generator.
The second and third stages are repeated until the model loss stabilizes, completing model training.
The two classifiers F_1 and F_2 and the feature generator G are thus optimized alternately, finally yielding a classification model with high classification accuracy, high speed, and strong generalization capability.

Claims (5)

1. A domain-adaptive motor imagery electroencephalogram signal classification method based on transfer learning, characterized by comprising the following steps:
step 1, acquiring motor imagery electroencephalogram signals, normalizing, and labeling to form a test data set;
acquiring a motor imagery public data set to form a training data set;
step 2, completing distribution matching of a plurality of substructures within each category of the data set through a data distribution matching module; regarding each substructure as one sample, and using an optimal transport algorithm to perform the migration mapping between the source-domain substructures and the corresponding target-domain substructures;
step 3, extracting classification features through a feature generator, inputting the migration mapping result in the previous step into the feature generator, extracting features by the feature generator, and outputting the features to two independent classifiers;
step 4, completing sample classification through two independent classifiers;
step 5, a data distribution matching module, a feature generator and two independent classifiers form a motor imagery electroencephalogram signal classification model, and a training data set is used for training the motor imagery electroencephalogram signal classification model;
inputting the test data set into a trained motor imagery electroencephalogram signal classification model, and outputting a motor imagery electroencephalogram signal classification result.
2. The domain-adaptive motor imagery electroencephalogram signal classification method based on transfer learning according to claim 1, characterized in that the specific process of the distribution matching in step 2 is as follows:
determining the number of source domain clusters according to a Bayesian information criterion, wherein after clustering, each cluster is regarded as a substructure;
after the substructures of the source domain and the target domain are obtained, weighting is carried out on the substructures of the source domain, and equal weights are set on the substructures of the target domain.
3. The domain-adaptive motor imagery electroencephalogram signal classification method based on transfer learning according to claim 2, characterized in that the specific operation of weighting the source-domain substructures is: realizing adaptive weighting of the source-domain substructures by solving the transport plan corresponding to the optimal transmission cost.
4. The domain-adaptive motor imagery electroencephalogram signal classification method based on transfer learning according to claim 1 or 3, characterized in that the specific training process in step 5 is as follows:
in the first stage of training, the classifiers and the feature generator are trained to correctly classify the source-domain samples; in the second stage, the feature generator is fixed and the two classifiers are trained to maximize their classification discrepancy; the third stage is the opposite of the second: the two classifiers are fixed, and the feature generator is optimized so that the newly extracted features give outputs as similar as possible on the two classifiers;
and repeatedly executing the second and third stages until the loss of the motor imagery electroencephalogram signal classification model tends to be stable, and completing training.
5. The domain-adaptive motor imagery electroencephalogram signal classification method based on transfer learning according to claim 4, characterized in that the specific process of optimizing the feature generator in the third stage of step 5 is: the confidence of each sample is expressed by the size of the difference between the outputs of the two independent classifiers; the confidences are sorted in descending order, and the samples corresponding to the lower half are selected to optimize the feature generator.
CN202310677981.2A 2023-06-09 2023-06-09 Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field Pending CN116720106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310677981.2A CN116720106A (en) 2023-06-09 2023-06-09 Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310677981.2A CN116720106A (en) 2023-06-09 2023-06-09 Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field

Publications (1)

Publication Number Publication Date
CN116720106A true CN116720106A (en) 2023-09-08

Family

ID=87874591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310677981.2A Pending CN116720106A (en) 2023-06-09 2023-06-09 Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field

Country Status (1)

Country Link
CN (1) CN116720106A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118121215A (en) * 2024-05-08 2024-06-04 之江实验室 Method and device for identifying cross-library brain electrical fatigue based on EGRF model
CN118121215B (en) * 2024-05-08 2024-07-30 之江实验室 Method and device for identifying cross-library brain electrical fatigue based on EGRF model


Similar Documents

Publication Publication Date Title
CN112308158B (en) Multi-source field self-adaptive model and method based on partial feature alignment
CN108520780B (en) Medical data processing and system based on transfer learning
CN113378632B (en) Pseudo-label optimization-based unsupervised domain adaptive pedestrian re-identification method
CN105975931B (en) A kind of convolutional neural networks face identification method based on multiple dimensioned pond
CN106778832B (en) The semi-supervised Ensemble classifier method of high dimensional data based on multiple-objection optimization
CN1197025C (en) Enhancing knowledge discovery from multiple data sets using multiple support vector machines
CN114841257B (en) Small sample target detection method based on self-supervision comparison constraint
CN107145830A (en) Hyperspectral image classification method with depth belief network is strengthened based on spatial information
CN108764280B (en) Medical data processing method and system based on symptom vector
CN104966105A (en) Robust machine error retrieving method and system
CN110569982A (en) Active sampling method based on meta-learning
CN108009571A (en) A kind of semi-supervised data classification method of new direct-push and system
CN112232395B (en) Semi-supervised image classification method for generating countermeasure network based on joint training
Weber et al. Automated labeling of electron microscopy images using deep learning
CN107423697A (en) Activity recognition method based on non-linear fusion depth 3D convolution description
CN114329031A (en) Fine-grained bird image retrieval method based on graph neural network and deep hash
CN114329124A (en) Semi-supervised small sample classification method based on gradient re-optimization
CN114220164A (en) Gesture recognition method based on variational modal decomposition and support vector machine
CN104573728B (en) A kind of texture classifying method based on ExtremeLearningMachine
CN116611025B (en) Multi-mode feature fusion method for pulsar candidate signals
CN113139513A (en) Hyperspectral classification method for active learning of space spectrum based on super-pixel contour and improved PSO-ELM
CN116720106A (en) Self-adaptive motor imagery electroencephalogram signal classification method based on transfer learning field
CN115310491A (en) Class-imbalance magnetic resonance whole brain data classification method based on deep learning
CN115329821A (en) Ship noise identification method based on pairing coding network and comparison learning
Peng Research on Emotion Recognition Based on Deep Learning for Mental Health

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination