CN109730818A - A prosthetic hand control method based on deep learning - Google Patents

A prosthetic hand control method based on deep learning

Info

Publication number
CN109730818A
CN109730818A (application CN201811563220.XA)
Authority
CN
China
Prior art keywords
frequency
EEG signals
motor imagery
time-frequency image
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811563220.XA
Other languages
Chinese (zh)
Inventor
徐宝国
张琳琳
宋爱国
何小杭
魏智唯
李文龙
张大林
李会军
曾洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN201811563220.XA
Publication of CN109730818A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a prosthetic hand control method based on deep learning, comprising: selecting hand action types; acquiring motor imagery EEG signals and dividing them into training samples and test samples; preprocessing the motor imagery EEG signals with low-pass filtering and Laplacian spatial filtering; extracting features from the training samples by wavelet transform to generate time-frequency images of the training samples; constructing a convolutional neural network model that takes the time-frequency images as input and the motor imagery action class as output, training and adjusting its parameters, and obtaining the trained model through multi-fold cross-validation; extracting features from the test samples by wavelet transform, generating their time-frequency images, and feeding them to the model to obtain and output the corresponding motor imagery action class, which serves as a control instruction for the prosthetic hand to complete the corresponding action. By selecting hand actions common in everyday life as the classification targets, the invention is closer to natural use and offers richer information utilization, higher stability, and higher accuracy.

Description

A prosthetic hand control method based on deep learning
Technical field
The present invention relates to a prosthetic hand control method based on deep learning, and belongs to the technical field of brain-computer device control.
Background technique
Brain-computer interface (BCI) technology takes the brain as the control center: a computer receives brain signals and, after processing and analysis, controls external devices to execute the corresponding control instructions. This communication mode, which does not depend on peripheral nerves and muscles, offers a new way of life for the physically disabled. A basic BCI system comprises four parts: signal acquisition, feature extraction, feature classification, and control-instruction execution. BCI technology has great research and application value in fields such as daily-life assistance for the physically disabled, rehabilitation training for limb-injured patients, entertainment, and smart homes.
Practical BCI applications depend on accurate and robust recognition of electroencephalogram (EEG) signals. Traditional EEG recognition comprises two stages: feature extraction and signal classification. Various signal-processing algorithms first extract time-domain, frequency-domain, and spatial features of the EEG signal; one feature, or a combination of several, is selected as the classifier input, after which only the classifier parameters are optimized to obtain the final classification model. Traditional feature-extraction algorithms include the wavelet transform (WT), autoregressive (AR) model parameter estimation, common spatial patterns (CSP), and their many improved variants. Classification methods mainly include linear discriminant analysis (LDA), support vector machines (SVM), and k-nearest neighbors (KNN). This traditional approach has several defects. First, it requires extensive prior knowledge to screen signal features manually, and manual screening cannot judge how strongly the classification result depends on any particular feature component, so the extracted feature information is incomplete. Second, feature extraction and classification are two loosely coupled, independent stages: the classifier's recognition rate depends entirely on the quality of the extracted features, and further information is lost when the classifier uses them. Moreover, since EEG is highly susceptible to noise, the classification accuracy of such classifiers is difficult to improve.
In recent years, deep learning, an important branch of machine learning, has been applied effectively and developed richly in computer vision, speech recognition, natural language processing, and other fields. A major advantage of deep learning is that the model can extract effective features automatically, and in recent years some researchers have applied it to EEG signal classification.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art, namely the difficulty of extracting features from motor imagery EEG signals and the low classification accuracy. The present invention provides a prosthetic hand control method based on deep learning that uses deep learning theory as a new tool for signal feature extraction and classification in a BCI system, realizing natural prosthetic hand control based on deep learning.
The present invention specifically adopts the following technical scheme to solve the above technical problem:
A prosthetic hand control method based on deep learning, comprising the following steps:
Step 1: select the hand actions;
Step 2: acquire motor imagery EEG signals and divide them into training samples and test samples;
Step 3: preprocess the acquired motor imagery EEG signals, including low-pass filtering and Laplacian spatial filtering;
Step 4: use the wavelet transform to extract features from the training samples of the preprocessed motor imagery EEG signals and generate the time-frequency images of the training samples;
Step 5: construct a convolutional neural network model that takes the time-frequency images of the training samples as input and the motor imagery action class as output; train and adjust the model parameters, and obtain the trained convolutional neural network model through multi-fold cross-validation;
Step 6: use the wavelet transform to extract features from the test samples of the preprocessed motor imagery EEG signals, generate the time-frequency images of the test samples, and input them to the trained convolutional neural network model to obtain and output the corresponding motor imagery action class; generate a control instruction from the output action class so that the prosthetic hand completes the corresponding action.
Further, as a preferred technical solution of the present invention: the hand actions selected in step 1 comprise grasping, pinching, and rotation.
Further, as a preferred technical solution of the present invention: in step 2, the motor imagery EEG signals are acquired with a multi-channel EEG acquisition device.
Further, as a preferred technical solution of the present invention, in step 3 the motor imagery EEG signals are spatially filtered with a Laplacian filter, using the formulas:

V_i^LAP = V_i^ER - Σ_{j∈S_i} g_ij · V_j^ER

g_ij = (1/d_ij) / Σ_{j∈S_i} (1/d_ij)

where V_i^LAP is the amplitude of the i-th channel after Laplacian spatial filtering; V_i^ER is the amplitude of the i-th channel; S_i is the set of neighborhood channels of the i-th channel; g_ij is an intermediate variable, the coefficient of V_j^ER, calculated by the second formula; and d_ij is the distance between the i-th and j-th channels, channel j belonging to the neighborhood S_i.
Further, as a preferred technical solution of the present invention, in step 4 the wavelet transform is used to extract features from the training samples of the preprocessed motor imagery EEG signals and generate the time-frequency images, specifically:
Step 4a: apply the wavelet transform to the training samples of the preprocessed motor imagery EEG signals, choosing a wavelet as the mother wavelet; decompose the channel of each training sample into several layers, each layer producing a series of wavelet coefficients, the coefficients of all layers forming a two-dimensional coefficient matrix;
Convert scale to frequency and project each value of the two-dimensional coefficient matrix onto an image, obtaining a time-frequency image with time as the horizontal axis and frequency as the vertical axis, in which the brightness of each block corresponds to the magnitude of the coefficient in the matrix;
Step 4b: select the three training samples of the motor imagery EEG signals whose positions lie over the sensorimotor cortex, process each corresponding channel with step 4a to generate its time-frequency image, and splice the resulting images in order from top to bottom, combining them into a single time-frequency image.
Further, as a preferred technical solution of the present invention: in step 4a the Morlet wavelet is chosen as the mother wavelet.
Further, as a preferred technical solution of the present invention: the convolutional neural network model constructed in step 5 comprises two convolutional layers, two pooling layers, and two fully connected layers.
Further, as a preferred technical solution of the present invention: in step 5, the convolutional neural network model parameters are trained and adjusted with the cross-entropy function as the optimization objective, the parameters being obtained by iterative updates.
By adopting the above technical scheme, the present invention achieves the following technical effects:
The prosthetic hand control method based on deep learning of the present invention adopts a new input form: the motor imagery EEG signals are processed by means of the wavelet transform to generate time-frequency images, and the images of the relevant channels are combined into the image finally used as the classifier input; the classification result is thereby obtained and converted into a control signal, and the prosthetic hand executes the corresponding action. The motor imagery actions of this method are chosen to be close to common hand actions in real life, making the process from motor imagery to prosthetic hand control more natural. Moreover, instead of the traditional combination of feature extraction plus machine learning, the convolutional neural network model of the present invention exploits the advantage of deep learning in learning features from high-dimensional signals, integrating feature extraction and signal classification into one stage. On the one hand, this avoids the incompleteness of manually selected features; on the other hand, it simplifies the computation process and effectively saves manpower.
Therefore, the present invention abandons the conventional approach of imagining the left or right hand or the feet to control the prosthetic hand, and instead selects common hand actions in daily life, such as grasping, pinching, and rotation, as the classification targets, so that input and output are consistent and closer to natural use. The present invention adopts a new input form, using the wavelet transform to generate time-frequency images of the motor imagery signals, while exploiting the powerful feature-extraction ability of the convolutional neural network; compared with conventional methods it makes fuller use of the information and achieves higher stability and higher accuracy.
Brief description of the drawings
Fig. 1 is a flow diagram of the prosthetic hand control method based on deep learning of the present invention.
Fig. 2 is a structural schematic diagram of the convolutional neural network model of the present invention.
Specific embodiment
Embodiments of the present invention are described below with reference to the accompanying drawings.
As shown in Fig. 1, the present invention provides a prosthetic hand control method based on deep learning, which specifically comprises the following steps:
Step 1: select the hand actions. The present invention specifically chooses three types of hand action, grasping, pinching, and rotation, which are common actions in real-life scenarios.
Traditional prosthetic hand control relies on imagining movements of the left and right hands, feet, and tongue, which creates an uncoordinated, unnatural mismatch between the imagined movement and the movement the prosthetic hand executes. The present invention therefore chooses three common actions in daily life, grasping, pinching, and rotation, as the classification targets, making the control closer to real living scenarios.
Step 2: acquire the motor imagery EEG signals of the different actions with a 22-channel EEG acquisition device, and divide them into training samples and test samples.
Step 3: preprocess the acquired motor imagery EEG signals, including low-pass filtering and Laplacian spatial filtering. The raw acquired EEG signals contain various artifacts and interference, such as electro-oculographic and electromyographic signals and noise introduced by the equipment and the external environment, giving the raw signal a low signal-to-noise ratio and little usable information. The signal-to-noise ratio is therefore improved by filtering, specifically:
Step 3a: low-pass filter the acquired motor imagery EEG signals to remove irrelevant components such as baseline drift;
Step 3b: select the channels of the acquired motor imagery EEG signals that lie over the sensorimotor cortex and apply a large Laplacian spatial filtering operation to each channel of interest to improve the signal-to-noise ratio.
The present invention uses an improved large Laplacian spatial filtering algorithm, calculated by formulas (1) and (2):

V_i^LAP = V_i^ER - Σ_{j∈S_i} g_ij · V_j^ER    (1)

g_ij = (1/d_ij) / Σ_{j∈S_i} (1/d_ij)    (2)

where V_i^LAP is the amplitude of the i-th channel after Laplacian spatial filtering; V_i^ER is the amplitude of the i-th channel; S_i is the set of neighborhood channels of the i-th channel; g_ij, the coefficient preceding V_j^ER, is an intermediate variable calculated by formula (2); and d_ij is the distance between the i-th and j-th channels, channel j belonging to the neighborhood S_i.
Step 4: use the wavelet transform to extract features from the training samples of the preprocessed motor imagery EEG signals and generate their time-frequency images. The present invention generates the input images of the convolutional neural network model by means of the wavelet transform, which offers multi-scale analysis and, unlike conventional methods, captures the behavior of the signal in the time and frequency domains simultaneously. In addition, using pictures as the classifier input makes full use of the feature-extraction ability of the convolutional neural network and improves the classification accuracy. The wavelet transform of the present invention proceeds as follows:
Step 4a: first, apply the wavelet transform to the training samples of the preprocessed motor imagery EEG signals; the present invention preferably chooses the Morlet wavelet as the mother wavelet. The channel of each training sample is decomposed into 64 layers, each layer producing a series of wavelet coefficients, and the coefficients of all 64 layers form a two-dimensional coefficient matrix.
This embodiment preferably selects the Morlet wavelet as the mother wavelet because its energy is more concentrated and its classification effect is better. In addition, the motor imagery signal sampling frequency is 128 Hz, so by the Nyquist theorem the highest frequency in the raw signal is 64 Hz; the wavelet decomposition therefore decomposes the signal into 64 layers.
Then convert scale to frequency, so that the rows of the coefficient matrix correspond to frequency and the columns to time. Projecting each value of the two-dimensional coefficient matrix onto an image yields a time-frequency image with time t as the horizontal axis and frequency f as the vertical axis, in which the brightness of each block corresponds to the magnitude of the coefficient in the matrix.
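Step 4a can be sketched in plain NumPy as follows. The wavelet normalization, the width parameter w, and the linear 1 to 64 Hz frequency grid (one frequency per decomposition layer) are assumptions of this sketch; the patent does not specify them:

```python
import numpy as np

def morlet_tf_image(signal, fs=128, n_freqs=64, w=5.0):
    """Illustrative Morlet CWT of one EEG channel.

    Decomposes the signal at n_freqs frequencies from 1 Hz to fs/2 Hz,
    one per layer, giving a 2-D coefficient matrix whose rows correspond
    to frequency and whose columns correspond to time, as in step 4a.
    """
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs           # wavelet support centred at 0
    freqs = np.linspace(1.0, fs / 2.0, n_freqs)
    coeffs = np.empty((n_freqs, n))
    for k, f in enumerate(freqs):
        sigma = w / (2 * np.pi * f)            # envelope width scales as 1/f
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.abs(wavelet).sum() + 1e-12)
        # magnitude of the convolution -> brightness of the image pixels
        coeffs[k] = np.abs(np.convolve(signal, wavelet, mode='same'))
    return freqs, coeffs
```

Fed a pure sinusoid, the row with the largest average magnitude lies at the sinusoid's frequency, which is exactly the time-frequency picture the classifier consumes.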
Step 4b: select the three training samples of the motor imagery EEG signals whose positions lie over the sensorimotor cortex, e.g. the three electrodes C3, Cz, and C4; process the signal of each channel with step 4a to generate the time-frequency image of each training sample's channel. To preserve the relative frequency variation and the relative positional relationship between channels, the present invention splices the three resulting time-frequency images from top to bottom in the order C4, Cz, C3, combining them into one time-frequency image that serves as the classifier input.
The present invention combines the wavelet time-frequency images of the training samples of the motor imagery EEG signals of the three relevant channels C3, Cz, and C4; the resulting image contains not only the time-frequency information of each of the three channels but also retains their relative spatial positions and the comparative information of the changes across channels.
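The splicing of step 4b is a vertical concatenation. A minimal sketch, assuming one (64, T) coefficient matrix per channel as produced by step 4a:

```python
import numpy as np

def combine_channels(tf_c4, tf_cz, tf_c3):
    """Stack the per-channel time-frequency matrices top to bottom in the
    fixed order C4, Cz, C3, preserving the channels' relative spatial
    positions in the combined classifier input image."""
    return np.vstack([tf_c4, tf_cz, tf_c3])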
Step 5: construct a convolutional neural network model that takes the time-frequency images of the training samples as input and the motor imagery action class as output; train and adjust the model parameters, and obtain the trained convolutional neural network model through multi-fold cross-validation, as follows:
Step 5a: design the number of network layers according to the number of training samples. Because EEG experiments are complicated and time-consuming, the training samples are not as plentiful as in general image-classification tasks, so the present invention constructs its own convolutional neural network structure.
Based on deep learning theory and to avoid overfitting on a small sample set, the present invention builds a neural network six layers deep, whose structure is shown in Fig. 2; it comprises two convolutional layers, two pooling layers, and two fully connected layers. Three convolution kernel sizes are used in the convolutional layers: 1*1, 3*3, and 5*5. In the first convolutional layer, the input time-frequency image is convolved with kernels of the three sizes, 8 kernels of each size, giving a 24-channel feature map; every neuron uses the ReLU function as its activation. The first pooling layer down-samples the feature map to reduce its size. The second convolutional layer has 16 kernels of each size, giving a 48-channel feature map, and the second pooling layer again down-samples. The first fully connected layer uses the ReLU activation function; the last fully connected layer is the output layer, which uses the softmax function to compute the probability that a sample belongs to each class and takes the class with the largest probability as the classification result.
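The tensor shapes flowing through the six-layer network above can be traced with simple arithmetic. This sketch assumes 'same' padding in the convolutions and 2x2 max pooling, neither of which the patent specifies:

```python
def cnn_shapes(h, w, kernel_sizes=(1, 3, 5), n1=8, n2=16, n_classes=3):
    """Trace the tensor shapes of the described network: three kernel
    sizes per conv layer, 8 kernels of each size in conv1 (24 maps),
    16 of each in conv2 (48 maps), then two fully connected layers."""
    shapes = [("input", (h, w, 1))]
    c1 = len(kernel_sizes) * n1          # 3 sizes x 8 kernels = 24 maps
    shapes.append(("conv1+relu", (h, w, c1)))
    h, w = h // 2, w // 2                # 2x2 max pooling halves each side
    shapes.append(("pool1", (h, w, c1)))
    c2 = len(kernel_sizes) * n2          # 3 sizes x 16 kernels = 48 maps
    shapes.append(("conv2+relu", (h, w, c2)))
    h, w = h // 2, w // 2
    shapes.append(("pool2", (h, w, c2)))
    shapes.append(("fc1+relu", (h * w * c2,)))
    shapes.append(("softmax", (n_classes,)))
    return shapes
```

For a 192x256 input (three stacked 64-row channel images), the network reaches a 48x64x48 tensor after the second pooling layer before the fully connected stage reduces it to three class probabilities.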
Step 5b: use 10-fold cross-validation to obtain a more stable model-evaluation index and to suit the actual situation of a small sample size. To obtain a stable evaluation index, cross-validation is used: the training samples are first divided by stratified sampling into 10 mutually exclusive parts; each time, 9 parts are used for training and the remaining part for testing, and the final classification accuracy of the model is the mean of the results of the 10 models, which serves as the model's evaluation index.
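The stratified split of step 5b can be sketched as follows; the round-robin assignment of each class's samples to folds is one simple way to realize stratified sampling, not necessarily the scheme used in the patent:

```python
from collections import defaultdict

def stratified_kfold(labels, k=10):
    """Assign each sample to one of k mutually exclusive folds so that
    every fold keeps roughly the original class proportions.
    Returns a fold index per sample."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    fold_of = [0] * len(labels)
    for indices in by_class.values():
        for pos, idx in enumerate(indices):
            fold_of[idx] = pos % k       # deal the class round-robin
    return fold_of

def folds(labels, k=10):
    """Yield (train_indices, test_indices) for each of the k folds."""
    fold_of = stratified_kfold(labels, k)
    for f in range(k):
        test = [i for i, fo in enumerate(fold_of) if fo == f]
        train = [i for i, fo in enumerate(fold_of) if fo != f]
        yield train, test
```

Averaging the classification accuracy over the 10 held-out folds then gives the stable evaluation index described above.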
Preferably, during the training of the convolutional neural network model, for the classification problem of the present invention the cross-entropy function is used as the optimization objective; the model parameters are trained and adjusted by iterative updates, with the Adam algorithm as the optimization method updating the parameters iteratively until a finally stable model is obtained.
Step 6: use the wavelet-transform method of step 4 to extract features from the test samples of the preprocessed motor imagery EEG signals, generate their time-frequency images, and input them to the convolutional neural network model trained in step 5 to obtain and output the corresponding motor imagery action class. That is, each test sample is converted into the corresponding picture input form and classified with the trained model; the output is one of the three motor imagery classes, whose classification signals are encoded as 00, 01, and 10 respectively, and the binary code is output. The output motor imagery action class 00, 01, or 10 then generates the corresponding control instruction, and the prosthetic hand completes the corresponding action.
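The final encoding step is a fixed lookup. A minimal sketch; note that the patent only states that the three classes are coded 00, 01, and 10, so the particular assignment of grasp, pinch, and rotate to those codes below is an assumption:

```python
# Assumed mapping from motor imagery class to 2-bit control code.
CLASS_TO_CODE = {"grasp": "00", "pinch": "01", "rotate": "10"}
CODE_TO_ACTION = {code: action for action, code in CLASS_TO_CODE.items()}

def to_instruction(predicted_class):
    """Convert the classifier's predicted class into the binary control
    code forwarded to the prosthetic hand controller."""
    return CLASS_TO_CODE[predicted_class]
```

The controller at the other end inverts the mapping (CODE_TO_ACTION) to execute the corresponding hand action.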
In summary, the present invention selects common hand actions in daily life, such as grasping, pinching, and rotation, as the classification targets, so that input and output are consistent and closer to natural use. The present invention adopts a new input form, using the wavelet transform to generate the time-frequency images of the motor imagery signals, while exploiting the powerful feature-extraction ability of the convolutional neural network; compared with conventional methods it makes fuller use of the information and achieves higher accuracy.
The embodiments of the present invention have been explained in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; various changes can be made within the knowledge of a person skilled in the art without departing from the purpose of the present invention.

Claims (8)

1. A prosthetic hand control method based on deep learning, characterized by comprising the following steps:
Step 1: select the hand actions;
Step 2: acquire motor imagery EEG signals and divide them into training samples and test samples;
Step 3: preprocess the acquired motor imagery EEG signals, including low-pass filtering and Laplacian spatial filtering;
Step 4: use the wavelet transform to extract features from the training samples of the preprocessed motor imagery EEG signals and generate the time-frequency images of the training samples;
Step 5: construct a convolutional neural network model that takes the time-frequency images of the training samples as input and the motor imagery action class as output; train and adjust the model parameters, and obtain the trained convolutional neural network model through multi-fold cross-validation;
Step 6: use the wavelet transform to extract features from the test samples of the preprocessed motor imagery EEG signals, generate the time-frequency images of the test samples, and input them to the trained convolutional neural network model to obtain and output the corresponding motor imagery action class; generate a control instruction from the output action class so that the prosthetic hand completes the corresponding action.
2. The prosthetic hand control method based on deep learning according to claim 1, characterized in that the hand actions selected in step 1 comprise grasping, pinching, and rotation.
3. The prosthetic hand control method based on deep learning according to claim 1, characterized in that in step 2 the motor imagery EEG signals are acquired with a multi-channel EEG acquisition device.
4. The prosthetic hand control method based on deep learning according to claim 1, characterized in that in step 3 the motor imagery EEG signals are spatially filtered with a Laplacian filter, using the formulas:

V_i^LAP = V_i^ER - Σ_{j∈S_i} g_ij · V_j^ER

g_ij = (1/d_ij) / Σ_{j∈S_i} (1/d_ij)

where V_i^LAP is the amplitude of the i-th channel after Laplacian spatial filtering; V_i^ER is the amplitude of the i-th channel; S_i is the set of neighborhood channels of the i-th channel; g_ij is an intermediate variable; and d_ij is the distance between the i-th and j-th channels, channel j belonging to the neighborhood S_i.
5. The prosthetic hand control method based on deep learning according to claim 1, characterized in that in step 4 the wavelet transform is used to extract features from the training samples of the preprocessed motor imagery EEG signals and generate the time-frequency images, specifically:
Step 4a: apply the wavelet transform to the training samples of the preprocessed motor imagery EEG signals, choosing a wavelet as the mother wavelet; decompose the channel of each training sample into several layers, each layer producing a series of wavelet coefficients, the coefficients of all layers forming a two-dimensional coefficient matrix;
Convert scale to frequency and project each value of the two-dimensional coefficient matrix onto an image, obtaining a time-frequency image with time as the horizontal axis and frequency as the vertical axis, in which the brightness of each block corresponds to the magnitude of the coefficient in the matrix;
Step 4b: select the three training samples of the motor imagery EEG signals whose positions lie over the sensorimotor cortex, process each corresponding channel with step 4a to generate its time-frequency image, and splice the resulting images in order from top to bottom, combining them into a single time-frequency image.
6. The prosthetic hand control method based on deep learning according to claim 5, characterized in that in step 4a the Morlet wavelet is chosen as the mother wavelet.
7. The prosthetic hand control method based on deep learning according to claim 1, characterized in that the convolutional neural network model constructed in step 5 comprises two convolutional layers, two pooling layers, and two fully connected layers.
8. The prosthetic hand control method based on deep learning according to claim 1, characterized in that in step 5 the convolutional neural network model parameters are trained and adjusted with the cross-entropy function as the optimization objective, the parameters being obtained by iterative updates.
CN201811563220.XA 2018-12-20 2018-12-20 A prosthetic hand control method based on deep learning Pending CN109730818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811563220.XA CN109730818A (en) 2018-12-20 2018-12-20 A prosthetic hand control method based on deep learning


Publications (1)

Publication Number Publication Date
CN109730818A true CN109730818A (en) 2019-05-10

Family

ID=66360892

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811563220.XA Pending CN109730818A (en) A prosthetic hand control method based on deep learning

Country Status (1)

Country Link
CN (1) CN109730818A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110711055A (en) * 2019-11-07 2020-01-21 江苏科技大学 Intelligent prosthetic leg system with image sensor based on deep learning
CN110807386A (en) * 2019-10-25 2020-02-18 天津大学 Chinese speech decoding nursing system based on transfer learning
CN110929581A (en) * 2019-10-25 2020-03-27 重庆邮电大学 EEG signal identification method based on spatiotemporal-feature-weighted convolutional neural network
CN111317468A (en) * 2020-02-27 2020-06-23 腾讯科技(深圳)有限公司 EEG signal classification method and device, computer equipment, and storage medium
CN112043550A (en) * 2020-09-29 2020-12-08 深圳睿瀚医疗科技有限公司 Tongue-controlled hand rehabilitation robot system based on magnetic markers and operation method thereof
CN113143676A (en) * 2020-12-15 2021-07-23 天津大学 Control method of a supernumerary robotic finger based on brain-muscle electrical cooperation

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202223388U (en) * 2011-08-30 2012-05-23 西安交通大学苏州研究院 Wearable brain-controlled intelligent prosthesis
CN105654037A (en) * 2015-12-21 2016-06-08 浙江大学 Myoelectric signal gesture recognition method based on deep learning and feature images
CN105943207A (en) * 2016-06-24 2016-09-21 吉林大学 Intelligent prosthetic limb movement system based on self-initiated dynamics and control methods thereof
CN106821681A (en) * 2017-02-27 2017-06-13 浙江工业大学 Upper-limb exoskeleton control method and system based on motor imagery
CN106909784A (en) * 2017-02-24 2017-06-30 天津大学 Epileptic EEG recognition method based on two-dimensional time-frequency image deep convolutional neural networks
CN106943217A (en) * 2017-05-03 2017-07-14 广东工业大学 Feedback-type human prosthetic limb control method and system
CN108446020 (en) * 2018-02-28 2018-08-24 天津大学 Motor imagery mind-control method fusing visual graphs and deep learning, and application thereof
CN108874137 (en) * 2018-06-15 2018-11-23 北京理工大学 A universal model for detecting gesture motion intention based on EEG signals
CN108983973A (en) * 2018-07-03 2018-12-11 东南大学 Humanoid dexterous myoelectric prosthetic hand control method based on gesture recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, WEIXING ET AL.: "Recognition of upper-limb motion intention from EEG signals based on convolutional neural networks", JOURNAL OF ZHEJIANG UNIVERSITY (ENGINEERING SCIENCE) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110807386A * 2019-10-25 2020-02-18 天津大学 Chinese speech decoding nursing system based on transfer learning
CN110929581A * 2019-10-25 2020-03-27 重庆邮电大学 Electroencephalogram signal recognition method based on spatio-temporal feature weighted convolutional neural network
CN110807386B * 2019-10-25 2023-09-22 天津大学 Chinese speech decoding nursing system based on transfer learning
CN110711055A * 2019-11-07 2020-01-21 江苏科技大学 Intelligent prosthetic leg system with image sensor based on deep learning
CN111317468A * 2020-02-27 2020-06-23 腾讯科技(深圳)有限公司 Electroencephalogram signal classification method and device, computer equipment and storage medium
CN111317468B * 2020-02-27 2024-04-19 腾讯科技(深圳)有限公司 Electroencephalogram signal classification method and device, computer equipment and storage medium
CN112043550A * 2020-09-29 2020-12-08 深圳睿瀚医疗科技有限公司 Tongue-controlled hand rehabilitation robot system based on magnetic markers and operation method thereof
CN112043550B * 2020-09-29 2023-08-18 深圳睿瀚医疗科技有限公司 Tongue-controlled hand rehabilitation robot system based on magnetic markers and operation method thereof
CN113143676A * 2020-12-15 2021-07-23 天津大学 Control method for a supernumerary limb finger based on brain-muscle electrical (EEG-EMG) coordination

Similar Documents

Publication Publication Date Title
CN109730818A (en) A kind of prosthetic hand control method based on deep learning
Aslan et al. Automatic Detection of Schizophrenia by Applying Deep Learning over Spectrogram Images of EEG Signals.
CN105654037B A kind of electromyography signal gesture recognition method based on deep learning and feature images
CN108304826A Facial expression recognition method based on convolutional neural networks
CN110163180A Motor imagery EEG data classification method and system
CN113505822B (en) Multi-scale information fusion upper limb action classification method based on surface electromyographic signals
CN113158793B (en) Multi-class motor imagery electroencephalogram signal identification method based on multi-feature fusion
CN113693613B (en) Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium
CN109711383A Time-frequency-domain convolutional neural network method for motor imagery EEG signal recognition
CN110399846A A kind of gesture recognition method based on multichannel electromyography signal correlation
CN109598222B (en) EEMD data enhancement-based wavelet neural network motor imagery electroencephalogram classification method
CN112043473B (en) Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb
CN111860410A (en) Myoelectric gesture recognition method based on multi-feature fusion CNN
CN114533086B Motor imagery EEG decoding method based on spatial-feature time-frequency transformation
CN102930284A (en) Surface electromyogram signal pattern recognition method based on empirical mode decomposition and fractal
Pan et al. MAtt: A manifold attention network for EEG decoding
CN108280414A A kind of recognition method for motor imagery EEG signals based on energy features
CN108042132A EEG feature extraction method based on DWT and EMD fused with CSP
CN113295702B (en) Electrical equipment fault diagnosis model training method and electrical equipment fault diagnosis method
CN112732092B (en) Surface electromyogram signal identification method based on double-view multi-scale convolution neural network
CN113158964A (en) Sleep staging method based on residual learning and multi-granularity feature fusion
CN110674774A (en) Improved deep learning facial expression recognition method and system
CN108573207A EEG feature extraction method using optimal wavelet spatial filtering fusing EMD and CSP
CN113974627A Emotion recognition method based on brain-computer generative adversarial networks
CN109389101A (en) A kind of SAR image target recognition method based on denoising autoencoder network
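The abstract of this publication and several of the similar documents above rest on the same feature-extraction idea: apply a wavelet transform to motor imagery EEG, build a time-frequency two-dimensional image, and feed that image to a convolutional neural network. The sketch below illustrates only the time-frequency image step with a plain NumPy complex Morlet transform; the wavelet choice, the `w=6.0` bandwidth parameter, the 4-30 Hz band, and all names are illustrative assumptions, not details taken from the patent, and the CNN classification stage is omitted.

```python
import numpy as np

def morlet_cwt(signal, fs, freqs, w=6.0):
    """Complex Morlet continuous wavelet transform.
    Returns |coefficients| as a (len(freqs), len(signal)) time-frequency image."""
    out = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        # wavelet support scales inversely with frequency (~w cycles per window)
        half = 3.5 * w / (2.0 * np.pi * f)
        t = np.arange(-half, half, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-0.5 * (t * 2.0 * np.pi * f / w) ** 2)
        wavelet /= np.sqrt(np.abs(wavelet).sum())  # rough amplitude normalization
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Synthetic stand-in for motor imagery EEG: a 10 Hz mu-band burst in noise, 250 Hz sampling.
fs = 250
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 0.5, t.size)
eeg[250:400] += np.sin(2.0 * np.pi * 10.0 * t[250:400])

freqs = np.arange(4, 31)               # 4-30 Hz covers the mu/beta bands used in motor imagery
tf_image = morlet_cwt(eeg, fs, freqs)  # (27, 500) image; this is the CNN input in such pipelines
```

In a full pipeline, one such image per trial (after the low-pass and Laplacian spatial filtering the abstract mentions) would be stacked into a training set for a small CNN whose output classes are the imagined hand actions.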

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190510