CN113625882B - Myoelectric gesture recognition method based on sparse multichannel correlation characteristics - Google Patents
Myoelectric gesture recognition method based on sparse multichannel correlation characteristics
- Publication number
- CN113625882B CN113625882B CN202111184784.4A CN202111184784A CN113625882B CN 113625882 B CN113625882 B CN 113625882B CN 202111184784 A CN202111184784 A CN 202111184784A CN 113625882 B CN113625882 B CN 113625882B
- Authority
- CN
- China
- Prior art keywords
- channel
- correlation
- window
- correlation coefficient
- gesture recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/389—Electromyography [EMG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/725—Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physiology (AREA)
- Psychiatry (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
The invention discloses a myoelectric gesture recognition method based on sparse multichannel correlation characteristics, comprising the following steps: acquiring upper-limb multichannel surface electromyographic signals through a sparse multichannel electromyographic sensor; preprocessing the collected surface electromyographic signals by denoising, windowing and segmentation; analyzing the correlation among different channels and constructing a feature set from the inter-channel correlation coefficients; and performing gesture recognition with a multi-algorithm fusion classification model. The method makes effective use of multichannel information, requires fewer features, reduces the computing resources needed, and achieves high recognition accuracy and good robustness.
Description
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a myoelectric gesture recognition method based on sparse multichannel correlation characteristics.
Background
The human hand is an essential tool for survival and labor and plays an important role in daily life and work. Losing an arm brings great inconvenience and burden to amputees; an intelligent prosthetic hand can help people with upper-limb disabilities better adapt to life and reintegrate into society. At present, some commercial prosthetic hands can only recognize a limited set of hand motions or are controlled through fixed motion sequences, so their practical effect is not ideal. In recent years, many scholars at home and abroad have studied gesture recognition methods for myoelectric prosthetic hands; improving the gesture recognition rate remains a research hotspot for intelligent prosthetic hands, and feature extraction and classifier design are the keys to improving system accuracy.
Surface electromyographic signals are the superposition, in time and space, of the motor unit action potentials generated by many muscle fibers, presented at the skin surface, and provide important information about muscle activity. Most amputees can still generate surface electromyographic signals in the residual limb muscles; acquiring these signals at the skin surface requires only a simple setup and no surgery, making them the mainstream signal source for current myoelectric prostheses. The surface electromyographic signal is non-stationary, so effective features must be extracted to reduce the data dimensionality and facilitate decoding of hand movement intention. Traditional time-domain and frequency-domain feature extraction algorithms are widely used because of their low computational cost; however, for the same gesture performed with different forces or speeds the feature values change, so these features represent the electromyographic signal only to a limited degree and degrade subsequent recognition performance.
To acquire electromyographic information more fully, many researchers have in recent years adopted multichannel sensors for surface electromyographic acquisition, and studying the correlation among different channels is one effective way to exploit the channel information. Correlation analysis examines two or more correlated variables to measure how closely they are related. For multichannel surface electromyographic signal features, capturing the correlation between the channel signals is crucial.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a myoelectric gesture recognition method based on sparse multichannel correlation characteristics, which makes effective use of multichannel information by extracting inter-channel correlation coefficient features and performs gesture recognition with a multi-algorithm fusion classification model, thereby improving the accuracy and robustness of gesture recognition. The technical scheme is as follows:
a myoelectric gesture recognition method based on sparse multichannel correlation characteristics comprises the following steps:
s1: acquiring upper limb multi-channel surface electromyographic signals under various gestures through a sparse multi-channel electromyographic sensor, and labeling gesture types;
s2: performing denoising, windowing and segmentation preprocessing on the acquired surface electromyographic signals, and extracting the active-segment window surface electromyographic signals of each channel;
s3: calculating the cross correlation coefficient and the consistency correlation coefficient of the surface electromyogram signals of the active section windows of every two channels, and constructing a fusion feature set as an input sample of a multi-algorithm fusion classification model;
s4: after the training samples are processed by steps S1-S3, part of the samples are extracted as a verification set to optimize the parameters of each base learner; a multi-algorithm fusion classification model is constructed from the base learners with optimal parameters, and all samples are input into the model for training to obtain a trained classifier; a test sample, after being processed by steps S1-S3, is input into the trained classifier to obtain the gesture recognition result.
Further, the step S2 is specifically:
s21: a notch filter is adopted to eliminate power frequency interference of a power system, and a 6-order Butterworth low-pass filter is adopted to filter muscle low-frequency artifacts;
s22: a sliding incremental-window operation is performed on the filtered surface electromyographic data of length L, with the time window set to W and the window increment set to I; the total number of samples after incremental-window processing is K = ⌊(L − W)/I⌋ + 1, wherein ⌊·⌋ is the floor (round-down) function;
s23: the energy of each window of the signal is calculated and an energy threshold is set as the decision basis; a window whose energy exceeds the set threshold is judged to be the action onset, its time point is taken as the action start, and the surface electromyographic signal within a set duration from the action start is extracted as the active-segment signal.
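As a concrete illustration of steps S22-S23, the sliding incremental-window segmentation and the energy-threshold onset detection can be sketched as follows. This is a minimal numpy sketch; the window length, increment, threshold and the synthetic signal are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def sliding_windows(x, W, I):
    """Split signal x into windows of length W advanced by increment I.

    Produces floor((len(x) - W) / I) + 1 windows, matching the sample
    count given in step S22.
    """
    return np.stack([x[s:s + W] for s in range(0, len(x) - W + 1, I)])

def find_action_onset(x, W, I, energy_threshold):
    """Return the start index of the first window whose energy exceeds
    the threshold (step S23), or None if no window qualifies."""
    for k, win in enumerate(sliding_windows(x, W, I)):
        if np.sum(win ** 2) > energy_threshold:
            return k * I  # time point taken as the action start
    return None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rest = 0.01 * rng.standard_normal(1000)   # low-energy rest segment
    burst = 1.0 * rng.standard_normal(1000)   # high-energy action segment
    x = np.concatenate([rest, burst])
    onset = find_action_onset(x, W=200, I=100, energy_threshold=10.0)
    active = x[onset:onset + 500]             # fixed-duration active segment
    print(onset, active.shape)
```

In a real pipeline the denoised signal from S21 would replace the synthetic `x`, and the extracted `active` segment would feed the feature extraction of S3.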
Further, the step S3 specifically includes:
s31: for the preprocessed active-segment window containing M channels, the surface electromyographic signals are denoted X = [X_1, X_2, …, X_M], wherein X_m denotes the surface electromyographic signal of the m-th channel, m = 1, 2, …, M;
calculating the cross-correlation coefficient of the surface electromyographic signals of every two channels: the cross-correlation of the i-th channel and the j-th channel is denoted ρ_ij, i = 1, 2, …, M−1, j = i+1, …, M, yielding the cross-correlation coefficient vector R = [R_1, R_2, …, R_(M−1)]; wherein R_k = [ρ_k,k+1, ρ_k,k+2, …, ρ_k,M], k = 1, 2, …, M−1;
s32: for the multichannel surface electromyographic signals X = [X_1, X_2, …, X_M] of the active-segment window, calculating the consistency correlation coefficient of the surface electromyographic signals of every two channels to obtain the consistency correlation coefficient vector P = [P_1, P_2, …, P_(M−1)]; wherein P_k = [ρc(k, k+1), ρc(k, k+2), …, ρc(k, M)], k = 1, 2, …, M−1;
the consistency correlation coefficient of the surface electromyographic signal X_i of the i-th channel and the surface electromyographic signal X_j of the j-th channel is expressed as:
ρc(i, j) = 2σ_xy / (σ_x² + σ_y² + (x̄ − ȳ)²), with σ_xy = (1/N) Σ_{n=1}^{N} (x_n − x̄)(y_n − ȳ)
wherein x̄ and ȳ are the means of X_i and X_j respectively, σ_x² and σ_y² are the variances of X_i and X_j respectively, x_n and y_n are the n-th data of channel X_i and channel X_j respectively, and N is the number of data in a channel.
S33: the cross-correlation coefficient vector and the consistency correlation coefficient vector extracted from the surface electromyographic signal of each active-segment window are fused to obtain the fusion feature set used as the input samples of the classification model.
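The feature construction of S31-S33 can be sketched as follows. The Pearson form of the cross-correlation coefficient is an assumption (the patent names the coefficient but its formula was lost in translation); the consistency (concordance) correlation coefficient follows the formula given above, and the 8-channel shape is taken from the embodiment.

```python
import numpy as np
from itertools import combinations

def cross_corr(x, y):
    """Pearson cross-correlation coefficient of two channel signals
    (assumed form of the patent's cross-correlation coefficient)."""
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

def concordance_corr(x, y):
    """Consistency correlation coefficient:
    rho_c = 2*sigma_xy / (sigma_x^2 + sigma_y^2 + (mean_x - mean_y)^2)."""
    s_xy = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * s_xy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

def fusion_features(X):
    """X: (V, M) active-segment window, V samples by M channels.
    Returns the fused vector [R | P] of length 2 * M*(M-1)/2."""
    V, M = X.shape
    pairs = list(combinations(range(M), 2))
    R = [cross_corr(X[:, i], X[:, j]) for i, j in pairs]
    P = [concordance_corr(X[:, i], X[:, j]) for i, j in pairs]
    return np.array(R + P)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    window = rng.standard_normal((250, 8))  # V=250 samples, M=8 channels
    feat = fusion_features(window)
    print(feat.shape)  # (56,): 28 cross-correlation + 28 consistency values
```

For 8 channels there are 28 channel pairs, so the fused vector has the 56 dimensions stated in the embodiment.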
Furthermore, the multi-algorithm fusion classification model combines a plurality of base learners to complete the learning task, with XGBoost, KNN, RF and NB as the base learners and LR as the fusion learner.
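A fused model of this shape can be sketched with scikit-learn's stacking ensemble. Note one substitution: `GradientBoostingClassifier` stands in for XGBoost so the sketch needs only scikit-learn; the patent itself names XGBoost. The synthetic data and all hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Base learners; gradient boosting stands in for XGBoost here.
base_learners = [
    ("gb", GradientBoostingClassifier(n_estimators=30, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("nb", GaussianNB()),
]

# Logistic regression fuses the base learners' predictions.
model = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))

if __name__ == "__main__":
    # Synthetic stand-in for the 56-dimensional correlation feature set:
    # 4 well-separated gesture classes, 50 samples each.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 56)) for c in range(4)])
    y = np.repeat(np.arange(4), 50)
    model.fit(X, y)
    print(model.score(X, y))
```

`StackingClassifier` trains the meta-learner on out-of-fold predictions of the base learners, which matches the fusion structure described, though the patent's exact training procedure may differ.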
Compared with the prior art, the invention has the beneficial effects that:
1. The invention uses sparse multichannel surface electromyographic signals as the gesture recognition data; the setup on the skin surface is simple, which reduces implementation cost.
2. The method extracts sparse multichannel time-series correlation features and fuses the cross-correlation coefficient features with the consistency correlation coefficient features into a fusion feature set, fully mining the information between channels and enriching the features; when the same gesture is performed with different forces, the feature values change little, giving good robustness.
3. The method performs gesture recognition with a multi-algorithm fusion classification model; the optimal parameters of each base learner are obtained before the classification model is constructed, which reduces the risk of overfitting, enhances the generalization ability of the classification model, and improves gesture recognition accuracy.
Drawings
Fig. 1 is a main flow chart of the electromyography gesture recognition method based on sparse multi-channel correlation characteristics according to the embodiment of the present invention.
Fig. 2 is a schematic diagram of a visualization result of t-SNE for extracting multi-channel correlation features according to an embodiment of the present invention.
Fig. 3 is a diagram of a multi-algorithm fusion classification model according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments.
In this embodiment, a sparse multichannel electromyographic sensor is selected as the surface electromyographic acquisition sensor, and the raw surface electromyographic signals it collects are used to realize the electromyographic gesture recognition method based on multichannel correlation characteristics.
The specific implementation mode is shown in fig. 1, and comprises the following steps:
s1: acquiring upper-limb multichannel surface electromyographic signals of predefined gestures through the multichannel electromyographic sensor.
The gesture actions to be collected are predefined. All healthy volunteers are right-handed. Each subject wears an armband on the right forearm; with the arm relaxed normally, the sensor bearing the armband's indicator light is kept in the same plane as the back of the hand, with the indicator light facing the wrist and aligned with the middle finger. Each subject holds a single gesture for 6 seconds, followed by 4 seconds of relaxation; each action is repeated 10 times, and a 2-minute rest follows the completed acquisition of each gesture. Data are read through the Bluetooth interface of the multichannel electromyographic sensor and labeled with the gesture type.
S2: performing signal denoising, sliding-window and action-segmentation preprocessing on the acquired raw surface electromyographic signals. The specific process is as follows:
s21: denoising the surface electromyogram signal, namely eliminating power frequency interference of a power system by adopting a 50Hz notch filter, and filtering muscle low-frequency artifacts by a 6-order Butterworth low-pass filter;
s22: a sliding incremental-window operation is performed on the filtered electromyographic data of length L, with the time window set to W and the window increment set to I; the total number of samples after incremental-window processing is K = ⌊(L − W)/I⌋ + 1, wherein ⌊·⌋ is the floor (round-down) function;
s23: the gesture active-segment signal is extracted by checking the energy of the window signals in turn; when the energy exceeds the set threshold, the window is judged to be the action onset, its time point is taken as the action start, and the surface electromyographic signal within 5 s of the action start is extracted, i.e. this 5 s signal is taken as the active-segment signal corresponding to the gesture.
S3: analyzing the correlation among different channels, extracting the multichannel time-series correlation features, and fusing the cross-correlation coefficient features with the consistency correlation coefficient features to form the fusion feature set.
The specific process embodiment for extracting the correlation characteristics of the multichannel time sequence to obtain the fusion characteristic set is as follows:
s31: calculating the cross-correlation coefficients between different channels: this embodiment adopts an 8-channel electromyographic sensor, so the surface electromyographic signal of an active-segment window X = [X_1, X_2, …, X_8] is a V × 8 matrix, wherein X_m denotes the surface electromyographic signal of the m-th channel, m = 1, 2, …, 8, and V is the number of samples of a single window. The cross-correlation coefficient of the i-th channel and the j-th channel is expressed as ρ_ij; calculating the correlation coefficient between every two channels for the data of all gestures yields a 28-dimensional vector R = [R_1, R_2, …, R_7], wherein R_k = [ρ_k,k+1, …, ρ_k,8], k = 1, 2, …, 7;
s32: for the active-segment multichannel surface electromyographic signals X = [X_1, X_2, …, X_8], calculating the consistency correlation coefficient; the consistency correlation coefficient of the surface electromyographic signal X_i of the i-th channel and the surface electromyographic signal X_j of the j-th channel is:
ρc(i, j) = 2σ_xy / (σ_x² + σ_y² + (x̄ − ȳ)²), with σ_xy = (1/N) Σ_{n=1}^{N} (x_n − x̄)(y_n − ȳ)
wherein x̄ and ȳ are the means of X_i and X_j respectively, σ_x² and σ_y² are the variances of X_i and X_j respectively, x_n and y_n are the n-th data of channel X_i and channel X_j respectively, and N is the number of data in a channel. Computing the consistency correlation coefficient between every two channels for the data of all gestures yields a 28-dimensional vector P = [P_1, P_2, …, P_7], wherein P_k = [ρc(k, k+1), …, ρc(k, 8)], k = 1, 2, …, 7;
s33: the cross-correlation coefficients and consistency correlation coefficients extracted from all active-segment window signals are fused to construct a 56-dimensional feature vector used as the input sample of the classification model. The multichannel time-series correlation features were extracted for a single subject and mapped to a three-dimensional space with the t-SNE method for visual analysis; as shown in Fig. 2, after the channel correlation features are extracted, each gesture class has a small intra-class distance and a large inter-class distance, showing clear separability.
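The visualization step can be reproduced in outline with scikit-learn's t-SNE. The feature matrix below is a synthetic stand-in; real use would pass the 56-dimensional correlation feature matrix produced in S33.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
# Synthetic stand-in for the 56-dimensional fusion features of 4 gesture
# classes, 40 windows per class (assumed shapes, not the patent's data).
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 56)) for c in range(4)])

# Map to a 3-D embedding for visual inspection of intra-/inter-class distances.
emb = TSNE(n_components=3, perplexity=20, random_state=0).fit_transform(X)
print(emb.shape)  # (160, 3)
```

Plotting `emb` colored by gesture label would give a figure analogous to the patent's Fig. 2.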
S4: and constructing a multi-algorithm fusion classification model fusing a plurality of base learners for gesture recognition.
The constructed multi-algorithm fusion classification model is shown in fig. 3. It fuses several base learners to complete the learning task, using XGBoost (eXtreme Gradient Boosting), KNN (K-Nearest Neighbor), RF (Random Forest) and NB (Naive Bayes) as base learners and LR (Logistic Regression) as the fusion learner. In the training stage, after the training samples are processed by steps S1-S3, part of the samples are extracted as a verification set, a grid search is used to optimize the parameters of each base learner, and the optimal-parameter models are saved; the multi-algorithm fusion classification model is then constructed from the base learners with optimal parameters, and all samples are input into the model for training to obtain the trained classifier. In the testing stage, a test sample is processed by steps S1-S3 and input into the trained classifier, yielding the gesture recognition result.
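The per-base-learner parameter optimization described above can be sketched with scikit-learn's grid search; here one base learner (KNN) is tuned, and each of the others would be tuned the same way before the fused model is assembled. The searched grid and the synthetic verification data are illustrative assumptions — the patent does not list its parameter ranges.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
# Synthetic stand-in for the verification set drawn from the training
# samples: 4 gesture classes, 30 samples each, 56 features.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 56)) for c in range(4)])
y = np.repeat(np.arange(4), 30)

# Cross-validated grid search over an assumed KNN parameter grid.
search = GridSearchCV(KNeighborsClassifier(),
                      param_grid={"n_neighbors": [3, 5, 7, 9]},
                      cv=5)
search.fit(X, y)
print(search.best_params_)
```

`search.best_estimator_` would then be saved as the optimal-parameter model for that base learner.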
Claims (3)
1. A myoelectric gesture recognition method based on sparse multi-channel correlation features is characterized by comprising the following steps:
S1: acquiring upper limb multi-channel surface electromyographic signals under various gestures through a sparse multi-channel electromyographic sensor, and labeling gesture types;
s2: carrying out denoising, windowing and segmentation preprocessing operations on the collected surface electromyographic signals, and extracting the surface electromyographic signals of the movable segment windows of each channel;
s3: calculating the cross correlation coefficient and the consistency correlation coefficient of the surface electromyogram signals of the active section windows of every two channels, and constructing a fusion feature set as an input sample of a multi-algorithm fusion classification model;
s4: after the training samples are processed in the steps S1-S3, extracting partial samples as verification sets to respectively perform parameter optimization on each base learner, constructing a multi-algorithm fusion classification model by adopting the base learner with the optimal parameters, and inputting all the samples into the model for training to obtain a trained classifier; processing the test sample in the steps S1-S3, and inputting the test sample into a trained classifier to obtain a gesture recognition result;
the step S3 specifically includes:
s31: for the preprocessed active-segment window containing M channels, the surface electromyographic signals are denoted X = [X_1, X_2, …, X_M], wherein X_m denotes the surface electromyographic signal of the m-th channel, m = 1, 2, …, M;
calculating the cross-correlation coefficient of the surface electromyographic signals of every two channels: the cross-correlation of the i-th channel and the j-th channel is denoted ρ_ij, i = 1, 2, …, M−1, j = i+1, …, M, obtaining the cross-correlation coefficient vector R = [R_1, R_2, …, R_(M−1)]; wherein R_k = [ρ_k,k+1, ρ_k,k+2, …, ρ_k,M], k = 1, 2, …, M−1;
s32: for the multichannel surface electromyographic signals X = [X_1, X_2, …, X_M] of the active-segment window, calculating the consistency correlation coefficient of the surface electromyographic signals of every two channels to obtain the consistency correlation coefficient vector P = [P_1, P_2, …, P_(M−1)]; wherein P_k = [ρc(k, k+1), ρc(k, k+2), …, ρc(k, M)], k = 1, 2, …, M−1;
the consistency correlation coefficient of the surface electromyographic signal X_i of the i-th channel and the surface electromyographic signal X_j of the j-th channel is expressed as:
ρc(i, j) = 2σ_xy / (σ_x² + σ_y² + (x̄ − ȳ)²), with σ_xy = (1/N) Σ_{n=1}^{N} (x_n − x̄)(y_n − ȳ)
wherein x̄ and ȳ are the means of X_i and X_j respectively, σ_x² and σ_y² are the variances of X_i and X_j respectively, x_n and y_n are the n-th data of channel X_i and channel X_j respectively, and N is the number of data in a channel;
s33: and fusing the cross correlation coefficient vector and the consistency correlation coefficient vector extracted from the surface electromyogram signal of each movable section window to obtain a fusion feature set as an input sample of the classification model.
2. The myoelectric gesture recognition method based on sparse multi-channel correlation characteristics according to claim 1, wherein the step S2 specifically comprises:
s21: a notch filter is adopted to eliminate power frequency interference of the power system, and a 6-order Butterworth low-pass filter is adopted to filter muscle low-frequency artifacts;
S22: for the filtered surface electromyogram dataLPerforming sliding incremental window operation, and setting the time window toWThe incremental window isIThe total number of samples after incremental window processing isWherein, in the process,is a rounding function;
s23: calculating the energy of each section of window signal, setting an energy threshold value as a judgment basis, judging the window larger than the set energy threshold value as an action starting section, taking the starting time point of the action starting section as an action starting point, and extracting the surface electromyographic signal within the set time from the action starting point as an activity section signal.
3. The myoelectric gesture recognition method based on sparse multichannel correlation characteristics according to claim 1, wherein the multi-algorithm fusion classification model combines a plurality of base learners to complete the learning task, with XGBoost, KNN, RF and NB as the base learners and LR as the fusion learner.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111184784.4A CN113625882B (en) | 2021-10-12 | 2021-10-12 | Myoelectric gesture recognition method based on sparse multichannel correlation characteristics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111184784.4A CN113625882B (en) | 2021-10-12 | 2021-10-12 | Myoelectric gesture recognition method based on sparse multichannel correlation characteristics |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113625882A CN113625882A (en) | 2021-11-09 |
CN113625882B true CN113625882B (en) | 2022-06-14 |
Family
ID=78391028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111184784.4A Active CN113625882B (en) | 2021-10-12 | 2021-10-12 | Myoelectric gesture recognition method based on sparse multichannel correlation characteristics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113625882B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101482773A (en) * | 2009-01-16 | 2009-07-15 | 中国科学技术大学 | Multi-channel wireless surface myoelectric signal collection apparatus and system |
CN104391580A (en) * | 2014-12-09 | 2015-03-04 | 北京银河润泰科技有限公司 | Wearing state processing method and device for wearable equipment |
CN106484082A (en) * | 2015-08-28 | 2017-03-08 | 华为技术有限公司 | One kind is based on bioelectric control method, device and controller |
CN106527716A (en) * | 2016-11-09 | 2017-03-22 | 努比亚技术有限公司 | Wearable equipment based on electromyographic signals and interactive method between wearable equipment and terminal |
CN109498041A (en) * | 2019-01-15 | 2019-03-22 | 吉林大学 | Driver road anger state identification method based on brain electricity and pulse information |
CN109768838A (en) * | 2018-12-29 | 2019-05-17 | 西北大学 | A kind of Interference Detection and gesture identification method based on WiFi signal |
CN111973184A (en) * | 2019-05-22 | 2020-11-24 | 中国科学院沈阳自动化研究所 | Model training data optimization method for nonideal sEMG signals |
CN112826516A (en) * | 2021-01-07 | 2021-05-25 | 京东数科海益信息科技有限公司 | Electromyographic signal processing method, device, equipment, readable storage medium and product |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101157073B1 (en) * | 2010-12-10 | 2012-06-21 | 숭실대학교산학협력단 | Method for finger language recognition using emg and gyro sensor and apparatus thereof |
CN105326500B (en) * | 2014-08-13 | 2018-02-09 | 华为技术有限公司 | Action identification method and equipment based on surface electromyogram signal |
CN107526952B (en) * | 2016-06-22 | 2020-09-01 | 宁波工程学院 | Identity recognition method based on multi-channel surface electromyographic signals |
CN107273798A (en) * | 2017-05-11 | 2017-10-20 | 华南理工大学 | A kind of gesture identification method based on surface electromyogram signal |
JP2019185531A (en) * | 2018-04-13 | 2019-10-24 | セイコーエプソン株式会社 | Transmission type head-mounted display, display control method, and computer program |
CN110942040B (en) * | 2019-11-29 | 2023-04-18 | 四川大学 | Gesture recognition system and method based on ambient light |
CN111209885B (en) * | 2020-01-13 | 2023-05-30 | 腾讯科技(深圳)有限公司 | Gesture information processing method and device, electronic equipment and storage medium |
-
2021
- 2021-10-12 CN CN202111184784.4A patent/CN113625882B/en active Active
Non-Patent Citations (2)
Title |
---|
Brain network analysis of magnetic stimulation at acupoints in sub-healthy insomnia subjects based on a cross-correlation method; Wu Xia et al.; China Medical Devices; 2018-02-10; pp. 27-32 *
Electromyographic recognition method for hand movements of stroke patients based on feature engineering and cascade forest; Hu Shaokang et al.; Robot; 2021-09-24; pp. 526-538 *
Also Published As
Publication number | Publication date |
---|---|
CN113625882A (en) | 2021-11-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Song et al. | Transformer-based spatial-temporal feature learning for EEG decoding | |
Luo et al. | A low-cost end-to-end sEMG-based gait sub-phase recognition system | |
Mazilu et al. | Feature learning for detection and prediction of freezing of gait in Parkinson’s disease | |
CN108983973B (en) | Control method of humanoid smart myoelectric artificial hand based on gesture recognition | |
CN102521505B (en) | Brain electric and eye electric signal decision fusion method for identifying control intention | |
CN113288183B (en) | Silent voice recognition method based on facial neck surface myoelectricity | |
Benatti et al. | Analysis of robust implementation of an EMG pattern recognition based control | |
CN112043473B (en) | Parallel nested and autonomous preferred classifier for brain-myoelectricity fusion perception of intelligent artificial limb | |
CN107736894A (en) | A kind of electrocardiosignal Emotion identification method based on deep learning | |
CN106236336A (en) | A kind of myoelectric limb gesture and dynamics control method | |
CN109614885A (en) | A kind of EEG signals Fast Classification recognition methods based on LSTM | |
CN113111831A (en) | Gesture recognition technology based on multi-mode information fusion | |
CN111407243A (en) | Pulse signal pressure identification method based on deep learning | |
Oweis et al. | ANN-based EMG classification for myoelectric control | |
Su et al. | Hand gesture recognition based on sEMG signal and convolutional neural network | |
CN116400800B (en) | ALS patient human-computer interaction system and method based on brain-computer interface and artificial intelligence algorithm | |
CN114533089A (en) | Lower limb action recognition and classification method based on surface electromyographic signals | |
CN113625882B (en) | Myoelectric gesture recognition method based on sparse multichannel correlation characteristics | |
Fu et al. | Identification of finger movements from forearm surface EMG using an augmented probabilistic neural network | |
CN114098768B (en) | Cross-individual surface electromyographic signal gesture recognition method based on dynamic threshold and EasyTL | |
Karnam et al. | EMAHA-DB1: A new upper limb sEMG dataset for classification of activities of daily living | |
CN112932508B (en) | Finger activity recognition system based on arm electromyography network | |
CN116269413A (en) | Continuous electrocardiographic waveform reconstruction system and method using smart wristband motion sensor | |
Chen et al. | Recognition of american sign language gestures based on electromyogram (emg) signal with xgboost machine learning | |
Hristov et al. | Classification of individual and combined finger flexions using machine learning approaches |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||